Add-on Card AOC-IBH-XQD
Super Micro Computer, Inc.
Dual-Port, Low-Latency InfiniBand Adapter Card for SuperBlade

The AOC-IBH-XQD InfiniBand mezzanine card for the SuperBlade delivers low latency and high bandwidth for performance-driven server and storage clustering applications in enterprise data centers, high-performance computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, reducing completion time and cost per operation. The AOC-IBH-XQD simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and it provides enhanced performance in virtualized server environments. In addition to this InfiniBand capability, the AOC-IBH-XQD can alternatively be configured as a 10-Gigabit Ethernet NIC when used with the Supermicro SBM-XEM-002 10-Gigabit Pass-Through module or the SBM-XEM-X10SM 10 Gbps Ethernet switch.
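As a sketch of how an installed card might be verified from a Linux blade, the commands below use the standard pciutils and infiniband-diags packages; the fallback messages and expected strings are illustrative assumptions, not captured output from this hardware.

```shell
#!/bin/sh
# Hypothetical post-install check for the AOC-IBH-XQD mezzanine card.
# Assumes pciutils (lspci) and infiniband-diags (ibstat) are installed;
# exact output depends on driver and firmware versions.

# Confirm the Mellanox ConnectX-2 chip is enumerated on the PCI bus.
lspci 2>/dev/null | grep -i mellanox || echo "no Mellanox device found"

# Report InfiniBand port state; a healthy QDR link through an
# SBM-IBS-Q3618/M switch should show a 40 Gb/s rate.
ibstat 2>/dev/null || echo "ibstat not available on this host"
```

When the card is switched to 10-Gigabit Ethernet operation behind the SBM-XEM-002 pass-through module, the port would instead appear as a standard network interface rather than under ibstat.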
 
Compliance
RoHS 6/6
 
Download
User's Guide: See Appendix A of the SuperBlade Network Modules User's Manual for installation instructions [ Download ]
Firmware [ Download ]

 
Highlights
  • Dual 40 Gb/s InfiniBand or 10 Gb/s Ethernet ports
  • CPU offload of transport operations
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • TCP/UDP/IP stateless offload
  • Full support for Intel I/OAT
 
Specification
  • InfiniBand:
     - Mellanox ConnectX-2 IB QDR chip
     - Dual 4X InfiniBand ports
     - 40Gb/s
     - RDMA, Send/Receive semantics
     - Hardware-based congestion control
     - Atomic operations
  • Interface:
     - SuperBlade Mezzanine Card
  • Connectivity:
     - Interoperable with InfiniBand fabrics through SuperBlade
       QDR InfiniBand switches (SBM-IBS-Q3618/M, SBM-IBS-Q3616/M)
     - Interoperable with 10 Gigabit Ethernet switches through the
       SuperBlade 10 Gbps Ethernet Pass-Through Module (SBM-XEM-002)
       or the SuperBlade 10G Ethernet Switch (SBM-XEM-X10SM)
  • Hardware-based I/O Virtualization:
     - Address translation and protection
     - Multiple queues per virtual machine
     - Native OS performance
     - Complementary to Intel and AMD IOMMU
  • CPU Offloads:
     - TCP/UDP/IP stateless offload
     - Intelligent interrupt coalescence
     - Full support for Intel I/OAT
     - Compliant with Microsoft RSS and NetDMA
  • Storage Support:
     - T10-compliant Data Integrity Field support
     - Fibre Channel over InfiniBand or
       Fibre Channel over Ethernet
  • Operating Systems/Distributions (InfiniBand):
     - Novell, Red Hat, Fedora, and others
     - Microsoft Windows Server
  • Operating Systems/Distributions (Ethernet):
     - Red Hat Linux
  • Operating Conditions:
     - Operating temperature: 0 to 55°C
 
Compatible Servers
  • Intel® TwinBlade® (SBI-7226T-T2)
  • Intel® (SBI-7126T-S6, SBI-7126T-T1E)
  • AMD Quad Blade (SBA-7141A-T)
Information in this document is subject to change without notice.
Other products and companies referred to herein are trademarks or registered trademarks of their respective companies or mark holders.