Add-on Card AOC-IBH-003
Add-on Card - Discontinued SKU. Please contact your sales representative for possible OEM production quantities; MOQ may apply.

Dual-Port, Low-Latency InfiniBand Adapter Card for SuperBlade

The AOC-IBH-003 InfiniBand mezzanine card for the SuperBlade delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Center, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. The AOC-IBH-003 simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments. In addition to this InfiniBand capability, the AOC-IBH-003 can alternatively be configured as a 10-Gigabit Ethernet NIC when used with the Supermicro SBM-XEM-002 10-Gigabit Pass-Through module.
RoHS 5/6
User's Guide: See Appendix A of the SuperBlade Network Modules User's Manual for installation instructions [ Download ]
Firmware: [ Download ]
Firmware Installation Instructions: [ Download ]

This firmware allows the AOC-IBH-003 to operate as either a 10Gb/s Ethernet NIC or a 20Gb/s DDR InfiniBand NIC.
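
As a rough illustration of what this dual-mode behavior looks like from the host side, the sketch below queries each port of the first RDMA device and reports whether its active link layer is InfiniBand or Ethernet. It is an illustrative assumption, not Supermicro-supplied software: it presumes the libibverbs library from the OFED stack is installed and that the Mellanox driver has claimed the adapter.

/* Hedged sketch (not vendor software): uses the standard libibverbs API
 * to report the active link layer of each port on the first RDMA device.
 * Compile with: gcc probe_link.c -libverbs */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devs[0]));
        ibv_free_device_list(devs);
        return 1;
    }

    struct ibv_device_attr dev_attr;
    if (ibv_query_device(ctx, &dev_attr) == 0) {
        for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; ++port) {
            struct ibv_port_attr pa;
            if (ibv_query_port(ctx, port, &pa))
                continue;
            /* link_layer distinguishes the InfiniBand and Ethernet modes */
            printf("%s port %u: %s, state %d\n",
                   ibv_get_device_name(devs[0]), port,
                   pa.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet"
                                                            : "InfiniBand",
                   pa.state);
        }
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}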

  • 1.12µs InfiniBand latency (ib_write_bw)
  • Dual 20Gb/s InfiniBand ports or 10Gb/s Ethernet ports
  • CPU offload of transport operations
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • TCP/UDP/IP stateless offload
  • Full support for Intel I/OAT
  • Supports both AMD and Intel platforms
  • InfiniBand:
     - Mellanox ConnectX IB DDR Chip
     - Dual 4X InfiniBand ports
     - 20Gb/s per port
     - RDMA, Send/Receive semantics
     - Hardware-based congestion control
     - Atomic operations
  • Interface:
     - SuperBlade Mezzanine Card
  • Connectivity:
     - Interoperable with InfiniBand switches through the SuperBlade InfiniBand Switch (SBM-IBS-001)
     - Interoperable with 10-Gigabit Ethernet switches through the SuperBlade 10G Ethernet Pass-Through Module (SBM-XEM-002)
  • Hardware-based I/O Virtualization:
     - Address translation and protection
     - Multiple queues per virtual machine
     - Native OS performance
     - Complementary to Intel and AMD
  • CPU Offloads:
     - TCP/UDP/IP stateless offload
     - Intelligent interrupt coalescence
     - Full support for Intel I/OAT
     - Compliant to Microsoft RSS and NetDMA
  • Storage Support:
     - T10-compliant data integrity field support
     - Fibre Channel over InfiniBand or Fibre Channel over Ethernet
  • Operating Systems/Distributions (InfiniBand):
     - Novell, RedHat, Fedora and other Linux distributions
     - Microsoft Windows Server
  • Operating Systems/Distributions (Ethernet):
     - RedHat Linux
  • Operating Conditions:
     - Operating temperature: 0 to 55°C
Compatible Servers
All Enterprise Blade Servers EXCEPT Tylersburg Blade Servers
Information in this document is subject to change without notice.
Other products and companies referred to herein are trademarks or registered trademarks of their respective companies or mark holders.