Add-on Card AOC-IBH-003
Super Micro Computer, Inc.
Add-on Card

Dual-Port, Low Latency InfiniBand Adapter Cards For SuperBlade

AOC-IBH-003 This InfiniBand mezzanine card for the SuperBlade delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. The AOC-IBH-003 simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments. In addition to this InfiniBand capability, the AOC-IBH-003 can alternatively be configured as a 10-Gigabit Ethernet NIC when used with the Supermicro SBM-XEM-002 10-Gigabit Pass-Through module.
 
Compliance
RoHS
RoHS 5/6
 
Download
  • User's Guide: See Appendix A of the SuperBlade Network Modules User's Manual for installation instructions [ Download ]
  • Firmware [ Download ]
  • Installation Instructions [ Download ]

This firmware allows the AOC-IBH-003 to operate as either a 10 Gbps Ethernet NIC or a 20 Gbps DDR InfiniBand NIC.
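Because the card can run in either InfiniBand or Ethernet mode, it can be useful to confirm which mode each port is actually in. Below is a minimal sketch that parses the text output of the `ibstat` diagnostic tool; it assumes the output includes a "Link layer" field per port, which is present in modern `infiniband-diags` but may be absent in older releases.

```python
import re

def link_layers(ibstat_output: str) -> dict:
    """Map each port number to its reported link layer
    (e.g. 'InfiniBand' or 'Ethernet') from ibstat text output."""
    layers = {}
    port = None
    for line in ibstat_output.splitlines():
        m = re.match(r"\s*Port (\d+):", line)
        if m:
            port = int(m.group(1))
            continue
        m = re.match(r"\s*Link layer:\s*(\S+)", line)
        if m and port is not None:
            layers[port] = m.group(1)
    return layers

# Example against captured ibstat output (illustrative sample, not from this card):
sample = """CA 'mlx4_0'
    Port 1:
        State: Active
        Link layer: InfiniBand
    Port 2:
        State: Active
        Link layer: Ethernet
"""
print(link_layers(sample))  # {1: 'InfiniBand', 2: 'Ethernet'}
```

In practice you would feed this the result of running `ibstat` (e.g. via `subprocess.run`) rather than a hard-coded sample.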

Highlights
  • 1.12us InfiniBand latency (ib_write_bw)
  • Dual 20Gb/s InfiniBand ports or 10Gb/s Ethernet ports
  • CPU offload of transport operations
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • TCP/UDP/IP stateless offload
  • Full support for Intel I/OAT
  • Supports both AMD and Intel platforms
 
Specification
  • InfiniBand:
     - Mellanox ConnectX IB DDR chip
     - Dual 4X InfiniBand ports
     - 20 Gb/s per port
     - RDMA, Send/Receive semantics
     - Hardware-based congestion control
     - Atomic operations
  • Interface:
     - SuperBlade mezzanine card
  • Connectivity:
     - Interoperable with InfiniBand switches through the SuperBlade InfiniBand Switch (SBM-IBS-001)
     - Interoperable with 10 Gigabit Ethernet switches through the SuperBlade 10G Ethernet Pass-Through Module (SBM-XEM-002)
  • Hardware-based I/O Virtualization:
     - Address translation and protection
     - Multiple queues per virtual machine
     - Native OS performance
     - Complementary to Intel and AMD IOMMU
  • CPU Offloads:
     - TCP/UDP/IP stateless offload
     - Intelligent interrupt coalescence
     - Full support for Intel I/OAT
     - Compliant with Microsoft RSS and NetDMA
  • Operating Systems/Distributions (InfiniBand):
     - Novell, RedHat, Fedora and others
     - Microsoft Windows Server
  • Operating Systems/Distributions (Ethernet):
     - RedHat Linux
  • Operating Conditions:
     - Operating temperature: 0 to 55°C
 
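The "20 Gb/s per port" figure above is the 4X DDR signaling rate; with InfiniBand's 8b/10b line encoding the usable data rate is lower. A quick back-of-the-envelope check, using standard InfiniBand DDR figures rather than anything specific to this card:

```python
LANES = 4                    # 4X link width
LANE_RATE_GBPS = 5.0         # DDR signaling rate per lane
ENCODING_EFFICIENCY = 8 / 10 # 8b/10b line encoding overhead

signaling_rate = LANES * LANE_RATE_GBPS           # 20.0 Gb/s, as quoted on the datasheet
data_rate = signaling_rate * ENCODING_EFFICIENCY  # 16.0 Gb/s of usable payload bandwidth
print(signaling_rate, data_rate)  # 20.0 16.0
```

This is why DDR 4X links are marketed at 20 Gb/s but benchmark closer to 16 Gb/s of application-visible throughput.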
Compatible Servers
All Enterprise Blade Servers EXCEPT Tylersburg Blade Servers
Information in this document is subject to change without notice.
Other products and companies referred to herein are trademarks or registered trademarks of their respective companies or mark holders.