Add-on Card AOC-UINF-M2
Add-on Card - Discontinued SKU (EOL). Please contact your sales representative for alternative options.


Dual-port, low-latency InfiniBand UIO card with PCI-E Gen 2 and Virtual Protocol Interconnect™ (VPI)



The AOC-UINF-M2 InfiniBand card with Virtual Protocol Interconnect™ (VPI) delivers low latency and high bandwidth for performance-driven server and storage clustering applications in enterprise data centers, high-performance computing, and embedded environments. Clustered databases, parallelized applications, transactional services and high-performance embedded I/O applications achieve significant performance improvements, reducing completion time and lowering cost per operation. The AOC-UINF-M2 simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments.
This product is only available through Supermicro.
 
Key Features
  • Virtual Protocol Interconnect™ (VPI)
  • 1.2µs MPI ping latency
  • Dual 20Gb/s IB or dual 10GbE ports
  • PCI Express 2.0 (up to 5GT/s)
  • CPU offload of transport operations
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • TCP/UDP/IP stateless offload
  • Full support for Intel I/OAT
  • Support for both AMD and Intel® platforms
  • Fibre Channel Encapsulation (FCoIB or FCoE)
 
Compliance
RoHS

  • RoHS Compliant 6/6, Pb Free
 
Downloads
User's Guide  [ Download ] (605 KB)
Datasheet [ Download ]
Driver  Choose the latest CDR-NIC folder, then open the Mellanox folder for the driver [ Download ]
Driver CD  Click the latest CDR-NIC ISO file to download [ Download ]
Firmware [ Download ] (version 2.6)
 
Compatibility Matrices
Cables & Transceivers

Networking Cables & Transceivers
WIO/UIO Servers and Motherboards

AOC Compatibility Matrix


Specification
  • InfiniBand:
     - Mellanox ConnectX IB DDR MT25408A0-FCC-GI
     - Dual 4X InfiniBand ports
     - 20Gb/s per port
     - RDMA, Send/Receive semantics
     - Hardware-based congestion control
     - Atomic operations
  • Interface:
     - PCI Express 2.0 x8
     - UIO form factor
  • Connectivity:
     - Interoperable with InfiniBand switches
     - 10m+ (20Gb/s) of copper cable
     - External optical media adapter and active cable support
  • Hardware-based I/O Virtualization:
     - Single Root IOV
     - Address translation and protection
     - Multiple queues per virtual machine
     - VMware NetQueue support
     - Complementary to Intel® and AMD IOMMU
     - PCI-SIG IOV compliant
  • CPU Offloads:
     - TCP/UDP/IP stateless offload
     - Intelligent interrupt coalescing
     - Full support for Intel® I/OAT
     - Microsoft® RSS and NetDMA compliant
  • Storage Support:
     - T10-compliant Data Integrity Field support
     - Fibre Channel over InfiniBand or Ethernet
  • Operating Systems/Distributions:
     - Novell SLES, Red Hat, Fedora and others
     - Microsoft® Windows Server 2003/2008/CCS 2003
     - OpenFabrics Enterprise Distribution (OFED)
     - OpenFabrics Windows Distribution (WinOF)
     - VMware ESX Server 3.5, Citrix XenServer 4.1
  • Operating Conditions:
     - Operating temperature: 0 to 55°C
     - Requires 3.3V and 12V power supplies
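As a sanity check on the figures above, the link rates follow from the lane counts: a 4X DDR InfiniBand port runs 4 lanes at 5 Gb/s signaling (20 Gb/s raw, 16 Gb/s of data after 8b/10b encoding), and the PCI Express 2.0 x8 host interface runs 8 lanes at 5 GT/s with the same 8b/10b overhead. A quick back-of-the-envelope sketch (the function names are ours, for illustration only):

```python
# Back-of-the-envelope link-rate check for a dual-port 4X DDR IB card
# on a PCIe 2.0 x8 slot. Both IB DDR and PCIe 2.0 use 8b/10b encoding,
# which carries 8 data bits in every 10 transmitted bits.

ENCODING_EFFICIENCY = 8 / 10  # 8b/10b

def ib_port_rate_gbps(lanes=4, lane_rate_gbps=5.0):
    """Raw signaling rate of one 4X DDR InfiniBand port."""
    return lanes * lane_rate_gbps

def pcie_data_rate_gbps(lanes=8, lane_rate_gt=5.0):
    """Usable data rate of a PCIe 2.0 x8 slot after 8b/10b encoding."""
    return lanes * lane_rate_gt * ENCODING_EFFICIENCY

ib_raw = ib_port_rate_gbps()            # 20 Gb/s raw per port
ib_data = ib_raw * ENCODING_EFFICIENCY  # 16 Gb/s usable per port
pcie_data = pcie_data_rate_gbps()       # 32 Gb/s usable on the host link

print(f"IB 4X DDR per port: {ib_raw:.0f} Gb/s raw, {ib_data:.0f} Gb/s data")
print(f"PCIe 2.0 x8 host link: {pcie_data:.0f} Gb/s data")
```

Two ports at 16 Gb/s of data each total roughly the 32 Gb/s a Gen 2 x8 link can carry, which is why the card specifies PCI Express 2.0 rather than Gen 1.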
 
Compatible Cable
InfiniBand CX4 Copper 20Gb/s Cable (Not Included)
Parts List
  Part Number       Qty  Description
  AOC-UINF-M2       1    UIO 2-port InfiniBand DDR 20Gb/s Controller
  MCP-240-00057-0N  1    Low-Profile End Bracket with Screws
  MCP-240-00058-0N  1    Full-height End Bracket with Screws

Optional Parts List
  Part Number  Qty  Description
  CBL-0474L    -    CX4 Cable, 39.37" (100cm), CX4 to CX4 for blade switches
  CBL-0475L    -    CX4 Cable, 472.44" (1200cm), CX4 to CX4 for blade switches
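For reference, the dual-unit cable lengths above are a direct centimeters-to-inches conversion (1 in = 2.54 cm); a quick check:

```python
# Verify the inch figures quoted for the metric CX4 cable lengths.
CM_PER_INCH = 2.54

def cm_to_inches(cm):
    """Convert centimeters to inches, rounded to two decimals."""
    return round(cm / CM_PER_INCH, 2)

print(cm_to_inches(100))   # CBL-0474L: 100 cm  -> 39.37 in
print(cm_to_inches(1200))  # CBL-0475L: 1200 cm -> 472.44 in
```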
Information in this document is subject to change without notice.

Other products and companies referred to herein are trademarks or registered trademarks of their respective companies or mark holders.