
Next Leap of AI Infrastructure is Here

Today’s advanced AI models are rapidly changing our lives, and accelerated compute infrastructure is evolving at unprecedented speed across all market segments. Flexible, robust, and massively scalable infrastructure with next-generation GPUs is enabling a new chapter of AI.

In close partnership with NVIDIA, Supermicro delivers the broadest selection of NVIDIA-Certified Systems, providing leading performance and efficiency for everything from small enterprises to massive, unified AI training clusters built on the new NVIDIA H100 Tensor Core GPUs.

Together, we achieve up to nine times the training performance of the previous generation for some of the most challenging AI models, cutting a week of training time to just 20 hours. Supermicro systems with the new H100 PCI-E and HGX H100 GPUs, as well as the newly announced L40 GPU, bring PCI-E Gen 5 connectivity, fourth-generation NVLink and NVLink Network for scale-out, and new CNX cards that enable GPUDirect RDMA and GPUDirect Storage with NVIDIA Magnum IO and NVIDIA AI Enterprise software.
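As a quick sanity check on the headline numbers above, here is a minimal sketch of the arithmetic: a week of training is 168 hours, and a 9x speedup brings that to under 19 hours, consistent with the "just 20 hours" figure once rounded up.

```python
# Sanity check: a 9x training speedup turns one week into roughly 20 hours.
week_hours = 7 * 24            # 168 hours in a week of training
speedup = 9                    # up to 9x vs. the previous generation (per the claim above)
new_time = week_hours / speedup
print(round(new_time, 1))      # ~18.7 hours, i.e. "just 20 hours" rounded up
```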

GPU System Portfolio
New NVIDIA H100 Systems
8U GPU System

Next-Gen 8U Universal GPU System (Coming Soon)

Suited for Today’s Largest-Scale AI Training Models and HPC, Featuring Superior Thermal Capacity with Reduced Acoustics, More I/O, and Vast Storage

  • GPU: NVIDIA HGX H100 8-GPU (Codenamed Hopper)
  • GPU Featureset: With 80 billion transistors, the H100 is the world’s most advanced chip, delivering up to 9x faster performance for AI training
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: Up to 24 Hot-Swap NVMe U.2
4U/5U Universal GPU System

Next-Gen 4U/5U Universal GPU System (Coming Soon)

Optimized for AI inference workloads and use cases. Modular by design for ultimate flexibility.

  • GPU: NVIDIA HGX H100 4-GPU
  • GPU Featureset: The HGX H100 accelerates AI inference with up to 30x higher performance than the previous generation
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: Up to 8 Hot-Swap NVMe U.2 connect to PCI-E Switch or 10 Hot-Swap 2.5” SATA/SAS
4U 10-GPU System

Next-Gen 4U 10-GPU PCI-E Gen 5 System (Coming Soon)

Flexible Design for AI and Graphically-Intensive Workloads, Supports Up to 10 NVIDIA GPUs.

  • GPU: Up to 10 double-width PCI-E GPUs per node
  • GPU Featureset: The NVIDIA L40 PCI-E GPUs in this system are ideal for driving media and graphic workloads
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: 24 Hot-Swap Bays
Tower/4U 4GPU System

Next-Gen 4U 4GPU System (Coming Soon)

Optimized for 3D Metaverse Collaboration, Data Scientists, and Content Creators. Available in both Rackmount and Workstation Form Factors.

  • GPU: NVIDIA PCI-E H100 4-GPU
  • GPU Featureset: NVIDIA H100 GPUs are the world’s first accelerator with confidential computing capability, increasing confidence in secure collaboration
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: 8 Hot-Swap 3.5” drive bays, up to 8 NVMe drives, 2x M.2 (SATA or NVMe)
4U 8-GPU NVIDIA OVX Reference Design System

2nd Gen NVIDIA OVX Reference Design Systems

New Purpose-Built Next Generation System Optimized to Power Immersive, Photorealistic 3D Models, Simulations, and Digital Twins

  • GPU: 8x NVIDIA L40
  • GPU Featureset: NVIDIA Ada Lovelace architecture with third-generation RT Cores for ray tracing and fourth-generation Tensor Cores
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Network: 3x ConnectX®-7 SmartNICs for 100G/200G/400G networking
Supermicro Leads the Market with High-Performance A100 GPU Servers

Supermicro builds the most robust high-performance servers based on NVIDIA Ampere GPUs that have been used by leading enterprises leveraging large scale computer vision and natural language processing models. Supermicro supports a range of customer needs with optimized systems for the new HGX A100 8-GPU and HGX A100 4-GPU platforms. With the newest version of NVIDIA® NVLink and NVIDIA NVSwitch technologies, these servers can deliver up to 5 PetaFLOPS of AI performance in a single 4U system. Supermicro can also support NVIDIA’s Ampere GPU family in a range of PCI-E systems, with up to 10 GPUs in a 4U server.
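The 5 PetaFLOPS figure above can be sketched from published per-GPU specs: assuming each A100 delivers 624 TFLOPS of peak FP16/BF16 Tensor Core throughput with structured sparsity enabled (per NVIDIA's A100 datasheet), eight GPUs in one HGX A100 8-GPU system come to just under 5 PFLOPS.

```python
# Rough arithmetic behind "up to 5 PetaFLOPS of AI performance in a single 4U system",
# assuming A100 peak FP16/BF16 Tensor Core throughput with structured sparsity
# (624 TFLOPS per GPU, from NVIDIA's published specs).
tflops_per_gpu = 624           # A100 FP16 Tensor Core peak, sparsity enabled
gpus = 8                       # HGX A100 8-GPU baseboard
total_pflops = gpus * tflops_per_gpu / 1000
print(total_pflops)            # 4.992, i.e. ~5 PFLOPS
```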

AS -2124GQ-NART

2U NVIDIA HGX A100 4-GPU
4x A100 40G/80G SXM4 GPUs
NVIDIA NVLink
Dual 3rd Gen AMD EPYC™ Processors
32 DIMM slots
2x 2200W Redundant Platinum Level PSU

SYS-420GP-TNAR

4U NVIDIA HGX A100 8-GPU
8x A100 40G/80GB SXM4 GPUs
NVIDIA NVLink and NVSwitch
Dual 3rd Gen Intel® Xeon® Scalable Processors
32 DIMM Slots
4x 2200W Redundant Platinum Level PSU

AS -4124GO-NART

4U NVIDIA HGX A100 8-GPU
8x A100 40G/80GB SXM4 GPUs
NVIDIA NVLink and NVSwitch
Dual 3rd Gen AMD EPYC™ Processors
32 DIMM Slots
4x 2200W Redundant Platinum Level PSU

AS -2114GT-DNR

2U 2-Node GPU Server
Up to 3 NVIDIA Ampere GPUs per Node
PCI-E Gen 4
Single 3rd Gen AMD EPYC™ Processor per Node
8 DIMM Slots per Node
2x 2600W Redundant Titanium Level PSU

SYS-420GP-TNR

4U 10 GPU (PCI-E)
Up to 10 NVIDIA Ampere GPUs
PCI-E Gen 4
Dual 3rd Gen Intel® Xeon® Scalable Processors
32 DIMM slots
4x 2000W Redundant Titanium Level PSU

AS -4124GS-TNR

4U 8 GPU (PCI-E)
Up to 8 NVIDIA Ampere GPUs
PCI-E Gen 4
Dual 3rd Gen AMD EPYC™ Processors
32 DIMM slots
4x 2000W Redundant Titanium Level PSU

NVIDIA-Certified Systems by Supermicro

With the continued rollout of advanced applications and workloads, customers require manageable, secure, and scalable servers for their data centers. Supermicro's compelling lineup of high-performance servers supporting NVIDIA GPUs and DPUs includes a growing number of NVIDIA-Certified Systems, with many more currently undergoing the certification process. Each server/GPU configuration earns its own certification.

Supermicro Server | GPU Type
SYS-420GP-TNAR 4U HGX A100 8-GPU Server
  • NVIDIA HGX A100 8-GPU
AS -4124GO-NART 4U HGX A100 8-GPU Server
  • NVIDIA HGX A100 8-GPU
AS -2124GQ-NART 2U HGX A100 4-GPU Server
  • NVIDIA HGX A100 4-GPU
SYS-420GP-TNR 4U PCI-E 8-GPU Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
  • NVIDIA A10
  • NVIDIA T4
AS -4124GS-TNR 4U PCI-E 8-GPU Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
SYS-210GP-DNR 2U 2-Node PCI-E GPU Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
AS -2114GT-DNR 2U 2-Node PCI-E GPU Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
  • NVIDIA A10
SYS-220GP-TNR 2U PCI-E 6-GPU Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
  • NVIDIA A10
  • NVIDIA T4
SYS-120GQ-TNRT 1U PCI-E 4-GPU Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
  • NVIDIA A10
  • NVIDIA T4
SYS-220U/620U 2U Ultra Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
  • NVIDIA A10
  • NVIDIA T4
SYS-120U/610U 1U Ultra Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
  • NVIDIA A10
  • NVIDIA T4
SYS-740GP-TNRT 4-GPU Workstation
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A30
  • NVIDIA A10
  • NVIDIA T4
SBA-4119SG GPU Blade Server
  • NVIDIA A100 PCI-E
  • NVIDIA A40
  • NVIDIA A10
  • NVIDIA T4

Test Drive Supermicro’s NVIDIA HGX A100 System with Our Partners

Please register for the program by clicking on one of our partner logos.

(*Terms and Conditions apply. Please check the details with your preferred partner.)
