
Next Leap of AI Infrastructure is Here

Today’s advanced AI models are changing our lives rapidly. Accelerated compute infrastructure is evolving at unprecedented speed in all market segments. Flexible, robust, and massively scalable infrastructure with next-generation GPUs is enabling a new chapter of AI.

In close partnership with NVIDIA, Supermicro delivers one of the industry's broadest selections of NVIDIA-Certified Systems, providing high performance and efficiency at every scale, from small enterprises to massive, unified AI training clusters built on the new NVIDIA H100 and H200 Tensor Core GPUs.

Together, we achieve up to nine times the training performance of the previous generation for some of the most challenging AI models, cutting a week of training time to just 20 hours. Supermicro systems with NVIDIA H100 PCIe and HGX H100 GPUs, as well as the newly announced HGX H200 GPUs, bring PCIe 5.0 connectivity, fourth-generation NVLink and NVLink Network for scale-out, and the new NVIDIA ConnectX®-7 and BlueField®-3 cards, enabling GPUDirect RDMA and GPUDirect Storage with NVIDIA Magnum IO and NVIDIA AI Enterprise software.
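
To put that claim in perspective: a nine-fold speedup turns a 7-day (168-hour) training run into roughly 168 ÷ 9 ≈ 18.7 hours, consistent with the 20-hour figure above.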

GPU System Portfolio

8U Universal GPU System

Suited for Today’s Largest-Scale AI Training Models and HPC, Featuring Superior Thermal Capacity with Reduced Acoustics, More I/O, and Vast Storage

  • GPU: NVIDIA HGX H100 8-GPU and HGX H200 8-GPU
  • GPU Advantage: With 80 billion transistors, the H100 and H200 are among the world’s most advanced GPUs, delivering up to 5X faster LLM training than the A100 and up to 110X faster time-to-results for HPC applications
  • GPU-GPU Interconnect: 4th Gen NVLink® at 900GB/s
  • CPU: Dual Processors, Intel or AMD
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: Up to 24 Hot-Swap NVMe U.2

4U Universal GPU System

Optimized for HPC and Advanced Enterprise AI Workloads and Use Cases. Modular by Design for Ultimate Flexibility

  • GPU: NVIDIA HGX H100 4-GPU or 8-GPU (liquid-cooled), and HGX H200 4-GPU or 8-GPU (liquid-cooled)
  • GPU Advantage: HGX H200 doubles LLM inference performance and speeds up LLM fine-tuning by 5.5X
  • GPU-GPU Interconnect: 4th Gen NVLink® at 900GB/s
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: Up to 8 Hot-Swap NVMe U.2 connected to a PCIe switch, or 10 Hot-Swap 2.5" SATA/SAS

4U/5U 10-GPU PCIe Gen 5 System

Flexible Design for AI and Graphics-Intensive Workloads, Supporting Up to 10 NVIDIA GPUs

  • GPU: Up to 10 double-width PCIe GPUs
  • GPU Advantage: NVIDIA L40S PCIe GPUs make this system an ideal generative AI platform for high-quality images and immersive visual content
  • GPU-GPU Interconnect: Optional NVLink Bridge at 600GB/s
  • CPU: Dual Processors, Intel or AMD
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: 24 Hot-Swap Bays
  • NVIDIA AI Enterprise: A 5-year subscription to NVIDIA AI Enterprise software is included with each NVIDIA H100 PCIe GPU purchase (registration required)

4U 4GPU System

Optimized for 3D Metaverse Collaboration, Data Scientists, and Content Creators. Available in both Rackmount and Workstation Form Factors

  • GPU: Up to 4 double-width PCIe GPUs
  • GPU Advantage: NVIDIA L40S GPUs deliver up to 2x the real-time ray-tracing performance of the previous generation, making them ideal for creating beautifully detailed, photorealistic models and scenes
  • GPU-GPU Interconnect: Optional NVLink Bridge at 600GB/s
  • CPU: Dual Processors, Intel or AMD
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: 8 Hot-Swap 3.5" drive bays, up to 8 NVMe drives, 2x M.2 (SATA or NVMe)
  • NVIDIA AI Enterprise: A 5-year subscription to NVIDIA AI Enterprise software is included with each NVIDIA H100 PCIe GPU purchase (registration required)

Supermicro’s NVIDIA MGX™ Systems


NVIDIA MGX Systems

Infinite Possibilities in a Modular Building Block Platform Supporting Today’s and Future GPUs, CPUs, and DPUs

  • GPU: Up to 4 NVIDIA double-width PCIe GPUs, including H100 PCIe, H100 NVL PCIe, L40S, and more
  • NVIDIA MGX Reference Design: Enables a wide array of platforms supporting both Arm®- and x86-based servers, compatible with current and future generations of GPUs, CPUs, and DPUs
  • CPU: NVIDIA GH200 Grace Hopper Superchip, NVIDIA Grace CPU Superchip, or 4th Gen Intel® Xeon® Scalable processor
  • Memory: Up to 480GB integrated LPDDR5X DRAM (with Grace Hopper or Grace CPU Superchip) or up to 2TB 4800MT/s ECC DDR5 DRAM (with Intel CPU)
  • Drives: Up to 8 Hot-Swap E1.S NVMe
  • Networking: Supports NVIDIA BlueField®-3 DPU or NVIDIA ConnectX®-7

2nd Gen NVIDIA OVX Reference Design Systems

New Purpose-Built, Next-Generation System Optimized to Power Immersive, Photorealistic 3D Models, Simulations, and Digital Twins

  • GPU: 8x NVIDIA L40S or L40
  • GPU Feature Set: NVIDIA Ada Lovelace architecture with the latest 3rd-generation ray tracing (RT) Cores and 4th-generation Tensor Cores
  • CPU: Dual Processor
  • Memory: ECC DDR5 up to 4800MT/s
  • Network: 3x ConnectX®-7 SmartNICs for 100G/200G networking

Supermicro Leads the Market with High-Performance A100 GPU Servers

Supermicro builds robust, high-performance servers based on NVIDIA Ampere architecture GPUs, used by leading enterprises running large-scale computer vision and natural language processing models. Supermicro supports a range of customer needs with systems optimized for the HGX A100 8-GPU and HGX A100 4-GPU platforms. With the latest NVIDIA® NVLink and NVIDIA NVSwitch technologies, these servers deliver up to 5 PetaFLOPS of AI performance in a single 4U system. Supermicro also supports the NVIDIA Ampere GPU family in a range of PCIe systems, with up to 10 GPUs in a 4U server.
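
As a rough check of that figure, assuming the commonly cited A100 peak of 624 TFLOPS FP16 Tensor Core throughput with structured sparsity: 8 GPUs × 624 TFLOPS ≈ 5 PFLOPS for an HGX A100 8-GPU system.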

AS -2124GQ-NART

2U NVIDIA HGX A100 4-GPU
4x A100 40GB/80GB SXM4 GPUs
NVIDIA NVLink
Dual 3rd Gen AMD EPYC™ Processors
32 DIMM slots
2x 2200W Redundant Platinum Level PSU

SYS-420GP-TNAR

4U NVIDIA HGX A100 8-GPU
8x A100 40GB/80GB SXM4 GPUs
NVIDIA NVLink and NVSwitch
Dual 3rd Gen Intel® Xeon® Scalable Processors
32 DIMM Slots
4x 2200W Redundant Platinum Level PSU

AS -4124GO-NART

4U NVIDIA HGX A100 8-GPU
8x A100 40GB/80GB SXM4 GPUs
NVIDIA NVLink and NVSwitch
Dual 3rd Gen AMD EPYC™ Processors
32 DIMM Slots
4x 2200W Redundant Platinum Level PSU

AS -2114GT-DNR

2U 2-Node GPU Server
Up to 3 NVIDIA Ampere GPUs per Node
PCIe Gen 4
Single 3rd Gen AMD EPYC™ Processor per Node
8 DIMM Slots per Node
2x 2600W Redundant Titanium Level PSU

SYS-420GP-TNR

4U 10 GPU (PCIe)
Up to 10 NVIDIA Ampere GPUs
PCIe Gen 4
Dual 3rd Gen Intel® Xeon® Scalable Processors
32 DIMM slots
4x 2000W Redundant Titanium Level PSU

AS -4124GS-TNR

4U 8 GPU (PCIe)
Up to 8 NVIDIA Ampere GPUs
PCIe Gen 4
Dual 3rd Gen AMD EPYC™ Processors
32 DIMM slots
4x 2000W Redundant Titanium Level PSU

NVIDIA-Certified Systems by Supermicro

With the continued rollout of advanced applications and workloads, customers require manageable, secure, and scalable servers for their data centers. Supermicro's compelling lineup of high-performance servers supporting NVIDIA GPUs and DPUs includes a growing number of NVIDIA-Certified Systems, with many more currently undergoing the certification process. Each server/GPU configuration earns its own certification.

Available Models

  • 8U Universal GPU Systems: SYS-821GE-TNHR, AS -8125GS-TNMR2, SYS-821GV-TNR, AS -8125GS-TNHR
  • 4U/5U Universal GPU Systems: SYS-421GU-TNXR, SYS-420GU-TNXR (in 4U or 5U), AS -4124GQ-TNMI (in 4U or 5U)
  • Liquid-Cooled Universal GPU Systems: SYS-221GE-TNHT-LCC, SYS-421GE-TNHR2-LCC
  • 4U GPU with NVLink: SYS-4029GP-TVRT
  • 4U/5U GPU Lines with PCIe 5.0: SYS-421GE-TNRT, SYS-421GE-TNRT3, SYS-521GE-TNRT, AS -4125GS-TNRT, AS -4125GS-TNRT1, AS -4125GS-TNRT2
  • 4U GPU Lines: SYS-4029GP-TRT, SYS-4029GP-TRT2, SYS-4029GP-TRT3, SYS-6049GP-TRT
  • NVIDIA MGX™ Systems: SYS-221GE-NR, ARS-121L-DNR, ARS-221GL-NHIR, ARS-221GL-NR, ARS-111GL-NHR, ARS-111GL-NHR-LCC, ARS-111GL-DNHR-LCC
  • 4U GPU with NVLink and PCIe 4.0: SYS-420GP-TNAR+, SYS-420GP-TNAR, AS -4124GO-NART+, AS -4124GO-NART
  • AMD APU Systems: AS -2145GH-TNMR, AS -4145GH-TNMR
  • 4U GPU Lines with PCIe 4.0: SYS-420GP-TNR, AS -4124GS-TNR+, AS -4124GS-TNR
  • 2U 2-Node Multi-GPU with PCIe 4.0: SYS-210GP-DNR, AS -2114GT-DPNR, AS -2114GT-DNR
  • 2U GPU Lines: SYS-220GP-TNR, SYS-220GQ-TNAR+, AS -2124GQ-NART+, AS -2124GQ-NART, SYS-2029GP-TR
  • 1U GPU Lines: SYS-120GQ-TNRT, SYS-1019GP-TT, SYS-1029GP-TR, SYS-5019GP-TT, SYS-1029GQ-TNRT, SYS-1029GQ-TRT
  • GPU Workstations: SYS-741GE-TNRT, SYS-751GE-TNRT-NV1, SYS-740GP-TNRBT, SYS-740GP-TNRT, SYS-7049GP-TRT
  • 1U GPU with NVLink: SYS-1029GQ-TXRT, SYS-1029GQ-TVRT
  • 10U 16-GPU with NVLink: SYS-9029GP-TNVRT