GPU Server Systems
Unrivaled GPU Systems: Deep Learning-Optimized Servers for the Modern Data Center
Universal GPU Systems
Modular Building Block Design, Future-Proof, Open-Standards-Based Platform in 4U, 5U, or 8U for Large-Scale AI Training and HPC Applications
- GPU: NVIDIA HGX H100/A100 4-GPU/8-GPU, AMD Instinct MI250 OAM Accelerator
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB
- Drives: Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives
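As a rough illustration of how the multi-GPU topology of such a node can be inspected from software, the sketch below enumerates the visible accelerators and prints a peer-to-peer access matrix. It assumes a PyTorch build matching the installed hardware (a CUDA build for NVIDIA HGX parts, a ROCm build for AMD Instinct OAM parts); on these baseboards the GPU-to-GPU links normally surface as full peer access.

```python
# Minimal sketch: enumerate accelerators and show the peer-access matrix.
# Assumes a PyTorch build that sees the node's GPUs (CUDA or ROCm); on ROCm
# builds the devices are also exposed through the torch.cuda namespace.
import torch

def describe_node():
    n = torch.cuda.device_count()
    print(f"Visible accelerators: {n}")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"  [{i}] {props.name}, {props.total_memory / 2**30:.0f} GiB")

    # Peer-to-peer access matrix: "P" means device i can directly access
    # device j's memory over the GPU-GPU fabric (NVLink/NVSwitch or xGMI).
    for i in range(n):
        row = ["."] * n
        for j in range(n):
            if i != j and torch.cuda.can_device_access_peer(i, j):
                row[j] = "P"
        print(" ".join(row))

if __name__ == "__main__":
    describe_node()
```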
4U/5U GPU Lines with PCIe 5.0
Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications
- GPU: Up to 10 NVIDIA H100 PCIe GPUs, or up to 10 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
- Drives: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe
NVIDIA MGX™ Systems
Modular Building Block Platform Supporting Today's and Future GPUs, CPUs, and DPUs
- GPU: Up to 4 NVIDIA PCIe GPUs including H100, H100 NVL, and L40S
- CPU: Intel® Xeon® or NVIDIA Grace Superchip
- Memory: Up to 16 DIMMs, 4TB DRAM or 960GB on-chip memory
- Drives: Up to 8 E1.S + 2 M.2 drives
4U GPU Lines with PCIe 4.0
Flexible Design for AI and Graphically Intensive Workloads, Supporting Up to 10 GPUs
- GPU: NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
- Drives: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe
2U 2-Node Multi-GPU with PCIe 4.0
Dense and Resource-saving Multi-GPU Architecture for Cloud-Scale Data Center Applications
- GPU: Up to 3 double-width PCIe GPUs per node
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 8 DIMMs, 2TB per node
- Drives: Up to 2 front Hot-swap 2.5" U.2 per node
2U GPU Lines
High Performance and Balanced Solutions for Accelerated Computing Applications
- GPU: NVIDIA HGX A100 4-GPU with NVLink, or up to 6 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB
- Drives: Up to 10 Hot-swap 2.5" SATA/SAS/NVMe
1U GPU Lines
Highest Density GPU Platforms for Deployments from the Data Center to the Edge
- GPU: Up to 4 double-width PCIe GPUs
- CPU: Intel® Xeon®
- Memory: Up to 16 DIMMs, 4TB
- Drives: Up to 6 x 2.5" Hot-swap SAS/SATA, or 2 x 2.5" Hot-swap NVMe SATA/SAS/NVMe
GPU Workstation
Flexible Solution for AI/Deep Learning Practitioners and High-end Graphics Professionals
- GPU: Up to 4 double-width PCIe GPUs
- CPU: Intel® Xeon®
- Memory: Up to 16 DIMMs, 6TB
- Drives: Up to 8 Hot-swap 2.5" SATA/NVMe
Options For Accessing PCIe GPUs in a High Performance Server Architecture
Understanding How the Configuration Options for Supermicro GPU Servers Deliver Maximum Performance for Your Workloads
Petrobras Acquires Supermicro Servers Integrated by Atos To Reduce Costs and Increase Exploration Accuracy
Supermicro Systems Power Petrobras to the #33 Position in the Top500, November 2022 Rankings
Supermicro Servers Increase GPU Offerings For SEEWEB, Giving Demanding Customers Faster Results For AI and HPC Workloads
Seeweb Selects Supermicro GPU Servers to Meet Customer Demands of HPC and AI Workloads
H12 Universal GPU Server
Open Standards-Based Server Design for Architectural Flexibility
3000W AMD Epyc Server Tear-Down, ft. Wendell of Level1Techs
We are looking at a server optimized for AI and machine learning. Supermicro has done a lot of work to pack as much as possible into the 2114GT-DNR (2U2N), a density-optimized server. It is a striking piece of construction: two complete systems share this 2U chassis, and the two redundant power supplies are rated at 2,600W each; we'll see why so much power is needed. The chassis hosts six AMD Instinct MI210 GPUs and dual AMD EPYC processors. See the level of engineering Supermicro put into the design of this server.
H12 2U 2-Node Multi-GPU
Multi-Node Design for Compute and GPU-Acceleration Density
NEC Advances AI Research With Advanced GPU Systems From Supermicro
NEC uses Supermicro GPU servers with NVIDIA® A100s for Building a Supercomputer for AI Research (In Japanese)
Hybrid 2U2N GPU Workstation-Server Platform Supermicro SYS-210GP-DNR Hands-on
Today we are finishing our latest series by taking a look at the Supermicro SYS-210GP-DNR, a 2U 2-node, 6-GPU system that Patrick recently got some hands-on time with at Supermicro headquarters.
Supermicro SYS-220GQ-TNAR+, an NVIDIA Redstone 2U Server
Today we are looking at the Supermicro SYS-220GQ-TNAR+ that Patrick recently got some hands-on time with at Supermicro headquarters.

Unveiling the GPU System Design Leap - Supermicro SC21 TECHTalk with IDC
Presented by Josh Grossman, Principal Product Manager, Supermicro and Peter Rutten, Research Director, Infrastructure Systems, IDC

Supermicro TECHTalk: High-Density AI Training/Deep Learning Server
Our newest data center system packs the highest density of advanced NVIDIA Ampere GPUs with fast GPU-GPU interconnect and 3rd Gen Intel® Xeon® Scalable processors. In this TECHTalk, we will show how we enable unparalleled AI performance in a 4U rack height package.
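As a hedged sketch of the kind of workload such a system is built for, the following single-node, data-parallel training skeleton spreads one process per GPU and lets NCCL carry the gradient all-reduce over the GPU-GPU interconnect. The model, batch, and script name are placeholders, not a specific Supermicro or NVIDIA reference workload.

```python
# Minimal DistributedDataParallel sketch for a single multi-GPU node.
# Launch (illustrative):  torchrun --nproc_per_node=<number of GPUs> ddp_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")             # NCCL uses the NVLink/NVSwitch fabric
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for step in range(100):                     # stand-in training loop
        x = torch.randn(64, 1024, device=f"cuda:{local_rank}")
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()                         # gradients all-reduced across GPUs
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```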
Mission Critical Server Solutions
Maximizing AI Development & Delivery with Virtualized NVIDIA A100 GPUs
Supermicro systems with NVIDIA HGX A100 offer a flexible set of solutions that support NVIDIA Virtual Compute Server (vCS) and NVIDIA A100 GPUs, enabling AI development and delivery pipelines to run both small and large AI models.
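To illustrate what a guest VM sees once a vGPU profile has been assigned, a minimal check using the NVML Python bindings (pynvml / nvidia-ml-py) might look like the sketch below. The reported device count and framebuffer size depend entirely on the vCS profile attached to the VM, not on the physical HGX A100 baseboard.

```python
# Minimal sketch of inspecting the GPUs visible inside a guest VM.
# Assumes the NVIDIA guest driver and the pynvml / nvidia-ml-py bindings are installed.
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
print(f"GPUs visible in this VM: {count}")
for i in range(count):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):          # older bindings return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"  [{i}] {name}: {mem.total / 2**30:.0f} GiB framebuffer")
pynvml.nvmlShutdown()
```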

Supermicro SuperMinute: 2U 2-Node Server
Supermicro's breakthrough multi-node GPU/CPU platform is unlike any existing product in the market. With our advanced Building Block Solutions® design and resource-saving architecture, this system leverages the most advanced CPU and GPU engines along with advanced high-density storage in a space-saving form factor, delivering unrivaled energy-efficiency and flexibility.

SuperMinute: 4U System with HGX A100 8-GPU
For the most demanding AI workloads, Supermicro builds the highest-performance, fastest-to-market servers based on NVIDIA A100™ Tensor Core GPUs. With the newest version of NVIDIA® NVLink™ and NVIDIA NVSwitch™ technologies, these servers can deliver up to 5 PetaFLOPS of AI performance in a single 4U system.
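A rough way to observe the GPU-to-GPU path on such a system is to time a direct device-to-device copy, as in the PyTorch sketch below; on an HGX A100 8-GPU baseboard the transfer rides NVLink/NVSwitch rather than PCIe. The payload size and any resulting figures are illustrative only.

```python
# Rough sketch: time a ~1 GiB direct copy from GPU 0 to GPU 1.
import time
import torch

assert torch.cuda.device_count() >= 2, "needs at least two GPUs"
src = torch.randn(1024, 1024, 256, device="cuda:0")   # ~1 GiB of float32
dst = torch.empty_like(src, device="cuda:1")

dst.copy_(src)                        # warm-up copy
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")

t0 = time.perf_counter()
dst.copy_(src)                        # device-to-device copy over the GPU fabric
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")
elapsed = time.perf_counter() - t0

gib = src.numel() * src.element_size() / 2**30
print(f"D2D: {gib:.1f} GiB in {elapsed*1e3:.1f} ms -> {gib/elapsed:.1f} GiB/s")
```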

SuperMinute: 2U System with HGX A100 4-GPU
The new AS -2124GQ-NART server features the power of NVIDIA A100 Tensor Core GPUs and the HGX A100 4-GPU baseboard. The system supports PCIe 4.0 for fast CPU-GPU connections and high-speed networking expansion cards.
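A simple way to see the benefit of the PCIe 4.0 CPU-GPU path is to time a pinned host-to-device transfer, as in the hedged PyTorch sketch below. The payload size and the resulting numbers are illustrative and depend on slot wiring, NUMA placement, and driver version.

```python
# Rough sketch: time a ~1 GiB pinned host-to-device transfer over PCIe.
import time
import torch

host = torch.randn(1024, 1024, 256).pin_memory()    # ~1 GiB of page-locked host memory
gpu = torch.empty(host.shape, device="cuda:0")

gpu.copy_(host, non_blocking=True)                  # warm-up
torch.cuda.synchronize()

t0 = time.perf_counter()
gpu.copy_(host, non_blocking=True)                  # async H2D copy over PCIe
torch.cuda.synchronize()
elapsed = time.perf_counter() - t0

gib = host.numel() * host.element_size() / 2**30
print(f"H2D: {gib:.1f} GiB in {elapsed*1e3:.1f} ms -> {gib/elapsed:.1f} GiB/s")
```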
High-Performance, GPU-Accelerated Virtual Desktop Infrastructure Solutions with Supermicro Ultra SuperServers
1U 4 GPU Server White Paper