GPU Systems
Best GPU Servers for Modern Data Centers. The Most Comprehensive AI Systems Featuring the Latest Multi-GPU and Interconnect Technologies
Universal GPU Systems
Modular Building Block Design, Future-Proof Open-Standards-Based 4U, 5U, or 8U Platforms for Large-Scale AI Training and HPC Applications
- GPU: NVIDIA HGX H100/A100 4-GPU/8-GPU, AMD Instinct MI250 OAM Accelerator
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB
- Drives: Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives
4U GPU Lines
Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications
- GPU: Up to 10 NVIDIA H100 PCIe GPUs, or up to 10 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
- Drives: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe
NVIDIA MGX™ Systems
Modular Building Block Platform Supporting Today's and Future GPUs, CPUs, and DPUs
- GPU: Up to 4 NVIDIA PCIe GPUs including H100, H100 NVL, and L40S
- CPU: NVIDIA GH200 Grace Hopper™ Superchip, Grace™ CPU Superchip, or Intel® Xeon®
- Memory: Up to 960GB integrated LPDDR5X memory (Grace Hopper or Grace CPU Superchip) or 16 DIMMs, 4TB DRAM (Intel)
- Drives: Up to 8 E1.S + 4 M.2 drives
4U GPU Lines with PCIe 4.0
Flexible Design for AI and Graphically Intensive Workloads, Supporting Up to 10 GPUs
- GPU: NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
- Drives: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe
2U 2-Node Multi-GPU with PCIe 4.0
Dense and Resource-saving Multi-GPU Architecture for Cloud-Scale Data Center Applications
- GPU: Up to 3 double-width PCIe GPUs per node
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 8 DIMMs, 2TB per node
- Drives: Up to 2 front hot-swap 2.5" U.2 per node
2U GPU Lines
High-Performance and Balanced Solutions for Accelerated Computing Applications
- GPU: NVIDIA HGX A100 4-GPU with NVLink, or up to 6 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB
- Drives: Up to 10 Hot-swap 2.5" SATA/SAS/NVMe
1U GPU Lines
Highest Density GPU Platforms for Deployments from the Data Center to the Edge
- GPU: Up to 4 double-width PCIe GPUs
- CPU: Intel® Xeon®
- Memory: Up to 16 DIMMs, 4TB
- Drives: Up to 6 x 2.5" Hot-swap SAS/SATA, or 2 x 2.5" Hot-swap NVMe SATA/SAS/NVMe
GPU Workstations
Flexible Solutions for AI/Deep Learning Practitioners and High-End Graphics Professionals
- GPU: Up to 4 double-width PCIe GPUs
- CPU: Intel® Xeon®
- Memory: Up to 16 DIMMs, 6TB
- Drives: Up to 8 hot-swap 2.5" SATA/NVMe
A Look at the Liquid Cooled Supermicro SYS-821GE-TNHR 8x NVIDIA H100 AI Server
Today we wanted to take a look at the liquid cooled Supermicro SYS-821GE-TNHR server. This is Supermicro’s 8x NVIDIA H100 system with a twist: it is liquid cooled for lower cooling costs and power consumption. Since we had the photos, we figured we would put this into a piece.
Options For Accessing PCIe GPUs in a High Performance Server Architecture
Understanding the Configuration Options for Supermicro GPU Servers to Deliver Maximum Performance for Your Workloads
Petrobras Acquires Supermicro Servers Integrated by Atos To Reduce Costs and Increase Exploration Accuracy
Supermicro Systems Power Petrobras to the #33 Position in the Top500, November 2022 Rankings
Supermicro Servers Increase GPU Offerings For SEEWEB, Giving Demanding Customers Faster Results For AI and HPC Workloads
Seeweb Selects Supermicro GPU Servers to Meet Customer Demands for HPC and AI Workloads
H12 Universal GPU Server
Open Standards-Based Server Design for Architectural Flexibility
3000W AMD Epyc Server Tear-Down, ft. Wendell of Level1Techs
We are looking at a server optimized for AI and machine learning. Supermicro has done a lot of work to cram as much as possible into the 2114GT-DNR (2U2N), a density-optimized server. This is a really cool construction: there are two systems in this 2U chassis. The two redundant power supplies are 2,600W each, and we'll see why we need so much power. It hosts six AMD Instinct MI210 GPUs and dual EPYC processors. See the level of engineering Supermicro put into the design of this server.
H12 2U 2-Node Multi-GPU
Multi-Node Design for Compute and GPU-Acceleration Density
NEC Advances AI Research With Advanced GPU Systems From Supermicro
NEC uses Supermicro GPU servers with NVIDIA® A100 GPUs to build a supercomputer for AI research (in Japanese)
Hybrid 2U2N GPU Workstation-Server Platform Supermicro SYS-210GP-DNR Hands-on
Today we are finishing our latest series by taking a look at the Supermicro SYS-210GP-DNR, a 2U, 2-node, 6-GPU system that Patrick recently got some hands-on time with at Supermicro headquarters.
Supermicro SYS-220GQ-TNAR+, an NVIDIA Redstone 2U Server
Today we are looking at the Supermicro SYS-220GQ-TNAR+ that Patrick recently got some hands-on time with at Supermicro headquarters.

Unveiling GPU System Design Leap - Supermicro SC21 TECHTalk with IDC
Presented by Josh Grossman, Principal Product Manager, Supermicro and Peter Rutten, Research Director, Infrastructure Systems, IDC

Supermicro TECHTalk: High-Density AI Training/Deep Learning Server
Our newest data center system packs the highest density of advanced NVIDIA Ampere GPUs with fast GPU-GPU interconnect and 3rd Gen Intel® Xeon® Scalable processors. In this TECHTalk, we will show how we enable unparalleled AI performance in a 4U rack height package.
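As a rough illustration of the kind of workload such a node is built for, here is a minimal multi-GPU data-parallel training sketch in PyTorch; the model, data, and launch command are placeholders rather than Supermicro- or NVIDIA-specific tooling, and NCCL will use the GPU-GPU interconnect when it is present.

```python
# Minimal data-parallel training sketch (PyTorch DDP). Model, data, and step
# count are placeholders; launch with something like:
#   torchrun --nproc_per_node=8 train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    dist.init_process_group(backend="nccl")   # NCCL rides NVLink/NVSwitch when present
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = DDP(torch.nn.Linear(1024, 1024).to(device), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(10):                        # dummy training steps
        x = torch.randn(64, 1024, device=device)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```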
Mission Critical Server Solutions
Maximizing AI Development & Delivery with Virtualized NVIDIA A100 GPUs
Supermicro systems with NVIDIA HGX A100 offer a flexible set of solutions to support NVIDIA Virtual Compute Server (vCS) and NVIDIA A100 GPUs, enabling AI development and delivery for both small and large AI models.
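As a simple illustration of the delivery side, the sketch below (assuming PyTorch is available inside the guest) just reports which GPUs the hypervisor exposes to a VM and how much memory each provides; it is illustrative only and uses no vCS-specific API.

```python
import torch

# Minimal sketch, run from inside a guest VM that has been assigned
# virtualized A100 resources: report which GPUs are visible and how much
# memory each exposes, so a job can decide whether a small or large model fits.
def report_visible_gpus() -> None:
    if not torch.cuda.is_available():
        print("No GPU is visible to this guest")
        return
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        print(f"GPU {idx}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

report_visible_gpus()
```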

Supermicro SuperMinute: 2U 2-Node Server
Supermicro's breakthrough multi-node GPU/CPU platform is unlike any existing product in the market. With our advanced Building Block Solutions® design and resource-saving architecture, this system leverages the most advanced CPU and GPU engines along with advanced high-density storage in a space-saving form factor, delivering unrivaled energy-efficiency and flexibility.

SuperMinute: 4U System with HGX A100 8-GPU
For the most demanding AI workloads, Supermicro builds the highest-performance, fastest-to-market servers based on NVIDIA A100™ Tensor Core GPUs. With the newest version of NVIDIA® NVLink™ and NVIDIA NVSwitch™ technologies, these servers can deliver up to 5 PetaFLOPS of AI performance in a single 4U system.
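A minimal sketch of how one might verify from PyTorch that the eight GPUs on such a baseboard can reach each other peer-to-peer over the NVLink/NVSwitch fabric; the check below is illustrative and not part of any vendor tooling.

```python
import torch

# Sanity check for an 8-GPU HGX A100 node: confirm each GPU can access every
# other GPU directly (peer-to-peer over NVLink/NVSwitch) rather than staging
# transfers through host memory.
def check_peer_access() -> None:
    n = torch.cuda.device_count()
    print(f"Visible GPUs: {n}")
    for i in range(n):
        peers = [j for j in range(n)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"GPU {i} ({torch.cuda.get_device_name(i)}) -> P2P peers: {peers}")

if torch.cuda.is_available():
    check_peer_access()
```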

SuperMinute: 2U System with HGX A100 4-GPU
The new AS -2124GQ-NART server features the power of NVIDIA A100 Tensor Core GPUs and the HGX A100 4-GPU baseboard. The system supports PCI-E Gen 4 for fast CPU-GPU connection and high-speed networking expansion cards.
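A rough way to see the effect of the Gen 4 CPU-GPU link is a pinned-memory host-to-device copy benchmark like the sketch below; the buffer size and device index are arbitrary choices, and the rate quoted in the comment is typical rather than guaranteed.

```python
import time
import torch

# Rough host-to-device copy benchmark over the CPU-GPU link. On a healthy
# PCIe Gen 4 x16 connection the sustained rate is typically on the order of
# 20-25 GiB/s.
def h2d_bandwidth_gib_s(size_mib: int = 1024, device: str = "cuda:0") -> float:
    src = torch.empty(size_mib * 1024 * 1024, dtype=torch.uint8, pin_memory=True)
    dst = torch.empty_like(src, device=device)
    torch.cuda.synchronize(device)
    start = time.perf_counter()
    dst.copy_(src, non_blocking=True)
    torch.cuda.synchronize(device)
    return (size_mib / 1024) / (time.perf_counter() - start)

if torch.cuda.is_available():
    print(f"Host-to-device: {h2d_bandwidth_gib_s():.1f} GiB/s")
```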
High-Performance GPU-Accelerated Virtual Desktop Infrastructure Solution Using Supermicro Ultra SuperServer
Models