Supermicro has teamed up with Linus Sebastian from Linus Tech Tips to unveil the new portfolio of data center servers and technology innovations, powered by AMD EPYC™ 9004 Series Processors (codename: Genoa). Watch as Linus takes a deep dive into how our new systems are designed to make your data center infrastructure better, faster, and greener.
Featuring
H13 GPU Optimized Systems
The Supermicro 4U GPU servers are dual-processor systems supporting up to 10 FHFL double-width PCIe GPU cards, including the latest AMD Instinct MI200 Series accelerators and NVIDIA H100 GPUs. These GPU-optimized systems provide maximum acceleration, flexibility, and balance for AI, deep learning, and HPC applications.
H13 Multi-Node GrandTwin™ Systems
The Supermicro GrandTwin™ systems are the newest multi-node architecture, available with front or rear I/O, designed for maximum density and purpose-built for single-processor performance per node. The flexible modular design is optimized for a range of applications, with the front I/O option simplifying installation in space-constrained environments.
H13 Hyper Systems
The Supermicro Hyper solutions are enterprise-focused servers built for versatility and performance. Their uncompromising design pairs dual processors with 12 memory channels and 24 DIMMs, supports the highest-TDP CPUs, and offers a flexible range of computing, networking, storage, and I/O expansion capabilities.
H13 CloudDC Systems
The Supermicro CloudDC systems are single-socket servers optimized for I/O flexibility across many cloud-focused applications. Convenient serviceability features, including tool-less brackets, hot-swap drive trays, and redundant power supplies, ensure rapid deployment and efficient maintenance in data centers.
H13 8U Universal GPU System
Propel AI/ML workloads with dual 4th Gen AMD EPYC processors and the latest NVIDIA HGX H100 8-GPU platform. The 8U Universal GPU System provides unparalleled I/O and thermal capacity, supporting eight 700W-TDP GPUs with NVLink, plus GPUDirect Storage and GPUDirect RDMA over eight 400G network links for a 1:1 GPU-to-NIC ratio that keeps feeding deep learning models at massive scale.