Supermicro Boosts Performance Up to 20x on Data Science, HPC and AI Workloads with Support for NVIDIA A100 PCIe GPUs on Over a Dozen Different GPU Servers

New Supermicro Servers with PCIe Gen 4 Feature Fully Optimized Support for the New NVIDIA A100 GPUs, Third Generation NVIDIA NVLink, and NVIDIA NVSwitch to Deliver Maximum Acceleration for Training, Inference, HPC, and Analytics

ISC Digital, June 22, 2020 — Super Micro Computer, Inc. (Nasdaq: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today announced full support for the new NVIDIA A100™ PCIe GPUs across the company’s broad portfolio of 1U, 2U, and 4U GPU servers.

Supermicro first announced support for NVIDIA HGX™ A100 configurations in May with its new high-density 2U and 4U servers based on the four- and eight-way NVIDIA HGX A100 boards. With A100 GPUs now also available in PCIe form factor, customers can expect a major performance boost across Supermicro’s extensive portfolio of multi-GPU servers when they are equipped with the new NVIDIA A100 GPUs.

“Expanding our industry-leading portfolio of GPU systems with full support for the new NVIDIA A100 PCIe GPUs means customers can choose the highest-performing and most optimized server for their specific applications,” said Charles Liang, CEO and president of Supermicro. “Designed to accelerate a vast range of compute-intensive applications, our new systems with PCIe Gen 4 deliver fully optimized support for the new NVIDIA A100 to boost performance up to 20x on some accelerated workloads.”

Supermicro’s new 4U A+ GPU system supports up to eight NVIDIA A100 PCIe GPUs via direct-attach PCIe 4.0 x16 CPU-to-GPU lanes without any PCIe switch for the lowest latency and highest bandwidth to deliver maximum acceleration. The system also supports up to two additional high-performance PCIe 4.0 expansion slots for a variety of uses, including high-performance networking connectivity up to 200Gb/s. An additional AIOM slot supports a Supermicro AIOM card or an OCP NIC 3.0 card. These systems will also be NGC-Ready, providing customers a seamless way to develop and deploy their AI workloads at scale. NGC-Ready systems are validated for functionality and performance of the AI software stack from NVIDIA’s NGC registry.
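For readers who want to verify that GPUs in such a system are negotiating the advertised PCIe Gen 4 x16 links, the minimal sketch below queries each device through NVML using the pynvml Python bindings. The NVML calls shown are standard, but the snippet itself is illustrative only and is not part of Supermicro's or NVIDIA's announcement.

```python
# Illustrative sketch (not vendor-supplied): report each GPU's current PCIe
# link generation and width via NVML. Requires the nvidia-ml-py (pynvml)
# package and an installed NVIDIA driver.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName,
    nvmlDeviceGetCurrPcieLinkGeneration,
    nvmlDeviceGetCurrPcieLinkWidth,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        name = nvmlDeviceGetName(handle)
        gen = nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = nvmlDeviceGetCurrPcieLinkWidth(handle)
        # On an A100 PCIe card seated in a direct-attach Gen 4 slot this
        # should report PCIe Gen 4 with an x16 link.
        print(f"GPU {i}: {name} -- PCIe Gen {gen} x{width}")
finally:
    nvmlShutdown()
```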

“As the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics, the NVIDIA A100 is built to handle accelerated workloads of all sizes,” said Paresh Kharya, Director of Product Management for Accelerated Computing at NVIDIA. “NVIDIA A100 accelerated and NGC-Ready servers from Supermicro give customers great options to accelerate and optimize their data centers for high utilization and low total cost of ownership.”
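The multi-instance GPU (MIG) capability referenced in the quote can also be inspected programmatically. The sketch below, again using standard NVML calls through pynvml, checks whether MIG is enabled on each GPU and lists any MIG instances; it is an illustrative example under those assumptions, not code from either company.

```python
# Illustrative sketch: check MIG mode and enumerate MIG devices via NVML.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName,
    nvmlDeviceGetMigMode,
    nvmlDeviceGetMaxMigDeviceCount,
    nvmlDeviceGetMigDeviceHandleByIndex,
    NVMLError,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        try:
            current, pending = nvmlDeviceGetMigMode(handle)
        except NVMLError:
            continue  # GPU does not support MIG
        print(f"GPU {i}: MIG current={current} pending={pending}")
        if current:
            # Walk the MIG device slots; unused slots raise an NVML error.
            for j in range(nvmlDeviceGetMaxMigDeviceCount(handle)):
                try:
                    mig = nvmlDeviceGetMigDeviceHandleByIndex(handle, j)
                except NVMLError:
                    continue
                print(f"  MIG instance {j}: {nvmlDeviceGetName(mig)}")
finally:
    nvmlShutdown()
```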

As the leader in AI system technology, Supermicro offers multi-GPU optimized thermal designs that provide advanced performance and reliability for AI, Deep Learning, and HPC applications. With 1U, 2U, 4U, and 10U rackmount GPU systems; Ultra, BigTwin™, and embedded systems supporting GPUs; as well as GPU blade modules for its 8U SuperBlade®, Supermicro offers the industry’s widest and deepest selection of GPU systems to power applications from edge to cloud.