Supermicro AI & Deep Learning Solution

Embrace AI with Supermicro Deep Learning technology

Deep Learning, a subset of Artificial Intelligence (AI) and Machine Learning (ML), is a state-of-the-art approach in computer science that uses multi-layered artificial neural networks to accomplish tasks that are too complicated to program by hand. For example, Google Maps processes millions of data points every day to determine the best route to travel and to predict the arrival time at the desired destination. Deep Learning comprises two parts: training and inference. Training involves processing as many data points as possible so that the neural network 'learns' features on its own and adjusts itself to accomplish tasks such as image recognition and speech recognition. Inference refers to taking a trained model and using it to make useful predictions and decisions. Both training and inference require enormous amounts of computing power to achieve the desired accuracy and precision.
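The training/inference split described above can be illustrated with a deliberately tiny sketch (plain Python, no GPU; the one-parameter model and toy dataset are invented for illustration): gradient descent adjusts a weight to fit the data ("training"), and the fitted model is then applied to new input ("inference"). Real deep-learning workloads do the same thing with millions of parameters, which is why the GPU clusters described below exist.

```python
# Minimal illustration of the training/inference split: a one-parameter
# linear model is "trained" by gradient descent on a toy dataset, then
# used for inference on unseen input.

def train(data, epochs=200, lr=0.01):
    """Training: repeatedly adjust the weight to reduce prediction error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x   # gradient of the squared error
            w -= lr * grad
    return w

def infer(w, x):
    """Inference: apply the trained model to new input."""
    return w * x

# Toy dataset following y = 3x (stands in for real training data).
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 2))             # learned weight, close to 3.0
print(round(infer(w, 10), 1))  # prediction for an unseen input
```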

Partner Solutions

Supermicro NGC-Ready Solutions

Supermicro NGC-Ready Systems are certified by NVIDIA to fully support NVIDIA NGC software running on NVIDIA Tesla GPUs, enabling customers to deploy end-to-end AI solutions.

AI & Deep Learning Platform

Our solution offers custom Deep Learning framework installation, so that end users can start deploying Deep Learning projects directly without any GPU programming. Supported frameworks include TensorFlow, Caffe2, MXNet, Chainer, and Microsoft Cognitive Toolkit, among others.

The Supermicro AI & Deep Learning solution provides a complete AI/Deep Learning software stack. Below is the software stack offered with the end-to-end, fully integrated solution:

AI & Deep Learning Software Stack
  • Deep Learning Frameworks: Caffe, Caffe2, Caffe-MPI, Chainer, Microsoft CNTK, Keras, MXNet, TensorFlow, Theano, PyTorch
  • Libraries: cuDNN, NCCL, cuBLAS
  • User Access: NVIDIA DIGITS
  • Operating Systems: Ubuntu, Docker, NVIDIA Docker
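NCCL, listed in the stack above, provides the collective-communication primitives (notably all-reduce) that multi-GPU training depends on. As a hedged sketch of the idea only (plain Python lists stand in for per-GPU gradient buffers; this is not the NCCL API), an all-reduce gives every worker the element-wise reduction of all workers' gradients:

```python
# Sketch of the all-reduce collective that NCCL provides for multi-GPU
# training: every worker contributes its local gradient, and every
# worker receives the element-wise mean of all of them. Plain Python
# lists stand in for per-GPU buffers; the real library performs this
# over NVLink/InfiniBand without a central hop.

def all_reduce_mean(worker_grads):
    """Return the element-wise mean of the workers' gradient buffers,
    replicated once per worker (everyone ends with the same result)."""
    n = len(worker_grads)
    summed = [sum(vals) for vals in zip(*worker_grads)]
    mean = [s / n for s in summed]
    return [list(mean) for _ in range(n)]

# Four "GPUs", each holding a local gradient for two parameters.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
reduced = all_reduce_mean(grads)
print(reduced[0])  # every worker now holds [4.0, 5.0]
```

After the all-reduce, each worker applies the same averaged gradient, keeping all model replicas in sync.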

Supermicro AI & Deep Learning Solution Advantages

  • Powerhouse for Computation
    • The Supermicro AI & Deep Learning cluster is powered by Supermicro SuperServer® systems, which are high-density, compact powerhouses for computation. The cluster features the latest GPUs from Supermicro partner NVIDIA. Each compute node utilizes NVIDIA® Tesla® V100 GPUs.
  • High Density Parallel Compute
    • Up to 32 GPUs with up to 1TB of GPU memory for maximum parallel compute performance resulting in reduced training time for Deep Learning workloads.
  • Increased Bandwidth with NVLink
    • Utilizes NVLink™, which enables faster GPU-GPU communication, further enhancing system performance under heavy Deep Learning workloads.
  • Faster Processing with Tensor Core
    • NVIDIA Tesla V100 GPUs utilize the Tensor Core architecture. Tensor cores contain Deep Learning support and can deliver up to 125 Tensor TFLOPS for training and inference applications.
  • Scalable Design
    • Scale-out architecture with 100G InfiniBand EDR fabric, readily scalable to accommodate future growth.
  • Rapid Flash Xtreme (RFX) – High performance All-flash NVMe storage
    • RFX is a top-of-the-line, all-flash NVMe storage system, developed and thoroughly tested for AI & Deep Learning applications, incorporating the Supermicro BigTwin™ together with the WekaIO parallel file system.

AI & Deep Learning Reference Architecture Configuration

Supermicro is currently offering the following complete solutions that are thoroughly tested and ready-to-go. These clusters can be scaled up & down to meet the needs of your Deep Learning projects.

14U Rack Solution
  • Product SKU: SRS-14UGPU-AIV1-01
  • Compute Capability: 2 PFLOPS (GPU FP16)
  • Compute Nodes: 2x SYS-4029GP-TVRT
  • Total GPUs: 16x NVIDIA® Tesla® V100 SXM2 32GB HBM2
  • Total GPU Memory: 512GB HBM2
  • Total CPUs: 4x Intel® Xeon® Gold 6154, 3.00GHz, 18 cores
  • Total System Memory: 768GB DDR4-2666MHz ECC
  • Networking: InfiniBand EDR 100Gbps; 10GBASE-T Ethernet
  • Total Storage*: 15.2TB (8 SATA3 SSDs)
  • Operating System: Ubuntu Linux or CentOS Linux
  • Software: Caffe, Caffe2, DIGITS, Inference Server, PyTorch, NVIDIA® CUDA®, NVIDIA® TensorRT™, Microsoft Cognitive Toolkit (CNTK), MXNet, TensorFlow, Theano, and Torch
  • Max Power Usage: 7.2kW (7,200W)
  • Dimensions: 14 Rack Units, 600 x 800 x 1000 mm (W x H x D)

24U Rack Solution
  • Product SKU: SRS-24UGPU-AIV1-01
  • Compute Capability: 4 PFLOPS (GPU FP16)
  • Compute Nodes: 4x SYS-4029GP-TVRT
  • Total GPUs: 32x NVIDIA® Tesla® V100 SXM2 32GB HBM2
  • Total GPU Memory: 1TB HBM2
  • Total CPUs: 8x Intel® Xeon® Gold 6154, 3.00GHz, 18 cores
  • Total System Memory: 3TB DDR4-2666MHz ECC
  • Networking: InfiniBand EDR 100Gbps; 10GBASE-T Ethernet
  • Total Storage*: 30.4TB (16 SATA3 SSDs)
  • Operating System: Ubuntu Linux or CentOS Linux
  • Software: Caffe, Caffe2, DIGITS, Inference Server, PyTorch, NVIDIA® CUDA®, NVIDIA® TensorRT™, Microsoft Cognitive Toolkit (CNTK), MXNet, TensorFlow, Theano, and Torch
  • Max Power Usage: 14.0kW (14,000W)
  • Dimensions: 24 Rack Units, 598 x 1163 x 1000 mm (W x H x D)
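The headline FP16 figures in the configurations above follow directly from the per-GPU Tensor Core peak cited earlier (up to 125 TFLOPS per Tesla V100). A quick arithmetic sanity check:

```python
# Sanity check: the cluster FP16 compute figures are the per-GPU
# Tensor Core peak (up to 125 TFLOPS per Tesla V100) times the
# number of GPUs in the rack.

TFLOPS_PER_V100 = 125  # peak Tensor Core throughput per GPU

def cluster_pflops(num_gpus):
    """Aggregate peak Tensor throughput in PFLOPS (1 PFLOPS = 1000 TFLOPS)."""
    return num_gpus * TFLOPS_PER_V100 / 1000

print(cluster_pflops(16))  # 14U solution, 16 GPUs: 2.0 PFLOPS
print(cluster_pflops(32))  # 24U solution, 32 GPUs: 4.0 PFLOPS
```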
Supermicro AI & Deep Learning Solution Ready Server Platforms
Up to 4 NVIDIA Tesla V100 SXM2 GPUs; up to 300 GB/s GPU-to-GPU NVLink
  • HPC, Artificial Intelligence, Big Data Analytics, Research Lab, Astrophysics, Business Intelligence
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; dual UPI up to 10.4GT/s
  • 12 DIMMs; up to 3TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM*
  • 2 Hot-swap 2.5" drive bays, 2 Internal 2.5" drive bays
  • 4 PCI-E 3.0 x16 slots
  • 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
  • 1 VGA, 2 COM, 2 USB 3.0 (rear)
  • 7x 4cm heavy duty counter-rotating fans with air shroud
  • 2000W Redundant Titanium Level (96%) Power Supplies

*Contact your Supermicro sales rep for more info.

Up to 8 NVIDIA Tesla V100 SXM2 GPUs; up to 300 GB/s GPU-to-GPU NVLink
  • Artificial Intelligence, Big Data Analytics, High-performance Computing, Research Lab/National Lab, Astrophysics, Business Intelligence
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
  • 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM*
  • 16 Hot-swap 2.5" drive bays (support 8 NVMe drives)
  • 4 PCI-E 3.0 x16 (LP, GPU tray for GPUDirect RDMA), 2 PCI-E 3.0 x16 (LP, CPU tray)
  • 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
  • 1 VGA, 1 COM, 2 USB 3.0 (front)
  • 8x 92mm cooling fans, 4x 80mm cooling fans
  • 2200W (2+2) Redundant Titanium Level (96%) Power Supplies

*Contact your Supermicro sales rep for more info.

Up to 20 single-width GPUs
  • AI/Deep Learning, Video Transcoding
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
  • 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM*
  • 24 Hot-swap 3.5" drive bays, 2 optional 2.5" U.2 NVMe drives
  • 20 PCI-E 3.0 x16 slots, 1 PCI-E 3.0 x8 (FHFL, in x16 slot)
  • 2x 10GBase-T ports via Intel C622, 1 Dedicated IPMI port
  • 1 VGA, 1 COM, 4 USB 3.0 (rear)
  • 8x 92mm Hot-swap Cooling Fans
  • 2000W (2+2) Redundant Titanium Level (96%) Power Supplies

*Contact your Supermicro sales rep for more info.

Up to 16 V100 SXM3 GPUs
  • AI/Deep Learning, High-performance Computing
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
  • 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM*
  • 16 Hot-swap 2.5" NVMe drive bays, 6 Hot-swap 2.5" SATA3 drive bays
  • 16 PCI-E 3.0 x16 slots for RDMA via IB EDR, 2 PCI-E 3.0 x16 on board
  • 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
  • 1 VGA, 1 COM, 2 USB 3.0 (front)
  • 6x 80mm hot-swap PWM Fans, 8x 92mm Hot-swap Fans
  • 6x 3000W Redundant Titanium Level (96%) Power Supplies

*Contact your Supermicro sales rep for more info.