Accelerated Building Blocks with Intel GPUs
For Cloud Scale AI Training and Inference
Demand for high-performance AI/deep learning (DL) training compute has doubled every 3.5 months since 2013 (according to OpenAI), and it continues to accelerate with the growing size of data sets and the number of applications and services based on large language models (LLMs), computer vision, recommendation systems, and more.
With the increased demand for greater training and inference performance, throughput, and capacity, the industry needs purpose-built systems that offer increased efficiency, lower cost, ease of implementation, flexibility for customization, and scaling of AI systems. AI has become an essential technology in diverse areas such as copilots, virtual assistants, manufacturing automation, autonomous vehicle operations, and medical imaging, to name a few. Supermicro has partnered with Intel to provide cloud-scale system and rack designs with Intel Gaudi AI accelerators.
New Supermicro X14 Gaudi® 3 AI Training and Inference Platform
Bringing choice to the enterprise AI market, the new Supermicro X14 AI training platform is built on third-generation Intel® Gaudi® 3 accelerators, designed to further increase the efficiency of large-scale AI model training and inferencing. Available in both air-cooled and liquid-cooled configurations, Supermicro's X14 Gaudi 3 solution easily scales to meet a wide range of AI workload requirements.
- GPU: 8 Gaudi 3 HL-325L (air-cooled) or HL-335 (liquid-cooled) accelerators on OAM 2.0 baseboard
- CPU: Dual Intel® Xeon® 6 processors
- Memory: 24 DIMMs - up to 6TB memory in 1DPC
- Drives: Up to 8 hot-swap PCIe 5.0 NVMe
- Power Supplies: 8x 3000W high-efficiency, fully redundant (4+4), Titanium Level
- Networking: 6 on-board OSFP 800GbE ports for scale-out
- Expansion Slots: 2 PCIe 5.0 x16 (FHHL) + 2 PCIe 5.0 x8 (FHHL)
- Workloads: AI Training and Inference
Supermicro Gaudi® 2 AI Training Server
Building on the success of the original Supermicro Gaudi AI training system, the Gaudi 2 AI server prioritizes two key considerations: integrating AI accelerators with built-in high-speed networking modules to drive operational efficiency when training state-of-the-art AI models, and bringing the AI industry the choice it needs.
- GPU: 8 Gaudi2 HL-225H mezzanine cards
- CPU: Dual 3rd Gen Intel® Xeon® Scalable processors
- Memory: 32 DIMMs - up to 8TB registered ECC DDR4-3200MHz SDRAM
- Drives: up to 24 hot-swap drives (SATA/NVMe/SAS)
- Power: 6x 3000W high-efficiency (54V+12V) fully redundant power supplies
- Networking: 24x 100GbE (48x 56Gb PAM4 SerDes links) via 6 QSFP-DD ports
- Expansion Slots: 2x PCIe 4.0 switches
- Workloads: AI Training and Inference
For Media Processing & Delivery, Transcoding, Cloud Gaming, AI Visual Inferencing
As demand for media and game streaming and visual inferencing continues to increase rapidly, organizations need efficient, scalable solutions that can deliver services to thousands to millions of concurrent users without affecting quality or latency. Supermicro's accelerated computing solutions feature the new Intel Data Center GPU optimized for media and cloud workloads.
Intel GPUs support an open, standards-based software stack optimized for density and quality, with critical server capabilities for high reliability, availability, and scalability in media processing, media delivery, AI visual inference, cloud gaming, and virtualization.
Watch the TechTalk
Supermicro's Senior Director of Technology Enablement, Thomas Jorgensen, sits down to discuss the unique advantages of Supermicro systems based on the Intel Data Center GPU Flex Series.
Solution Brief
Supermicro and Intel collaborated to deliver outstanding performance for a large-scale cloud gaming platform, achieving over 560 transcoded 1080p@60Hz streams per system.
4K Streaming Demo
This video demonstrates Supermicro's real-time 4K video streaming solution using the Intel Data Center GPU Flex Series, with up to 8 simultaneous streams from a single GPU.
Transcoding Optimized
High-performance design for maximum media processing throughput, with up to 10 GPUs in a 4U chassis
- GPU: Up to 10 Intel® Data Center GPU Flex Series (in PCI-E 4.0 x16)
- CPU: Dual 3rd Gen Intel® Xeon® Scalable Processors
- Memory: 32 DIMMs; up to 8TB, or 12TB with Intel® Optane® Persistent Memory
- Drives: 24x 2.5" hot-swap drive bays (8x NVMe, 8x SATA, 8x SATA/SAS)
Media Delivery Optimized
Multi-node system with high compute density, optimized for media and game streaming at the cloud edge
- GPU: Up to 2 Intel® Data Center GPU Flex Series per node (in PCI-E 4.0 x16)
- CPU: Dual 3rd Gen Intel® Xeon® Scalable Processors per node
- Memory: 20 DIMMs; up to 4TB, or 6TB per node with Intel® Optane® Persistent Memory
- Drives: 6x 2.5” hot-swap NVMe/SATA drive bays per node
Cloud Gaming Optimized
Multi-node system designed for high-density GPU configurations in a 2U form factor
- GPU: Up to 3 Intel® Data Center GPU Flex Series per node (in PCI-E 4.0 x16)
- CPU: Single 3rd Gen Intel® Xeon® Scalable Processor per node
- Memory: 8 DIMMs; up to 2TB per node
- Drives: 2x 2.5” hot-swap U.2 NVMe drive bays per node
Visual Inferencing Optimized
Scalable platform with up to 6 GPUs, designed for AI image processing in the cloud
- GPU: Up to 6 Intel® Data Center GPU Flex Series per node (in PCI-E 4.0 x16/x8)
- CPU: Dual 3rd Gen Intel® Xeon® Scalable Processors
- Memory: 16 DIMMs; up to 4TB, or 6TB with Intel® Optane® Persistent Memory
- Drives: 12x 2.5”/3.5” hot-swap NVMe/SAS/SATA hybrid drive bays
Visual Inferencing Optimized
Compact edge platform with up to 2 GPUs
- GPU: Up to 2 Intel® Data Center GPU Flex Series
- CPU: Single 3rd Gen Intel® Xeon® Scalable processor up to 32 cores
- Memory: 8 DIMMs
- Drives: 4x 2.5” internal SATA drive bays
Visual Inferencing Optimized for Edge
1U compact edge platform with up to 2 GPUs
- GPU: Up to 2 Intel® Data Center GPU Flex Series
- CPU: Single 3rd Gen Intel® Xeon® Scalable processor up to 32 cores
- Memory: 8 DIMMs
- Drives: 2x 2.5" drive bays & 1x M.2 NVMe or 1x M.2 SATA3
Supermicro with Gaudi 3 AI Delivers Scalable Performance for AI Requirements
A Range of Optimized Solutions for Data Centers of Any Size and Workload, Enabling New Services and Increased Customer Satisfaction
Supermicro and Intel Gaudi 3 Systems Advance Enterprise AI Infrastructure
High-Bandwidth AI System Using Intel Xeon 6 Processors for Efficient LLM and GenAI Training and Inference Across Enterprise Scales
Supermicro X13 Hyper Empowers Enterprise AI Workloads on the VMware Platform
Computational AI Workload Use Cases: Large Language Model (LLM) and AI Image Recognition (ResNet50) on the Intel® Data Center GPU Flex 170
Accelerating AI Compute with Supermicro Servers in the Intel® Developer Cloud
Supermicro Advanced AI Servers featuring Intel® Xeon® Processors and Intel® Gaudi® 2 AI Accelerators Bring High-Performance, High-efficiency AI Cloud Compute, Training, and Inferencing to Developers and Enterprises
Superior Media Processing and Delivery Solution Based on Supermicro Servers with Intel® Data Center GPU Flex Series
Supermicro Systems with Intel® Data Center GPU Flex Series
Supermicro TECHTalk: New Media Processing Solutions Based on Intel Data Center GPU Flex Series
Watch as our product experts discuss the new Supermicro solutions based on the just-announced Intel Data Center GPU Flex Series. Learn how these solutions can benefit you and your company.
Delivering Scalable Cloud-Gaming
Supermicro Systems with Intel® Data Center GPU Flex Series
Supermicro offers all the system components for cloud service providers to build green, cost-effective, and profitable cloud gaming infrastructure.
Innovative Solutions for Cloud Gaming, Media, Transcoding, & AI Inferencing
Sep 08 2022, 10:00am PDT
Supermicro and Intel product and solution experts will discuss, in an informal session, the benefits of these solutions for cloud gaming, media delivery, transcoding, and AI inferencing using the recently announced Intel Data Center GPU Flex Series. The webinar will cover the advantages of the Supermicro solutions, the ideal servers for each workload, and the benefits of using the Intel Flex Series GPUs.
Supermicro and Habana® High-Performance, High-Efficiency AI Training System
Enabling up to 40% better price/performance for Deep Learning training than traditional AI solutions