Location: San Jose McEnery Convention Center • San Jose, CA

Built to Accelerate AI

Visit Supermicro at GTC 2026

Supermicro returns to NVIDIA GTC 2026 to showcase how NVIDIA platforms become production-ready AI factories. As a long-time NVIDIA partner and GTC Diamond Sponsor, Supermicro will demonstrate the industry's widest portfolio of AI factory building blocks spanning training, inference, storage, and edge. See how Supermicro designs, builds, and deploys a total solution that is fully integrated with NVIDIA's AI software stack and latest platforms.

Highlights include:

  • Next-generation solutions based on the NVIDIA Vera Rubin architecture
  • Max-density liquid-cooled and air-cooled AI training platforms, featuring NVIDIA HGX B300 systems and NVIDIA GB300 NVL72
  • AI inference and RAG systems optimized for enterprise deployment, including PCIe GPU servers built for NVIDIA RTX PRO™ platforms
  • AI data platform storage systems delivering high-throughput, low-latency performance for training, fine-tuning, and inference pipelines
  • Edge and local data center AI systems, bringing intelligence closer to where data is created
  • Validated AI factory reference architectures that connect compute, networking, storage, and software into deployable solutions

Visit Supermicro at GTC 2026 to see how we turn NVIDIA technology into complete, production-ready AI factory solutions!

Supermicro Sessions & Speakers at GTC 2026

EX82279 – Who Owns Intelligence When AI Runs the World? (Presented by Global AI)

Location: San Jose, CA
  • Presenter(s):
    • Charles Liang (Supermicro)
    • Sami Issa (Global AI)
Session Details

As AI systems scale from megawatts to gigawatts, the center of gravity is shifting from models to infrastructure. Join three leaders operating at the intersection of compute, systems, and national platforms as they examine how power, cooling, architecture, and deployment choices now determine performance, sovereignty, and control of intelligence. Attendees will:

  • Understand why the primary bottlenecks in advanced AI systems have moved from algorithms to physical infrastructure, including power density, cooling, and latency
  • Learn what it takes to deploy AI infrastructure at gigawatt scale without compromising reliability, security, or economic viability
  • Explore how system architecture and integration choices shape determinism, isolation, and trust in large-scale AI environments
  • Gain insight into how sovereign and enterprise AI deployments differ from traditional hyperscale cloud models in governance and control

View on GTC Site

S82228 – Distributed AI Computing From AI Factories to the Edge (Presented by Supermicro)

Location: San Jose, CA
  • Presenter(s):
    • Thomas Jorgensen (Supermicro)
    • Steve Stein (NVIDIA)
    • Mory Lin (Supermicro)
Session Details

Enterprises are moving from AI experimentation to production, where ROI matters as much as performance. This session, presented by Supermicro and NVIDIA, explores how AI factories, edge AI, and on-prem deployments work together to deliver scalable, cost-effective AI solutions. We will highlight how systems powered by GPUs such as the NVIDIA RTX PRO 6000/4500 Blackwell Server Edition support inference, visualization, and Retrieval-Augmented Generation (RAG) workloads across data centers and edge environments. The session will also examine how RAG pipelines benefit from tightly integrated GPUs, networking, and storage to deliver low-latency, context-aware results, helping enterprises improve data control, reduce operational costs, and maximize ROI from AI investments.

View on GTC Site

S82227 – Liquid-Cooled AI for Next-Generation Platforms (Presented by Supermicro)

Location: San Jose, CA
  • Presenter(s):
    • Alok Srivastava (Supermicro)
    • Steven Huang (Supermicro)
Session Details

As AI factories scale to support larger models and higher cluster densities, liquid cooling has become essential. This session, presented by Supermicro and NVIDIA, explores how next-generation AI factories are built using liquid-cooled NVIDIA HGX systems and NVL72 rack-scale platforms, enabling dense, high-performance GPU clusters for large-scale AI training. Attendees will gain insight into Supermicro's deployment of these platforms as validated, rack-scale solutions rather than standalone servers. The session will cover key design and deployment considerations, including direct-to-chip cooling and rack-level liquid integration, and show how liquid-cooled total solutions accelerate time-to-online while improving performance and energy efficiency for modern AI factories.

View on GTC Site

Charles Liang, Founder, President & CEO at Supermicro

Sami Issa, Co-Founder, Director & CEO at Global AI

Thomas Jorgensen, Senior Director, Technology Enablement at Supermicro

Steve Stein, Senior Product Marketing Manager at NVIDIA

Mory Lin, Vice President, IoT/Embedded & Edge Computing at Supermicro

Alok Srivastava, Director, Solutions Management AI at Supermicro

Steven Huang, Project Manager, Datacenter Liquid Cooling at Supermicro

Supermicro, DDN, and NVIDIA bring the AI Factory to Life at GTC 2026

This immersive, interactive experience showcases the implementation, use cases, and benefits of the AI Factory and the AI Data Platform solution. Complete with use-case demos, an AI Factory simulation, and a generative AI robot named AMECA, the experience highlights what enterprise AI can deliver.

  • Supermicro
  • DDN
  • NVIDIA

Contact your Supermicro account representative to arrange a tour if you will be attending GTC.