Migrating your AI workloads from CPUs to modern GPUs can slash your energy costs by up to 98%.
#1 · about 2 minutes
The history and origins of the AI company Amber
Amber's journey began in 2006 as FluiDyna, an early Nvidia partner for GPU-accelerated code, before being acquired by Altair and later re-established as an independent AI infrastructure company.
#2 · about 3 minutes
Key milestones in the evolution of AI and GPU computing
The release of CUDA, AlexNet, and Transformers led to an exponential increase in compute demand, culminating in the public adoption of AI with ChatGPT.
#3 · about 2 minutes
Understanding the business impact and adoption of generative AI
Generative AI presents a massive business opportunity with a high return on investment, driving rapid adoption across major enterprises.
#4 · about 1 minute
Comparing supercomputer hardware from the past decade
A modern Nvidia DGX H100 system vastly outperforms a state-of-the-art supercomputer from a decade ago while consuming only a fraction of the power and space.
#5 · about 2 minutes
Why modern GPUs are more energy efficient than CPUs
Replacing legacy CPU-based systems with modern GPUs can reduce energy consumption by up to 98%, and newer GPU generations like Blackwell offer a 4x power reduction over previous models for the same task.
#6 · about 2 minutes
The shift to production will cause an explosion in compute demand
As generative AI moves from experimentation to production, the demand for compute resources is expected to increase by at least 8 to 10 times, driven primarily by inference workloads.
#7 · about 3 minutes
Building an AI factory with all the essential components
A successful AI factory requires more than just GPUs; it needs a holistic approach including specialized storage, high-speed networking, management software, and robust data center infrastructure.
#8 · about 5 minutes
Key software considerations for managing an AI cluster
Effective AI cluster management requires software for optimizing the stack, synchronizing images, monitoring health and performance, integrating with the cloud, and providing chargeback reporting.
#9 · about 1 minute
Why specialized high-performance storage is critical for AI
AI workloads demand specialized, high-performance storage to handle tasks like rapid LLM checkpointing and high I/O for inference, making legacy storage solutions inadequate.
#10 · about 3 minutes
Future trends in AI models and data center cooling
The future of AI involves both small specialized models and large general models, driving a necessary evolution in data centers towards direct liquid and immersion cooling to manage heat.