Stephan Gillich, Tomislav Tipurić, Christian Wiebus & Alan Southall

The Future of Computing: AI Technologies in the Exascale Era

What if training a single AI model required its own power station? Discover the hybrid architectures and edge devices building a more sustainable future for computing.

#1 · about 3 minutes

Defining exascale computing and its relevance for AI

Exascale computing, a term rooted in high-performance computing benchmarks, refers to systems that can sustain on the order of 10^18 floating-point operations per second, a scale of compute that is highly relevant for training large AI models.
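
As a rough sense of that scale, a minimal sketch with illustrative numbers (not figures from the talk): the widely used rule of thumb of about 6 floating-point operations per parameter per training token gives an estimate of how long an exascale machine would be busy with a single large training run.

```python
# Back-of-the-envelope: how long would an exascale machine need for a large training run?
# All numbers below are illustrative assumptions, not figures from the talk.

EXAFLOPS = 1e18          # one exaFLOP/s = 10^18 floating-point operations per second

params = 70e9            # assumed model size: 70 billion parameters
tokens = 1.4e12          # assumed training corpus: 1.4 trillion tokens
train_flops = 6 * params * tokens   # rule of thumb: ~6 FLOPs per parameter per token

utilisation = 0.4        # assumed fraction of peak FLOPs actually sustained

seconds = train_flops / (EXAFLOPS * utilisation)
print(f"Training compute: {train_flops:.2e} FLOPs")
print(f"Time at 40% of one exaFLOP/s: {seconds / 86400:.1f} days")
```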

#2 · about 4 minutes

Comparing GPU and CPU architectures for deep learning

GPUs excel at deep learning because they process matrix operations in parallel across thousands of specialized cores, while CPUs are being enhanced with features such as Advanced Matrix Extensions (AMX) so they can handle these workloads as well.
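
A minimal sketch of that contrast, assuming PyTorch is available: the same matrix multiplication can be dispatched to the CPU in fp32, to a bfloat16 path that recent Xeon CPUs may serve with AMX, or to a GPU if one is present.

```python
# Time one large matrix multiplication, the core deep-learning primitive,
# on whichever devices are available.
import time
import torch

def bench_matmul(device: str, dtype=torch.float32, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device, dtype=dtype)
    b = torch.randn(n, n, device=device, dtype=dtype)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    _ = torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()    # wait for the asynchronous GPU kernel to finish
    return time.perf_counter() - start

print(f"cpu fp32 : {bench_matmul('cpu'):.3f} s")
# On recent Xeon CPUs, bfloat16 matmuls may be routed to AMX by the oneDNN backend.
print(f"cpu bf16 : {bench_matmul('cpu', dtype=torch.bfloat16):.3f} s")
if torch.cuda.is_available():
    print(f"gpu fp32 : {bench_matmul('cuda'):.3f} s")
```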

#3 · about 2 minutes

Implementing machine learning on resource-constrained edge devices

Machine learning is becoming essential on edge devices to improve data quality and services, requiring specialized co-processors to achieve performance within strict power budgets.
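
One common route onto such co-processors is post-training quantization. The sketch below uses the TensorFlow Lite converter; the model directory and the calibration data are placeholders, not artifacts from the talk.

```python
# Minimal sketch: full-integer (int8) post-training quantization with TensorFlow Lite,
# a typical prerequisite for NPU/DSP co-processors on edge devices.
import tensorflow as tf

def rep_data():
    # Representative input batches so the converter can calibrate value ranges.
    # Random data is a placeholder; real calibration needs real samples.
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3), dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```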

#4 · about 4 minutes

Addressing the growing power consumption of AI computing

The massive energy demand of data centers for AI training is a major challenge, addressed by improving grid-to-core power efficiency and offloading computation to the edge.
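
The arithmetic behind that concern is straightforward. The sketch below estimates the energy of a hypothetical training run; every number in it is an assumption chosen for illustration.

```python
# Rough, illustrative energy estimate for a training run; the numbers are
# assumptions for the sake of the arithmetic, not figures from the talk.

gpus = 4096                 # assumed accelerator count
watts_per_gpu = 700         # assumed average board power draw in watts
days = 30                   # assumed training duration
pue = 1.2                   # power usage effectiveness: facility overhead on top of IT load

it_energy_mwh = gpus * watts_per_gpu * 24 * days / 1e6   # megawatt-hours at the chips
facility_energy_mwh = it_energy_mwh * pue                # "grid-to-core" view

print(f"IT load:       {it_energy_mwh:,.0f} MWh")
print(f"With overhead: {facility_energy_mwh:,.0f} MWh")
```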

#5 · about 1 minute

Key security considerations for AI systems and edge devices

Securing AI systems involves a multi-layered approach including secure boot, safe updates, certificate management, and ensuring the trustworthiness of the AI models themselves.
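
Secure boot and certificate handling sit below the application layer, but one small piece of the chain can be shown directly: refusing to load a model artifact that cannot be verified. A minimal sketch, with a placeholder file name and digest:

```python
# Refuse to load a model whose SHA-256 digest does not match a pinned, trusted value.
# The file name and the digest below are placeholders.
import hashlib
import hmac

TRUSTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model(path: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    # constant-time comparison avoids leaking how many characters matched
    return hmac.compare_digest(h.hexdigest(), TRUSTED_SHA256)

if verify_model("model_int8.tflite"):
    print("model digest OK; safe to load")
else:
    raise RuntimeError("model failed integrity check; refusing to load")
```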

#6 · about 5 minutes

Leveraging open software and AI for code development

Open software stacks enable hardware choice, while development tools and large language models can be used to automatically optimize code for better performance on specific platforms.
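
As a generic illustration of the kind of rewrite such tools suggest (not the specific tooling discussed in the talk), the sketch below replaces an interpreted Python loop with a vectorized call that lands in a platform-tuned native kernel.

```python
# Same dot product, two ways: an interpreted Python loop versus a single call
# into an optimized BLAS/SIMD path provided by NumPy.
import numpy as np

def dot_naive(a, b):
    total = 0.0
    for x, y in zip(a, b):      # one interpreted iteration per element
        total += x * y
    return total

def dot_vectorized(a, b):
    return float(np.dot(a, b))  # single call into an optimized native kernel

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
assert np.isclose(dot_naive(a, b), dot_vectorized(a, b))
```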

#7 · about 8 minutes

Exploring future computing architectures and industry collaboration

The future of computing will be shaped by power efficiency challenges, leading to innovations in materials like silicon carbide, alternative architectures like neuromorphic computing, and cross-industry partnerships.
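
Neuromorphic hardware computes with spiking neurons rather than dense matrix math. A toy leaky integrate-and-fire neuron in plain Python, included here only to convey the idea:

```python
# A minimal leaky integrate-and-fire (LIF) neuron: it accumulates input current,
# leaks charge over time, and emits a spike when its potential crosses a threshold.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    v = 0.0                       # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current    # integrate input, with exponential leak
        if v >= threshold:        # fire when the threshold is crossed...
            spikes.append(1)
            v = 0.0               # ...then reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # -> [0, 0, 1, 0, 0, 1]
```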

#8 · about 3 minutes

Balancing distributed edge AI with centralized cloud computing

A hybrid architecture that balances local processing on the edge with centralized cloud resources is the most practical approach for AI, optimizing for latency, power, and data privacy based on the specific use case.
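
A hypothetical sketch of such a routing decision; the fields and thresholds are illustrative assumptions, not rules from the talk.

```python
# Decide per request whether to run inference on the local edge device or in the cloud.
from dataclasses import dataclass

@dataclass
class Request:
    latency_budget_ms: float    # how quickly the caller needs an answer
    contains_pii: bool          # privacy-sensitive payloads must stay local
    model_fits_on_device: bool  # can the edge accelerator hold the model?

def choose_target(req: Request, network_rtt_ms: float = 60.0) -> str:
    if req.contains_pii:
        return "edge"                           # keep private data on-device
    if req.latency_budget_ms < network_rtt_ms:
        return "edge"                           # the round trip alone would blow the budget
    if not req.model_fits_on_device:
        return "cloud"                          # fall back to centralized capacity
    return "edge"                               # default: save bandwidth and cloud power

print(choose_target(Request(latency_budget_ms=20, contains_pii=False, model_fits_on_device=True)))
```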
