Ankit Patel
WWC24 - Ankit Patel - Unlocking the Future: Breakthrough Application Performance and Capabilities with NVIDIA
#1 · about 3 minutes
Understanding accelerated computing and GPU parallelism
Accelerated computing offloads parallelizable tasks from the CPU to specialized GPU cores, executing them simultaneously for a massive speedup.
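The pattern behind that speedup is data parallelism: the same operation applied independently to many elements can be split across workers. A minimal pure-Python sketch of the idea (all names here are illustrative, not from the talk); a real GPU runs thousands of such lanes in hardware rather than a handful of threads:

```python
from concurrent.futures import ThreadPoolExecutor

def scale(chunk, factor=2):
    # The per-element "kernel": no element depends on another,
    # so chunks can execute simultaneously.
    return [x * factor for x in chunk]

def parallel_scale(data, workers=4):
    # Split the input into chunks and map the kernel over all
    # chunks concurrently -- the offload pattern in miniature.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(scale, chunks)
    return [x for chunk in results for x in chunk]

print(parallel_scale(list(range(8))))  # each element doubled, order preserved
```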
#2 · about 2 minutes
Calculating the cost and power savings of GPUs
While a GPU-accelerated system costs more upfront, it can replace hundreds of CPU systems for parallel workloads, leading to significant cost and power savings.
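A back-of-the-envelope version of that comparison, with purely illustrative figures (the counts, prices, and wattages below are assumptions, not numbers from the talk):

```python
# Assumed workload: one GPU server replaces many CPU servers.
cpu_servers  = 100      # CPU boxes needed for the parallel workload (assumed)
cpu_cost     = 10_000   # USD per CPU server (assumed)
cpu_power_kw = 0.5      # kW per CPU server (assumed)

gpu_servers  = 1
gpu_cost     = 150_000  # USD per GPU server (assumed, pricier per unit)
gpu_power_kw = 3.0      # kW per GPU server (assumed)

cpu_total_cost = cpu_servers * cpu_cost   # 1,000,000 USD
gpu_total_cost = gpu_servers * gpu_cost   # 150,000 USD
cost_savings   = cpu_total_cost - gpu_total_cost

cpu_total_power = cpu_servers * cpu_power_kw  # 50 kW
gpu_total_power = gpu_servers * gpu_power_kw  # 3 kW

print(f"cost savings: ${cost_savings:,}")
print(f"power: {cpu_total_power} kW (CPU fleet) vs {gpu_total_power} kW (GPU)")
```

The point is not the specific figures but the shape of the trade: a higher unit price amortized over many replaced systems.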
#3 · about 4 minutes
Using NVIDIA libraries to easily accelerate applications
NVIDIA provides domain-specific libraries like cuDF that allow developers to accelerate their code, such as pandas dataframes, with minimal changes.
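As a concrete sketch of "minimal changes": the snippet below is unchanged pandas code (the data is made up). Launching it as `python -m cudf.pandas script.py`, or loading `%load_ext cudf.pandas` in Jupyter, lets cuDF execute supported operations on the GPU without editing the code itself:

```python
import pandas as pd  # same import whether or not cudf.pandas is active

df = pd.DataFrame({
    "city":  ["Berlin", "Ulm", "Berlin", "Ulm"],
    "sales": [10, 20, 30, 40],
})
# A groupby-aggregate like this is the kind of operation cuDF accelerates.
totals = df.groupby("city")["sales"].sum()
print(totals["Berlin"], totals["Ulm"])  # 40 60
```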
#4 · about 3 minutes
Shifting from traditional code to AI-powered logic
Modern AI development replaces complex, hard-coded logic with prompts to large language models, changing how developers implement functions like sentiment analysis.
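The shift can be sketched side by side. `call_llm` below is a hypothetical stub standing in for a real model endpoint; the contrast is between hand-maintained rules and logic expressed as a prompt:

```python
# Before: brittle hand-coded logic that must enumerate every case.
NEGATIVE_WORDS = {"bad", "slow", "broken"}

def sentiment_rules(text: str) -> str:
    words = set(text.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "positive"

# After: the "logic" becomes a prompt; the model does the classification.
def sentiment_llm(text: str, call_llm=lambda prompt: "negative") -> str:
    # call_llm is a STUB here; a real implementation would send the
    # prompt to an LLM endpoint and return its answer.
    prompt = (
        "Classify the sentiment of the following review as "
        f"'positive' or 'negative'. Review: {text}"
    )
    return call_llm(prompt)

review = "The app is slow and broken."
print(sentiment_rules(review), sentiment_llm(review))  # negative negative
```

The rules version breaks on phrasing it has never seen; the prompt version delegates that generalization to the model.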
#5 · about 3 minutes
Composing multiple AI models for complex tasks
Developers can now create sophisticated applications by chaining multiple AI models together, such as using a vision model's output to trigger an LLM that calls a tool.
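The wiring of such a chain can be sketched with stand-ins. Each function below is a hypothetical stub for a real model or tool, connected the way the summary describes: a vision model's output feeds an LLM, which decides whether to invoke a tool:

```python
def vision_model(image) -> str:
    # Stub: a real vision model would describe the image contents.
    return "a person standing at the front door"

def llm_decide(description: str) -> str:
    # Stub: a real LLM would be prompted with the description and
    # asked which tool, if any, to call.
    return "send_notification" if "person" in description else "no_action"

def send_notification(message: str) -> dict:
    # Stub tool: a real one might call a messaging API.
    return {"tool": "send_notification", "message": message}

def pipeline(image):
    # Chain: vision output -> LLM decision -> tool call.
    description = vision_model(image)
    action = llm_decide(description)
    if action == "send_notification":
        return send_notification(f"Detected: {description}")
    return {"tool": "none"}

result = pipeline(image=None)
print(result["tool"])  # send_notification
```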
#6 · about 2 minutes
Deploying enterprise AI applications with NVIDIA NIM
NVIDIA NIM provides enterprise-grade microservices for deploying AI models with features like runtime optimization, stable APIs, and Kubernetes integration.
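One consequence of those stable APIs is that a deployed NIM speaks an OpenAI-compatible HTTP interface. The sketch below only constructs the request a client would POST to a locally running NIM; the endpoint port and model id are assumptions for illustration, and nothing is actually sent:

```python
import json

# Typical local endpoint for a NIM container (port is an ASSUMPTION).
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example model id (assumed)
    "messages": [{"role": "user", "content": "Summarize GPU parallelism."}],
    "max_tokens": 128,
}
body = json.dumps(payload)
print(NIM_URL, len(body) > 0)
# Sending it would be e.g.:
# requests.post(NIM_URL, data=body, headers={"Content-Type": "application/json"})
```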
#7 · about 4 minutes
Accessing NVIDIA's developer programs and training
NVIDIA offers a developer program with access to libraries, NIMs for local development, and free training courses through the Deep Learning Institute.
Related jobs
Jobs that call for the skills explored in this talk.
CARIAD
Berlin, Germany
Junior
Intermediate
Python
C++
+1
Wilken GmbH
Ulm, Germany
Senior
Amazon Web Services (AWS)
Kubernetes
+1
Matching moments
05:12 MIN
Boosting Python performance with the NVIDIA CUDA ecosystem
The weekly developer show: Boosting Python with CUDA, CSS Updates & Navigating New Tech Stacks
01:40 MIN
The rise of general-purpose GPU computing
Accelerating Python on GPUs
01:04 MIN
NVIDIA's platform for the end-to-end AI workflow
Trends, Challenges and Best Practices for AI at the Edge
01:37 MIN
Introduction to large-scale AI infrastructure challenges
Your Next AI Needs 10,000 GPUs. Now What?
01:57 MIN
Highlighting impactful contributions and the rise of open models
Open Source: The Engine of Innovation in the Digital Age
03:22 MIN
Using NVIDIA's full-stack platform for developers
Pioneering AI Assistants in Banking
02:21 MIN
How GPUs evolved from graphics to AI powerhouses
Accelerating Python on GPUs
02:44 MIN
Key milestones in the evolution of AI and GPU computing
AI Factories at Scale
Related Videos
How AI Models Get Smarter
Ankit Patel
Multimodal Generative AI Demystified
Ekaterina Sirazitdinova
Your Next AI Needs 10,000 GPUs. Now What?
Anshul Jindal & Martin Piercy
Generative AI power on the web: making web apps smarter with WebGPU and WebNN
Christian Liebel
Accelerating Python on GPUs
Paul Graham
The Future of Computing: AI Technologies in the Exascale Era
Stephan Gillich, Tomislav Tipurić, Christian Wiebus & Alan Southall
Trends, Challenges and Best Practices for AI at the Edge
Ekaterina Sirazitdinova
AI Factories at Scale
Thomas Schmidt
From learning to earning
Microsoft
Cambridge, United Kingdom
C++
Python
Machine Learning

AKDB Anstalt für kommunale Datenverarbeitung in Bayern
Köln, Germany
DevOps
Python
Docker
Terraform
Kubernetes
+2


Advanced Group
München, Germany
Remote
API
C++
Python
OpenGL
+6

Nvidia
Bramley, United Kingdom
£221K
Senior
C++
Azure
Linux
Python
+10

Trinamics
Utrecht, Netherlands
€3-6K
C++
Machine Learning