Ankit Patel
WWC24 - Ankit Patel - Unlocking the Future: Breakthrough Application Performance and Capabilities with NVIDIA
#1 · about 3 minutes
Understanding accelerated computing and GPU parallelism
Accelerated computing offloads parallelizable tasks from the CPU to specialized GPU cores, executing them simultaneously for a massive speedup.
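The talk itself shows no code, but the idea is easy to sketch. Below, CuPy (a GPU array library not named in the talk) stands in for the pattern: the same array math runs on CPU cores via NumPy and across thousands of GPU cores via CuPy.

```python
# A minimal sketch of offloading a parallel workload to the GPU.
# CuPy mirrors the NumPy API but executes on NVIDIA GPUs
# (requires a CUDA-capable GPU and e.g. `pip install cupy-cuda12x`).
import numpy as np
import cupy as cp

n = 10_000_000

# CPU: NumPy evaluates the elementwise math on CPU cores.
x_cpu = np.random.rand(n)
y_cpu = np.sqrt(x_cpu) * 2.0

# GPU: the same elementwise math runs in parallel on GPU cores.
x_gpu = cp.random.rand(n)
y_gpu = cp.sqrt(x_gpu) * 2.0

# Results stay on the device until explicitly copied back to the host.
print(cp.asnumpy(y_gpu)[:5], y_cpu[:5])
```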
#2 · about 2 minutes
Calculating the cost and power savings of GPUs
While a GPU-accelerated system costs more upfront, it can replace hundreds of CPU systems for parallel workloads, leading to significant cost and power savings.
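As a back-of-the-envelope illustration of that claim (every figure below is a hypothetical assumption, not a number from the talk):

```python
# Hypothetical cost/power comparison; all figures are illustrative
# assumptions, not numbers from the talk.
cpu_system_cost = 10_000   # USD per CPU server
cpu_system_power = 0.5     # kW per CPU server
gpu_system_cost = 100_000  # USD per GPU server
gpu_system_power = 4.0     # kW per GPU server
speedup = 100              # CPU servers replaced by one GPU server

cpu_fleet_cost = speedup * cpu_system_cost
cpu_fleet_power = speedup * cpu_system_power

print(f"Cost:  {speedup} CPU servers = ${cpu_fleet_cost:,} "
      f"vs 1 GPU server = ${gpu_system_cost:,}")
print(f"Power: {speedup} CPU servers = {cpu_fleet_power} kW "
      f"vs 1 GPU server = {gpu_system_power} kW")
# -> 100 CPU servers: $1,000,000 and 50 kW vs one GPU server: $100,000 and 4 kW
```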
#3 · about 4 minutes
Using NVIDIA libraries to easily accelerate applications
NVIDIA provides domain-specific libraries like cuDF that let developers accelerate existing code, such as pandas DataFrames, with minimal changes.
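A minimal sketch of cuDF's documented pandas accelerator usage pattern (the DataFrame contents are illustrative):

```python
# Near-zero-code-change acceleration of pandas via cuDF's pandas
# accelerator. Requires an NVIDIA GPU and a cuDF install
# (e.g. `pip install cudf-cu12`).
import cudf.pandas
cudf.pandas.install()  # must run before pandas is imported

import pandas as pd    # now backed by cuDF on the GPU where possible

df = pd.DataFrame({
    "key": ["a", "b", "a", "c"] * 1_000_000,
    "value": range(4_000_000),
})
# The groupby executes on the GPU, falling back to CPU pandas for
# any operations cuDF does not support.
print(df.groupby("key")["value"].mean())
```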
#4 · about 3 minutes
Shifting from traditional code to AI-powered logic
Modern AI development replaces complex, hard-coded logic with prompts to large language models, changing how developers implement functions like sentiment analysis.
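For example, where sentiment analysis once meant hand-written keyword rules, it can now be a single prompt. A minimal sketch using the OpenAI-compatible API style that NVIDIA's hosted endpoints also expose; the base URL and model name are illustrative assumptions:

```python
# Sentiment analysis as a prompt instead of hand-coded logic.
# The base_url and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://integrate.api.nvidia.com/v1",
                api_key="YOUR_API_KEY")

def sentiment(text: str) -> str:
    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text as "
                        "exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(sentiment("The keynote demo was incredible!"))  # -> "positive"
```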
#5 · about 3 minutes
Composing multiple AI models for complex tasks
Developers can now create sophisticated applications by chaining multiple AI models together, such as using a vision model's output to trigger an LLM that calls a tool.
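A hypothetical sketch of such a pipeline; all three functions are placeholders standing in for real model and tool calls, not an actual API:

```python
# Hypothetical sketch of chaining models: a vision model describes an
# image, an LLM decides whether to call a tool, and the tool runs.
# Every function below is an illustrative stub.

def describe_image(image_path: str) -> str:
    """Vision model stub: returns a caption for the image."""
    return "A delivery truck blocking a loading dock."

def llm_decide(caption: str) -> str:
    """LLM stub: maps the caption to an action."""
    return "alert" if "blocking" in caption else "ignore"

def send_alert(message: str) -> None:
    """Tool stub: notifies an operator."""
    print(f"ALERT: {message}")

caption = describe_image("dock_camera.jpg")  # model 1: vision
action = llm_decide(caption)                 # model 2: LLM reasoning
if action == "alert":
    send_alert(caption)                      # model output triggers a tool
```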
#6 · about 2 minutes
Deploying enterprise AI applications with NVIDIA NIM
NVIDIA NIM provides enterprise-grade microservices for deploying AI models with features like runtime optimization, stable APIs, and Kubernetes integration.
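NIM containers expose an OpenAI-compatible HTTP API. A minimal sketch of querying one, assuming a container is already serving locally on its usual default port; the model name is illustrative:

```python
# Minimal sketch of calling a NIM microservice running locally.
# Assumes the container is serving on port 8000 (the common default);
# the model name below is an illustrative assumption.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello from a NIM client"}],
        "max_tokens": 64,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```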
#7 · about 4 minutes
Accessing NVIDIA's developer programs and training
NVIDIA offers a developer program with access to libraries, NIMs for local development, and free training courses through the Deep Learning Institute.