Stephen Jones
Coffee with Developers - Stephen Jones - NVIDIA
#1 about 2 minutes
Gaining perspective by using the products you build
Transitioning from a creator to a user of CUDA provides critical insights and humility by revealing the incorrect assumptions made during development.
#2 about 3 minutes
Understanding CUDA as a complete computing platform
CUDA has evolved from a low-level language into a comprehensive platform of compilers, libraries, and SDKs that enable GPU access for multiple languages.
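A quick way to see the "platform, not just a language" point is that Python code can drive the GPU entirely through CUDA-backed libraries. The sketch below is illustrative only, assuming a CUDA-capable GPU and the CuPy package (one of several possible bindings; the array size and math are arbitrary):

```python
# Illustrative sketch: using the GPU from Python via a CUDA-backed library (CuPy)
# rather than hand-written CUDA C++ kernels. Assumes a CUDA-capable GPU and an
# installed CuPy build matching the local CUDA toolkit (e.g. cupy-cuda12x).
import cupy as cp

x = cp.arange(1_000_000, dtype=cp.float32)  # array allocated in GPU memory
y = cp.sqrt(x) * 2.0                        # elementwise kernels launched under the hood
total = float(y.sum())                      # reduction runs on the GPU; scalar copied back

print(total)
```

The same division of labor shows up in the other language stacks the chapter mentions: the CUDA compilers and libraries do the GPU-level work while the host language stays familiar.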
#3 about 2 minutes
Supporting legacy languages like Fortran for scientific computing
CUDA supports languages like Fortran to accelerate existing codebases in supercomputing for fields such as physics and weather forecasting.
#4 about 4 minutes
Why Python became the dominant language for AI
Python's large ecosystem, developer productivity, and vast talent pool made it the de facto language for AI, creating new challenges for parallel computing platforms.
#5 about 3 minutes
The challenge of aligning long hardware and short software cycles
Developing new chips takes years of predictive work, making it difficult to keep pace with the rapidly changing demands of software, especially in AI.
#6 about 3 minutes
How unexpected user adoption drives technological evolution
Technology evolves organically as users find novel applications for existing tools, such as using gaming GPUs for scientific computing and AI.
#7 about 3 minutes
Why AI optimizations increase the demand for compute
Advances that make AI models cheaper or more efficient don't reduce overall compute demand; instead, they enable the creation of even larger and more powerful models.
#8 about 3 minutes
The end of Moore's Law is a power consumption problem
While transistor density still doubles, the power per transistor is not halving, creating a thermal and power delivery bottleneck for chip performance.
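The argument here is essentially the end of Dennard scaling: dynamic switching power goes roughly as P ≈ C·V²·f, and once supply voltage stops dropping with each process node, doubling the transistor count raises total power instead of holding it flat. The toy numbers below are illustrative assumptions, not measured silicon data:

```python
# Toy model of dynamic switching power, P ≈ activity * C * V^2 * f.
# All figures below are illustrative assumptions, not real chip measurements.

def switching_power(transistors, cap_per_transistor_f, voltage_v, freq_hz, activity=0.1):
    """Total dynamic power (watts) for a simplified chip model."""
    return activity * transistors * cap_per_transistor_f * voltage_v**2 * freq_hz

# Older node: 1e9 transistors at 1.0 V.
old = switching_power(1e9, 1.0e-16, 1.00, 2e9)

# Newer node: transistor count doubles and capacitance per transistor shrinks,
# but the supply voltage barely moves -- the post-Dennard situation described above.
new = switching_power(2e9, 0.7e-16, 0.95, 2e9)

print(f"old node ~{old:.0f} W, new node ~{new:.0f} W")  # power rises instead of staying flat
```

With voltage scaling, per-transistor power would halve and the total would stay constant; without it, the same doubling of transistors pushes the chip into a thermal and power-delivery wall.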
#9 about 6 minutes
The future of computing requires scaling out to data centers
Overcoming power limitations requires moving from single-chip optimization to building large, networked, data-center-scale systems with specialized hardware.
#10 about 4 minutes
The rise of neural and quantum computing paradigms
The future of computing will be a hybrid model combining classical, neural, and quantum approaches to solve complex problems using the best tool for each task.
#11 about 3 minutes
How developers can contribute to the open source CUDA ecosystem
While the low-level drivers are proprietary, the vast majority of CUDA's higher-level libraries, such as RAPIDS and CUTLASS, are open source and welcome community contributions.
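As one concrete illustration of that open-source layer, a pandas-style DataFrame from RAPIDS cuDF runs its operations on the GPU. This is a minimal sketch, assuming a CUDA-capable machine with the cudf package installed; the data values are made up:

```python
# Illustrative sketch of the RAPIDS open-source layer: cuDF exposes a pandas-like
# DataFrame whose operations execute on the GPU.
# Assumes a CUDA-capable GPU and the `cudf` package (typically installed via conda).
import cudf

df = cudf.DataFrame({
    "model": ["small", "medium", "large", "large"],
    "gpu_hours": [10.0, 120.0, 900.0, 1100.0],
})

# Group-by aggregation runs on the GPU; the result converts to a pandas object for printing.
summary = df.groupby("model")["gpu_hours"].mean()
print(summary.to_pandas())
```

Because libraries like this are developed in the open on GitHub, fixes and features can come from the community rather than only from NVIDIA.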
Related jobs
Jobs that call for the skills explored in this talk.
Wilken GmbH
Ulm, Germany
Senior
Kubernetes
AI Frameworks
+3
ROSEN Technology and Research Center GmbH
Osnabrück, Germany
Senior
TypeScript
React
+3
Matching moments
04:57 MIN
Increasing the value of talk recordings post-event
Cat Herding with Lions and Tigers - Christian Heilmann
01:32 MIN
Organizing a developer conference for 15,000 attendees
Cat Herding with Lions and Tigers - Christian Heilmann
04:49 MIN
Using content channels to build an event community
Cat Herding with Lions and Tigers - Christian Heilmann
03:28 MIN
Why corporate AI adoption lags behind the hype
What 2025 Taught Us: A Year-End Special with Hung Lee
02:44 MIN
Rapid-fire thoughts on the future of work
What 2025 Taught Us: A Year-End Special with Hung Lee
04:27 MIN
Moving beyond headcount to solve business problems
What 2025 Taught Us: A Year-End Special with Hung Lee
03:15 MIN
The future of recruiting beyond talent acquisition
What 2025 Taught Us: A Year-End Special with Hung Lee
03:39 MIN
Breaking down silos between HR, tech, and business
What 2025 Taught Us: A Year-End Special with Hung Lee
Related Videos
CUDA in Python
Andy Terrel
Accelerating Python on GPUs
Paul Graham
WWC24 - Ankit Patel - Unlocking the Future Breakthrough Application Performance and Capabilities with NVIDIA
Ankit Patel
The weekly developer show: Boosting Python with CUDA, CSS Updates & Navigating New Tech Stacks
Chris Heilmann, Daniel Cranney & Nicole Jeschko
Your Next AI Needs 10,000 GPUs. Now What?
Anshul Jindal & Martin Piercy
Engineering Mindset in the Age of AI - Gunnar Grosch, AWS
Gunnar Grosch
From learning to earning
Jobs that call for the skills explored in this talk.

Nvidia
Sheffield, United Kingdom
Machine Learning

Nvidia
Newcastle upon Tyne, United Kingdom
Machine Learning

Nvidia
Charing Cross, United Kingdom
Machine Learning

Nvidia
Glasgow, United Kingdom
Senior
C++
Python
PyTorch
Red Hat Enterprise Linux - RHEL

Nvidia
Newcastle upon Tyne, United Kingdom
Senior
C++
Python
PyTorch
Red Hat Enterprise Linux - RHEL