Forget massive LLMs. Mozilla's Tarek Ziadé argues for smaller, specialized AI models that run privately right in your browser.
#1 · about 2 minutes
The evolving role of the machine learning engineer
The definition of an ML engineer has expanded from a purely scientific role to include developers from diverse backgrounds leveraging new tools.
#2 · about 4 minutes
How Python became the dominant language for AI
Python's flexible programming paradigms and rich scientific ecosystem made it the preferred language for researchers and ML practitioners.
#3 · about 3 minutes
Using AI to overcome challenges in systems programming
AI-powered code generation can help developers adopt safer, more performant languages like Rust by lowering the barrier to entry.
#4 · about 9 minutes
Integrating AI into Firefox while respecting user privacy
Firefox faces the challenge of adding AI features like the "AI Window" while catering to a user base that is highly sensitive to privacy.
#5 · about 3 minutes
The importance of client-side encryption for AI features
To protect user data, AI features should use client-side encryption to ensure that even servers processing the data cannot read it.
#6 · about 4 minutes
The hardware requirements for running LLMs locally
Running large language models locally requires expensive, specialized hardware like modern GPUs, creating a barrier for widespread adoption.
#7 · about 5 minutes
Why specialized models outperform generalist LLMs
Small, specialized models trained for a single task using techniques like distillation can be more accurate and efficient than large, general-purpose models.
#8 · about 4 minutes
How AI code generators have become more reliable
The reliability of AI code generators has significantly improved as models are now trained on more current and higher-quality datasets.
#9 · about 7 minutes
The security challenges of building AI browser agents
Creating standards for AI agents that can perform actions in the browser requires solving complex security and user permission challenges.
#10 · about 8 minutes
Prompt injection as an unsolved AI security problem
The statistical nature of LLMs makes it nearly impossible to guarantee what they will output, leaving prompt injection as a fundamental and unsolved security risk.
#11 · about 4 minutes
Building an open source community around AI models
The future of open source AI depends on creating a community around shared models and high-quality, unbiased datasets on platforms like Hugging Face.