Christian Liebel

Prompt API & WebNN: The AI Revolution Right in Your Browser

What if you could run generative AI directly in the browser, no cloud required? Learn how new APIs unlock private, offline-first AI experiences.

#1 · about 3 minutes

The case for running AI models locally

Cloud-based AI comes with drawbacks such as no offline support, capacity limits, data privacy concerns, and subscription costs, which creates an opportunity for local, on-device models.

#2 · about 2 minutes

Two primary approaches for browser-based AI

The W3C is exploring two main approaches to on-device AI: "Bring Your Own AI", where libraries such as WebLLM ship their own models and run them on low-level APIs like WebGPU and WebNN, and experimental "Built-in AI" APIs such as the Prompt API, which use a model managed by the browser itself.

#3 · about 3 minutes

Running large language models with WebLLM

The WebLLM library downloads open-weight large language models into the browser's cache storage and runs them on the GPU via WebGPU, enabling offline chat and in-browser data processing.
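A minimal sketch of what this looks like with WebLLM's OpenAI-compatible API; the model ID and progress logging are illustrative, not the exact code from the talk:

```ts
// Sketch: download an open-weight model into the browser cache and chat
// with it fully on-device. The model ID is an example from WebLLM's
// prebuilt list; swap in any supported model.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function runLocalChat(): Promise<void> {
  // First visit downloads the weights and compiles WebGPU shaders;
  // later visits load straight from cache storage and work offline.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat completion, executed locally via WebGPU.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Summarize WebGPU in one sentence." }],
  });
  console.log(reply.choices[0].message.content);
}

runLocalChat();
```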

#4 · about 1 minute

Solving the model size and storage problem

Because browser storage is isolated per origin, every website has to download and store its own copy of a multi-gigabyte model; the proposed Cross Origin Storage API would allow such models to be shared across different websites.
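Until such an API exists, a site can at least check its quota before committing to a multi-gigabyte download. A small sketch using the standard Storage API (the 4 GB threshold is an arbitrary example value):

```ts
// Sketch: check available origin storage before downloading model weights.
async function canStoreModel(requiredBytes = 4 * 1024 ** 3): Promise<boolean> {
  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  const available = quota - usage;
  console.log(`~${(available / 1024 ** 3).toFixed(1)} GB of quota available`);

  // Optionally ask the browser not to evict the cached weights under
  // storage pressure (the user agent may still decline).
  await navigator.storage.persist();

  return available >= requiredBytes;
}
```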

#5 · about 2 minutes

Exploring diverse ML workloads with Transformers.js

The Transformers.js library enables various on-device machine learning tasks beyond text generation, such as computer vision and audio processing, as shown in a sketch recognition game.
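For the computer-vision case, a minimal sketch with Transformers.js; the talk's sketch-recognition game uses its own model, so the task's default image-classification model stands in here and the image URL is a placeholder:

```ts
// Sketch: on-device image classification with Transformers.js.
import { pipeline } from "@huggingface/transformers";

async function classifyImage(url: string): Promise<void> {
  // Downloads and caches an ONNX model, then runs inference in the
  // browser (WebGPU or WASM backend, depending on availability).
  const classifier = await pipeline("image-classification");
  const predictions = await classifier(url);
  console.log(predictions); // e.g. [{ label: "tabby cat", score: 0.93 }, ...]
}

classifyImage("https://example.com/cat.jpg");
```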

#6 · about 4 minutes

Accelerating performance with the WebNN API

The upcoming Web Neural Network (WebNN) API provides direct access to specialized hardware like NPUs, offering a significant performance increase for ML tasks compared to CPU or GPU processing.
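WebNN is still a draft and its surface keeps shifting (descriptor fields and execution methods differ between spec revisions and browser builds), so the following is only a sketch of the programming model: build a small graph, then run it on whichever backend the browser selects. Type declarations are not yet part of TypeScript's DOM lib.

```ts
// Sketch: add two float32 tensors with WebNN. Real workloads build graphs
// with thousands of such operations, which the browser maps to CPU, GPU,
// or NPU. Exact option and method names may differ in your browser build.
declare const MLGraphBuilder: any; // WebNN types are not in lib.dom.d.ts yet

async function addTensors(): Promise<void> {
  // Request an accelerator; the browser may fall back to CPU or GPU.
  const context = await (navigator as any).ml.createContext({ deviceType: "npu" });
  const builder = new MLGraphBuilder(context);

  const desc = { dataType: "float32", shape: [4] };
  const a = builder.input("a", desc);
  const b = builder.input("b", desc);
  const c = builder.add(a, b); // one node in the compute graph

  const graph = await builder.build({ c });
  const result = await context.compute(
    graph,
    { a: new Float32Array([1, 2, 3, 4]), b: new Float32Array([5, 6, 7, 8]) },
    { c: new Float32Array(4) }
  );
  console.log(result.outputs.c); // Float32Array [6, 8, 10, 12]
}
```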

#7 · about 3 minutes

The alternative: Built-in AI and the Prompt API

Google Chrome's experimental built-in AI initiative addresses the model-sharing and performance problems by exposing task-specific APIs backed by a single, browser-managed model such as Gemini Nano.
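Feature detection matters here because the APIs sit behind flags or origin trials and their global names have changed over time; a sketch assuming the global-class shape used in recent Chrome builds:

```ts
// Sketch: detect the Prompt API and trigger the one-time download of the
// shared, browser-managed model. API shape follows the current explainer
// and may differ in older Chrome builds.
declare const LanguageModel: any; // not yet in TypeScript's DOM lib

async function ensureLocalModel(): Promise<boolean> {
  if (!("LanguageModel" in self)) return false; // no built-in AI support

  const availability = await LanguageModel.availability();
  if (availability === "unavailable") return false;

  if (availability !== "available") {
    // Creating a session starts the download of the shared model
    // (e.g. Gemini Nano); progress is reported via "downloadprogress".
    await LanguageModel.create({
      monitor(m: EventTarget) {
        m.addEventListener("downloadprogress", (e: any) =>
          console.log(`Model download: ${Math.round(e.loaded * 100)}%`)
        );
      },
    });
  }
  return true;
}
```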

#8 · about 4 minutes

Exploring the built-in AI API suite

A demonstration of the built-in AI APIs shows how to use the summarizer, language detector, and Prompt API for general LLM tasks directly from JavaScript in the browser.
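A condensed sketch of the three calls (experimental APIs; option names follow the current explainers and may change):

```ts
// Sketch: Summarizer, Language Detector, and Prompt API used from plain
// JavaScript. All inference happens on-device with the browser's model.
declare const Summarizer: any, LanguageDetector: any, LanguageModel: any;

async function builtInAiDemo(article: string): Promise<void> {
  // Task-specific summarization.
  const summarizer = await Summarizer.create({ type: "tldr", format: "plain-text" });
  console.log(await summarizer.summarize(article));

  // Language detection returns ranked candidates with confidence scores.
  const detector = await LanguageDetector.create();
  const [best] = await detector.detect(article);
  console.log(best.detectedLanguage, best.confidence);

  // Free-form prompting via the Prompt API.
  const session = await LanguageModel.create();
  console.log(await session.prompt("Write a haiku about on-device AI."));
}
```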

#9 · about 4 minutes

Practical use cases for on-device AI

On-device AI can enhance web applications with features like an offline-capable chatbot in an Angular app or a smart form filler that automatically categorizes and inputs user data.
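The form-filler idea boils down to asking the local model to map free-form input onto known form fields; a sketch using the experimental Prompt API, with hypothetical field names and prompt wording:

```ts
// Sketch: "smart form filler" using the experimental Prompt API.
// Field names, system prompt, and JSON handling are illustrative.
declare const LanguageModel: any;

interface ClaimForm { name?: string; date?: string; damageType?: string; }

async function fillForm(userText: string): Promise<ClaimForm> {
  const session = await LanguageModel.create({
    initialPrompts: [{
      role: "system",
      content:
        "Extract name, date, and damageType from the user's message. " +
        "Reply with JSON only; use null for missing fields.",
    }],
  });

  const raw = await session.prompt(userText);
  // In a real app, validate the JSON before writing it into the form.
  return JSON.parse(raw) as ClaimForm; // the data never leaves the device
}
```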

#10 · about 3 minutes

Building real-time conversational agents

Demonstrations of a multimodal insurance form assistant and a simple on-device conversational agent highlight the potential for creating interactive, real-time user experiences with local AI.

#11 · about 1 minute

Weighing the pros and cons of on-device AI

On-device AI offers significant advantages in privacy, availability, and cost, but developers must consider the trade-offs in model capability, response quality, and system requirements compared to cloud solutions.
