Vision for Websites: Training Your Frontend to See
Build web apps that see. Learn how to implement powerful visual search with vector embeddings in just a few lines of code.
#1 · about 1 minute
Defining vision as the ability to deduce and understand
The concept of vision for websites is redefined from simply seeing to the ability to deduce, understand, and act on information.
#2 · about 4 minutes
Demo of a multimodal e-commerce search application
A live demonstration showcases an e-commerce store where users can search for products using both text queries and by uploading images.
#3 · about 2 minutes
What is multimodality in artificial intelligence?
Multimodality enables search queries to use multiple media types like text, images, and audio to capture more context and improve user interaction.
#4 · about 2 minutes
Why multimodal AI creates richer user experiences
Multimodal interfaces provide more natural and context-aware interactions, moving beyond simple keyword searches to a more intuitive experience.
#5 · about 4 minutes
Differentiating generative AI from embedding models
Embedding models encapsulate information into numerical representations (vectors), unlike generative models which create new data.
#6 · about 4 minutes
How vector search works by measuring distance
Vector search operates by converting a query into an embedding and finding the closest, most semantically similar items in a multidimensional space.
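The "closest item" idea can be made concrete with a short sketch. This hedged TypeScript example (the function names and toy vectors are illustrative, not from the talk) computes cosine similarity between embeddings and ranks stored items against a query embedding by brute force:

```typescript
// Cosine similarity: 1 means same direction (semantically close), 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank items by similarity to a query embedding (brute-force nearest neighbour).
function nearest(
  query: number[],
  items: { name: string; vector: number[] }[],
  k = 3
): string[] {
  return [...items]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector)
    )
    .slice(0, k)
    .map((item) => item.name);
}
```

A vector database such as Weaviate does not scan every item like this; it uses an approximate index (e.g. HNSW) to make the nearest-neighbour lookup fast at scale, but the distance measure is the same idea.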
#7 · about 2 minutes
Creating a unified space for multimodal search
Different data types like text, images, and audio are processed by specific encoders and plotted into a single, unified vector space for cross-modal queries.
#8 · about 9 minutes
Implementing text-based image search with Weaviate
A code walkthrough demonstrates how to build a text-to-image search feature using a Next.js frontend and a Weaviate backend with a `nearText` query.
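As a minimal sketch of the backend side of that text-to-image search, the snippet below posts a `nearText` query straight to Weaviate's GraphQL endpoint rather than using a client library. The collection name (`Product`), its fields, and the localhost URL are assumptions for illustration, not the talk's exact code:

```typescript
// Build a GraphQL nearText query for Weaviate's /v1/graphql endpoint.
// The "Product" collection and its fields are assumptions for this sketch.
function buildNearTextQuery(concept: string, limit = 5): string {
  return `{
    Get {
      Product(nearText: { concepts: ["${concept}"] }, limit: ${limit}) {
        name
        image
      }
    }
  }`;
}

// Send the query to a (hypothetical) local Weaviate instance.
async function searchByText(concept: string) {
  const res = await fetch("http://localhost:8080/v1/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: buildNearTextQuery(concept) }),
  });
  const json = await res.json();
  return json.data?.Get?.Product ?? [];
}
```

In a Next.js app this would typically live in an API route, so the browser sends the user's text query to your server and never talks to Weaviate directly.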
#9 · about 4 minutes
Implementing visual search with an image query
The code for an image-to-image search is explained, showing how a base64 image is sent to the backend to perform a `nearImage` vector search.
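The image-to-image path looks almost identical: the query clause swaps `nearText` for `nearImage` and carries the uploaded file as base64. Another hedged sketch, again assuming a local Weaviate instance (with an image-vectorizer module enabled) and an illustrative `Product` collection:

```typescript
// Build a nearImage GraphQL query; `base64Image` is the uploaded file,
// stripped of any "data:image/...;base64," prefix the browser may add.
function buildNearImageQuery(base64Image: string, limit = 5): string {
  const image = base64Image.replace(/^data:image\/\w+;base64,/, "");
  return `{
    Get {
      Product(nearImage: { image: "${image}" }, limit: ${limit}) {
        name
        image
      }
    }
  }`;
}

// Post the query to a (hypothetical) local Weaviate instance.
async function searchByImage(base64Image: string) {
  const res = await fetch("http://localhost:8080/v1/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: buildNearImageQuery(base64Image) }),
  });
  const json = await res.json();
  return json.data?.Get?.Product ?? [];
}
```

The prefix-stripping matters in practice: a `<input type="file">` read via `FileReader.readAsDataURL` yields a data URL, while Weaviate expects the raw base64 payload.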
#10 · about 2 minutes
Expanding vision to other creative applications
Beyond e-commerce, multimodal vision can be applied to creative use cases like movie recommenders, educational tools, and map navigation.