How Gatsby Cloud's real-time streaming architecture drives <5 second builds
What if your content updates could go live in under two seconds? Discover the streaming architecture that moves beyond the limits of traditional SSG and SSR.
#1 · about 5 minutes
Understanding how to maintain and update derived data
HTML is a form of derived data that must be updated whenever its source content or code changes.
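A minimal sketch of the idea, assuming a hypothetical `renderPage` helper (not a Gatsby API): the HTML is a pure function of its source content, so any change to the source makes the derived output stale until it is re-derived.

```typescript
// HTML as derived data: the page output is a pure function of its source.
// `Post` and `renderPage` are illustrative names, not part of any real API.
type Post = { title: string; body: string };

function renderPage(post: Post): string {
  return `<article><h1>${post.title}</h1><p>${post.body}</p></article>`;
}

let post: Post = { title: "Hello", body: "First draft" };
let html = renderPage(post);

// When the source changes, the derived HTML must be regenerated:
post = { ...post, body: "Final draft" };
html = renderPage(post);
```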
#2 · about 6 minutes
Comparing batch processing and application caching models
Batch processing is efficient for large datasets but has high latency, while application caching offers fresher data at a higher computational cost.
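The trade-off can be sketched with two hypothetical helpers (illustrative names, not a real framework API): a batch pass that derives every page up front, and an on-demand path that computes lazily and memoizes per request.

```typescript
// Batch model: one pass over the whole dataset -- efficient per item,
// but the output is stale until the next full run.
// Caching model: compute on first request -- fresher data, but compute
// cost is paid at request time and on every cache miss.
const source = new Map([["a", "Alpha"], ["b", "Beta"]]);
const render = (content: string) => `<p>${content}</p>`;

// Batch (SSG-like): derive everything up front.
function buildAll(): Map<string, string> {
  const out = new Map<string, string>();
  for (const [slug, content] of source) out.set(slug, render(content));
  return out;
}

// Caching (SSR-like): derive lazily, memoize the result.
const responseCache = new Map<string, string>();
function serve(slug: string): string {
  if (!responseCache.has(slug)) responseCache.set(slug, render(source.get(slug)!));
  return responseCache.get(slug)!;
}
```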
#3 · about 8 minutes
Mapping batch processing to SSG and caching to SSR
Static site generators (SSG) follow a batch processing model, while server-side rendering (SSR) uses on-demand generation with application caching.
#4 · about 5 minutes
Learning from database indexes and stream processing
Database indexes provide a model for maintaining derived data by reacting directly and efficiently to individual data change events.
#5
Declaring data dependencies in Gatsby with GraphQL
Gatsby models the flow of data from a CMS through nodes and pages, using GraphQL to declare dependencies for automatic and precise updates.
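The index-like model described above can be sketched as a dependency map from data nodes to the pages that query them; a single change event then dirties only the affected pages. The names here (`recordDependency`, `onNodeChanged`) are illustrative, not Gatsby internals.

```typescript
// Track which pages read which data nodes, so a change event can be
// translated directly into the minimal set of pages to rebuild.
const dependsOn = new Map<string, Set<string>>(); // nodeId -> page paths

function recordDependency(nodeId: string, pagePath: string) {
  if (!dependsOn.has(nodeId)) dependsOn.set(nodeId, new Set());
  dependsOn.get(nodeId)!.add(pagePath);
}

function onNodeChanged(nodeId: string): string[] {
  // Only pages that queried this node need to be regenerated.
  return [...(dependsOn.get(nodeId) ?? [])];
}

recordDependency("post-1", "/blog/hello");
recordDependency("post-1", "/blog/");
recordDependency("post-2", "/blog/");

// Editing post-1 in the CMS dirties exactly two pages, not the whole site.
const dirty = onNodeChanged("post-1");
```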
#6 · about 4 minutes
Demonstrating sub-second builds with Gatsby Cloud
A live demonstration shows how Gatsby Cloud rebuilds and deploys a site in under two seconds by reacting instantly to a CMS content change.
#7 · about 3 minutes
Analyzing the scaling challenges of SSG and SSR
As sites grow, SSGs become too slow for updates, while SSR faces risks from traffic spikes and requires over-provisioning or serving stale data.
#8 · about 2 minutes
Using frameworks for automatic cache invalidation
Frameworks with first-class data models like Gatsby or Drupal can automate cache invalidation, which is faster and less error-prone than manual or TTL-based approaches.
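A hedged sketch of why event-driven invalidation beats TTLs: with a first-class data model the framework knows exactly which cached entries a change touches, instead of guessing with an expiry window. All names here are illustrative, not Gatsby or Drupal APIs.

```typescript
// Event-driven cache invalidation: a change event names the paths it
// affects, so eviction is precise -- unrelated entries stay warm.
const pageCache = new Map<string, { html: string }>();

function put(path: string, html: string) {
  pageCache.set(path, { html });
}

// Contrast with TTL: no guessing at an expiry window; the framework's
// data model tells us exactly what to evict, immediately.
function invalidate(paths: string[]) {
  for (const p of paths) pageCache.delete(p);
}

put("/blog/hello", "<p>v1</p>");
put("/about", "<p>about</p>");
invalidate(["/blog/hello"]); // the /about entry survives untouched
```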
#9 · about 5 minutes
Scaling Gatsby builds with vertical and horizontal scaling
Gatsby improves initial build times by first scaling vertically to use all CPU cores on one machine, and then horizontally by distributing work across multiple machines.
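The scaling idea above can be sketched as partitioning the page list into chunks, one per worker: vertically the workers map to CPU cores on one machine, horizontally to separate build machines. The worker count and round-robin chunking scheme are assumptions for illustration, not Gatsby Cloud's actual scheduler.

```typescript
// Split pages across N workers so chunks can be rendered in parallel
// (e.g. worker_threads locally, or separate machines in a cloud build).
const WORKERS = 4; // assumed worker count for this sketch

function partition<T>(items: T[], workers: number): T[][] {
  const chunks: T[][] = Array.from({ length: workers }, () => []);
  items.forEach((item, i) => chunks[i % workers].push(item));
  return chunks;
}

const pages = Array.from({ length: 10 }, (_, i) => `/page-${i}`);
const chunks = partition(pages, WORKERS);
```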