Behrad Babaee
Leveraging Moore’s Law: Optimising Database Performance
#1 · about 4 minutes
The evolution of Moore's Law and its impact on software
Moore's Law drove CPU speed increases until 2005, after which the industry shifted focus from single-core performance to multi-core scalability.
#2 · about 2 minutes
Comparing server hardware from 2006 to 2024
Modern servers have vastly more RAM and significantly faster storage compared to 2006, fundamentally changing the ratio of memory to disk.
#3 · about 7 minutes
Traditional database architecture and its reliance on caching
Databases designed when RAM was scarce treat the extra memory in modern hardware as a cache layered on top of their original disk-based architecture (a minimal read-path sketch follows the chapter list).
#4 · about 6 minutes
The problems and unpredictability of database caching
Caching leads to inconsistent performance across environments and does little for overall application latency once a request fans out into multiple parallel queries (a worked example follows the chapter list).
#5 · about 4 minutes
An alternative architecture with the index in RAM
A modern database design can use the abundant RAM to hold the entire index in memory, enabling direct, fast access to data on SSDs without a cache (a sketch follows the chapter list).
#6 · about 4 minutes
Achieving speed and efficiency without caching
By consuming fewer CPU cycles and performing less disk I/O, an index-in-RAM architecture delivers consistently fast performance and reduces infrastructure costs.
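To make chapter #3 concrete, here is a minimal read-path sketch (illustrative only, not the speaker's code) of a traditional disk-oriented database: an in-memory cache sits over a disk-resident file, and every miss falls through to the original disk code path. The names `BufferCache` and `read_page` are hypothetical.

```python
import os
from collections import OrderedDict

PAGE_SIZE = 8192  # a typical database page size

class BufferCache:
    """A tiny LRU page cache layered on top of a disk-resident data file.

    Illustrative only: real buffer pools add pinning, dirty-page tracking,
    write-back, and concurrency control.
    """

    def __init__(self, data_path, capacity_pages):
        self.fd = os.open(data_path, os.O_RDONLY)
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # page_id -> bytes, kept in LRU order

    def read_page(self, page_id):
        # Fast path: the page is already cached in RAM.
        if page_id in self.pages:
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        # Slow path: read the page from disk, then cache it.
        data = os.pread(self.fd, PAGE_SIZE, page_id * PAGE_SIZE)
        self.pages[page_id] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict the least recently used page
        return data
```

The extra RAM of a modern server only enlarges `capacity_pages`; the disk-era architecture underneath is unchanged, which is the layering the chapter describes.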
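Chapter #4's point about parallel queries follows from simple probability: a request that fans out into N parallel queries is only as fast as its slowest one, so even a high cache hit rate rarely keeps an entire request off the disk. The hit rate and query counts below are illustrative assumptions, not figures from the talk.

```python
# Probability that a request made of N parallel queries avoids the disk
# entirely, assuming each query independently hits the cache with
# probability `hit_rate`. Figures are illustrative.
hit_rate = 0.95

for n_queries in (1, 5, 10, 20):
    p_all_hit = hit_rate ** n_queries
    print(f"{n_queries:2d} parallel queries: P(no disk access) = {p_all_hit:.2f}")

# A 95% hit rate still leaves a 20-query request touching the disk about
# 64% of the time, so its end-to-end latency is governed by the misses.
```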
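For chapter #5, here is a minimal sketch, assuming the entire index fits in RAM: an in-memory dictionary maps each key to the offset and length of its record in an SSD-resident data file, so a read is one hash lookup plus one direct SSD access, with no cache layer in between. `KVStore` and its append-only file layout are hypothetical, not the design presented in the talk.

```python
import os

class KVStore:
    """Index-in-RAM sketch: keys live in memory, values live on the SSD.

    The in-memory index maps key -> (offset, length) in an append-only
    data file, so every read is one dict lookup plus one SSD read.
    """

    def __init__(self, data_path):
        self.fd = os.open(data_path, os.O_RDWR | os.O_CREAT | os.O_APPEND)
        self.index = {}  # key -> (offset, length), held entirely in RAM

    def put(self, key, value: bytes):
        offset = os.lseek(self.fd, 0, os.SEEK_END)
        os.write(self.fd, value)
        self.index[key] = (offset, len(value))

    def get(self, key) -> bytes:
        offset, length = self.index[key]          # RAM: one hash lookup
        return os.pread(self.fd, length, offset)  # SSD: one direct read
```

Because the read path never depends on what happens to be cached, latency stays uniform across environments, which is the consistency chapter #6 attributes to this approach.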
Matching moments
07:52 · The architectural advantage of a SQL-native design (Fault Tolerance and Consistency at Scale: Harnessing the Power of Distributed SQL Databases)
34:43 · Answering questions on Cube's architecture and use cases (Making Data Warehouses fast. A developer's story.)
04:52 · The critical need for performance in modern applications (In-Memory Computing - The Big Picture)
20:28 · The architecture and limitations of in-memory databases (In-Memory Computing - The Big Picture)
02:12 · Why GPU acceleration surpasses traditional CPU performance (Accelerating Python on GPUs)
15:25 · Using distributed caches to reduce database load (In-Memory Computing - The Big Picture)
07:27 · Understanding the fundamental speed of in-memory operations (In-Memory Computing - The Big Picture)
12:55 · Optimizing cache efficiency with a dedicated sharded layer (Scaling: from 0 to 20 million users)
Related Videos
In-Memory Computing - The Big Picture · Markus Kett
Database Magic behind 40 Million operations/s · Jürgen Pilz
Leveraging Real time data in FSIs · Tim Faulkes
How building an industry DBMS differs from building a research one · Markus Dreseler
Single Server, Global Reach: Running a Worldwide Marketplace on Bare Metal in a Cloud-Dominated World · Jens Happe
Scaling Databases · Tobias Petry
Things I learned while writing high-performance JavaScript applications · Michele Riva
Scaling: from 0 to 20 million users · Josip Stuhli
From learning to earning
Jobs that call for the skills explored in this talk.
Senior Software Engineer [TypeScript] (Prisma Postgres) · Prisma · Remote · Senior · Node.js, TypeScript, PostgreSQL



Senior Software Engineer Database as a Service (DBaaS) · ProfitBricks GmbH · Remote · Senior · Go, Bash, Scrum, Terraform, +3

Principal Backend Architect - Java Refactoring & Modernization · primion Technology GmbH · Remote · API, XML, Java, Spring, +3



