Merrill Lutsky
Evaluating AI models for code comprehension
#1 · about 2 minutes
The challenge of reviewing exponentially growing AI-generated code
The rapid increase in AI-generated code creates a significant bottleneck in the software development lifecycle, particularly in the code review stage.
#2 · about 3 minutes
How AI code generation strains the developer outer loop
While AI accelerates code writing (the inner loop), it overwhelms the critical outer loop processes of testing, reviewing, and deploying code.
#3 · about 1 minute
Introducing Diamond, an AI agent for automated code review
Graphite's AI agent, Diamond, acts as an always-on senior engineer within GitHub to summarize, prioritize, and review every code change.
#4 · about 3 minutes
Using comment acceptance rate to measure AI review quality
The primary metric for a successful AI reviewer is the acceptance rate of its comments, as every high-signal comment should result in a code change (a small computational sketch of this metric follows the chapter list below).
#5 · about 1 minute
Why evaluations are the key lever for LLM performance
Unlike in traditional machine learning, optimizing large language models relies heavily on a robust evaluation process rather than on other levers such as feature engineering.
#6 · about 2 minutes
A methodology for evaluating AI code comprehension models
Models are evaluated against a large dataset of pull requests using two core metrics: matched comment rate for recall and unmatched comment rate for noise (both rates are sketched in code after the chapter list).
#7 · about 3 minutes
A comparative analysis of GPT-4o, Opus, and Gemini
A detailed comparison reveals that models like GPT-4o excel at precision while Gemini has the best recall, but no single model wins on all metrics.
#8 · about 2 minutes
Evaluating Sonnet models and the problem of AI noise
Testing reveals that Sonnet 4 generates the most noise, making it less suitable for high-signal code review than its predecessors.
#9 · about 2 minutes
Why Sonnet 3.7 offers the best balance for code review
Sonnet 3.7 is the chosen model because it provides the best balance of strong reasoning, high recall of important issues, and a low rate of noisy comments.
#10 · about 3 minutes
The critical role of continuous evaluation for new models
The key to leveraging AI effectively is to constantly re-evaluate new models, as shown by preliminary tests on Grok 4, which revealed significant performance gaps.
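Below the outline, as referenced in chapter #4, is a minimal sketch of how a comment acceptance rate could be computed. The ReviewComment schema and the led_to_code_change flag are illustrative assumptions for this sketch, not Graphite's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    """One AI-generated review comment (illustrative schema, not Graphite's)."""
    pull_request: str
    body: str
    led_to_code_change: bool  # did the PR author act on this comment?

def acceptance_rate(comments: list[ReviewComment]) -> float:
    """Share of AI comments that resulted in a code change."""
    if not comments:
        return 0.0
    accepted = sum(c.led_to_code_change for c in comments)
    return accepted / len(comments)

# Example: 3 of 4 comments were acted on, so the rate is 0.75.
comments = [
    ReviewComment("PR-101", "Possible nil dereference here", True),
    ReviewComment("PR-101", "Consider extracting this helper", False),
    ReviewComment("PR-102", "Off-by-one in the loop bound", True),
    ReviewComment("PR-103", "Missing await on this async call", True),
]
print(f"acceptance rate: {acceptance_rate(comments):.2f}")
```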
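And for the methodology in chapter #6, here is a minimal sketch of how matched and unmatched comment rates could be computed over a labeled set of pull requests. The LabeledPullRequest structure and the substring-based matcher are stand-ins; an actual pipeline would more plausibly match AI comments to known issues with human labeling or an LLM judge.

```python
from dataclasses import dataclass, field

@dataclass
class LabeledPullRequest:
    """A PR from the evaluation set (illustrative schema)."""
    known_issues: set[str]                                  # issues human reviewers flagged
    ai_comments: list[str] = field(default_factory=list)   # comments the model produced

def is_match(comment: str, issue: str) -> bool:
    """Stand-in matcher: naive substring check."""
    return issue.lower() in comment.lower()

def matched_comment_rate(prs: list[LabeledPullRequest]) -> float:
    """Recall proxy: share of known issues that the AI reviewer surfaced."""
    total_issues = sum(len(pr.known_issues) for pr in prs)
    matched = sum(
        1
        for pr in prs
        for issue in pr.known_issues
        if any(is_match(c, issue) for c in pr.ai_comments)
    )
    return matched / total_issues if total_issues else 0.0

def unmatched_comment_rate(prs: list[LabeledPullRequest]) -> float:
    """Noise proxy: share of AI comments that correspond to no known issue."""
    total_comments = sum(len(pr.ai_comments) for pr in prs)
    unmatched = sum(
        1
        for pr in prs
        for c in pr.ai_comments
        if not any(is_match(c, issue) for issue in pr.known_issues)
    )
    return unmatched / total_comments if total_comments else 0.0

# Example: one of two known issues is found (recall 0.5),
# and one of two AI comments matches nothing (noise 0.5).
prs = [
    LabeledPullRequest(
        known_issues={"sql injection", "missing null check"},
        ai_comments=["Possible SQL injection in the query builder", "Rename this variable"],
    ),
]
print(f"matched comment rate:   {matched_comment_rate(prs):.2f}")
print(f"unmatched comment rate: {unmatched_comment_rate(prs):.2f}")
```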
Related Videos
How we built an AI-powered code reviewer in 80 hours
Yan Cui
Livecoding with AI
Rainer Stropek
The Limits of Prompting: Architecting Trustworthy Coding Agents
Nimrod Kor
New AI-Centric SDLC: Rethinking Software Development with Knowledge Graphs
Gregor Schumacher, Sujay Joshy, Marcel Gocke
Bringing AI Model Testing and Prompt Management to Your Codebase with GitHub Models
Sandra Ahlgrimm, Kevin Lewis
AI: Superhero or Supervillain? How and Why with Scott Hanselman
Scott Hanselman
Google Gemini: Open Source and Deep Thinking Models - Sam Witteveen
Sam Witteveen
Engineering Productivity: Cutting Through the AI Noise
Himanshu Vasishth, Mindaugas Mozūras, Jackie Brosamer, Lukas Pfeiffer
From learning to earning
Jobs that call for the skills explored in this talk.


Senior Backend Engineer – AI Integration (m/w/x) · chatlyn GmbH · Vienna, Austria · Senior · JavaScript, AI-assisted coding tools
Machine Learning Scientist (AI for Code) · SonarSource · Bochum, Germany · Java, Python, PyTorch, TensorFlow, Machine Learning, +1 more
Machine Learning Scientist (AI for Code) · SonarSource SA · Geneva, Switzerland · Java, Python, PyTorch, TensorFlow, Machine Learning, +1 more
Content Expert, Artificial General Intelligence · Jobs for Humanity · Municipality of Madrid, Spain · XML, HTML, JSON, Python, Data analysis, +1 more