Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Your AI coding assistant is trained on flawed public code. Learn how it might be introducing critical vulnerabilities like path traversal and XSS into your project.
#1 · about 5 minutes
How simple code can hide critical vulnerabilities
A real-world NoSQL injection vulnerability in the popular Rocket.Chat project demonstrates how easily security flaws are overlooked in everyday development.
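The class of flaw this chapter covers can be sketched with a minimal, hypothetical login handler (not Rocket.Chat's actual code): when a request body is passed straight into a MongoDB filter, an attacker can send an operator object like `{"$ne": null}` instead of a string and turn an exact-match lookup into a match-anything query.

```javascript
// Hypothetical example of a NoSQL injection pattern.
// Vulnerable: the untyped value from the request body goes straight
// into the MongoDB filter, so {"username": {"$ne": null}} becomes an
// operator query that matches every user instead of one literal name.
function buildLoginFilter(body) {
  return { username: body.username };
}

// Safer: enforce that the value is a plain string before querying,
// so operator objects are rejected outright.
function buildLoginFilterSafe(body) {
  if (typeof body.username !== 'string') {
    throw new TypeError('username must be a string');
  }
  return { username: body.username };
}
```

The fix is deliberately boring: type-checking request input before it reaches the query layer closes this entire class of injection.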
#2 · about 3 minutes
The evolution of how developers source their code
Developer workflows have shifted from copying code from Stack Overflow to using npm packages and now to relying on AI-generated code from tools like ChatGPT.
#3 · about 3 minutes
Understanding the fundamental security risks in AI models
AI models introduce unique security challenges, including data poisoning, a lack of explainability, and vulnerability to malicious user inputs.
#4 · about 2 minutes
When commercial chatbots are misused for coding tasks
Examples from Amazon and Expedia show how publicly exposed LLM-powered chatbots can be prompted to perform tasks far outside their intended scope, like writing code.
#5 · about 8 minutes
How AI code generators create common security flaws
AI tools like ChatGPT can generate functional but insecure code, introducing common vulnerabilities such as path traversal and command injection that developers might miss.
#6 · about 3 minutes
AI suggestions can create software supply chain risks
LLMs may hallucinate non-existent packages or recommend outdated libraries, creating opportunities for attackers to publish malicious packages and initiate supply chain attacks.
#7 · about 8 minutes
Context-blind vulnerabilities from IDE coding assistants
AI coding assistants can generate correct-looking but contextually insecure code, such as using the wrong sanitization method for HTML attributes, leading to XSS vulnerabilities.
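The "wrong sanitization for the context" problem can be illustrated with a small hypothetical escaper (names are illustrative): an escaping function that handles `<`, `>`, and `&` is fine for element content but unsafe inside an HTML attribute, because it leaves quote characters intact and lets input break out of the attribute.

```javascript
// Escaper that an assistant might suggest for "HTML output": it covers
// element content but misses quotes, so it is NOT safe for attributes.
function escapeForElement(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// Attribute context additionally requires quotes to be escaped.
function escapeForAttribute(s) {
  return escapeForElement(s).replace(/"/g, '&quot;').replace(/'/g, '&#39;');
}

// Attacker input designed to break out of a double-quoted attribute:
const userInput = '" onmouseover="alert(1)';
// Unsafe: the quote survives and injects a new event-handler attribute.
const unsafe = `<img alt="${escapeForElement(userInput)}">`;
// Safe: the quote is encoded, so the input stays inside the alt value.
const safe = `<img alt="${escapeForAttribute(userInput)}">`;
```

Both snippets "look escaped", which is the context-blindness the chapter describes: the code is correct for one output context and exploitable in another.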
#8 · about 1 minute
How AI assistants amplify insecure coding patterns
AI coding tools learn from the existing project codebase, meaning they will replicate and amplify any insecure patterns or bad practices already present.
#9 · about 1 minute
Mitigating AI risks with security tools and awareness
To counter AI-generated vulnerabilities, developers should use resources like the OWASP Top 10 for LLMs and integrate security scanning tools directly into their IDE.