Liran Tal

Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools

Your AI coding assistant is trained on flawed public code. Learn how it might be introducing critical vulnerabilities like path traversal and XSS into your project.

#1 · about 5 minutes

How simple code can hide critical vulnerabilities

A real-world NoSQL injection vulnerability in the popular Rocket.Chat project demonstrates how easily security flaws are overlooked in everyday development.
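The class of bug behind this finding can be sketched in a few lines. The snippet below is a hypothetical login-query builder, not the actual Rocket.Chat code: parsed JSON from the request body flows straight into a MongoDB query filter, so an attacker can substitute a query operator for a string.

```javascript
// Vulnerable sketch: the JSON request body is trusted to contain plain strings.
function buildLoginQuery(body) {
  return { username: body.username, password: body.password };
}

// Because the body is parsed JSON, an attacker can send an object instead of
// a string, turning the field into a MongoDB query operator:
const payload = { username: "admin", password: { $ne: null } };
const query = buildLoginQuery(payload);
// query.password is now { $ne: null }, which matches ANY stored password,
// bypassing authentication entirely.

// Fix sketch: validate types before the value reaches the database driver.
function buildLoginQuerySafe(body) {
  if (typeof body.username !== "string" || typeof body.password !== "string") {
    throw new TypeError("username and password must be strings");
  }
  return { username: body.username, password: body.password };
}
```

The deeper point is that the vulnerable version looks perfectly reasonable in review, which is exactly why such flaws survive in popular codebases.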

#2 · about 3 minutes

The evolution of how developers source their code

Developer workflows have shifted from copying code from Stack Overflow to using npm packages and now to relying on AI-generated code from tools like ChatGPT.

#3 · about 3 minutes

Understanding the fundamental security risks in AI models

AI models introduce unique security challenges, including data poisoning, a lack of explainability, and vulnerability to malicious user inputs.

#4 · about 2 minutes

When commercial chatbots are misused for coding tasks

Examples from Amazon and Expedia show how publicly exposed LLM-powered chatbots can be prompted to perform tasks far outside their intended scope, like writing code.

#5 · about 8 minutes

How AI code generators create common security flaws

AI tools like ChatGPT can generate functional but insecure code, introducing common vulnerabilities such as path traversal and command injection that developers might miss.

#6 · about 3 minutes

AI suggestions can create software supply chain risks

LLMs may hallucinate non-existent packages or recommend outdated libraries, creating opportunities for attackers to publish malicious packages and initiate supply chain attacks.
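One lightweight defense (my own illustration, not a technique from the talk) is to treat an AI-suggested dependency with suspicion when its name sits one or two edits away from a package you already trust; hallucinated names often land in exactly the gap that typosquatters register.

```javascript
// Levenshtein edit distance between two strings.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,        // deletion
        dp[i][j - 1] + 1,        // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Flag a suggested name that is close to, but not exactly, a known package.
function flagSuspicious(suggested, knownPackages) {
  return knownPackages.some(
    (k) => k !== suggested && editDistance(suggested, k) <= 2
  );
}
```

For example, `flagSuspicious("lodsah", ["lodash", "express"])` fires because the name is two edits from `lodash`, while an exact match to a known package does not. This is only a heuristic; verifying that a package actually exists on the registry, with real downloads and history, remains the stronger check.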

#7 · about 8 minutes

Context-blind vulnerabilities from IDE coding assistants

AI coding assistants can generate correct-looking but contextually insecure code, such as using the wrong sanitization method for HTML attributes, leading to XSS vulnerabilities.
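This failure mode is easy to reproduce (helper names below are illustrative): an escaper that is perfectly adequate for element content does nothing for an unquoted attribute, where no `<` or `>` is needed to break out of the context.

```javascript
// Escapes the characters that matter in element *content*...
function escapeHtmlText(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// ...but in an unquoted attribute, the attacker needs no < or > at all:
const userInput = "x onmouseover=alert(1)";
const html = `<div class=${escapeHtmlText(userInput)}>hi</div>`;
// → <div class=x onmouseover=alert(1)>hi</div>
// The payload becomes a brand-new event-handler attribute: XSS.

// Context-aware fix sketch: always quote attribute values and escape quotes.
function escapeHtmlAttr(s) {
  return s
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}
const safe = `<div class="${escapeHtmlAttr(userInput)}">hi</div>`;
// The entire payload is now inert inside the quoted class value.
```

The assistant's suggestion is "correct" in isolation; it is the surrounding HTML context that makes it wrong, and that context is exactly what the model does not see.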

#8 · about 1 minute

How AI assistants amplify insecure coding patterns

AI coding tools learn from the existing project codebase, meaning they will replicate and amplify any insecure patterns or bad practices already present.

#9 · about 1 minute

Mitigating AI risks with security tools and awareness

To counter AI-generated vulnerabilities, developers should use resources like the OWASP Top 10 for LLMs and integrate security scanning tools directly into their IDE.
