Liran Tal

Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools

Your AI coding assistant is trained on flawed public code. Learn how it might be introducing critical vulnerabilities like path traversal and XSS into your project.

#1 · about 5 minutes

How simple code can hide critical vulnerabilities

A real-world NoSQL injection vulnerability in the popular Rocket.Chat project demonstrates how easily security flaws are overlooked in everyday development.
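The class of bug found in Rocket.Chat can be sketched in a few lines. This is an illustrative reconstruction, not Rocket.Chat's actual code: a login handler forwards the request body straight into a MongoDB-style query, so an attacker can smuggle a query operator like `$ne` in place of a password.

```typescript
// Minimal sketch of MongoDB-style NoSQL operator injection (illustrative
// names; not Rocket.Chat's real code).

type LoginBody = { username: unknown; password: unknown };

// Vulnerable: the request body is forwarded into the query untouched, so
// { username: "admin", password: { $ne: null } } makes the password clause
// match ANY stored value.
function buildLoginQuery(body: LoginBody) {
  return { username: body.username, password: body.password };
}

// Fixed: reject non-string values so operator objects ({ $ne: ... },
// { $regex: ... }) can never reach the query layer.
function buildLoginQuerySafe(body: LoginBody) {
  if (typeof body.username !== "string" || typeof body.password !== "string") {
    throw new Error("username and password must be strings");
  }
  return { username: body.username, password: body.password };
}

const attack = { username: "admin", password: { $ne: null } };
console.log(JSON.stringify(buildLoginQuery(attack))); // operator forwarded verbatim
try {
  buildLoginQuerySafe(attack);
} catch (e) {
  console.log("blocked:", (e as Error).message);
}
```

The fix is deliberately boring: a type check at the trust boundary, which is exactly the kind of line that is easy to overlook in review.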

#2 · about 3 minutes

The evolution of how developers source their code

Developer workflows have shifted from copying code from Stack Overflow to using npm packages and now to relying on AI-generated code from tools like ChatGPT.

#3 · about 3 minutes

Understanding the fundamental security risks in AI models

AI models introduce unique security challenges, including data poisoning, a lack of explainability, and vulnerability to malicious user inputs.

#4 · about 2 minutes

When commercial chatbots are misused for coding tasks

Examples from Amazon and Expedia show how publicly exposed LLM-powered chatbots can be prompted to perform tasks far outside their intended scope, like writing code.

#5 · about 8 minutes

How AI code generators create common security flaws

AI tools like ChatGPT can generate functional but insecure code, introducing common vulnerabilities such as path traversal and command injection that developers might miss.
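Path traversal is a good example of "functional but insecure". A hedged sketch (the root directory and function names are invented for illustration, using POSIX path semantics): a file-serving helper that joins the requested path onto a root directory works for every legitimate request, yet lets `../` segments walk out of the root.

```typescript
// Illustrative path traversal sketch; ROOT and function names are invented.
// path.posix is used so the behavior is the same on every platform.
import { posix as path } from "path";

const ROOT = "/var/www/static";

// Vulnerable: path.join resolves "../" segments, so a request for
// "../../../etc/passwd" escapes ROOT entirely.
function resolveUnsafe(requested: string): string {
  return path.join(ROOT, requested);
}

// Fixed: resolve to an absolute path, then verify it is still inside ROOT.
function resolveSafe(requested: string): string {
  const full = path.resolve(ROOT, requested);
  if (full !== ROOT && !full.startsWith(ROOT + "/")) {
    throw new Error("path traversal attempt blocked");
  }
  return full;
}

console.log(resolveUnsafe("../../../etc/passwd")); // "/etc/passwd" — escaped ROOT
console.log(resolveSafe("css/site.css"));          // "/var/www/static/css/site.css"
```

Note that the check runs on the *resolved* path; validating the raw input string (e.g. rejecting the literal `../`) is weaker, since encodings and redundant segments can slip past it.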

#6 · about 3 minutes

AI suggestions can create software supply chain risks

LLMs may hallucinate non-existent packages or recommend outdated libraries, creating opportunities for attackers to publish malicious packages and initiate supply chain attacks.
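One lightweight mitigation is to treat AI-suggested dependencies as untrusted until cross-checked against the packages your team has already vetted (for example, the set parsed from your lockfile). The sketch below is an assumption-laden illustration, not an established tool; all package names, including the typo "expresss", are invented.

```typescript
// Illustrative guard against hallucinated/typosquatted package suggestions.
// In practice the vetted set would be parsed from package-lock.json; here it
// is hardcoded for the sketch.

const vettedPackages = new Set(["express", "lodash", "axios"]);

function auditSuggestions(suggested: string[]): { ok: string[]; review: string[] } {
  const ok: string[] = [];
  const review: string[] = [];
  for (const name of suggested) {
    // Anything outside the vetted set goes to manual review instead of
    // straight into `npm install`.
    (vettedPackages.has(name) ? ok : review).push(name);
  }
  return { ok, review };
}

const result = auditSuggestions(["express", "expresss"]);
console.log(result); // { ok: [ 'express' ], review: [ 'expresss' ] }
```

The point is the workflow, not the ten lines of code: a hallucinated name that an attacker has since registered on the registry never gets installed automatically.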

#7 · about 8 minutes

Context-blind vulnerabilities from IDE coding assistants

AI coding assistants can generate correct-looking but contextually insecure code, such as using the wrong sanitization method for HTML attributes, leading to XSS vulnerabilities.
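The "wrong sanitization for the context" failure mode can be made concrete with a small sketch (the escaper functions are illustrative, not any particular library's API). An escaper that only handles `<`, `>`, and `&` is adequate for element *content*, but inside a quoted attribute the attacker only needs a double quote to break out and attach an event handler.

```typescript
// Illustrative context-sensitive escaping sketch; function names are invented.

// Sufficient for element content: neutralizes tag and entity characters.
function escapeForContent(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Required for attribute values: quotes must also be neutralized.
function escapeForAttribute(s: string): string {
  return escapeForContent(s).replace(/"/g, "&quot;").replace(/'/g, "&#39;");
}

const payload = '" onmouseover="alert(1)';

// Wrong escaper for this context: the quote survives, closing the alt
// attribute and injecting an onmouseover handler.
console.log(`<img alt="${escapeForContent(payload)}">`);

// Correct escaper for this context: the quote becomes &quot; and the payload
// stays inert text.
console.log(`<img alt="${escapeForAttribute(payload)}">`);
```

Both variants "look sanitized" in a diff, which is precisely why this class of AI suggestion is hard to catch by eye.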

#8 · about 1 minute

How AI assistants amplify insecure coding patterns

AI coding tools learn from the existing project codebase, meaning they will replicate and amplify any insecure patterns or bad practices already present.

#9 · about 1 minute

Mitigating AI risks with security tools and awareness

To counter AI-generated vulnerabilities, developers should use resources like the OWASP Top 10 for LLMs and integrate security scanning tools directly into their IDE.
