Liran Tal
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
#1 · about 5 minutes
How simple code can hide critical vulnerabilities
A real-world NoSQL injection vulnerability in the popular Rocket.Chat project demonstrates how easily security flaws are overlooked in everyday development (a minimal sketch of the query-operator pattern follows the chapter list).
#2 · about 3 minutes
The evolution of how developers source their code
Developer workflows have shifted from copying code from Stack Overflow to using npm packages and now to relying on AI-generated code from tools like ChatGPT.
#3 · about 3 minutes
Understanding the fundamental security risks in AI models
AI models introduce unique security challenges, including data poisoning, a lack of explainability, and vulnerability to malicious user inputs.
#4 · about 2 minutes
When commercial chatbots are misused for coding tasks
Examples from Amazon and Expedia show how publicly exposed LLM-powered chatbots can be prompted to perform tasks far outside their intended scope, like writing code.
#5 · about 8 minutes
How AI code generators create common security flaws
AI tools like ChatGPT can generate functional but insecure code, introducing common vulnerabilities such as path traversal and command injection that developers might miss (see the path traversal sketch after the chapter list).
#6 · about 3 minutes
AI suggestions can create software supply chain risks
LLMs may hallucinate non-existent packages or recommend outdated libraries, creating opportunities for attackers to publish malicious packages and initiate supply chain attacks (a defensive registry check is sketched below).
#7 · about 8 minutes
Context-blind vulnerabilities from IDE coding assistants
AI coding assistants can generate correct-looking but contextually insecure code, such as using the wrong sanitization method for HTML attributes, leading to XSS vulnerabilities (see the attribute-escaping sketch below).
#8 · about 1 minute
How AI assistants amplify insecure coding patterns
AI coding tools learn from the existing project codebase, meaning they will replicate and amplify any insecure patterns or bad practices already present.
#9 · about 1 minute
Mitigating AI risks with security tools and awareness
To counter AI-generated vulnerabilities, developers should use resources like the OWASP Top 10 for LLMs and integrate security scanning tools directly into their IDE.
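The chapters above name several concrete vulnerability patterns. The TypeScript sketches below are minimal, assumed illustrations of those patterns for orientation only; they are not code from the talk or from Rocket.Chat. This first one shows the NoSQL injection shape behind chapter 1: an Express login route that passes a request body value straight into a MongoDB query, so a payload such as {"username": {"$ne": null}} is interpreted as a query operator rather than a literal string.

```ts
// Hypothetical Express + MongoDB login route (Node 18+, ESM).
// Not the actual Rocket.Chat code; it only illustrates the injection pattern.
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());

const client = new MongoClient("mongodb://localhost:27017"); // assumed local instance
await client.connect();
const users = client.db("app").collection("users");

// Vulnerable: if req.body.username is {"$ne": null}, MongoDB evaluates it as an
// operator and the query matches some user instead of an exact username string.
app.post("/login", async (req, res) => {
  const user = await users.findOne({ username: req.body.username });
  res.json({ found: Boolean(user) });
});

// Safer: reject anything that is not a plain string before it reaches the query.
app.post("/login-safe", async (req, res) => {
  const { username } = req.body;
  if (typeof username !== "string") {
    res.status(400).json({ error: "username must be a string" });
    return;
  }
  const user = await users.findOne({ username });
  res.json({ found: Boolean(user) });
});

app.listen(3000);
```

Schema validation or middleware such as express-mongo-sanitize can enforce the same string-only rule across a whole application instead of route by route.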
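Chapter 5 calls out path traversal and command injection as typical flaws in generated code. The sketch below, an assumed Express file-download route rather than anything shown in the talk, shows how joining a user-supplied file name onto a base directory lets a request like ?name=../../etc/passwd escape that directory, and one way to contain it.

```ts
// Hypothetical file-serving route showing a path traversal bug and a containment check.
import express from "express";
import path from "node:path";

const app = express();
const BASE_DIR = path.resolve("uploads"); // assumed directory for served files

// Vulnerable: "../" sequences in the name walk out of BASE_DIR before sendFile runs.
app.get("/files", (req, res) => {
  const filePath = path.join(BASE_DIR, String(req.query.name));
  res.sendFile(filePath);
});

// Safer: resolve the final path and refuse anything that leaves BASE_DIR.
app.get("/files-safe", (req, res) => {
  const filePath = path.resolve(BASE_DIR, String(req.query.name));
  if (!filePath.startsWith(BASE_DIR + path.sep)) {
    res.status(400).send("Invalid file name");
    return;
  }
  res.sendFile(filePath);
});

app.listen(3000);
```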
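Chapter 6 covers hallucinated package names and the resulting slopsquatting risk. As a defensive habit, a suggested dependency can be looked up on the public npm registry before it is installed; the helper below is a hypothetical sketch using the registry's package metadata endpoint, not a tool prescribed in the talk.

```ts
// Hypothetical pre-install check for an LLM-suggested package name (Node 18+ global fetch).
async function checkSuggestedPackage(name: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  if (res.status === 404) {
    console.log(`"${name}" is not on npm - possibly a hallucinated package name.`);
    return;
  }
  // The registry metadata includes publish timestamps; a very recent creation date
  // for a confidently recommended package is worth a manual review before installing.
  const meta = (await res.json()) as { time?: Record<string, string> };
  console.log(`"${name}" exists on npm, first published ${meta.time?.created ?? "unknown"}.`);
}

await checkSuggestedPackage("express"); // example lookup of a well-known package
```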
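Chapter 7 describes sanitization applied in the wrong context. The self-contained sketch below (helper names are illustrative) shows why an escaper that is adequate for text between tags still allows XSS when the value lands inside an HTML attribute, because it leaves quote characters untouched.

```ts
// Escapes enough for text content between tags, but not for attribute values.
function escapeHtmlText(value: string): string {
  return value.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Also escape the quote characters that delimit attribute values.
function escapeHtmlAttribute(value: string): string {
  return escapeHtmlText(value).replace(/"/g, "&quot;").replace(/'/g, "&#39;");
}

const userInput = `" onmouseover="alert(document.cookie)`;

// Vulnerable: the payload closes the value and injects an event handler attribute.
console.log(`<input value="${escapeHtmlText(userInput)}">`);
// -> <input value="" onmouseover="alert(document.cookie)">

// Safer for the attribute context: quotes are neutralized.
console.log(`<input value="${escapeHtmlAttribute(userInput)}">`);
// -> <input value="&quot; onmouseover=&quot;alert(document.cookie)">
```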
Related jobs
Jobs that call for the skills explored in this talk.
Wilken GmbH · Ulm, Germany · Senior · Kubernetes, AI Frameworks, +3 more
aedifion GmbH · Köln, Germany · €30-45K · Intermediate · Network Security, Security Architecture, +1 more
Technoly GmbH · Berlin, Germany · €50-60K · Intermediate · Network Security, Security Architecture, +2 more
Matching moments
05:55 · The security risks of AI-generated code and slopsquatting (from: Slopquatting, API Keys, Fun with Fonts, Recruiters vs AI and more - The Best of LIVE 2025 - Part 2)
07:39 · Prompt injection as an unsolved AI security problem (from: AI in the Open and in Browsers - Tarek Ziadé)
03:45 · Preventing exposed API keys in AI-assisted development (from: Slopquatting, API Keys, Fun with Fonts, Recruiters vs AI and more - The Best of LIVE 2025 - Part 2)
02:49 · Using AI to overcome challenges in systems programming (from: AI in the Open and in Browsers - Tarek Ziadé)
09:10 · How AI is changing the freelance developer experience (from: WeAreDevelopers LIVE – AI, Freelancing, Keeping Up with Tech and More)
06:33 · The security challenges of building AI browser agents (from: AI in the Open and in Browsers - Tarek Ziadé)
03:07 · Final advice for developers adapting to AI (from: WeAreDevelopers LIVE – AI, Freelancing, Keeping Up with Tech and More)
04:09 · The emerging market for fixing AI-generated code (from: Devs vs. Marketers, COBOL and Copilot, Make Live Coding Easy and more - The Best of LIVE 2025 - Part 3)
Related Videos
Panel discussion: Developing in an AI world - are we all demoted to reviewers? WeAreDevelopers WebDev & AI Day March 2025
Laurie Voss, Rey Bango, Hannah Foxwell, Rizel Scarlett & Thomas Steiner
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
The transformative impact of GenAI for software development and its implications for cybersecurity
Chris Wysopal
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
Let’s write an exploit using AI
Julian Totzek-Hallhuber
Exploring AI: Opportunities and Risks in Development
Angie Jones, Kent C. Dodds, Liran Tal & Chris Heilmann
Prompt Injection, Poisoning & More: The Dark Side of LLMs
Keno Dreßel
From Syntax to Singularity: AI’s Impact on Developer Roles
Anna Fritsch-Weninger
From learning to earning
Snyk's Incubation Accelerator · Charing Cross, United Kingdom · Go, Python, Node.js, Microservices, Agile Methodologies, +1 more
Snyk · Charing Cross, United Kingdom · Senior · Azure, Docker, TypeScript, Kubernetes, Google Cloud Platform, +1 more
Abnormal AI · Intermediate · API, Spark, Kafka, Python