Liran Tal
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
#1 · about 5 minutes
How simple code can hide critical vulnerabilities
A real-world NoSQL injection vulnerability in the popular Rocket.Chat project demonstrates how easily security flaws are overlooked in everyday development.
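The Rocket.Chat case hinges on a classic NoSQL-injection pattern: a JSON request body can smuggle a MongoDB query operator where the code expects a plain string. A minimal sketch (hypothetical handler functions, not Rocket.Chat's actual code):

```javascript
// Vulnerable pattern: attacker-controlled input flows straight into a
// Mongo-style query. If `username` is the object {"$ne": null}, the
// resulting query matches *every* user instead of one.
function buildLoginQuery(username) {
  return { username: username };
}

// Safer variant: reject anything that is not a plain string, so operator
// objects like {"$ne": null} never reach the database.
function buildLoginQuerySafe(username) {
  if (typeof username !== 'string') {
    throw new TypeError('username must be a string');
  }
  return { username: username };
}
```

The fix is deliberately boring: type-check (or schema-validate) every value before it participates in a query.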
#2 · about 3 minutes
The evolution of how developers source their code
Developer workflows have shifted from copying code from Stack Overflow to using npm packages and now to relying on AI-generated code from tools like ChatGPT.
#3 · about 3 minutes
Understanding the fundamental security risks in AI models
AI models introduce unique security challenges, including data poisoning, a lack of explainability, and vulnerability to malicious user inputs.
#4 · about 2 minutes
When commercial chatbots are misused for coding tasks
Examples from Amazon and Expedia show how publicly exposed LLM-powered chatbots can be prompted to perform tasks far outside their intended scope, like writing code.
#5 · about 8 minutes
How AI code generators create common security flaws
AI tools like ChatGPT can generate functional but insecure code, introducing common vulnerabilities such as path traversal and command injection that developers might miss.
#6 · about 3 minutes
AI suggestions can create software supply chain risks
LLMs may hallucinate non-existent packages or recommend outdated libraries, creating opportunities for attackers to publish malicious packages and initiate supply chain attacks.
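One practical defense against hallucinated or typo-squatted dependencies is to vet AI-suggested package names against a vetted allowlist before anything is installed. A minimal sketch; the package names below are illustrative, with "expresss-session" standing in for a hallucinated near-miss of a real package:

```javascript
// Hypothetical allowlist of dependencies your team has already vetted.
const knownGood = new Set(['express', 'express-session', 'lodash']);

// Split an AI-suggested dependency list into approved packages and
// flagged names that need manual review before `npm install`.
function vetSuggestedPackages(suggested) {
  return {
    approved: suggested.filter((p) => knownGood.has(p)),
    flagged: suggested.filter((p) => !knownGood.has(p)),
  };
}
```

In practice the allowlist would come from a lockfile or an internal registry, but the gate itself (never install an unreviewed name) is the point.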
#7 · about 8 minutes
Context-blind vulnerabilities from IDE coding assistants
AI coding assistants can generate correct-looking but contextually insecure code, such as using the wrong sanitization method for HTML attributes, leading to XSS vulnerabilities.
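The attribute-context mistake is easy to sketch: an escaper that handles `<`, `>`, and `&` is safe for element content but leaves quotes alone, so a value interpolated into an HTML attribute can still break out. A minimal illustration (function names are my own, not the talk's):

```javascript
// The kind of escaper an assistant might suggest: adequate for element
// *content*, but it does not touch quote characters.
function escapeForContent(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// Attribute values additionally need quotes escaped (and the attribute
// itself must be quoted in the template).
function escapeForAttribute(s) {
  return escapeForContent(s).replace(/"/g, '&quot;').replace(/'/g, '&#39;');
}

// A payload like '" onmouseover="alert(1)' passes escapeForContent with its
// double quotes intact, so `<img alt="...">` interpolation breaks out of
// the alt attribute and injects an event handler.
```

Both functions look "sanitized" in a diff; only the context tells you which one is correct, which is exactly what an IDE assistant lacks.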
#8 · about 1 minute
How AI assistants amplify insecure coding patterns
AI coding tools learn from the existing project codebase, meaning they will replicate and amplify any insecure patterns or bad practices already present.
#9 · about 1 minute
Mitigating AI risks with security tools and awareness
To counter AI-generated vulnerabilities, developers should use resources like the OWASP Top 10 for LLMs and integrate security scanning tools directly into their IDE.