Mackenzie Jackson
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
#1about 4 minutes
Understanding AI security risks for developers
AI is now part of the software supply chain, and instruction-tuned LLMs like ChatGPT introduce risks when developers trust generated code they don't fully understand.
#2about 2 minutes
How LLM training data impacts code quality
LLMs are often trained on vast, unfiltered datasets like the Common Crawl, which includes public GitHub repositories and Stack Overflow posts of varying quality.
#3about 6 minutes
Understanding and demonstrating prompt injection attacks
Prompt injection uses malicious language to bypass an AI's instructions, as shown in a demo where a simple command hijacks a text summarizer app.
#4about 3 minutes
Attacking an AI email assistant with prompt injection
A malicious email containing a hidden prompt can compromise an AI email assistant, causing it to add malicious links or exfiltrate data without user interaction.
#5about 2 minutes
Strategies for mitigating prompt injection vulnerabilities
Defend against prompt injection by using third-party security agents to analyze I/O or implementing a multi-LLM architecture with privileged and quarantined models.
#6about 6 minutes
Exploiting AI with package hallucination squatting
AI models can invent non-existent software packages, which attackers then create as malicious decoys to trick developers into installing malware via hallucination squatting.
#7about 5 minutes
How attackers use AI to refactor exploits
Attackers use purpose-built malicious AI models to refactor old exploits, making them effective again, and to create highly convincing spearphishing campaigns.
#8about 2 minutes
Preventing sensitive data leakage into AI models
Employees often paste sensitive information like API keys into public AI models, creating a risk of data leakage and enabling attackers to extract secrets.
#9about 2 minutes
Final advice on adopting AI tools securely
Instead of banning AI tools, which creates shadow IT risks, focus on developer education, using the right tools for the job, and reinforcing security fundamentals.
Related jobs
Jobs that call for the skills explored in this talk.
Matching moments
14:10 MIN
Managing the fear, accountability, and risks of AI
Collaborative Intelligence: The Human & AI Partnership
00:04 MIN
Understanding the current state of AI security challenges
Delay the AI Overlords: How OAuth and OpenFGA Can Keep Your AI Agents from Going Rogue
08:03 MIN
Managing security risks of AI-assisted code generation
WWC24 - Chris Wysopal, Helmut Reisinger and Johannes Steger - Fighting Digital Threats in the Age of AI
09:25 MIN
Understanding the security risks of AI-generated code
WeAreDevelopers LIVE – Building on Algorand: Real Projects and Developer Tools
19:57 MIN
How AI coding assistants impact developer skills
Navigating the Future of Junior Developers in Tech
24:53 MIN
Understanding the security risks of AI integrations
Three years of putting LLMs into Software - Lessons learned
05:54 MIN
Addressing key challenges in the AI era for developers
The Data Phoenix: The future of the Internet and the Open Web
04:21 MIN
Fundamental AI vulnerabilities and malicious misuse
A hundred ways to wreck your AI - the (in)security of machine learning systems
Featured Partners
Related Videos
A hundred ways to wreck your AI - the (in)security of machine learning systems
Balázs Kiss
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
AI: Superhero or Supervillain? How and Why with Scott Hanselman
Scott Hanselman
Hacking AI - how attackers impose their will on AI
Mirko Ross
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal
Prompt Injection, Poisoning & More: The Dark Side of LLMs
Keno Dreßel
Staying Safe in the AI Future
Cassie Kozyrkov
GenAI Security: Navigating the Unseen Iceberg
Maish Saidel-Keesing
From learning to earning
Jobs that call for the skills explored in this talk.








