Mackenzie Jackson
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
#1about 4 minutes
Understanding AI security risks for developers
AI is now part of the software supply chain, and instruction-tuned LLMs like ChatGPT introduce risks when developers trust generated code they don't fully understand.
#2about 2 minutes
How LLM training data impacts code quality
LLMs are often trained on vast, unfiltered datasets like the Common Crawl, which includes public GitHub repositories and Stack Overflow posts of varying quality.
#3about 6 minutes
Understanding and demonstrating prompt injection attacks
Prompt injection uses malicious language to bypass an AI's instructions, as shown in a demo where a simple command hijacks a text summarizer app.
#4about 3 minutes
Attacking an AI email assistant with prompt injection
A malicious email containing a hidden prompt can compromise an AI email assistant, causing it to add malicious links or exfiltrate data without user interaction.
#5about 2 minutes
Strategies for mitigating prompt injection vulnerabilities
Defend against prompt injection by using third-party security agents to analyze I/O or implementing a multi-LLM architecture with privileged and quarantined models.
#6about 6 minutes
Exploiting AI with package hallucination squatting
AI models can invent non-existent software packages, which attackers then create as malicious decoys to trick developers into installing malware via hallucination squatting.
#7about 5 minutes
How attackers use AI to refactor exploits
Attackers use purpose-built malicious AI models to refactor old exploits, making them effective again, and to create highly convincing spearphishing campaigns.
#8about 2 minutes
Preventing sensitive data leakage into AI models
Employees often paste sensitive information like API keys into public AI models, creating a risk of data leakage and enabling attackers to extract secrets.
#9about 2 minutes
Final advice on adopting AI tools securely
Instead of banning AI tools, which creates shadow IT risks, focus on developer education, using the right tools for the job, and reinforcing security fundamentals.
Related jobs
Jobs that call for the skills explored in this talk.
Featured Partners
Related Videos
A hundred ways to wreck your AI - the (in)security of machine learning systems
Balázs Kiss
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
AI: Superhero or Supervillain? How and Why with Scott Hanselman
Scott Hanselman
Hacking AI - how attackers impose their will on AI
Mirko Ross
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal
Prompt Injection, Poisoning & More: The Dark Side of LLMs
Keno Dreßel
Staying Safe in the AI Future
Cassie Kozyrkov
GenAI Security: Navigating the Unseen Iceberg
Maish Saidel-Keesing
From learning to earning
Jobs that call for the skills explored in this talk.
Security-by-Design for Trustworthy Machine Learning Pipelines
Association Bernard Gregory
Machine Learning
Continuous Delivery
AI Security Consultant
IOActive Inc.
Municipality of Madrid, Spain
€125-175K
API
Python
PyTorch
TensorFlow
+1
Full-Stack Engineer - AI Agentic Systems
autonomous-teaming
Potsdam, Germany
Remote
Linux
Redis
React
Python
+7
Security Engineer (AI AppSec)
Ignite Technical Resources
Richmond, United Kingdom
Senior
Azure
DevOps
Continuous Integration





