Sebastian Schrittwieser
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
#1about 2 minutes
The rapid adoption of LLMs outpaces security practices
New technologies like large language models are often adopted quickly without established security best practices, creating new vulnerabilities.
#2about 4 minutes
How user input can override developer instructions
A prompt injection occurs when untrusted user input contains instructions that hijack the LLM's behavior, overriding the developer's original intent defined in the context.
#3about 4 minutes
Using prompt injection to steal confidential context data
Attackers can use prompt injection to trick an LLM into revealing its confidential context or system prompt, exposing proprietary logic or sensitive information.
#4about 4 minutes
Expanding the attack surface with plugins and web data
LLM plugins that access external data like emails or websites create an indirect attack vector where malicious prompts can be hidden in that external content.
#5about 2 minutes
Prompt injection as the new SQL injection for LLMs
Prompt injection mirrors traditional SQL injection by mixing untrusted data with developer instructions, but lacks a clear mitigation like prepared statements.
#6about 3 minutes
Why simple filtering and encoding fail to stop attacks
Common security tactics like input filtering and blacklisting are ineffective against prompt injections due to the flexibility of natural language and encoding bypass techniques.
#7about 4 minutes
Using user confirmation and dual LLM models for defense
Advanced strategies include requiring user confirmation for sensitive actions or using a dual LLM architecture to isolate privileged operations from untrusted data processing.
#8about 5 minutes
The current state of LLM security and the need for awareness
There is currently no perfect solution for prompt injection, making developer awareness and careful design of LLM interactions the most critical defense.
Related jobs
Jobs that call for the skills explored in this talk.
Featured Partners
Related Videos
Manipulating The Machine: Prompt Injections And Counter Measures
Georg Dresler
A hundred ways to wreck your AI - the (in)security of machine learning systems
Balázs Kiss
Skynet wants your Passwords! The Role of AI in Automating Social Engineering
Wolfgang Ettlinger & Alexander Hurbean
Prompt Injection, Poisoning & More: The Dark Side of LLMs
Keno Dreßel
You click, you lose: a practical look at VSCode's security
Thomas Chauchefoin & Paul Gerste
Machine Learning: Promising, but Perilous
Nura Kawa
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
From learning to earning
Jobs that call for the skills explored in this talk.


Senior Backend Engineer – AI Integration (m/w/x)
chatlyn GmbH
Vienna, Austria
Senior
JavaScript
AI-assisted coding tools
![Senior Software Engineer [TypeScript] (Prisma Postgres)](https://wearedevelopers.imgix.net/company/283ba9dbbab3649de02b9b49e6284fd9/cover/oKWz2s90Z218LE8pFthP.png?w=400&ar=3.55&fit=crop&crop=entropy&auto=compress,format)

Senior Software Engineer [TypeScript] (Prisma Postgres)
Prisma
Remote
Senior
Node.js
TypeScript
PostgreSQL
Security-by-Design for Trustworthy Machine Learning Pipelines
Association Bernard Gregory
Machine Learning
Continuous Delivery
AI Engineer Security
Paradigma Digital
Municipality of Madrid, Spain
API
Azure
Python
FastAPI
Computer Vision
+3
Internships on hardware/microarchitectural security of deep/machine learning implementations
Inria
Canton of Rennes-4, France
Remote
C++
GIT
Linux
Python
+3
AI/ Machine Learning Engineer NLP / LLM - Contract
Involved Solutions LTD.
Manchester, United Kingdom
Remote
€117-130K
Machine Learning
Natural Language Processing
Agentic AI Architect - Python, LLMs & NLP
FRG Technology Consulting
Intermediate
Azure
Python
Machine Learning


