Keno Dreßel
Prompt Injection, Poisoning & More: The Dark Side of LLMs
#1 · about 5 minutes
Understanding and mitigating prompt injection attacks
Prompt injection manipulates LLM outputs through direct or indirect methods, requiring mitigations like restricting model capabilities and applying guardrails.
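As a rough illustration of the guardrail idea above (not code from the talk), the Python sketch below screens user input for common injection phrases and keeps trusted system instructions separate from untrusted user text; the pattern list and message format are assumptions.

```python
import re

# Hypothetical input guardrail (illustrative only): a naive pattern screen
# that flags common injection phrasings before user text reaches the model.
# Real guardrails add classifier models, allow-lists and output-side checks.
INJECTION_PATTERNS = [
    r"ignore (all|the) (previous|above) instructions",
    r"disregard (your|the) system prompt",
    r"reveal (your|the) system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

def build_messages(user_input: str) -> list[dict]:
    if looks_like_injection(user_input):
        raise ValueError("Input rejected by guardrail")
    # Trusted instructions stay in the system role; untrusted user text is
    # passed only as user content, never concatenated into the system prompt.
    return [
        {"role": "system", "content": "You are a support assistant. Answer only product questions."},
        {"role": "user", "content": user_input},
    ]
```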
#2 · about 6 minutes
Protecting against data and model poisoning risks
Malicious or biased training data can poison a model's worldview, necessitating careful data screening and keeping models up-to-date.
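To make the data-screening point concrete, here is a hedged sketch of filtering fine-tuning records by provenance and content before training; the field names and phrase list are illustrative assumptions, not from the talk.

```python
# Illustrative screening pass for fine-tuning data: keep only records from
# vetted sources and drop samples that try to plant hidden instructions,
# so poisoned examples never reach training.
TRUSTED_SOURCES = {"internal_docs", "curated_qa"}
SUSPICIOUS_PHRASES = ("ignore previous instructions", "always respond with")

def screen_training_records(records: list[dict]) -> list[dict]:
    clean = []
    for record in records:
        if record.get("source") not in TRUSTED_SOURCES:
            continue  # unknown provenance: exclude rather than risk poisoning
        text = f"{record.get('prompt', '')} {record.get('completion', '')}".lower()
        if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
            continue  # drop samples carrying planted instructions
        clean.append(record)
    return clean
```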
#3 · about 6 minutes
Securing downstream systems from insecure model outputs
LLM outputs can exploit downstream systems like databases or frontends, so they must be treated as untrusted user input and sanitized accordingly.
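A minimal sketch of treating model output as untrusted input, assuming a Python backend with SQLite (both assumptions): HTML-escape the output before rendering it, and bind it as a query parameter rather than concatenating it into SQL.

```python
import html
import sqlite3

def render_answer(llm_output: str) -> str:
    # Escape before rendering in a frontend, exactly as for user input,
    # so model output cannot smuggle <script> tags into the page (XSS).
    return f"<p>{html.escape(llm_output)}</p>"

def store_answer(conn: sqlite3.Connection, llm_output: str) -> None:
    # Parameterized query: the output is bound as data, never spliced into
    # the SQL string, so a malicious completion cannot inject statements.
    # (Assumes an existing table: answers(text TEXT).)
    conn.execute("INSERT INTO answers (text) VALUES (?)", (llm_output,))
    conn.commit()
```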
#4 · about 4 minutes
Preventing sensitive information disclosure via LLMs
Sensitive data used for training can be extracted from models, highlighting the need to redact or anonymize information before it reaches the LLM.
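One possible redaction step before text reaches the model, sketched with simple regexes; the patterns are assumptions and far from exhaustive, so a real system would use dedicated PII detection.

```python
import re

# Illustrative pre-processing: scrub obvious PII (emails, phone-like numbers)
# before text is sent to an LLM or added to a training corpus.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d ()/-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# redact("Contact jane.doe@example.com or +43 660 1234567")
#   -> "Contact [EMAIL] or [PHONE]"
```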
#5 · about 1 minute
Why comprehensive security is non-negotiable for LLMs
Just like in traditional application security, achieving 99% security is still a failing grade because attackers will find and exploit any existing vulnerability.
Matching moments
22:43 MIN
The current state of LLM security and the need for awareness
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
14:26 MIN
Understanding the security risk of prompt injection
The shadows that follow the AI generative models
24:53 MIN
Understanding the security risks of AI integrations
Three years of putting LLMs into Software - Lessons learned
25:33 MIN
AI privacy concerns and prompt engineering
Coffee with Developers - Cassidy Williams
12:48 MIN
Prompt injection as the new SQL injection for LLMs
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
00:03 MIN
The rapid adoption of LLMs outpaces security practices
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
19:14 MIN
Addressing data privacy and security in AI systems
Graphs and RAGs Everywhere... But What Are They? - Andreas Kollegger - Neo4j
02:05 MIN
How user input can override developer instructions
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
Related Videos
Manipulating The Machine: Prompt Injections And Counter Measures
Georg Dresler
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
Sebastian Schrittwieser
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
Three years of putting LLMs into Software - Lessons learned
Simon A.T. Jiménez
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal
Inside the Mind of an LLM
Emanuele Fabbiani
You are not my model anymore - understanding LLM model behavior
Andreas Erben
From learning to earning
Jobs that call for the skills explored in this talk.

AI Enablement Engineer - LLM Integration & Technical Empowerment
KUEHNE + NAGEL
Intermediate
API
Python
Docker
Kubernetes
Continuous Integration

Full-Stack Engineer | Specializing in LLMs & AI Agents
Waterglass
Junior
React
Python
Node.js
low-code
JavaScript

AIML - Machine Learning Research, DMLI
Apple
Python
PyTorch
TensorFlow
Machine Learning
Natural Language Processing

IT-Security Engineer Awareness Training and Security Roadmap
Paris Lodron-Universität Salzburg
PowerShell
Windows Server
Microsoft Office
Scripting (Bash/Python/Go/Ruby)


