Keno Dreßel

Prompt Injection, Poisoning & More: The Dark Side of LLMs

How can a simple chatbot be turned into a hacker? Explore the critical security risks of LLMs, from prompt injection to data poisoning.

#1 · about 5 minutes

Understanding and mitigating prompt injection attacks

Prompt injection manipulates LLM outputs through direct or indirect methods, requiring mitigations like restricting model capabilities and applying guardrails.
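To make the guardrail idea concrete, here is a minimal sketch in Python. It assumes a hypothetical `call_llm` callable standing in for whatever LLM client the application uses, and the regex patterns are purely illustrative; pattern matching alone is not a robust defense and belongs alongside restricted model capabilities and output-side checks.

```python
# Minimal input-guardrail sketch. `call_llm` and the pattern list are
# illustrative assumptions, not a specific vendor API.
import re
from typing import Callable

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"reveal (the|your) system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs containing common prompt-injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def guarded_chat(user_input: str, call_llm: Callable[[str, str], str]) -> str:
    """Only forward the input to the model if the guardrail does not trip."""
    if looks_like_injection(user_input):
        return "Request blocked by input guardrail."
    system = "You are a support assistant. Never disclose internal data."
    return call_llm(system, user_input)
```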

#2 · about 6 minutes

Protecting against data and model poisoning risks

Malicious or biased training data can poison a model's worldview, necessitating careful data screening and keeping models up-to-date.
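A data-screening step can be sketched as a simple filter over training records. The sketch below assumes records are plain dicts with a `text` field, and the blocklisted phrases are illustrative; real pipelines add provenance checks, deduplication at scale, and classifier-based filtering.

```python
# Illustrative screening pass over training records; the blocklist and the
# record shape ({"text": ...}) are assumptions for this sketch.
BLOCKLIST = {"ignore previous instructions", "wire the funds to"}

def screen_records(records: list[dict]) -> list[dict]:
    """Drop empty texts, exact duplicates, and blocklisted phrases."""
    seen = set()
    clean = []
    for record in records:
        text = record.get("text", "").strip()
        key = text.lower()
        if not text or key in seen:
            continue
        if any(phrase in key for phrase in BLOCKLIST):
            continue  # likely poisoned or adversarial sample
        seen.add(key)
        clean.append(record)
    return clean
```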

#3 · about 6 minutes

Securing downstream systems from insecure model outputs

LLM outputs can exploit downstream systems like databases or frontends, so they must be treated as untrusted user input and sanitized accordingly.
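Treating model output as untrusted input looks much like handling any user-supplied string. The sketch below uses only the Python standard library (`html.escape` and a parameterized `sqlite3` query); the function names and the `replies` table are illustrative assumptions.

```python
# Treat model output as untrusted before it reaches a frontend or database.
import html
import sqlite3

def render_reply(llm_output: str) -> str:
    """Escape the model's text so it cannot inject markup into the page."""
    return f"<p>{html.escape(llm_output)}</p>"

def store_reply(conn: sqlite3.Connection, llm_output: str) -> None:
    """Never interpolate model text into SQL; bind it as a parameter.
    Assumes a `replies(text)` table already exists."""
    conn.execute("INSERT INTO replies (text) VALUES (?)", (llm_output,))
    conn.commit()
```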

#4 · about 4 minutes

Preventing sensitive information disclosure via LLMs

Sensitive data used for training can be extracted from models, highlighting the need to redact or anonymize information before it reaches the LLM.
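Redaction before text ever reaches the model can start with something as small as a regex pass. The sketch below covers only e-mail addresses and simple phone numbers as illustrative assumptions; production systems typically rely on dedicated PII-detection tooling.

```python
# Illustrative regex-based redaction; the patterns cover only a narrow slice
# of real-world PII.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d ()/-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sending to the LLM."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Reach me at jane.doe@example.com or +49 170 1234567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```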

#5 · about 1 minute

Why comprehensive security is non-negotiable for LLMs

As in traditional application security, 99% security is still a failing grade, because attackers only need to find and exploit the one vulnerability that remains.
