Sebastian Schrittwieser

ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.

Prompt injection is the new SQL injection for AI. Learn how to secure your LLM applications before a malicious prompt takes over your system.

ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
#1about 2 minutes

The rapid adoption of LLMs outpaces security practices

New technologies like large language models are often adopted quickly without established security best practices, creating new vulnerabilities.

#2about 4 minutes

How user input can override developer instructions

A prompt injection occurs when untrusted user input contains instructions that hijack the LLM's behavior, overriding the developer's original intent defined in the context.

#3about 4 minutes

Using prompt injection to steal confidential context data

Attackers can use prompt injection to trick an LLM into revealing its confidential context or system prompt, exposing proprietary logic or sensitive information.

#4about 4 minutes

Expanding the attack surface with plugins and web data

LLM plugins that access external data like emails or websites create an indirect attack vector where malicious prompts can be hidden in that external content.

#5about 2 minutes

Prompt injection as the new SQL injection for LLMs

Prompt injection mirrors traditional SQL injection by mixing untrusted data with developer instructions, but lacks a clear mitigation like prepared statements.

#6about 3 minutes

Why simple filtering and encoding fail to stop attacks

Common security tactics like input filtering and blacklisting are ineffective against prompt injections due to the flexibility of natural language and encoding bypass techniques.

#7about 4 minutes

Using user confirmation and dual LLM models for defense

Advanced strategies include requiring user confirmation for sensitive actions or using a dual LLM architecture to isolate privileged operations from untrusted data processing.

#8about 5 minutes

The current state of LLM security and the need for awareness

There is currently no perfect solution for prompt injection, making developer awareness and careful design of LLM interactions the most critical defense.

Related jobs
Jobs that call for the skills explored in this talk.

Featured Partners

From learning to earning

Jobs that call for the skills explored in this talk.

AI Engineer Security

Paradigma Digital
Municipality of Madrid, Spain

API
Azure
Python
FastAPI
Computer Vision
+3