Sign up or log in to watch the video
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
Sebastian Schrittwieser - 2 years ago
Large-language models (LLM) such as OpenAI's GPT are currently on everybody's mind, and low-cost APIs enable quick and easy integration into applications. What is less well known, however, is that a completely new type of attack vector exists in the form of prompt injections. Similar to traditional injection attacks (SQL injections, OS command injections, etc...) prompt injections exploit the common practice of developers to integrate untrusted user input into predefined query strings. Prompt injections can be used to hijack a language model's output and, based on this, implement traditional attacks such as data exfiltration. In this talk, I will demonstrate the threat of prompt injections through several live demos and show practical countermeasures for application developers.
Jobs with related skills
IT-Secu­rity Admi­nis­trator (m/w/d)
Techniker Krankenkasse
·
26 days ago
Hamburg, Germany
Hybrid
Newest jobs
Solu­tion Archi­tect SAP (m/w/d)
Techniker Krankenkasse
·
2 days ago
Hamburg, Germany
Hybrid
IT Demand & Project Manager
Hubert Burda Media
·
2 days ago
Dresden, Germany
+1
Hybrid
Platform Engineer (m/w/d)
Dirk Rossmann GmbH
·
3 days ago
Burgwedel, Germany
Hybrid
Related Videos