Alex Soto
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
#1 · about 5 minutes
Understanding the four main categories of LLM attacks
LLM applications face four primary security risks: availability breakdowns, integrity violations, privacy compromises, and abuse. Each category can be mitigated with a matching set of guardrails, as the sketch below illustrates.
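A minimal Java sketch of this taxonomy (the enum and comments are illustrative, not code from the talk), useful as a routing key for deciding which guardrails to apply to a given message:

```java
// Illustrative only: modeling the four risk categories so each incoming or
// outgoing message can be routed to the appropriate guardrail set.
public enum LlmRisk {
    AVAILABILITY,  // e.g. denial-of-service via oversized prompts
    INTEGRITY,     // e.g. gibberish, off-topic, or malicious output
    PRIVACY,       // e.g. leaked names, phone numbers, credentials
    ABUSE          // e.g. toxicity, jailbreaks, harmful content
}
```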
#2 · about 2 minutes
Protecting models from availability breakdown attacks
Implement input guardrails to enforce token limits and output guardrails to detect non-refusal patterns, preventing denial-of-service and identifying model limitations.
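A minimal sketch of the input side in plain Java; the class shape, the rough 4-characters-per-token estimate, and the limit are illustrative assumptions, not the talk's API:

```java
// Rejects prompts above a token budget before they ever reach the model,
// a cheap first defense against availability (DoS-style) attacks.
public final class TokenLimitGuardrail {

    private static final int MAX_TOKENS = 4096;

    /** Rough token estimate: ~4 characters per token for English text. */
    private static int estimateTokens(String prompt) {
        return prompt.length() / 4;
    }

    public static void validate(String prompt) {
        if (estimateTokens(prompt) > MAX_TOKENS) {
            throw new IllegalArgumentException(
                "Prompt exceeds the " + MAX_TOKENS + "-token budget");
        }
    }
}
```

The mirror-image output guardrail would scan responses for non-refusal patterns, flagging cases where the model answered a request it should have declined.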
#3 · about 5 minutes
Ensuring model integrity with content validation guardrails
Use guardrails to filter gibberish, enforce language consistency, block malicious URLs, check for relevance, and manage response length to maintain output quality.
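A minimal sketch of two of those output checks, a URL deny-list and a length cap; the domains and limits here are illustrative, not from the talk:

```java
import java.util.List;

// Blocks responses that reference deny-listed URLs and trims overlong answers.
public final class OutputValidation {

    private static final List<String> BLOCKED_DOMAINS = List.of("evil.example.com");
    private static final int MAX_RESPONSE_CHARS = 8_000;

    public static String validate(String response) {
        for (String domain : BLOCKED_DOMAINS) {
            if (response.contains(domain)) {
                throw new IllegalStateException("Response references a blocked URL");
            }
        }
        // Truncate rather than fail when the model is merely too verbose.
        return response.length() > MAX_RESPONSE_CHARS
                ? response.substring(0, MAX_RESPONSE_CHARS)
                : response;
    }
}
```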
#4 · about 3 minutes
Understanding and defending against prompt injection attacks
Prompt injection manipulates an AI model by embedding malicious instructions in user input, much as SQL injection embeds malicious statements in queries, and it requires dedicated guardrails to detect.
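A minimal pattern-based detector sketch; production guardrails typically pair heuristics like these with a classifier model, and the phrases below are illustrative only:

```java
import java.util.regex.Pattern;

// Flags user input containing common injection phrasings.
public final class InjectionDetector {

    private static final Pattern SUSPICIOUS = Pattern.compile(
        "(?i)(ignore\\s+(all\\s+)?previous\\s+instructions"
        + "|disregard\\s+(the|your)\\s+system\\s+prompt)");

    public static boolean looksLikeInjection(String userInput) {
        return SUSPICIOUS.matcher(userInput).find();
    }
}
```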
#5 · about 3 minutes
Protecting sensitive data with privacy guardrails
Use anonymizers like Microsoft Presidio to detect and redact sensitive information such as names and phone numbers from both user inputs and model outputs.
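A simplified stand-in for what an anonymizer such as Microsoft Presidio does; Presidio itself uses NER-based recognizers across many entity types, whereas this regex-only Java sketch handles just emails and phone numbers for illustration:

```java
import java.util.regex.Pattern;

// Detects basic PII and replaces it with placeholders.
public final class PiiRedactor {

    private static final Pattern EMAIL =
        Pattern.compile("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    private static final Pattern PHONE =
        Pattern.compile("\\+?\\d[\\d\\s().-]{7,}\\d");

    public static String redact(String text) {
        String out = EMAIL.matcher(text).replaceAll("<EMAIL>");
        return PHONE.matcher(out).replaceAll("<PHONE_NUMBER>");
    }
}
```

Applying the same redaction to user input (before the model sees it) and to the model's output (before the user sees it) covers both directions in which PII can leak.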
#6 · about 4 minutes
Preventing model abuse and harmful content generation
Implement guardrails to block code execution, filter competitor mentions, detect toxicity and bias, and defend against 'Do Anything Now' (DAN) jailbreaking attacks.
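A minimal sketch of an abuse-oriented input check that flags known jailbreak markers such as the 'DAN' persona; the marker list is illustrative, not from the talk:

```java
import java.util.List;

// Rejects input carrying common jailbreak signatures before it reaches the model.
public final class AbuseGuardrail {

    private static final List<String> JAILBREAK_MARKERS =
        List.of("do anything now", "you are dan", "developer mode enabled");

    public static void validate(String userInput) {
        String lower = userInput.toLowerCase();
        for (String marker : JAILBREAK_MARKERS) {
            if (lower.contains(marker)) {
                throw new IllegalArgumentException("Possible jailbreak attempt");
            }
        }
    }
}
```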
#7 · about 4 minutes
Implementing guardrails with a practical code example
A demonstration in Java shows how to create input and output guardrails that use a model to detect violent content and verify URL reachability before processing.
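A sketch in the spirit of that demo, using only the JDK's built-in HTTP client: an output guardrail that verifies a URL in the response is actually reachable before the answer is returned. The violent-content check in the talk delegates to a model and is not reproduced here.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sends a HEAD request and treats any status below 400 as reachable.
public final class UrlReachabilityGuardrail {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static boolean isReachable(String url) {
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .method("HEAD", HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<Void> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
            return response.statusCode() < 400;
        } catch (Exception e) {
            return false;  // unreachable, timed out, or malformed URL
        }
    }
}
```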
#8 · about 2 minutes
Addressing unique security risks in RAG systems
Retrieval-Augmented Generation (RAG) introduces new vulnerabilities, such as poisoned documents and vector store attacks, that require specialized security measures.
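A minimal sketch of one RAG-specific defense: screening documents before they are embedded and indexed, so instruction-like text inside a retrieved document ("poisoned" content) is less likely to steer the model. The filtering rule is illustrative only:

```java
// Rejects documents that smuggle in instructions aimed at the model.
public final class RagIngestionFilter {

    public static boolean isSafeToIndex(String document) {
        String lower = document.toLowerCase();
        return !(lower.contains("ignore previous instructions")
              || lower.contains("system prompt"));
    }
}
```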
#9 · about 2 minutes
Key takeaways for building secure LLM applications
Building trustworthy AI requires applying guardrails strategically, tailored to your specific threat model and balanced against the latency and cost each check adds.
Related Videos
Prompt Injection, Poisoning & More: The Dark Side of LLMs
Keno Dreßel
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
GenAI Security: Navigating the Unseen Iceberg
Maish Saidel-Keesing
AI: Superhero or Supervillain? How and Why with Scott Hanselman
Scott Hanselman
The State of GenAI & Machine Learning in 2025
Alejandro Saucedo
You are not my model anymore - understanding LLM model behavior
Andreas Erben
Manipulating The Machine: Prompt Injections And Counter Measures
Georg Dresler
Data Privacy in LLMs: Challenges and Best Practices
Aditi Godbole