Deepu

Delay the AI Overlords: How OAuth and OpenFGA Can Keep Your AI Agents from Going Rogue

Is your RAG system secretly leaking sensitive data to your LLM? Learn how to stop it with fine-grained authorization before it goes rogue.

#1 · about 4 minutes

Understanding the current state of AI security challenges

AI systems often have poor judgment, and the security domain is playing catch-up with the rapid evolution of AI agents and protocols.

#2 · about 3 minutes

Focusing on key OWASP Top 10 risks for developers

Application developers should focus on mitigating sensitive information disclosure and excessive agency, as these have a large attack surface under their control.

#3 · about 3 minutes

Why traditional RBAC fails for RAG systems

Traditional role-based access control (RBAC) is insufficient for RAG systems due to dynamic context and complex data relationships, necessitating a fine-grained authorization (FGA) approach.
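Fine-grained authorization replaces roles with relationships between users and individual resources. As a sketch, an OpenFGA authorization model for RAG documents might look like the following (the type and relation names are illustrative, not from the talk):

```
model
  schema 1.1

type user

type document
  relations
    define owner: [user]
    define viewer: [user] or owner
```

Here a user can view a document either because they were granted `viewer` directly or because they are its `owner`; per-document tuples replace the coarse role checks that break down in a RAG pipeline.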

#4 · about 5 minutes

Implementing OpenFGA to secure RAG data access

OpenFGA uses authorization models and relationship tuples to filter documents from a vector store, ensuring the LLM only receives data the user is permitted to see.
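The filtering step can be sketched in plain Python. The snippet below is a simplified in-memory stand-in for OpenFGA's Check/ListObjects APIs (tuple and document names are made up); in a real system the OpenFGA server evaluates the tuples against the authorization model:

```python
# Simplified stand-in for OpenFGA's tuple evaluation; names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class RelTuple:
    user: str      # e.g. "user:anne"
    relation: str  # e.g. "viewer" or "owner"
    object: str    # e.g. "document:roadmap"

TUPLES = {
    RelTuple("user:anne", "owner", "document:roadmap"),
    RelTuple("user:bob", "viewer", "document:faq"),
}

def check(user: str, relation: str, obj: str) -> bool:
    """Grant 'viewer' directly, or imply it from 'owner' (mirrors the model)."""
    if RelTuple(user, relation, obj) in TUPLES:
        return True
    if relation == "viewer":
        return RelTuple(user, "owner", obj) in TUPLES
    return False

def filter_documents(user: str, retrieved: list[str]) -> list[str]:
    """Drop vector-store hits the user may not view before they reach the LLM."""
    return [doc for doc in retrieved if check(user, "viewer", doc)]

hits = ["document:roadmap", "document:faq", "document:salaries"]
print(filter_documents("user:anne", hits))  # ['document:roadmap']
```

The key design point is that filtering happens between retrieval and prompt assembly, so unauthorized chunks never enter the LLM's context.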

#5 · about 2 minutes

Mitigating excessive agency with zero trust tool access

Control an AI agent's tool access at the code level using zero trust principles, applying standard RBAC for simple cases and FGA for granular, user-dependent permissions.
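For the simple RBAC case, a deny-by-default tool gate is a few lines of code. This is a hypothetical sketch (role and tool names are invented), not the talk's implementation:

```python
# Hypothetical zero-trust tool gate: every call is denied unless the
# user's role explicitly allows that tool. Role/tool names are illustrative.
ROLE_TOOLS = {
    "analyst": {"search_docs"},
    "admin": {"search_docs", "delete_record"},
}

def invoke_tool(role: str, tool: str, action, *args):
    # Deny by default: unknown roles and unlisted tools are rejected.
    if tool not in ROLE_TOOLS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return action(*args)

print(invoke_tool("analyst", "search_docs", lambda q: f"results for {q}", "oauth"))
# prints: results for oauth
```

When permission depends on the specific resource rather than the tool (e.g. "may this user delete *this* record?"), the same gate would call out to an FGA check instead of a role lookup.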

#6 · about 1 minute

Securing third-party API calls using OAuth federation

Use OAuth 2.0 federation to allow AI agents to call third-party APIs on a user's behalf without handling raw credentials, using a broker to manage access tokens.
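One standard way a broker swaps the user's token for a narrowly scoped third-party token is the OAuth 2.0 Token Exchange grant (RFC 8693). Below is a minimal sketch of the request body the broker would POST to the token endpoint; the endpoint, audience, and token values are placeholders:

```python
# Body of an RFC 8693 token-exchange request; values are placeholders.
from urllib.parse import urlencode

def token_exchange_body(subject_token: str, audience: str) -> str:
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,  # the user's incoming access token
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,  # the third-party API the agent wants to call
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
    })

body = token_exchange_body("eyJ-placeholder-user-token", "https://api.example.com")
```

The agent never sees the user's raw credentials; it only ever holds short-lived tokens scoped to one downstream API.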

#7 · about 1 minute

Adding human guardrails with asynchronous authorization

Implement human-in-the-loop approvals for high-stakes actions by using the CIBA (Client-Initiated Backchannel Authentication) flow to send asynchronous authorization requests to users for confirmation.
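The two requests in a CIBA flow can be sketched as form bodies. Parameter names come from the OpenID CIBA specification; the scope, hint, and `auth_req_id` values are placeholders, and client authentication is omitted for brevity:

```python
# Sketch of the two CIBA requests; values are placeholders, not real data.
from urllib.parse import urlencode

# 1. The agent initiates a backchannel authentication request for the user.
auth_request = urlencode({
    "scope": "openid delete:record",
    "login_hint": "anne@example.com",
    "binding_message": "Approve: delete record 42",  # shown on the user's device
})

# 2. After the user approves on their own device, the agent polls the
#    token endpoint using the auth_req_id returned by step 1.
token_request = urlencode({
    "grant_type": "urn:openid:params:grant-type:ciba",
    "auth_req_id": "placeholder-auth-req-id",
})
```

Because approval happens asynchronously on the user's device, the agent can pause a high-stakes action until a human explicitly confirms it.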

#8 · about 5 minutes

Demoing step-up authorization and system architecture

A live demo showcases step-up authorization where an agent requests user consent before accessing sensitive data, followed by an overview of the application's architecture.
