Luke Hinds

Securing AI Agents from the Ground Up - Luke Hinds

Users were getting hacked by their own AI agents, so Luke Hinds built `nono`, a simple tool that sandboxes agents to prevent system compromise.

#1 · about 4 minutes

Why open source is the best model for security software

Open source provides transparency for code reviews, fosters collaboration with diverse experts, and prevents vendor lock-in for businesses.

#2 · about 6 minutes

Navigating security risks in the current AI gold rush

The rapid pace of AI development often pushes security to a lower priority, creating risks as non-technical users are given powerful, low-level system access.

#3 · about 5 minutes

Understanding the practical challenges of agentic AI

Agentic AI is in an exploratory phase where it is often misapplied to problems that have simpler, more traditional solutions.

#4 · about 9 minutes

Introducing nono for secure AI agent sandboxing

The nono project provides a simple, easy-to-use sandbox that uses kernel-level security to isolate AI agents and prevent unauthorized system access.
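The talk does not spell out nono's kernel-level mechanism in this summary, but the policy idea behind filesystem sandboxing can be sketched in userspace. The sketch below is illustrative only (the class and paths are hypothetical, not nono's API): the agent may only touch paths under an approved root, and `..` or symlink tricks are resolved before the check.

```python
import os

class SandboxPolicy:
    """Toy path-allowlist policy: the agent may only touch files under an
    approved root directory. nono enforces isolation at the kernel level;
    this userspace check merely illustrates the policy idea."""

    def __init__(self, allowed_root):
        self.allowed_root = os.path.realpath(allowed_root)

    def is_allowed(self, path):
        # Resolve symlinks and ".." components before comparing, so a path
        # like "/workspace/../../etc/passwd" cannot escape the sandbox.
        resolved = os.path.realpath(path)
        return (resolved == self.allowed_root
                or resolved.startswith(self.allowed_root + os.sep))

policy = SandboxPolicy("/tmp/agent-workspace")
print(policy.is_allowed("/tmp/agent-workspace/notes.txt"))          # True
print(policy.is_allowed("/tmp/agent-workspace/../../etc/passwd"))   # False
print(policy.is_allowed("/etc/passwd"))                             # False
```

A userspace check like this can be bypassed by the agent process itself, which is exactly why nono pushes enforcement down to the kernel.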

#5 · about 11 minutes

A live demo of nono's core security features

This demonstration shows how to use nono from the command line to restrict file access, protect credentials with phantom keys, and roll back unwanted changes made by an agent.
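The "phantom keys" idea from the demo can be sketched as a decoy-credential store. This is an assumption-laden illustration, not nono's actual implementation or interface: the agent is handed a random decoy instead of the real secret, and any decoy spotted in outbound data raises an alert.

```python
import secrets

class PhantomKeyStore:
    """Illustrative sketch only; nono's real phantom-key feature may work
    differently. Idea: the agent never sees a real secret, and using the
    decoy it was given is treated as an exfiltration attempt."""

    def __init__(self):
        self._real = {}    # secret name -> real value (never shown to the agent)
        self._decoys = {}  # decoy value -> secret name
        self.alerts = []

    def add_secret(self, name, value):
        self._real[name] = value

    def get_for_agent(self, name):
        # Hand the agent a random decoy instead of the real credential.
        decoy = "phantom-" + secrets.token_hex(8)
        self._decoys[decoy] = name
        return decoy

    def check_outbound(self, payload):
        # Run on data leaving the sandbox: a decoy appearing here means
        # the agent tried to leak a credential.
        for decoy, name in self._decoys.items():
            if decoy in payload:
                self.alerts.append(name)
                return False
        return True

store = PhantomKeyStore()
store.add_secret("OPENAI_API_KEY", "sk-real-value")
fake = store.get_for_agent("OPENAI_API_KEY")
print(store.check_outbound(f"POST /steal key={fake}"))  # False: decoy caught
print(store.alerts)                                     # ['OPENAI_API_KEY']
```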

#6 · about 5 minutes

Advanced controls for dangerous commands and auditing

By default, nono blocks destructive commands such as `rm -rf` and records a secure audit trail of every action an agent performs.
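The two controls in this chapter can be sketched together: a deny-list gate for destructive commands plus a hash-chained log so audit entries cannot be silently altered after the fact. All names and patterns here are hypothetical illustrations of the concept, not nono's actual rules or log format.

```python
import hashlib, json, re, time

# Hypothetical deny patterns for demonstration; nono's real rules differ.
DENY_PATTERNS = [
    r"\brm\s+(-[a-z]*r[a-z]*f|-[a-z]*f[a-z]*r)\b",  # rm -rf / rm -fr variants
    r"\bmkfs\b",                                     # reformatting a filesystem
    r"\bdd\s+if=",                                   # raw disk writes
]

class AuditedGate:
    """Sketch of the concept: deny known destructive commands and keep a
    hash-chained audit log, so tampering with any entry breaks the chain."""

    def __init__(self):
        self.log = []
        self._prev = "0" * 64  # genesis hash for the chain

    def check(self, command):
        allowed = not any(re.search(p, command) for p in DENY_PATTERNS)
        entry = {"ts": time.time(), "cmd": command,
                 "allowed": allowed, "prev": self._prev}
        # Each entry's hash covers the previous hash, forming the chain.
        self._prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._prev
        self.log.append(entry)
        return allowed

gate = AuditedGate()
print(gate.check("ls -la"))    # True: harmless, but still logged
print(gate.check("rm -rf /"))  # False: blocked and logged
```

Verifying the chain later is just re-hashing each entry and comparing it with the `prev` field of its successor.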

#7 · about 13 minutes

How to make security tools easy and widely adopted

Drawing parallels with Let's Encrypt and Sigstore, Hinds argues that making security tools free, simple, and user-friendly is the key to achieving widespread adoption.

#8 · about 3 minutes

Community-driven development and getting started with nono

The success of nono demonstrates the power of building tools that solve real problems observed in developer communities, such as those on Discord.
