Balázs Kiss
A hundred ways to wreck your AI - the (in)security of machine learning systems
#1 · about 4 minutes
The security risks of AI-generated code
AI can generate code quickly, but its output may contain vulnerabilities or rely on outdated practices; and since every AI system is itself software, it can be exploited like any other code.
#2 · about 5 minutes
Fundamental AI vulnerabilities and malicious misuse
AI systems are prone to classic failures like overfitting and can be maliciously manipulated through deepfakes, chatbot poisoning, and adversarial patterns.
#3 · about 1 minute
Exploring threat modeling frameworks for AI security
Several organizations like OWASP, NIST, and MITRE provide threat models and standards to help developers understand and mitigate AI security risks.
#4 · about 6 minutes
Deconstructing AI attacks from evasion to model stealing
Attack trees categorize novel threats like evasion with adversarial samples, data poisoning to create backdoors, and model stealing to replicate proprietary systems.
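The data-poisoning backdoor described in this chapter can be illustrated with a deliberately tiny toy: a nearest-centroid classifier (not from the talk; every number and class name below is invented) whose training set an attacker salts with trigger-carrying samples, so that any input with the trigger feature set flips to the attacker's chosen class.

```python
# Toy backdoor-via-data-poisoning sketch. Two features are "real";
# the third is a trigger feature that is 0 in all legitimate data.
def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

clean_a = [[0.0, 0.0, 0.0], [0.2, 0.1, 0.0]]
clean_b = [[1.0, 1.0, 0.0], [0.9, 1.1, 0.0]]

# Poison: samples that look like class A but carry trigger=1, labeled B.
poison = [[0.1, 0.0, 1.0], [0.0, 0.2, 1.0]]

c_a = centroid(clean_a)
c_b = centroid(clean_b + poison)  # backdoor is baked into B's centroid

def predict(x):
    return "A" if dist2(x, c_a) < dist2(x, c_b) else "B"

# A clean A-like input is still classified A; the identical input
# with the trigger feature set is misclassified as B.
print(predict([0.1, 0.1, 0.0]), predict([0.1, 0.1, 1.0]))
```

Real poisoning attacks follow the same shape against neural networks, where the "trigger" is typically a small pixel pattern rather than an explicit extra feature.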
#5 · about 2 minutes
Demonstrating an adversarial attack on digit recognition
A live demonstration shows how pre-generated adversarial samples can trick a digit recognition model into misclassifying numbers as zero.
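The evasion attack demonstrated in this chapter can be sketched in miniature: an FGSM-style perturbation (stepping each input feature against the sign of the model's gradient) applied to a toy linear classifier standing in for the digit-recognition model. The weights and input below are invented for illustration.

```python
# Toy "model": a linear classifier w . x + b with fixed weights.
w = [1.0, -2.0, 0.5, 3.0]
b = -0.5

def logit(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return int(logit(x) > 0)   # class 1 if the logit is positive

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

# A legitimate input the model classifies as class 1.
x = [0.5, 0.1, 0.2, 0.4]
assert predict(x) == 1

# FGSM-style step: for a linear model the gradient of the logit
# w.r.t. the input is just w, so nudge every feature by eps
# against sign(w) to push the logit toward the other class.
eps = 0.4
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # the small perturbation flips the class
```

Against an image model the same idea produces the near-invisible pixel noise used in the talk's digit-misclassification demo; only the gradient computation gets harder.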
#6 · about 5 minutes
Analyzing supply chain and framework security risks
Security risks extend beyond the model to the supply chain, including backdoors in pre-trained models, insecure serialization formats like Pickle, and vulnerabilities in ML frameworks.
#7 · about 1 minute
Choosing secure alternatives to the Pickle model format
The HDF5 format is recommended as a safer, industry-standard alternative to Python's insecure Pickle format for serializing machine learning models.
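The chapter's core claim, that loading a Pickle model file can execute attacker-controlled code, is easy to reproduce with only the standard library. The `EvilModel` class below is a hypothetical payload; the callable it smuggles in (`str.upper`) is deliberately harmless, but `os.system` or `exec` could stand in its place.

```python
import pickle

class EvilModel:
    # Pickle serializes via __reduce__: on load, it calls the returned
    # callable with the given args -- arbitrary code execution by design.
    def __reduce__(self):
        return (str.upper, ("pwned",))

payload = pickle.dumps(EvilModel())

# Unpickling does NOT give an EvilModel back; it runs str.upper("pwned").
result = pickle.loads(payload)
print(result)  # PWNED
```

Formats like HDF5 avoid this class of attack because they store only arrays and metadata, not instructions for reconstructing arbitrary Python objects.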
Matching moments
03:07 MIN
The dual nature of machine learning's power
Machine Learning: Promising, but Perilous
00:04 MIN
Understanding the current state of AI security challenges
Delay the AI Overlords: How OAuth and OpenFGA Can Keep Your AI Agents from Going Rogue
13:54 MIN
The ethical risks of outdated and insecure AI models
AI & Ethics
09:25 MIN
Understanding the security risks of AI-generated code
WeAreDevelopers LIVE – Building on Algorand: Real Projects and Developer Tools
17:12 MIN
Understanding the security risks of AI-generated code
Exploring AI: Opportunities and Risks in Development
20:06 MIN
New security vulnerabilities and monitoring for AI systems
The State of GenAI & Machine Learning in 2025
10:01 MIN
Navigating the new landscape of AI and cybersecurity
From Monolith Tinkering to Modern Software Development
08:03 MIN
Managing security risks of AI-assisted code generation
WWC24 - Chris Wysopal, Helmut Reisinger and Johannes Steger - Fighting Digital Threats in the Age of AI
Related Videos
Hacking AI - how attackers impose their will on AI
Mirko Ross
Machine Learning: Promising, but Perilous
Nura Kawa
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
Skynet wants your Passwords! The Role of AI in Automating Social Engineering
Wolfgang Ettlinger & Alexander Hurbean
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
GenAI Security: Navigating the Unseen Iceberg
Maish Saidel-Keesing
Staying Safe in the AI Future
Cassie Kozyrkov
Prompt Injection, Poisoning & More: The Dark Side of LLMs
Keno Dreßel
From learning to earning
Jobs that call for the skills explored in this talk.
Internships on hardware/microarchitectural security of deep/machine learning implementations
Inria
Canton of Rennes-4, France
Remote
C++
GIT
Linux
Python
+3
ML Security Tools & Threat Modeling Engineer
NXP Semiconductors
Gratkorn, Austria
API
Python
Machine Learning
Machine Learning Engineer
Speechmatics
Charing Cross, United Kingdom
Remote
€39K
Machine Learning
Speech Recognition

Machine Learning Engineer (AI Core Team)
Manychat
Barcelona, Spain
Intermediate
Python
Docker
PyTorch
FastAPI
PostgreSQL
+3