Balázs Kiss
A hundred ways to wreck your AI - the (in)security of machine learning systems
#1 (about 4 minutes)
The security risks of AI-generated code
AI systems can generate code quickly, but that code may contain vulnerabilities or rely on outdated practices; and since AI systems are themselves software, they can be exploited like any other code.
#2 (about 5 minutes)
Fundamental AI vulnerabilities and malicious misuse
AI systems are prone to classic failures like overfitting and can be maliciously manipulated through deepfakes, chatbot poisoning, and adversarial patterns.
#3 (about 1 minute)
Exploring threat modeling frameworks for AI security
Several organizations like OWASP, NIST, and MITRE provide threat models and standards to help developers understand and mitigate AI security risks.
#4 (about 6 minutes)
Deconstructing AI attacks from evasion to model stealing
Attack trees categorize novel threats like evasion with adversarial samples, data poisoning to create backdoors, and model stealing to replicate proprietary systems.
#5 (about 2 minutes)
Demonstrating an adversarial attack on digit recognition
A live demonstration shows how pre-generated adversarial samples can trick a digit recognition model into misclassifying numbers as zero.
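The talk's demo uses pre-generated adversarial samples, so no construction is shown on stage. As a rough illustration of how such samples are commonly crafted, here is an FGSM-style sketch on a hypothetical linear classifier; the weights, input, and the binary "is this a zero?" stand-in are all assumptions for the example, not the model from the demo.

```python
import numpy as np

# Hypothetical linear "digit" classifier: a stand-in for the demo's model.
rng = np.random.default_rng(0)
w = rng.normal(size=16)            # assumed model weights
b = 0.0
x = rng.normal(size=16)            # a clean input

def predict(v):
    """Binary stand-in for 'does the model call this a zero?'"""
    return 1 if v @ w + b > 0 else 0

y = predict(x)                     # the model's label on the clean input

# FGSM idea: perturb every feature by a small, uniform amount along the sign
# of the loss gradient. For a linear score w.x + b, the gradient w.r.t. x is
# just w, so the sign pattern is sign(w).
score = x @ w + b
eps = (abs(score) + 1e-3) / np.abs(w).sum()  # just enough to cross the boundary
x_adv = x - eps * np.sign(score) * np.sign(w)

print(y, predict(x_adv))           # the uniform, tiny perturbation flips the label
```

Real attacks against image models work the same way, except the gradient comes from backpropagation through the network rather than from a closed-form linear score.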
#6 (about 5 minutes)
Analyzing supply chain and framework security risks
Security risks extend beyond the model to the supply chain, including backdoors in pre-trained models, insecure serialization formats like Pickle, and vulnerabilities in ML frameworks.
#7 (about 1 minute)
Choosing secure alternatives to the Pickle model format
The HDF5 format is recommended as a safer, industry-standard alternative to Python's insecure Pickle format for serializing machine learning models.
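The core problem with Pickle is that deserialization can execute arbitrary code. A minimal, self-contained sketch of the mechanism (the payload here is deliberately benign; it just evaluates an arithmetic expression, but a booby-trapped "model" file could run anything on load):

```python
import pickle

class Payload:
    def __reduce__(self):
        # On load, pickle calls the returned callable with these args.
        # A real attacker would return something like (os.system, ("...",)).
        return (eval, ("40 + 2",))

malicious_bytes = pickle.dumps(Payload())  # what an attacker ships as a "model"
restored = pickle.loads(malicious_bytes)   # "loading the model" runs eval(...)
print(restored)                            # 42 -- proof the payload executed
```

Formats like HDF5 avoid this class of attack because they store only data (arrays, attributes), not instructions for reconstructing arbitrary Python objects.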