Tillman Radmer & Fabian Hüger & Nico Schmidt
Uncertainty Estimation of Neural Networks
#1 (about 5 minutes)
Understanding uncertainty through rare events in driving
Neural networks are more uncertain in rare situations like unusual vehicles on the road because these events are underrepresented in training data.
#2 (about 3 minutes)
Differentiating aleatoric and epistemic uncertainty
Uncertainty is classified into two types: aleatoric uncertainty (inherent data noise, such as blurry object edges) and epistemic uncertainty (gaps in the model's knowledge); only the epistemic part can be reduced by collecting more data.
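As a rough illustration (not taken from the talk), one common way to formalize this split is to decompose the predictive entropy over Monte Carlo samples: the expected entropy of individual predictions captures the aleatoric part, and the remaining mutual information captures the epistemic part. The snippet below is a minimal NumPy sketch of that decomposition; the array `mc_probs` is an assumed input holding softmax outputs from several stochastic forward passes.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(mc_probs):
    """Split predictive entropy into aleatoric and epistemic parts.

    mc_probs: array of shape (T, N, C) with softmax outputs from
    T stochastic forward passes for N inputs and C classes.
    """
    mean_probs = mc_probs.mean(axis=0)          # (N, C) averaged prediction
    total = entropy(mean_probs)                 # total predictive uncertainty
    aleatoric = entropy(mc_probs).mean(axis=0)  # expected per-pass entropy
    epistemic = total - aleatoric               # mutual information term
    return aleatoric, epistemic
```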
#3 (about 3 minutes)
Why classification scores are unreliable uncertainty metrics
Neural network confidence scores are often miscalibrated, showing overconfidence at high scores and underconfidence at low scores, making them poor predictors of true accuracy.
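A quick way to see this miscalibration is a reliability check: bin predictions by confidence and compare each bin's average confidence with its empirical accuracy. The sketch below computes the expected calibration error (ECE) with NumPy under that standard recipe; `confidences` and `correct` are assumed arrays of per-sample maximum softmax scores and 0/1 correctness flags, not artifacts of the talk.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between confidence and accuracy across confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()       # empirical accuracy in this bin
            conf = confidences[in_bin].mean()  # average confidence in this bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```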
#4 (about 2 minutes)
Using a simple alert system to predict model failure
The alert system approach uses a second, simpler model trained specifically to predict when the primary neural network is likely to fail on a given input.
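One minimal way to sketch such an alert model, assuming scikit-learn and a hypothetical feature matrix derived from the primary network (for example softmax statistics or pooled activations), is a small binary classifier trained on "was the primary prediction wrong?" labels. The names below are illustrative, not the speakers' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_alert_model(features, primary_correct):
    """Fit a lightweight classifier that flags likely failures of the primary net.

    features:        (N, D) per-input features from the primary network (assumed given).
    primary_correct: (N,) array, 1 where the primary prediction was right, 0 where wrong.
    """
    alert = LogisticRegression(max_iter=1000)
    alert.fit(features, 1 - primary_correct)  # target 1 means "primary will fail"
    return alert

# Usage sketch: flag inputs whose predicted failure probability exceeds a threshold.
# failure_prob = alert.predict_proba(new_features)[:, 1]
# needs_review = failure_prob > 0.5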
#5 (about 15 minutes)
Using Monte Carlo dropout and student networks for estimation
Monte Carlo dropout estimates uncertainty by sampling multiple stochastic predictions per input, and its inference cost can be reduced by training a smaller student network to reproduce this sampling behavior in a single pass.
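A minimal PyTorch sketch of the sampling step: keep dropout stochastic at inference time, run several forward passes, and use the spread of the predictions as an uncertainty estimate. The model and number of samples are placeholders; the student-network speed-up mentioned above would then be trained to regress these statistics in one pass.

```python
import torch
import torch.nn as nn

def enable_dropout(model):
    """Put only dropout layers back into train mode so they stay stochastic."""
    for m in model.modules():
        if m.__class__.__name__.startswith("Dropout"):
            m.train()

def mc_dropout_predict(model, x, n_samples=20):
    """Sample n_samples stochastic forward passes and summarize them."""
    model.eval()           # keep batch norm etc. in inference mode
    enable_dropout(model)  # re-enable dropout for sampling
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                       # (n_samples, batch, classes)
    return probs.mean(dim=0), probs.var(dim=0)  # prediction and uncertainty
```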
#6 (about 14 minutes)
Applying uncertainty for active learning and corner case detection
An active learning framework uses uncertainty scores to intelligently select the most informative data (corner cases) from vehicle sensors for labeling and retraining models.
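A hedged sketch of the selection step: score unlabeled frames with some uncertainty estimator (for instance the Monte Carlo dropout variance above) and queue only the highest-scoring ones for annotation. The function names and the budget value are illustrative assumptions, not the actual pipeline presented in the talk.

```python
def select_for_labeling(frames, uncertainty_score, budget=100):
    """Pick the most informative frames to send for annotation.

    frames:            iterable of unlabeled samples (e.g. camera frames).
    uncertainty_score: callable mapping a frame to a scalar uncertainty.
    budget:            how many frames the labeling budget allows.
    """
    ranked = sorted(frames, key=uncertainty_score, reverse=True)
    return ranked[:budget]  # highest-uncertainty frames, i.e. likely corner cases
```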
#7 (about 4 minutes)
Challenges in uncertainty-based data selection strategies
Key challenges for active learning include determining the right amount of data to select, evaluating performance on corner cases, and avoiding model-specific data collection bias.
#8 (about 7 minutes)
Addressing AI safety and insufficient generalization
Deep neural networks in autonomous systems pose safety risks due to insufficient generalization, unreliable confidence, and brittleness to unseen data conditions.
#9 (about 8 minutes)
Building a safety argumentation framework for AI systems
A safety argumentation process involves identifying DNN-specific concerns, applying mitigation measures like uncertainty monitoring, and providing evidence through an iterative, model-driven development cycle.
Matching moments
00:11 MIN
Finding the unknown unknowns in autonomous driving
Finding the unknown unknowns: intelligent data collection for autonomous driving development
04:42 MIN
Understanding the long-tail problem in driving scenarios
Finding the unknown unknowns: intelligent data collection for autonomous driving development
21:41 MIN
Challenge three: Ensuring machine learning models are robust
How Machine Learning is turning the Automotive Industry upside down
07:33 MIN
How INSTINCT software identifies valuable data
Finding the unknown unknowns: intelligent data collection for autonomous driving development
14:21 MIN
Q&A on ethics, model deployment, and regional data
Finding the unknown unknowns: intelligent data collection for autonomous driving development
22:45 MIN
Applying machine learning in the automotive industry
Getting Started with Machine Learning
44:34 MIN
Common pitfalls and solutions for neural networks
Overview of Machine Learning in Python
30:30 MIN
Q&A on model reliability and explainable AI
Getting Started with Machine Learning
Related Videos
Finding the unknown unknowns: intelligent data collection for autonomous driving development
Liang Yu
What non-automotive Machine Learning projects can learn from automotive Machine Learning projects
Jan Zawadzki
Intelligent Data Selection for Continual Learning of AI Functions
Nico Schmidt
How AI Models Get Smarter
Ankit Patel
How computers learn to see – Applying AI to industry
Antonia Hahn
Developing an AI.SDK
Daniel Graff & Andreas Wittmann
How Machine Learning is turning the Automotive Industry upside down
Jan Zawadzki
Enhancing AI-based Robotics with Simulation Workflows
Teresa Conceicao
From learning to earning
Jobs that call for the skills explored in this talk.
Working Student Machine Learning Based Environment Perception for ADAS
BMW AG
Python
Computer Vision
Machine Learning
Amazon Web Services (AWS)
Junior Researcher for the Safe Use of Machine Learning in Industrial Applications
Bundesanstalt für Arbeitsschutz und Arbeitsmedizin
Git
Python
Docker
PyTorch
TensorFlow
+1