Tillman Radmer, Fabian Hüger & Nico Schmidt

Uncertainty Estimation of Neural Networks

How can a neural network know what it doesn't know? Discover how uncertainty estimation creates a critical safety net for autonomous driving.

#1 · about 5 minutes

Understanding uncertainty through rare events in driving

Neural networks are more uncertain in rare situations like unusual vehicles on the road because these events are underrepresented in training data.

#2 · about 3 minutes

Differentiating aleatoric and epistemic uncertainty

Uncertainty is classified into two types: aleatoric, caused by noise in the data (like blurry object edges), and epistemic, caused by gaps in the model's knowledge; only the epistemic part can be reduced with more data.
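As a concrete anchor for this split, one standard Bayesian formulation (an illustrative choice here, not necessarily the speakers' notation) decomposes the total predictive entropy over model parameters θ into an expected-entropy term and a mutual-information term:

```latex
% Standard decomposition of predictive uncertainty; notation is illustrative.
\[
\underbrace{\mathcal{H}\!\left[\mathbb{E}_{p(\theta\mid\mathcal{D})}\,p(y\mid x,\theta)\right]}_{\text{total predictive entropy}}
=
\underbrace{\mathbb{E}_{p(\theta\mid\mathcal{D})}\,\mathcal{H}\!\left[p(y\mid x,\theta)\right]}_{\text{aleatoric (data noise)}}
+
\underbrace{\mathcal{I}\!\left[y;\theta\mid x,\mathcal{D}\right]}_{\text{epistemic (model uncertainty)}}
\]
```

Collecting more data shrinks the mutual-information term toward zero, while the expected-entropy (aleatoric) term persists regardless of dataset size.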

#3 · about 3 minutes

Why classification scores are unreliable uncertainty metrics

Neural network confidence scores are often miscalibrated, showing overconfidence at high scores and underconfidence at low scores, making them poor predictors of true accuracy.
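One way to see this miscalibration concretely is an expected calibration error check: bin predictions by confidence and compare each bin's mean confidence to its empirical accuracy. The sketch below is a minimal illustration with synthetic data, not code from the talk:

```python
# Minimal sketch: expected calibration error (ECE) over equal-width bins.
# All data here is synthetic; a well-calibrated model scores near zero.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |confidence - accuracy| gap, weighted by bin occupancy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

conf = np.array([0.95, 0.90, 0.85, 0.60, 0.55])  # softmax confidences
hit = np.array([1, 0, 1, 1, 0])                   # 1 = prediction was correct
print(expected_calibration_error(conf, hit))
```

An overconfident network shows accuracy well below confidence in the high-score bins, which is exactly the failure mode described above.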

#4 · about 2 minutes

Using a simple alert system to predict model failure

The alert system approach uses a second, simpler model trained specifically to predict when the primary neural network is likely to fail on a given input.
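A hedged sketch of that idea, with an assumed feature set and an off-the-shelf scikit-learn classifier standing in for whatever model the speakers actually used:

```python
# Sketch of an "alert system": a small secondary model is trained on held-out
# data to predict when the primary network's prediction will be wrong.
# The feature choices and logistic-regression model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def alert_features(softmax_probs):
    """Per-sample features derived from the primary model's softmax output."""
    top2 = np.sort(softmax_probs, axis=1)[:, -2:]      # two largest probs
    margin = top2[:, 1] - top2[:, 0]                   # top-1 vs top-2 gap
    entropy = -(softmax_probs * np.log(softmax_probs + 1e-12)).sum(axis=1)
    return np.stack([top2[:, 1], margin, entropy], axis=1)

# Synthetic stand-ins for held-out softmax outputs and ground-truth labels.
probs = np.random.dirichlet(np.ones(10), size=1000)
labels = np.random.randint(0, 10, size=1000)
failed = (probs.argmax(axis=1) != labels).astype(int)  # 1 = primary wrong

alert_model = LogisticRegression().fit(alert_features(probs), failed)
# At run time, a high predicted failure probability raises the alert.
p_fail = alert_model.predict_proba(alert_features(probs[:5]))[:, 1]
```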

#5 · about 15 minutes

Using Monte Carlo dropout and student networks for estimation

The Monte Carlo dropout method estimates uncertainty by sampling multiple stochastic predictions, and inference can be accelerated by training a smaller student network to mimic this behavior.
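A minimal MC-dropout sketch, assuming a PyTorch classifier that already contains dropout layers (the model and sample count are placeholders):

```python
# Monte Carlo dropout: keep dropout active at inference and average several
# stochastic forward passes; the entropy of the mean serves as uncertainty.
import torch

def mc_dropout_predict(model, x, n_samples=20):
    model.train()  # keeps dropout layers active (watch out for batch norm)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)  # averaged predictive distribution
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy
```

The student-network speedup mentioned above would then train a single deterministic model to regress these sampled outputs directly, so only one forward pass is needed in the vehicle.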

#6 · about 14 minutes

Applying uncertainty for active learning and corner case detection

An active learning framework uses uncertainty scores to intelligently select the most informative data (corner cases) from vehicle sensors for labeling and retraining models.
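In its simplest form, the selection step reduces to ranking an unlabeled pool by uncertainty and taking the top candidates; the sketch below assumes per-frame scores from an estimator such as MC dropout:

```python
# Illustrative uncertainty-based selection: pick the k most uncertain samples
# from an unlabeled pool as candidate corner cases for annotation.
import numpy as np

def select_corner_cases(uncertainty_scores, k=100):
    """Indices of the k most uncertain pool samples, most uncertain first."""
    return np.argsort(uncertainty_scores)[-k:][::-1]

pool_scores = np.random.rand(10_000)   # stand-in for real per-frame scores
to_label = select_corner_cases(pool_scores)
# Selected frames would be annotated and folded into the next retraining run.
```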

#7 · about 4 minutes

Challenges in uncertainty-based data selection strategies

Key challenges for active learning include determining the right amount of data to select, evaluating performance on corner cases, and avoiding model-specific data collection bias.

#8 · about 7 minutes

Addressing AI safety and insufficient generalization

Deep neural networks in autonomous systems pose safety risks due to insufficient generalization, unreliable confidence, and brittleness to unseen data conditions.

#9 · about 8 minutes

Building a safety argumentation framework for AI systems

A safety argumentation process involves identifying DNN-specific concerns, applying mitigation measures like uncertainty monitoring, and providing evidence through an iterative, model-driven development cycle.
