A tutorial on uncertainty estimation with Bayesian Neural Networks at KI 2020
Deep Neural Networks produce state-of-the-art results in fields such as natural language and image processing, solving tasks like speech recognition, object detection, and object recognition. Bayesian methods can estimate both model uncertainty and uncertainty about the input, making Neural Networks more robust and precise. In contrast to classic Neural Networks, the model parameters of Bayesian Neural Networks (BNNs) are not defined by point estimates but by probability distributions. This makes BNNs well suited to tackle outlier detection, a problem classic Neural Networks struggle with: they can detect misclassified out-of-distribution inputs and counteract adversarial attacks. This is especially important for safety-critical applications in fields such as medicine or autonomous driving.
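The core idea, distributions over weights yielding a predictive uncertainty estimate, can be sketched in a few lines. The following is a minimal NumPy illustration, not material from the tutorial itself; the tiny network size, the Gaussian form of the posterior, and all parameter values are assumptions made purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned posterior for a one-hidden-layer regression net:
# each weight is a Gaussian N(mu, sigma^2) instead of a point estimate.
w1_mu = rng.normal(size=(1, 16))       # posterior means, hidden layer
w1_sigma = 0.1 * np.ones((1, 16))      # posterior std deviations (assumed)
w2_mu = rng.normal(size=(16, 1))       # posterior means, output layer
w2_sigma = 0.1 * np.ones((16, 1))

def predict_once(x):
    """One stochastic forward pass: sample weights, then compute the output."""
    w1 = rng.normal(w1_mu, w1_sigma)
    w2 = rng.normal(w2_mu, w2_sigma)
    h = np.tanh(x @ w1)
    return h @ w2

x = np.array([[0.5]])
samples = np.stack([predict_once(x) for _ in range(100)])

# The spread of the sampled predictions serves as an estimate of the
# model's uncertainty at this input.
pred_mean = samples.mean(axis=0)
pred_std = samples.std(axis=0)
print(pred_mean.item(), pred_std.item())
```

Inputs far from the training data typically yield a larger spread across sampled predictions, which is what makes out-of-distribution detection possible.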
This tutorial introduces and motivates Neural Networks and uncertainty quantification, and then dives deeper into comparing Bayesian Deep Learning approaches.
Target Audience
We will focus on the high-level ideas, and the tutorial will be mostly self-contained. No prior knowledge is required, though a background in Machine Learning is helpful.
Names and Affiliations
- Dominik Seuß, Fraunhofer IIS, dominik.seuss@iis.fraunhofer.de
- Andreas Foltyn, Fraunhofer IIS, andreas.foltyn@iis.fraunhofer.de
- Ines Rieger, Fraunhofer IIS, ines.rieger@iis.fraunhofer.de
- Jessica Deuschel, Fraunhofer IIS, jessica.deuschel@iis.fraunhofer.de
Agenda
The expected length is 2 x 90 minutes. There will be a break between blocks.
- Motivation
- Basics of Neural Networks
- Uncertainty Quantification in Neural Networks
- Bayesian Statistics and Approximate Inference
- Bayesian Deep Learning approaches
- Discussion & Future Trends