A tutorial on uncertainty estimation with Bayesian Neural Networks at KI2020
Bayesian methods can estimate both model uncertainty and input-dependent uncertainty in Neural Networks, making them more robust and better calibrated. Deep Neural Networks produce state-of-the-art results in fields such as natural language and image processing, solving tasks like speech recognition, object detection, and object recognition. In contrast to classic Neural Networks, the model parameters of Bayesian Neural Networks (BNNs) are not point estimates but probability distributions. BNNs are therefore well suited to outlier detection, a problem that standard Neural Networks struggle with: they can flag out-of-distribution inputs that would otherwise be misclassified with high confidence, and help counteract adversarial attacks. This is especially important for safety-critical applications in fields such as medicine or autonomous driving.
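The shift from point estimates to distributions over parameters can be illustrated with Bayesian linear regression, where the posterior is available in closed form. The sketch below is a minimal illustration (the data, prior precision `alpha`, and noise precision `beta` are arbitrary assumptions, not taken from the tutorial): instead of one weight vector, we draw many weight samples from the posterior, and the spread of the resulting predictions quantifies the model's uncertainty at a given input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: y = 2x + noise (illustrative, not from the tutorial)
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, size=50)

# Design matrix with a bias column
Phi = np.hstack([X, np.ones((50, 1))])

# Gaussian prior N(0, alpha^-1 I) on weights, Gaussian likelihood with
# noise precision beta -> the weight posterior is also Gaussian.
alpha, beta = 1.0, 100.0
S = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)  # posterior covariance
m = beta * S @ Phi.T @ y                                   # posterior mean

# Bayesian view: sample many plausible weight vectors from the posterior
# instead of committing to a single point estimate.
W = rng.multivariate_normal(m, S, size=1000)               # shape (1000, 2)

# Predictions at a test input; their spread is the model uncertainty.
x_test = np.array([0.5, 1.0])                              # [x, bias]
preds = W @ x_test
print(preds.mean(), preds.std())
```

With enough clean training data the predictive mean lands near the true value 2 * 0.5 = 1.0 and the standard deviation is small; far from the data, the same procedure yields a visibly larger spread.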
This tutorial gives an introduction to and motivation for Neural Networks and uncertainty measurement, and then dives deeper into a comparison of Bayesian Deep Learning approaches.
We will focus on the high-level ideas, and the tutorial will be mostly self-contained. No prior knowledge is strictly required, though a background in Machine Learning is helpful.
Names and Affiliations
The expected length is 2 x 90 minutes. There will be a break between blocks.
- Introduction & Motivation
- Recap of Neural Networks
- Why is uncertainty measurement in Neural Networks important? Where can it be applied?
- Bayesian vs. Frequentist approach
- What is Bayesian Reasoning?
- What are the challenges when combining Bayesian Methods with Neural Networks?
- Taxonomy of approaches to uncertainty measurement
- Bayesian Deep Learning approaches
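One widely used family of Bayesian Deep Learning approaches covered in tutorials like this is Monte Carlo dropout: dropout is kept active at test time, and averaging over stochastic forward passes approximates sampling from a posterior over networks. The sketch below is a hedged illustration only; the two-layer network, its random weights, and the dropout rate are arbitrary assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary fixed weights for a tiny two-layer network (illustration only).
W1 = rng.normal(size=(1, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(0.0, x @ W1)         # ReLU hidden layer
    mask = rng.random(h.shape) > p      # random dropout mask
    h = h * mask / (1.0 - p)            # inverted-dropout scaling
    return (h @ W2)[0, 0]

x = np.array([[0.3]])
samples = np.array([forward(x) for _ in range(500)])

# The mean of the passes is the prediction; the standard deviation is a
# Monte Carlo estimate of the model uncertainty at this input.
print(samples.mean(), samples.std())
```

The appeal of this approach, compared with full Bayesian inference over the weights, is that any network already trained with dropout yields uncertainty estimates with no change to the training procedure.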