Abstract: Deep learning-based models have tremendous potential to make an impact on socially relevant problems like medical diagnosis. At the same time, safety-critical applications must carefully balance potential harms and benefits and should only be deployed if they provide a net benefit.
This tutorial will introduce the glossary of uncertainty quantification relevant to deep learning and contextualise which aspects matter most for easing model deployment. While identifying difficult samples for collaborative approaches between humans and AI can be very successful in-domain, the reliable detection of out-of-distribution samples via uncertainty remains an active field of research. We'll provide an overview of promising recent developments and end with building a neural network that knows when it does not know - in a simple setting and for illustrative purposes.
The session will consist of two parts:
1. A talk that introduces the topic of uncertainty in the context of safety-critical industrial ML applications and gives an overview of the current state of the field.
2. A short demo of building a neural network with uncertainty in a simple setting. This part can be followed along much like the talk, but the code will also be provided.
Part 1: Familiarity with deep learning, in particular classification problems
Part 2: Familiarity with Python & a DL framework such as TensorFlow or PyTorch
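As a taste of Part 2, the sketch below shows one common way to give a classifier a simple notion of uncertainty: Monte Carlo dropout, where dropout stays active at prediction time and the softmax outputs of several stochastic forward passes are averaged, with the predictive entropy serving as an uncertainty score. This is an illustrative sketch in PyTorch, not the tutorial's actual demo code; the architecture and all names (`SmallNet`, `mc_dropout_predict`) are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical minimal classifier; the architecture is illustrative only.
class SmallNet(nn.Module):
    def __init__(self, in_dim=2, hidden=32, n_classes=2, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # the stochastic layer that MC dropout exploits
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Average the softmax outputs of several stochastic forward passes.

    Returns the mean predictive distribution and the per-sample
    predictive entropy as a simple uncertainty score."""
    model.train()  # keeps dropout active at prediction time (fine for this demo)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

x = torch.randn(4, 2)          # 4 dummy inputs with 2 features each
model = SmallNet()             # untrained, purely for illustration
mean, entropy = mc_dropout_predict(model, x)
print(mean.shape, entropy.shape)  # torch.Size([4, 2]) torch.Size([4])
```

A high entropy flags an input the model is unsure about; in a human-AI workflow such samples would be routed to a human reader. Note that this in-domain uncertainty signal does not by itself guarantee reliable out-of-distribution detection, which is exactly the open research question the talk discusses.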
Bio: Christian Leibig is Director of Machine Learning at Vara, leading the development of methods from research to production. He obtained a Ph.D. in Neural Information Processing from the International Max Planck Research School in Tübingen and a diploma in physics from the University of Konstanz. Before joining Vara, he worked as a Postdoctoral Researcher at the University Clinics in Tübingen on the applicability of Bayesian Deep Learning, worked on machine learning applications for the healthcare space for ZEISS, and held research and internship positions with Max Planck, LMU Munich and the Natural and Medical Sciences Institute in Reutlingen. The method and software of his Ph.D. work, an unsupervised solution for neural spike sorting from HDCMOS-MEA data, is distributed by Multichannel Systems (Harvard Bioscience). His work on applying and assessing uncertainty methods on large-scale medical imaging was among the first in the field and was recognised with keynote speaker invitations. He enjoys all of theory, software engineering, and people management, in particular for applications that have a meaningful impact, such as diagnosing cancer early.
Christian Leibig, PhD
Director of Machine Learning | Vara