Opening The Black Box – Interpretability In Deep Learning

Abstract: 

The recent application of deep neural networks to long-standing problems has brought a breakthrough in performance and predictive power. However, high accuracy often comes at the price of interpretability: many of these models are black boxes that fail to provide explanations for their predictions. This tutorial illustrates some of the recent advances in the field of interpretable artificial intelligence. We will show common techniques that can be used to explain the predictions of pretrained models and to shed light on their inner mechanisms. The tutorial aims to strike the right balance between theoretical input and practical exercises: it is designed to provide participants not only with the theory behind deep learning interpretability, but also with a set of frameworks, tools and real-life examples that they can apply in their own projects.
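As a flavor of the kind of technique covered, here is a minimal sketch of gradient-based saliency, one of the simplest ways to attribute a model's prediction to its input features. The `model` below is a hypothetical stand-in for a pretrained network, and the gradient is approximated by finite differences so the example stays self-contained; in practice one would use a framework's autograd instead.

```python
import numpy as np

def model(x):
    # Toy "black-box" model: a fixed linear layer with a nonlinearity.
    # (Hypothetical stand-in for a pretrained network.)
    w = np.array([2.0, -1.0, 0.5])
    return np.tanh(x @ w)

def saliency(f, x, eps=1e-5):
    """Finite-difference gradient of f at x: a minimal saliency map.
    A larger |gradient| means the feature influences the prediction more."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

x = np.array([0.1, 0.2, 0.3])
attributions = saliency(model, x)
# For this model the attributions are proportional to the weights,
# so the first feature (weight 2.0) dominates the explanation.
```

Saliency maps are only the starting point; the tutorial's scope (explaining predictions and inner mechanisms) also covers richer attribution and inspection methods built on the same idea.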

Bio: 

Joris is a Data Scientist at the IBM Research Zürich Lab (Rüschlikon, Switzerland). He joined the Cognitive Health Care and Life Sciences group initially for his master's thesis in Computational Biology & Bioinformatics, a joint degree of ETH Zürich and the University of Zürich, working on multiple kernel learning.
His research interests include multimodal learning approaches, relational learning via GNNs and NLP.
He is currently involved in several projects focused on machine learning for precision medicine within the H2020 EU project iPC, which aims to develop models for pediatric tumors.
