Abstract: The recent application of deep neural networks to long-standing problems has brought a breakthrough in performance and predictive power. However, high accuracy often comes at the price of interpretability: many of these models are black boxes that fail to provide explanations for their predictions. This tutorial illustrates some of the recent advances in the field of interpretable artificial intelligence. We will present common techniques that can be used to explain the predictions of pretrained models and to shed light on their inner mechanisms. The tutorial aims to strike the right balance between theoretical input and practical exercises, providing participants not only with the theory behind deep learning interpretability, but also with a set of frameworks, tools and real-life examples that they can apply in their own projects.
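To give a flavor of the attribution techniques such a tutorial typically covers, below is a minimal, framework-free sketch of vanilla-gradient saliency. The toy linear classifier, its weights, and the `saliency` helper are all illustrative assumptions, not part of the tutorial material; for a linear model the gradient of a class score with respect to the input can be written in closed form.

```python
import numpy as np

# Hypothetical tiny "pretrained" linear classifier: logits = x @ W + b.
# For a linear model, the gradient of a class score w.r.t. the input
# equals that class's weight column, which serves as the saliency map.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # 4 input features, 3 classes
b = np.zeros(3)

def saliency(x, target):
    """Vanilla-gradient attribution: d(score_target)/dx.

    score_target = x @ W[:, target] + b[target], so the gradient
    with respect to x is simply W[:, target].
    """
    return W[:, target]

x = np.array([1.0, -2.0, 0.5, 3.0])
pred = int(np.argmax(x @ W + b))          # predicted class
attr = saliency(x, pred)                   # per-feature attribution
```

In practice the same idea is applied to deep networks via automatic differentiation (e.g. backpropagating the target logit to the input), where the gradient is no longer constant in the input.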
Bio: Joris is a Data Scientist at the IBM Research Zürich Lab (Rüschlikon, Switzerland). He joined the Cognitive Health Care and Life Sciences group initially for his master's thesis in Computational Biology & Bioinformatics, a joint degree of ETH Zürich and the University of Zürich, working on multiple kernel learning.
His research interests include multimodal learning approaches, relational learning via GNNs and NLP.
He is currently involved in several projects focused on machine learning for precision medicine in the context of iPC, an H2020 EU project with the goal of developing models for pediatric tumors.