Explainable AI – Methods, Applications & Recent Developments


In recent years, machine learning (ML) has become a key enabling technology for science and industry. Thanks to methodological improvements, the availability of large databases and increased computational power, today's ML algorithms achieve excellent performance (at times even exceeding the human level) on a growing number of complex tasks, with deep learning models at the forefront of this development. However, due to their nested nonlinear structure, these powerful models have generally been considered "black boxes" that provide no information about what exactly leads them to their predictions. Since in many applications, e.g., in the medical domain, such a lack of transparency may not be acceptable, the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This talk will discuss methods, applications and recent developments in this emerging field of research; in particular, it will demonstrate the effectiveness of explanation techniques such as Layer-wise Relevance Propagation (LRP) when applied to various data types (images, text, audio, video, EEG/fMRI signals) and neural architectures (ConvNets, LSTMs). LRP provides information about individual predictions, e.g., heatmaps visualizing which pixels were most relevant for the model's decision. This helps to verify predictions and to establish trust in the correct functioning of the system. Furthermore, the talk will present a type of explanation that goes beyond the analysis of individual predictions towards a more general understanding of model behaviour. A recently proposed method, spectral relevance analysis (SpRAy), computes such meta-explanations by clustering individual LRP heatmaps.
This approach makes it possible to investigate the prediction strategies of the classifier across the whole dataset in a (semi-)automated manner and to systematically find weak points in models or training datasets. The talk will close with a discussion of challenges and open questions in the field of explainable AI.
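To make the LRP idea mentioned above concrete, here is a minimal sketch of the epsilon rule for a small fully connected ReLU network. This is an illustrative toy implementation, not the speaker's code; the function name `lrp_dense` and the network setup are assumptions for the example.

```python
import numpy as np

def lrp_dense(weights, biases, x, eps=1e-6):
    """Toy LRP (epsilon rule) for a fully connected ReLU network.

    `weights` and `biases` are lists of per-layer parameters; the
    returned array assigns a relevance score to each input feature.
    """
    # Forward pass, keeping every layer's activation.
    activations = [x]
    a = x
    for W, b in zip(weights, biases):
        a = np.maximum(0.0, a @ W + b)   # ReLU layer
        activations.append(a)
    # Backward pass: start from the output scores and redistribute
    # relevance to the layer below, proportionally to each neuron's
    # contribution z_jk = a_j * w_jk.
    R = activations[-1]
    for W, b, a in zip(weights[::-1], biases[::-1], activations[-2::-1]):
        z = a @ W + b                               # forward contributions
        z = np.where(z >= 0, z + eps, z - eps)      # epsilon stabiliser
        s = R / z                                   # relevance per unit of z
        R = a * (s @ W.T)                           # redistribute downwards
    return R
```

With zero biases the rule is (approximately) conservative: the relevance scores of the input features sum to the network's output score, which is what makes the resulting heatmaps interpretable as a decomposition of the prediction.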

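The SpRAy step can likewise be sketched in a few lines: flatten the per-sample LRP heatmaps and cluster them spectrally, so that samples the model explains with similar relevance patterns share a label. This sketch uses scikit-learn's `SpectralClustering` as a stand-in; the actual SpRAy pipeline described in the talk may differ in preprocessing and affinity construction.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spray_clusters(heatmaps, n_clusters=2, n_neighbors=5):
    """Group per-sample LRP heatmaps by spectral clustering.

    `heatmaps` is an array of shape (n_samples, H, W); each cluster of
    the returned labels corresponds to one recurring prediction strategy.
    """
    X = np.asarray(heatmaps, dtype=float).reshape(len(heatmaps), -1)
    model = SpectralClustering(
        n_clusters=n_clusters,
        affinity="nearest_neighbors",   # k-NN graph over flattened heatmaps
        n_neighbors=n_neighbors,
        assign_labels="kmeans",
        random_state=0,
    )
    return model.fit_predict(X)
```

Inspecting a few heatmaps per cluster then reveals whether a strategy is legitimate or an artefact, e.g. a classifier keying on a watermark rather than on the object itself.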

Wojciech Samek founded the Machine Learning Group at Fraunhofer Heinrich Hertz Institute in 2014 and has headed it since. He studied computer science at Humboldt University of Berlin, Heriot-Watt University and the University of Edinburgh from 2004 to 2010 and received the Dr. rer. nat. degree with distinction (summa cum laude) from the Technical University of Berlin in 2014. In 2009 he was a visiting researcher at NASA Ames Research Center, Mountain View, CA, and in 2012 and 2013 he had several short-term research stays at ATR International, Kyoto, Japan. He was awarded scholarships from the European Union's Erasmus Mundus programme, the German National Academic Foundation and the DFG Research Training Group GRK 1589/1. He is associated with the Berlin Big Data Center and the Berlin Center for Machine Learning, serves on the editorial boards of Digital Signal Processing and PLOS ONE, and has organized various deep learning workshops. He received the best paper prize at the ICML'16 Workshop on Visualization for Deep Learning and has authored more than 100 journal and conference papers, predominantly in the areas of deep learning, interpretable machine learning, robust signal processing and computer vision.
