
Abstract: Explainable ML and AI (also known as XAI) is not only a booming field of research but is also widely applied across industries such as healthcare, finance, and insurance. Many approaches provide some level of explainability, and a growing number of packages and libraries implement them.
In this workshop, we will introduce some of the more common and promising approaches to ML explainability. We will also gain hands-on experience applying different XAI libraries and develop a feel for their advantages and shortcomings.
Session Outline
In the workshop, we will use a classic machine learning dataset and explain the decisions and predictions of a black-box model. We will start with a brief theoretical introduction to the different approaches to explainability and what each of them is best suited for. The majority of the session will be a hands-on demonstration of many of these approaches.
In the first part, we will cover visual methods for explaining a model. We will construct partial dependence plots (PDPs) and individual conditional expectation (ICE) curves, which are a valuable and quite intuitive way to gain an initial understanding of a model's behavior.
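To give a flavour of the hands-on part, here is a minimal sketch of how such plots can be produced with scikit-learn's inspection module; the dataset, model, and feature choices are illustrative assumptions rather than the exact workshop materials.

```python
# A minimal sketch of PDP/ICE plots with scikit-learn; the dataset,
# model, and feature names are assumptions for illustration.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)  # our "black-box" model

# kind="both" overlays the averaged curve (PDP) on the per-instance
# curves (ICE) for each requested feature
PartialDependenceDisplay.from_estimator(
    model, X, features=["mean radius", "mean texture"], kind="both"
)
plt.show()
```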
In the second part, we will look at a couple of approaches for explaining global model behavior, such as permutation feature importance and global surrogate models. We will discuss the settings in which each is most applicable and highlight some of their limitations.
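The sketch below illustrates both global ideas with scikit-learn, reusing the fitted `model`, `X`, and `y` from the previous snippet (an assumption made for brevity):

```python
# A minimal sketch of two global approaches, reusing `model`, `X`, and `y`
# from the previous snippet (an assumption for brevity).
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

# Permutation importance: how much does the score drop when one column
# is shuffled, breaking its relationship with the target?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")

# Global surrogate: fit an interpretable model to the black-box's
# predictions, then read off its rules as an approximate explanation
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, model.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))
```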
Last but not least, in the third part, we will walk through some local approaches to explainability. These methods attempt to explain the model's decision for each individual instance in a dataset and are therefore in high demand in many business settings and domains. We will compare popular tools for this purpose, such as LIME, anchors, and Shapley values.
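For Shapley values specifically, a minimal sketch with the third-party shap package might look as follows, again reusing `model` and `X` from above; LIME and anchors follow a similar explain-one-instance pattern:

```python
# A minimal sketch of local explanations with Shapley values via the
# third-party `shap` package, reusing `model` and `X` from above.
import shap

explainer = shap.TreeExplainer(model)   # exact and fast for tree ensembles
shap_values = explainer.shap_values(X)  # one attribution per feature per row

# Local view: how each feature pushed the prediction for a single instance
print(dict(zip(X.columns, shap_values[0].round(3))))

# Global view aggregated from all the local attributions
shap.summary_plot(shap_values, X)
```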
The XAI landscape is developing rapidly, and this is just a selection of some of the more promising and popular current approaches and packages. By the end of the session, you will have a good grasp of what we mean by explainability, which approaches exist for explaining models, and, above all, which mature and stable Python packages you can use to explain your own machine learning models.
Background Knowledge
Intermediate knowledge of Python and at least beginner-level knowledge of machine learning. No prior knowledge of the demonstrated tools or of explainability techniques is required.
Bio: Violeta works as a data scientist in the Data Innovation and Analytics department of ABN AMRO Bank in Amsterdam, the Netherlands. In her daily job, she works on projects with different business lines, applying the latest machine learning and advanced analytics technologies and algorithms. Before that, she worked for about a year and a half as a data science consultant at Accenture in the Netherlands. Violeta enjoyed helping clients solve their problems with data and data science, but wanted to develop more sophisticated tools herself, hence the switch. Before her position at Accenture, she completed a PhD in applied microeconometrics at Erasmus University Rotterdam. In her research, she used data to investigate the causal effects of negative experiences on human capital, education, problematic behavior, and crime.

Violeta Misheva, PhD
Data Scientist | ABN AMRO Bank N.V.
