Abstract: “Interpretability is the degree to which a human can understand the cause of a decision.” – Miller.
A major disadvantage of using machine learning to solve business problems today is that insights about the data, and about the task the machine solves, are hidden inside increasingly complex models. By default, machine learning models pick up biases from their training data, which can turn them into systems that discriminate against underrepresented groups. Interpretability acts as a useful debugging tool for detecting such biases: models can only be debugged and audited when they can be interpreted. The higher the interpretability of a machine learning model, the easier it is to comprehend why certain decisions or predictions were made; one model is more interpretable than another if its decisions are easier for a human to comprehend.
The wider goal of AI is to gain knowledge, yet many problems are solved with big datasets and black-box machine learning models. The model itself, rather than the data, becomes the source of knowledge, and interpretability makes it possible to extract this additional knowledge captured by the model. Humans usually do not ask why a certain prediction was made, but why it was made instead of another prediction. We tend to think in counterfactual cases, i.e. "How would the prediction have changed if input X had been different?". For a house price prediction, the owner might ask why the predicted price is high compared with the lower price they expected. If my loan application is rejected, I do not care to hear all the factors that generally speak for or against rejection. I am interested in the factors in my application that would need to change for the loan to be granted. I want to know the contrast between my application and the would-be-accepted version of my application.
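The loan example above can be sketched as a minimal counterfactual search. Everything here is hypothetical: the integer weights, the threshold, and the single-feature search are invented for illustration; a real system would search over a trained classifier, not a hand-written rule.

```python
# Toy counterfactual search for the loan example.
# The linear scoring rule and its weights are made up for illustration.

def approve(income, debt, credit_score):
    """Hypothetical decision rule: approve when the score is non-negative."""
    score = 3 * income - 5 * debt + 2 * credit_score - 400
    return score >= 0

def counterfactual_income(income, debt, credit_score, limit=500):
    """Smallest income increase that flips a rejection into an approval,
    holding all other features fixed (the contrastive 'what would need
    to change?' question)."""
    for extra in range(limit):
        if approve(income + extra, debt, credit_score):
            return extra
    return None  # no counterfactual found within the search limit

applicant = dict(income=50, debt=40, credit_score=60)
print(approve(**applicant))               # rejected as-is
print(counterfactual_income(**applicant)) # income increase needed to flip it
```

Note that the answer is a single, actionable contrast ("raise income by this much"), not a full list of factors for and against rejection — exactly the shape of explanation the paragraph above argues people actually want.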
This tutorial is not an advertisement for the methods; rather, it should help you decide whether a method works well for your application. It covers the following topics:
- Intrinsically interpretable models vs. post hoc (and model-agnostic) interpretation methods. [Short intro]
- Global vs. local model-agnostic methods. [LIME, Shapley values (SHAP)] [Short intro]
- Using counterfactual explanations that are truthful to the model and yet interpretable to people. [Main focus]
- Practical examples showing the use of Diverse Counterfactual Explanations (DiCE) applied to ML models. [Main focus]
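The core idea behind DiCE — producing several *distinct* counterfactuals rather than a single one, so the user can choose a feasible path — can be illustrated with a naive random search. This sketch is not DiCE's actual optimization (the dice-ml library jointly optimizes proximity and diversity); the toy two-feature model and the perturbation range are assumptions made for the example.

```python
import random

def predict(features):
    """Hypothetical linear classifier: 1 = loan approved, 0 = rejected."""
    income, debt = features
    return 1 if 3 * income - 5 * debt - 100 >= 0 else 0

def diverse_counterfactuals(features, n_cfs=3, trials=5000, seed=0):
    """Naive random-search stand-in for what DiCE optimizes:
    collect several distinct perturbed inputs that flip the prediction."""
    rng = random.Random(seed)
    original = predict(features)
    found = set()
    for _ in range(trials):
        candidate = tuple(f + rng.randint(-30, 30) for f in features)
        if predict(candidate) != original:
            found.add(candidate)
        if len(found) >= n_cfs:
            break
    return sorted(found)

query = (40, 10)  # rejected: 3*40 - 5*10 - 100 = -30 < 0
for cf in diverse_counterfactuals(query):
    print(cf)     # each line is one alternative way to get approved
```

Each returned tuple is a different "would-be-accepted" version of the query — one might raise income, another might lower debt — which is why diversity matters: any single counterfactual may be infeasible for a given applicant.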
Bio: Jeet is an experienced data science professional with 7+ years of applied industrial experience in Machine Learning, Deep Learning, Computer Vision & Natural Language Processing across multiple domains. He currently works as Manager, Data Science on the Analytics & Innovation (Ai) team at United Airlines, one of the major American airlines, solving aviation analytics problems. In his ~4 years at United he has designed and architected Computer Vision and NLP powered capabilities that save thousands of repetitive man-hours and millions of dollars. Previously he worked at Tata Consultancy Services in its Analytics and Insights (A&I) unit, building analytics capabilities across multiple domains. He follows various MOOCs and research communities, and his curiosity keeps pushing him to learn and explore more. A firm believer in giving back to the community, he gives webinars, career coaching, and 1:1 mentorship sessions on his journey into data science, the latest trends in the industry, and general advice to help beginners and aspirants propel their journey into data science.