
Abstract: Understanding why a model made a certain prediction is crucial in many applications. However, with large modern datasets the best accuracy is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models. This creates a tension between accuracy and interpretability. In response, a variety of methods have recently been proposed to help users interpret the predictions of complex models. Here, we present a unified framework for interpreting predictions, namely SHAP (SHapley Additive exPlanations), which assigns each feature an importance value for a particular prediction. SHAP comes with strong theoretical guarantees and is applicable to any model.
Using SHAP, we present strict improvements to both LIME (a popular model-agnostic method) and feature attribution in tree ensemble methods (such as gradient boosted trees or random forests). Current attribution methods can be inconsistent, meaning that changing the model to rely more on a given feature can actually decrease the importance assigned to that feature. In contrast, SHAP values are guaranteed to be consistent and locally accurate. Since SHAP strictly improves on the current state of the art, it is relevant to any current user of tree ensemble methods or model-agnostic explanation methods.
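
For a sense of what this looks like in practice, here is a minimal sketch using the open-source shap Python package, with scikit-learn's GradientBoostingRegressor standing in as the tree ensemble (the synthetic data and model choice are illustrative assumptions, not taken from the talk). It demonstrates the local accuracy property: the per-feature SHAP values for a prediction sum to that prediction minus the expected model output.

    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor

    # Fit a small tree ensemble on synthetic data (illustrative only).
    X, y = make_regression(n_samples=200, n_features=5, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Local accuracy: the attributions for one sample sum to the model's
    # prediction minus the expected (baseline) prediction.
    i = 0
    reconstruction = explainer.expected_value + shap_values[i].sum()
    print(reconstruction, model.predict(X[i:i + 1])[0])  # should match closely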
Bio: Scott Lundberg is a Ph.D. candidate at the University of Washington's Paul G. Allen School of Computer Science and Engineering, working with Professor Su-In Lee at the intersection of machine learning and health/biology. Before coming to UW, he received his B.S. and M.S. from Colorado State University in 2008 and then worked for five years as a research scientist with Numerica Corporation. Scott is currently supported by an NSF Graduate Research Fellowship and is seeking to improve health and medicine through AI.

Scott Lundberg
Title: PhD Student at the University of Washington CSE Dept
Category: west2017talks
