Abstract: Many potential applications of machine learning involve a human using models to make decisions. Whether they're helping doctors choose between surgery options or helping viewers decide which video to watch next, ML models will increasingly need to indicate not only what they're predicting, but why. The goal of model interpretability is to give users insight into how a model arrived at a given prediction, so that they can recognize bias, account for context outside the model, and ultimately decide whether to trust it. In this talk, I will argue that in the long run, interpretability will have a much larger impact on whether ML models are adopted in the real world than more traditional factors such as accuracy and scalability.
Bio: Jesse was an Assistant Professor in the math department at Oklahoma State, where he studied geometric topology and its applications to data analysis, before joining Google as a software engineer in 2014. In 2016, he joined the Clinical Modeli
Software Engineer at Google