Interpretability and the Future of Machine Learning


Many potential applications of machine learning involve a human using models to make decisions. Whether they're helping doctors choose between surgery options or helping viewers decide which video to watch next, ML models will increasingly need to indicate not only what they're predicting, but why. The goal of model interpretability is to give users insight into how a model arrived at a given prediction, so that they can recognize bias, account for context the model doesn't capture, and ultimately decide whether to trust the prediction. In this talk, I will argue that in the long run, interpretability will have a much larger impact on whether ML models are adopted in the real world than more traditional factors such as accuracy and scalability.
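As a concrete illustration of the kind of insight interpretability methods provide, here is a minimal sketch of permutation importance, one common model-agnostic technique: shuffle one feature at a time and measure how much the model's error grows. The toy data and stand-in model below are purely illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the target depends strongly on feature 0 and not at all on feature 1.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

def model(X):
    # Stand-in for a trained model: it has learned the true relationship.
    return 3.0 * X[:, 0]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of each feature = average increase in error when that
    feature's column is shuffled, breaking its link to the target."""
    rng = np.random.default_rng(seed)
    baseline = mse(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's information
            increases.append(mse(y, model(Xp)) - baseline)
        importances.append(float(np.mean(increases)))
    return importances

imp = permutation_importance(model, X, y)
# Shuffling feature 0 hurts the model badly; shuffling feature 1 changes nothing,
# which tells a user which input actually drove the predictions.
```

A user inspecting these scores can see which inputs the model actually relies on, which is exactly the kind of "why" signal the abstract describes.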


Jesse was an Assistant Professor in the math department at Oklahoma State, studying geometric topology and its applications to data analysis before joining Google as a software engineer in 2014. In 2016, he joined the Clinical Modeli
