If We Want AI to be Interpretable, We Need to Measure Interpretability

Abstract: 

AI tools are ubiquitous, but most users treat them as black boxes: handy tools that suggest purchases, flag spam, or autocomplete text. While researchers have proposed explanations to make AI less of a black box, the lack of metrics makes it hard to optimize explicitly for interpretability. Thus, I propose two interpretability metrics, suited to unsupervised and supervised AI methods respectively.

For unsupervised topic models, I discuss our proposed "intruder" interpretability metric, show how it contradicts the previous standard evaluation metric for topic models (perplexity), and trace its uptake in the community over the last decade. For supervised question answering, I show how human-computer cooperation can be measured and directly optimized with a multi-armed bandit approach that learns what kinds of explanations help specific users. I will then briefly discuss how similar setups can help users navigate information-rich domains like fact checking, translation, and web search.
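To make the intruder metric concrete, here is a minimal sketch of how a word-intrusion evaluation might be scored. The task design follows the description above: annotators see a topic's top words plus one "intruder" drawn from another topic, and a topic's model precision is the fraction of annotators who spot the true intruder. The function names, data layout, and example topics below are illustrative assumptions, not the actual implementation from the work.

```python
import random

def build_intruder_task(topic_top_words, other_topic_words, k=5, seed=None):
    """Assemble one word-intrusion question: k high-probability words
    from a topic, plus one high-probability word from a different topic."""
    rng = random.Random(seed)
    shown = rng.sample(topic_top_words, k)
    intruder = rng.choice([w for w in other_topic_words if w not in topic_top_words])
    options = shown + [intruder]
    rng.shuffle(options)
    return options, intruder

def model_precision(answers, intruder):
    """Fraction of annotators who picked the true intruder. If a topic's
    words cohere, the intruder stands out, so higher precision indicates
    a more interpretable topic."""
    return sum(a == intruder for a in answers) / len(answers)

# Hypothetical example: a coherent "finance" topic with a "sports" intruder.
finance = ["stock", "market", "fund", "investor", "share", "price"]
sports = ["goal", "team", "coach", "season", "league", "player"]

options, intruder = build_intruder_task(finance, sports, k=5, seed=0)
print("Pick the intruder:", options)

# Simulated annotator responses; in practice these come from humans.
responses = [intruder, intruder, "stock", intruder]
print("Model precision:", model_precision(responses, intruder))
```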
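The bandit idea for question answering can be sketched the same way: treat each explanation type as an arm and the reward as whether the human-computer team answers correctly. The epsilon-greedy learner below is a simple stand-in for whatever bandit algorithm the actual work uses, and the arm names and success rates are invented for illustration.

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy bandit over explanation types (illustrative only)."""
    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}

    def choose(self):
        # Explore with probability epsilon; otherwise exploit the arm
        # with the highest estimated reward so far.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean of the observed rewards for this arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Hypothetical arms: kinds of explanations a QA system might show a user.
bandit = EpsilonGreedyBandit(["highlight_evidence", "show_confidence", "similar_examples"])

for _ in range(1000):
    arm = bandit.choose()
    # Reward = 1 if the user, aided by this explanation, answers correctly.
    # Simulated here; in deployment it would come from real user outcomes.
    success_rate = {"highlight_evidence": 0.7,
                    "show_confidence": 0.5,
                    "similar_examples": 0.6}[arm]
    bandit.update(arm, 1 if random.random() < success_rate else 0)

print("Best explanation for this user:", max(bandit.values, key=bandit.values.get))
```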

Bio: 

Jordan is an associate professor in the University of Maryland's Computer Science Department (tenure home), Institute for Advanced Computer Studies, iSchool, and Language Science Center. Previously, he was an assistant professor in the University of Colorado's Department of Computer Science (tenure granted in 2017). He did his graduate work at Princeton with David Blei.

His research focuses on making machine learning more useful and more interpretable, and on enabling it to learn from and interact with humans. This helps users sift through decades of documents; discover when individuals lie, reframe, or change the topic in a conversation; or compete against humans in games based in natural language.
