Interpretability in Statistical Modeling


This talk ventures into the nascent field of interpretable machine learning. Predictive models have begun to aid human decisions in a variety of domains, and the recent rise of deep learning is pushing the boundaries of the accuracy such models can achieve. At the same time, these deep learning systems have brought the notion of models-as-black-boxes to the forefront. A major hurdle to their wider adoption is the challenge of providing human-interpretable predictions. Some domains have clear requirements around model explanation, while in others interpretability is more a matter of model diagnostics. In addition, prominent forces from outside the field of machine learning (e.g., the General Data Protection Regulation in the EU, the New York City Council's law on automated decision systems, etc.) necessitate a discussion on the topic. We are going to discuss the following:
What is the need and scope of interpretability in statistical models?
Is there a possible common ground among the many interpretations of interpretability?
What are some issues and concerns around this topic?
We shall also present a quick survey of existing tools and techniques for creating interpretable models, along with future directions and desiderata for interpretability.
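To give a flavor of the kind of tools such a survey covers, here is a minimal, self-contained sketch of permutation feature importance, one common model-agnostic interpretability technique: a feature's importance is estimated as the drop in a model's score when that feature's values are shuffled. The `predict`, `accuracy`, and toy data below are illustrative assumptions, not part of the talk itself.

```python
import random

def permutation_importance(predict, X, y, feature_idx, metric,
                           n_repeats=10, seed=0):
    """Mean drop in score when one feature's column is shuffled."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        # Rebuild and shuffle the target column from the original data.
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - metric(y, [predict(row) for row in X_perm]))
    return sum(drops) / len(drops)

# Toy "model" that only looks at feature 0.
predict = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y_true, y_pred: (
    sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true))

X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

imp0 = permutation_importance(predict, X, y, 0, accuracy)
imp1 = permutation_importance(predict, X, y, 1, accuracy)
# Shuffling feature 1 never changes this model's predictions,
# so its importance is exactly zero.
```

The appeal of techniques in this family is that they treat the model purely as a prediction function, so the same procedure applies to a linear model or a deep network.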


Sneha Jha is a Senior Researcher at Nuance Communications and works at the intersection of natural language processing, machine learning and healthcare. At Nuance, she primarily works on clinical NLP, information extraction, interpretability of statistical models and knowledge engineering for rule-based expert systems. She has a keen interest in the role of technology in policy, law and ethics.

Open Data Science
One Broadway
Cambridge, MA 02142
