Instilling Interpretability and Explainability into AI Projects


As AI becomes more prevalent in our everyday lives, it is our responsibility to ensure that these systems are built ethically. One component of responsible AI is a proactive approach to interpretability and explainability at every phase of an AI project's lifecycle.

In this presentation, we will walk through the phases of a machine learning project and highlight features that promote interpretability and explainability. Specifically, we will look at the data preparation and processing, modeling, and continuous monitoring phases of the lifecycle, and show how practitioners and organizations can build features that empower their data science teams.


Scott Reed is an applied AI ethicist at DataRobot. He has a background in applied information technology and international relations and has worked in different capacities in and around data throughout his career. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He is passionate about solving complex business problems using advanced data science techniques and finding insights that produce effective outcomes.

Open Data Science
One Broadway
Cambridge, MA 02142
