Introducing Model Validation Toolkit

Abstract: 

Around a typical ML pipeline, many details are commonly swept under the rug. How will we monitor production data for concept drift? How do we measure the false negative rate in production? How confident can we be in our performance assessments with a small test set, and how should they be adjusted when faced with biased data? How can we ensure our model follows reasonable assumptions? We introduce a new general-purpose tool, the Model Validation Toolkit, for common tasks involved in model validation, interpretability, and monitoring. Our utility has submodules and accompanying tutorials on measuring concept drift, assigning and updating optimal thresholds, determining the credibility of performance metrics, compensating for data bias, and performing sensitivity analysis. In this session, we will give a tour of the framework's core functionality and some associated use cases.
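
To make the credibility question concrete: with only 50 test examples, how sure can we be of a 92% accuracy estimate? The sketch below is not MVTK's API; it is a minimal illustration of one standard approach to the problem (a Beta posterior over the true accuracy) using only scipy, with the counts invented for the example.

from scipy import stats

# Suppose the model classified 46 of 50 held-out examples correctly.
correct, total = 46, 50

# A uniform Beta(1, 1) prior updated with the observed outcomes yields a
# Beta posterior over the model's true (long-run) accuracy.
posterior = stats.beta(1 + correct, 1 + (total - correct))

# The point estimate looks strong, but the 95% credible interval shows
# how much uncertainty 50 samples really leave.
low, high = posterior.interval(0.95)
print(f"Point estimate: {correct / total:.2f}")  # 0.92
print(f"95% credible interval: ({low:.2f}, {high:.2f})")

# Probability that the true accuracy actually clears a 0.90 requirement.
print(f"P(accuracy > 0.90) = {posterior.sf(0.90):.2f}")
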

Bio: 

Matthew Gillett is an Associate Director at FINRA who manages a team of Software Development Engineers in Test (SDETs) across multiple projects. In addition to his primary focus on software development and assurance engineering, he is also interested in various other technology topics such as big data processing, machine learning, and blockchain.
