Introducing Model Validation Toolkit

Abstract: 

Many details surrounding a typical ML pipeline are commonly swept under the rug. How will we monitor production data for concept drift? How do we measure the false negative rate in production? How confident can we be in our performance assessments given a small test set, and how should those assessments be adjusted when the data are biased? How can we ensure our model follows reasonable assumptions? We introduce a new general-purpose tool, the Model Validation Toolkit, for common tasks involved in model validation, interpretability, and monitoring. The utility has submodules and accompanying tutorials on measuring concept drift, assigning and updating optimal thresholds, determining the credibility of performance metrics, compensating for data bias, and performing sensitivity analysis. In this session, we will give a tour of the framework's core functionality and some associated use cases.
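
To make two of the tasks above concrete, here is a minimal, self-contained sketch. It does not use the Model Validation Toolkit's own API (the abstract does not show the toolkit's function names, so everything below, including the sample sizes, the drifted distribution, and the uniform Beta(1, 1) prior, is an illustrative assumption); it instead demonstrates the underlying ideas with plain NumPy/SciPy: measuring drift between a training-time and a production distribution with Jensen-Shannon divergence, and expressing the credibility of an accuracy estimate from a small test set as a Beta posterior.

```python
# Illustrative sketch only: generic NumPy/SciPy stand-ins for two of the
# tasks the Model Validation Toolkit addresses. All names and numbers here
# are hypothetical and do not come from the toolkit's API.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import beta

rng = np.random.default_rng(0)

# --- Concept drift: compare a training-time feature distribution against a
# production sample via Jensen-Shannon divergence over shared histogram bins.
train_scores = rng.normal(loc=0.0, scale=1.0, size=10_000)
prod_scores = rng.normal(loc=0.4, scale=1.2, size=10_000)  # simulated drift
bins = np.histogram_bin_edges(
    np.concatenate([train_scores, prod_scores]), bins=50
)
p, _ = np.histogram(train_scores, bins=bins, density=True)
q, _ = np.histogram(prod_scores, bins=bins, density=True)
drift = jensenshannon(p, q)  # 0 = identical distributions
print(f"Jensen-Shannon divergence (train vs. production): {drift:.3f}")

# --- Credibility: with only 40 labeled production cases (32 correct), a
# uniform Beta(1, 1) prior yields a posterior over the true accuracy, so we
# report a credible interval rather than a single point estimate.
correct, total = 32, 40
posterior = beta(1 + correct, 1 + (total - correct))
lo, hi = posterior.ppf([0.025, 0.975])
print(f"Observed accuracy: {correct / total:.2f}")
print(f"95% credible interval for true accuracy: [{lo:.2f}, {hi:.2f}]")
```

The credibility computation shows why point estimates mislead on small test sets: 32 of 40 correct reads as 80% accuracy, but the posterior puts the true accuracy plausibly anywhere from the mid-60s to around 90 percent, which is the kind of uncertainty the toolkit's credibility submodule is designed to surface.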

Bio: 

Alex is a data scientist at FINRA. He applies machine learning and statistics to identify anomalous and suspicious trading activity and has helped develop model validation procedures and tools. Alex originally studied physics and is passionate about applying math to solve real-world problems. He previously worked as a data engineer and as a software engineer.
