
Abstract: Many details surrounding a typical ML pipeline are commonly swept under the rug. How will we monitor production data for concept drift? How do we measure the false negative rate in production? How confident can we be in our performance assessments with a small test set, and how should they be adjusted when the data are biased? How can we ensure our model satisfies reasonable assumptions? We introduce a new general-purpose tool, the Model Validation Toolkit, for common tasks in model validation, interpretability, and monitoring. The toolkit has submodules and accompanying tutorials for measuring concept drift, assigning and updating optimal thresholds, determining the credibility of performance metrics, compensating for data bias, and performing sensitivity analysis. In this session, we will give a tour of the framework's core functionality and some associated use cases.
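To give a concrete sense of the credibility question above (how much a small test set can actually tell us), here is a minimal sketch of the underlying idea using a Beta posterior over accuracy. The counts and variable names are illustrative assumptions, and this is not the Model Validation Toolkit's actual API.

```python
# Sketch: uncertainty in an accuracy estimate from a small test set.
# With a uniform Beta(1, 1) prior, the posterior over the true accuracy
# after observing `correct` successes out of `total` trials is
# Beta(1 + correct, 1 + total - correct).
from scipy.stats import beta

correct, total = 46, 50  # hypothetical small test set: 92% observed accuracy
posterior = beta(1 + correct, 1 + total - correct)

# 95% equal-tailed credible interval for the true accuracy
lower, upper = posterior.ppf([0.025, 0.975])
print(f"Observed accuracy: {correct / total:.2%}")
print(f"95% credible interval: [{lower:.2%}, {upper:.2%}]")
```

Even with over 90% observed accuracy, the resulting interval is more than ten percentage points wide, which illustrates why point estimates from small test sets deserve this kind of scrutiny.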
Bio: Alex is a data scientist at FINRA. He applies machine learning and statistics to identify anomalous and suspicious trading and has helped develop model validation procedures and tools. Alex originally studied physics and is passionate about applying math to solve real-world problems. He previously worked as a data engineer and as a software engineer.