Abstract: When machine learning models are deployed to production, their performance degrades over time. As ML models become mission critical for enterprises and startups alike, root cause analysis and observability for AI systems become mission critical as well. However, many organizations struggle to prevent model performance degradation and to assure the quality of the data being fed to their models, largely because they lack the tools and organizational knowledge to do so.
In this talk, MLOps Architect Danny D. Leybzon will explain the problems that arise when ML models are deployed in production, and how many of them can be addressed with data monitoring and AI observability best practices. Going a step further, he will discuss steps that data scientists and machine learning engineers can take to proactively ensure the performance of their models, rather than reacting to performance degradation reported by their customers.
Bio: Danny D. Leybzon has worn many hats, all of them related to data. He studied computational statistics at UCLA before becoming first an analyst and then a product manager at the big data platform Qubole. He went on to serve as the primary field engineer for data science and machine learning at Imply, before taking on his current role as MLOps Architect at WhyLabs. He has worked to evangelize machine learning best practices, speaking on subjects such as distributed deep learning, productionizing machine learning models, and automated machine learning; lately he has focused on AI observability and data logging. When Danny isn't researching, practicing, or talking about data science, he's usually pursuing one of his many outdoor hobbies: rock climbing, backcountry backpacking, skiing, etc.