Abstract: After you have spent time gathering data, training an ML model, and deploying it into production, your job is done, right? Not quite. To ensure the success of your model in production, it's not enough to train an excellent model once. You need to monitor your model continuously, including on different segments of data, to both maintain and improve model performance over time.
In this session, MLOps Architect Danny D. Leybzon will introduce the audience to the cutting-edge Data+AI Observability platform WhyLabs. With WhyLabs, users can not only monitor their models' performance in production but also gain observability into the ML system as a whole, enabling them to improve the performance of deployed models. Through both theoretical and hands-on explanations of monitoring and observability, the audience will come away knowing how to keep production models performant.
Bio: Danny D. Leybzon has worn many hats, all of them related to data. He studied computational statistics at UCLA and has worked in the data and ML space ever since. In his role as MLOps Architect, he has evangelized machine learning best practices, speaking on subjects such as distributed deep learning, productionizing machine learning models, and automated machine learning; lately, his focus has been AI observability and data logging. When Danny's not researching, practicing, or talking about data science, he's usually pursuing one of his numerous outdoor hobbies: rock climbing, backcountry backpacking, skiing, and more.