Abstract: Machine learning is usually taught through tutorials that use small, clean datasets loaded into data frames and orchestrated with Jupyter notebooks, all in a single, in-memory, local environment. This is a fine style for presenting a new topic and teaching the main ideas, but unfortunately, these patterns are not conducive to delivering real production applications at scale. Real industrial situations involve multiple environments and datasets from databases or other data stores rather than file-based input. They interact with live production systems and must be coordinated with software delivery teams and product owners. They must be production quality: well designed, well tested, and maintainable. Data scientists are therefore often forced to choose between the environment they are used to and one that is suitable for delivery to production, with an awkward migration from one to the other. In this workshop, we show how to maintain data science productivity while collaborating effectively and delivering value continuously. We guide participants through CI/CD practices for machine learning and a pattern of working that avoids the most common pitfalls.
The training materials and instructions can be found at https://github.com/ThoughtWorksInc/CD4ML-Scenarios
We begin with an introduction to MLOps and how we approach it through what we call Continuous Delivery for Machine Learning (CD4ML). We'll explain why we chose this problem and how we put the workshop together as an example of applying MLOps principles.
Part 1 - System setup
The workshop involves various tools running in Docker, and requires working in a forked GitHub repository using a personal access token. We'll walk you through this, and by the end of this part you will have your Docker and GitHub setup complete.
Part 2 - Jenkins setup
Here you'll set up and configure a deployment pipeline to build and deploy the application to production.
Part 3 - Machine Learning on the Zillow Housing Problem
Learn to do experiments without interrupting the rest of your team or changing the production model. Learn about the codebase design and how it enables flexibility while maintaining reproducibility.
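One pattern that supports this kind of safe experimentation (a minimal sketch only, not the workshop's actual codebase; the function and parameter names here are hypothetical) is to externalize model choice and hyperparameters into a configuration and fix random seeds, so that every run, including the production one, is fully described by its parameters and can be reproduced without editing code:

```python
import random

def train(params):
    """Train a model described entirely by `params`, so any run
    can be reproduced from its configuration alone (hypothetical sketch)."""
    random.seed(params["seed"])  # fix randomness for reproducibility
    # Stand-in for real model training: the model family and its
    # hyperparameters come from the config, never from code edits.
    if params["model"] == "baseline":
        return {"model": "baseline", "score": 0.5}
    return {"model": params["model"],
            "n_estimators": params["n_estimators"],
            "score": round(random.random(), 3)}

# An experiment is just a new params dict; production keeps its own,
# so experiments never touch the production configuration.
production_params = {"model": "baseline", "seed": 42}
experiment_params = {"model": "random_forest", "n_estimators": 100, "seed": 42}

prod_result = train(production_params)
exp_result = train(experiment_params)
```

Because the seed is part of the configuration, rerunning an experiment with the same params reproduces the same result, which is the property the workshop's codebase design aims to preserve.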
Part 4 - Continuous Delivery
Learn the principles of Continuous Delivery. Demonstrate a CD quality check that ensures our changes don't degrade the production application.
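A quality check of this kind can be sketched as a simple pipeline step (the metric values, tolerance, and function name here are hypothetical illustrations, not the workshop's actual check) that compares the candidate model's error against the currently deployed model's and fails the build on a regression:

```python
def quality_gate(candidate_error, production_error, tolerance=0.02):
    """Fail the pipeline (non-zero exit) if the candidate model's error
    is worse than production's by more than `tolerance`."""
    if candidate_error > production_error + tolerance:
        raise SystemExit(
            f"Quality gate failed: candidate error {candidate_error:.3f} "
            f"exceeds production {production_error:.3f} + tolerance {tolerance}")
    print("Quality gate passed")

# A small regression within tolerance passes; a larger one would
# raise SystemExit and stop the deployment pipeline.
quality_gate(candidate_error=0.101, production_error=0.100)
```

Raising `SystemExit` makes the script exit non-zero, which is what a CI server such as Jenkins interprets as a failed build step.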
Part 5 - Model Monitoring and Observability
Configure and deploy our application to log prediction events to Elasticsearch. Visualize the events on a Kibana dashboard. Learn how to close the data feedback loop.
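The shape of such a prediction event can be sketched as follows (a hypothetical schema and index name for illustration, not the workshop's exact format): each prediction is recorded as a timestamped JSON document carrying the model version, inputs, and output, which Elasticsearch can index and Kibana can chart.

```python
import json
from datetime import datetime, timezone

def prediction_event(model_version, features, prediction):
    """Build a JSON-serializable prediction event (hypothetical schema)
    for indexing into Elasticsearch and visualizing in Kibana."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties each prediction to a model build
        "features": features,            # the inputs the model saw
        "prediction": prediction,        # the output it returned
    }

event = prediction_event("v1.2", {"sqft": 1500, "zipcode": "98101"}, 412000)
# In the running service this document would be sent to Elasticsearch,
# e.g. an HTTP POST of json.dumps(event) to a prediction-events index.
print(json.dumps(event))
```

Logging the features alongside the prediction is what closes the feedback loop: once actual outcomes arrive, they can be joined back to these events to measure real-world model performance.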
Docker must be installed locally, and it must be possible to allocate it 4GB of RAM (the workshop shows how to do this). A GitHub account is required, and you must be able to fork the repo https://github.com/ThoughtWorksInc/CD4ML-Scenarios
Bio: Ryan Dawson is a technologist passionate about data. Ryan works with clients on large-scale data and AI initiatives, helping organizations get more value from data. His work includes strategies to productionize machine learning, organizing the way data is captured and shared, selecting the right data technologies and optimal team structures, as well as writing the code to make it happen. He has over 15 years of experience, has written many widely read articles about MLOps, software design, and delivery, and is the author of the Thoughtworks Guide to Evaluating MLOps Platforms.