Abstract: A key step in the data science workflow is rapid model development: creating, testing, and identifying the best models to put into production. However, large gaps remain in this workflow, and the data science tool set is evolving quickly to fill them. Large teams and enterprises are moving beyond individual, siloed notebooks such as Zeppelin and Jupyter toward sharing and reusing models, code, and results. Challenges also remain in deploying models to production and serving them with tools like Kubeflow and TensorFlow Serving. Moon Soo Lee and Louis Huard explore real-world examples of how companies are solving these problems and how you can apply these best practices in your own workflow.
What you'll learn
How companies are closing the gaps in the data science workflow.