Abstract: Machine Learning (ML) is already redefining the way we build and interact with applications, but managing that transformation is tough.
What’s so different about ML models?
- They require short spikes of massive compute
- They are often written in different languages than your core code
- Data Scientists use dozens of different frameworks
- Different models require different hardware resources
That’s a lot of work with no clear owner. DevOps and Engineering have little understanding of the ML tools and architecture, and Data Scientists aren’t Ops Engineers or Developers.
The biggest, most aggressive companies in the world have built in-house systems to automate and optimize model deployment. What can we learn from those large-scale implementations, and how can we adapt those lessons to organizations with finite resources?
Drawing on his experience with hundreds of ML implementations and thousands of models deployed by companies of all sizes, Brendan Collins will discuss the lessons we can learn from early adopters’ successes and failures, suggest priorities for deployment implementation, and outline the strategic and technical hurdles each company must overcome to scale ML.
Bio: Brendan Collins is the Solutions Engineer for Algorithmia’s east coast enterprise customers. Previously, he held a similar position at Synology. He has worked in financial enterprise infrastructure for more than 10 years, with groups ranging in size from the largest financial institutions in the world to community banks. Brendan has a true passion for helping enterprises use machine learning and data science to solve cutting-edge problems.