
Abstract: We’ve spent decades learning how to build traditional software. Whether it’s Windows, mobile, or microservices, developers share a common set of methodologies, best practices, and tools to govern their work.
Traditional software development has a roadmap: the Software Development Life Cycle, which has coalesced around a specific set of tools and processes.
In contrast, machine learning development is a tangle of tools, languages, and infrastructures, with almost no standardization at any point in the process. Manual stopgaps and one-off integrations get models into production but introduce fragility and risk that prevent businesses from trusting them with mission-critical applications.
To build and deploy enterprise-ready models that generate real value, businesses need to standardize on a new stack and a new, ML-focused life cycle.
This talk will cover:
- Key differences between ML and traditional software development
- Where the SDLC works with ML, and where it breaks down
- An overview of the new ML stack, from training to deployment to production
- The five biggest infrastructure and process mistakes ML teams commit
- How early movers have succeeded, and lessons you can use today
Bio: Kenny Daniel is founder and CTO of Algorithmia. He came up with the idea for Algorithmia during his PhD, after seeing how many algorithms never saw the light of day.
In response, he built the Algorithmia Cloud AI Layer, which has helped more than 80,000 developers share, pipeline, and consume more than 7,000 models. Through his work with hundreds of companies implementing ML, he then created the Enterprise AI Layer, which helps the largest organizations in the world deploy, connect, manage, and secure machine learning operations at scale.
Kenny holds degrees from Carnegie Mellon University and the University of Southern California, where he studied artificial intelligence and mechanism design.