Abstract: The benefits of Real-Time Machine Learning are becoming increasingly apparent. Digital-native companies have long proven that use cases like fraud detection, recommendation systems, and dynamic pricing all benefit from lower latencies. In a recent KDD paper*, Booking.com found that even a 30% increase in model serving latency caused a 0.5% decrease in user conversion, a significant cost to their business.
While Real-Time Machine Learning presents a game-changing opportunity, few data teams run real-time serving in production, because they struggle to deliver the feature freshness that low-latency inference requires.
Real-Time Machine Learning has yet to reach its potential because of the deep disconnect between data engineering and data science. Historically, our industry has perceived streaming as a complex technology reserved for experienced data engineers with a deep understanding of incremental ingestion patterns. But now, modern streaming platforms make it much easier for anyone to build reliable streaming pipelines, regardless of their streaming background.
Using an example fraud detection scenario, you’ll learn:
- Three important patterns for real-time model inference
- How to prioritize the most common real-time ML use cases in your business
- How to evaluate streaming tools, and why streaming is valuable at any latency
- Operational concerns like monitoring, drift detection, and feature stores
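To make the fraud-detection scenario concrete: one ingredient of feature freshness is computing features like "transactions per card over the last 10 minutes" as events arrive, rather than in a nightly batch. The sketch below is illustrative only (class and field names, and the window length, are our assumptions, not material from the talk):

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

class RollingTxnCount:
    """Illustrative fresh feature: per-card transaction count in a sliding time window."""

    def __init__(self, window=timedelta(minutes=10)):  # window length is an assumption
        self.window = window
        self.events = defaultdict(deque)  # card_id -> timestamps of recent transactions

    def update(self, card_id, ts):
        """Record a transaction and return the current feature value for inference."""
        q = self.events[card_id]
        q.append(ts)
        cutoff = ts - self.window
        # Evict events that have aged out of the window
        while q and q[0] < cutoff:
            q.popleft()
        return len(q)

feature = RollingTxnCount()
t0 = datetime(2023, 1, 1, 12, 0)
feature.update("card-1", t0)
feature.update("card-1", t0 + timedelta(minutes=5))
count = feature.update("card-1", t0 + timedelta(minutes=12))
# The 12:00 event falls outside the 10-minute window, so count is 2
```

In production, a streaming platform would maintain this kind of windowed state for you with fault tolerance and scale; the talk covers how to evaluate such tools.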
Who should attend: data scientists regardless of streaming experience, and data engineers regardless of ML experience.
Bio: Avinash Sooriyarachchi is a Senior Solutions Architect at Databricks. He currently works with large Retail and Consumer Packaged Goods organizations across the United States, enabling them to build Machine Learning-based systems. His specific interests include streaming machine learning systems and building applications leveraging foundation models. Avi holds a Master's degree in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania.