Abstract: ONNX Runtime is an inference engine for executing ML models across different hardware environments. Applications of AI are everywhere, and this requires ML models trained in the cloud to execute on small devices with low power, limited compute, and little memory, such as those typically used in IoT scenarios. The data captured by these devices is processed before the telemetry is sent to the cloud, where the business application acts on it. ONNX Runtime has been enhanced to execute ML models on these edge devices and power AI-on-the-edge applications. This session walks through the workflow to train an image classification model, package it in a container, and deploy it to an IoT device.
- Train ML models for IoT applications, e.g. image classification: start with a pre-trained model, fine-tune it for the specific IoT scenario, store the model in a registry, and convert it to ONNX.
- Create the IoT application in Python using the ML model with ONNX Runtime, package it in a Docker image for the target device, and register the image in a container registry.
- Deploy the container image to the edge device, run inference sessions, and send the processed telemetry to the cloud for the business application.
- Machine learning life-cycle.
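As a minimal sketch of the inference step above: the device application preprocesses a captured image and feeds it to an ONNX Runtime session. The input shape (1x3x224x224), the `preprocess` helper, and the `model.onnx` path are illustrative assumptions, not details from the session.

```python
import numpy as np


def preprocess(image, size=224):
    """Resize an HWC uint8 image (nearest-neighbour) and convert to a
    normalized NCHW float32 batch of one, a common classifier input layout.
    """
    h, w = image.shape[:2]
    ys = np.arange(size) * h // size          # row indices for resize
    xs = np.arange(size) * w // size          # column indices for resize
    resized = image[ys][:, xs]                # (size, size, 3) uint8
    x = resized.astype(np.float32) / 255.0    # scale to [0, 1]
    return np.transpose(x, (2, 0, 1))[None, ...]  # HWC -> 1xCxHxW


def classify(session, image):
    """Run one inference on an onnxruntime.InferenceSession and return the
    index of the highest-scoring class.
    """
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: preprocess(image)})
    return int(np.argmax(outputs[0]))
```

In the deployed container, the application would create the session once at startup, e.g. `session = onnxruntime.InferenceSession("model.onnx")` (hypothetical path), then call `classify(session, frame)` per captured image and forward the result as telemetry.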
Bio: Wolfgang Pauli is an AI developer at Microsoft, with 15+ years of experience with Machine Learning and Artificial Intelligence research. He received his Ph.D. in Computational Neuroscience from the University of Colorado and has published numerous high-profile articles in scientific journals on Computational Neuroscience, Reinforcement Learning, and Neural Networks. Before joining the Microsoft AI Platform team in 2018, he was a research scientist at the California Institute of Technology. He supports the democratization of AI by developing open-source solutions that apply recent breakthroughs to real-world problems.