Abstract: ONNX Runtime is an engine for running inference with machine learning models across different hardware environments. Applications of AI are everywhere, which requires ML models trained in the cloud to execute on small devices with low power, low compute, and low memory. Such devices are typical in IoT scenarios. The data captured by these devices is processed locally before the telemetry is sent to the cloud for further action by the business application. ONNX Runtime has been enhanced to enable execution of ML models on these edge devices and power AI-on-the-edge applications. This session will walk through the workflow to train an image classification model, package it in a container, and deploy it to an IoT device.
Train ML models for IoT applications, e.g. image classification. Start with a pre-trained model and fine-tune it for the specific IoT scenario. Store the model in a registry and convert it to ONNX.
Create the IoT application in Python, using the ML model with ONNX Runtime. Package the application in a Docker image for the target device and register the image in a container registry.
Deploy the container image to the edge device. Run inference sessions and send the processed telemetry to the cloud for the business application.
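The "process before sending telemetry" step above can be sketched as a confidence filter: the device only forwards classifications it is reasonably sure about, saving bandwidth. This is an assumption about the pipeline, not the session's exact design; `process_result`, the labels, and the threshold are all hypothetical, and the returned message would be handed to whatever IoT SDK the device uses.

```python
# Hedged sketch of edge-side telemetry filtering: turn raw model logits into
# a message only when the top-class confidence clears a threshold.
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logits vector
    e = np.exp(logits - logits.max())
    return e / e.sum()

def process_result(logits, labels, threshold=0.6):
    probs = softmax(np.asarray(logits, dtype=np.float32))
    idx = int(probs.argmax())
    if probs[idx] >= threshold:
        # This dict would be serialized and sent to the cloud by the IoT client
        return {"label": labels[idx], "score": float(probs[idx])}
    return None  # below threshold: send nothing, saving bandwidth

msg = process_result([2.0, 0.1, -1.0], ["cat", "dog", "bird"])
print(msg)
```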
- Key takeaway: the end-to-end machine learning life-cycle, from training in the cloud to inference on the edge.
Bio: Manash Goswami is a product management leader with a passion for solving problems and building products that meet end-user needs. He grew from development roles (coding, test and verification, systems design and architecture) into product management and business development. He has delivered consumer electronics products ranging from smartwatches to tablets and has a deep understanding of the Android, Chrome, and Windows ecosystems. He has demonstrated leadership by managing through influence across a matrixed organization, driving new initiatives both internally and with customers, enabling collaboration across functional teams, and delivering financial results. He excels at dealing with ambiguity and managing complex cross-group interactions, and successfully ties long-term strategic thinking to near-term execution.