Abstract: How can you accelerate Data Science at the Edge? We are all familiar with accelerating model training through GPU acceleration, purpose-built processors and embedded software. With limited CPU, memory and storage at the edge, these traditional approaches to acceleration can be a challenge. And it's not only at the model and application level where data science can be accelerated. We can also accelerate data science by improving our infrastructure management and deployment methods (e.g. OS image generation and management).
Some data scientists have turned to pruning and quantizing their larger models, which can make them smaller and easier to deploy to the Edge. Although pruning and quantization can produce more performant models, the resulting models can be less accurate, and model accuracy is something many companies cannot sacrifice. What options are left?
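To make the trade-off above concrete, here is a minimal sketch of post-training weight quantization (float32 to int8). The weight values are synthetic and the symmetric linear scheme is just one common approach; it is illustrative only, not a production quantization pipeline.

```python
import numpy as np

# Synthetic "model weights" standing in for a real layer's parameters.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1000).astype(np.float32)

# Symmetric linear quantization: map the float range onto int8 [-127, 127].
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to measure the accuracy cost of the smaller representation.
dequantized = quantized.astype(np.float32) * scale
size_ratio = weights.nbytes / quantized.nbytes   # int8 is 4x smaller than float32
max_error = np.abs(weights - dequantized).max()  # small, but non-zero

print(f"compression: {size_ratio:.0f}x, max round-trip error: {max_error:.5f}")
```

The 4x size reduction is exactly the trade described above: a smaller, faster model at the cost of a rounding error in every weight, which can accumulate into lower accuracy.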
In this session, we will first arrive at a definition of Edge and its different environments. This will enable us to discuss what hardware (compute) is available for data science (model training and execution) in these environments. Lastly, we will examine interesting alternatives for accelerating data science, including data augmentation and data curation strategies, and containerized models and applications.
Join me for a gentle introduction as I discuss how we can accelerate Data Science at the Edge.
Bio: Audrey is a Sr. Principal Software Engineer on the Red Hat Cloud Services - Red Hat OpenShift Data Science team, focusing on helping customers with managed services, AI/ML workloads and next-generation platforms. She holds a degree in Computer Information Systems and has been working in the IT industry for over 20 years, in roles ranging from full-stack development to data science. Audrey is passionate about Data Science, and in particular about the current opportunities with AI/ML at the Edge and Open Source technologies.