
Abstract: What is explainability? Why is it important? What techniques are there and how do they work?
If you've ever asked one of the questions above, then this talk is for you! You'll learn how the ability to interpret a model can reveal poor model performance or, worse, bias that could ultimately affect the fairness of your machine learning applications. You'll also learn about some of the most common explainability algorithms, how they work, and how you can get started using them yourself.
First, we’ll cover why the way machine learning is typically done today leads to blind confidence in our models. We’ll look at the dangers of this approach and the motivations for explainable techniques. We’ll then explore different algorithms for explaining machine learning models across different modalities, break them down so that they make sense, and show how to start applying them. Finally, we’ll touch on how introducing machine learning explainability can reduce bias in your AI systems, increase transparency, and help you build systems that are fairer in their implementation.
Background Knowledge:
No familiarity with tooling is required. An understanding of machine learning will be beneficial.
Bio: Ed Shee, Head of Developer Relations at Seldon. Having previously led a tech team at IBM, Ed comes from a cloud computing background and is a strong believer in making deployments as easy as possible for developers. With an education in computational modelling and an enthusiasm for machine learning, Ed has blended his work in ML and cloud-native computing to cement himself firmly in the emerging field of MLOps.