
Abstract: We have developed new tools to aid in the design of admissible learning algorithms, which are, to the best possible extent, efficient (enjoying good predictive accuracy), fair (minimizing discrimination against minority groups), and interpretable (providing mechanistic understanding).
Admissible ML introduces two methodological tools:
- Infogram, an “information diagram”, is a new graphical feature-exploration method that facilitates the development of admissible machine learning models.
- L-features are the inadmissible features: hidden, problematic proxy features lurking in a dataset. The infogram offers a way to systematically discover L-features so that they can be screened out, mitigating unfairness.
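To make the idea concrete, here is a minimal sketch of computing an infogram with H2O's Python API (H2OInfogram). The file name, response column, and protected columns below are placeholder assumptions, not part of the workshop materials; swap in your own dataset and columns.

```python
import h2o
from h2o.estimators.infogram import H2OInfogram

h2o.init()

# Hypothetical dataset and column names; replace with your own.
df = h2o.import_file("credit_data.csv")
y = "default"                              # response column (assumed)
df[y] = df[y].asfactor()                   # treat the response as categorical
train, test = df.split_frame(ratios=[0.8], seed=1)

protected = ["sex", "age", "marriage"]     # protected attributes (assumed)
x = [c for c in df.columns if c != y]

# A "fair" infogram: for each feature, it plots predictive relevance
# against safety (how little the feature proxies the protected columns).
ig = H2OInfogram(protected_columns=protected)
ig.train(x=x, y=y, training_frame=train)

ig.plot()                                   # the infogram itself
admissible = ig.get_admissible_features()   # features outside the L-shaped region
print(admissible)
```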
In this workshop, I will walk you through some worked examples using open-source H2O (available in both Python and R). You will learn how to build models with good accuracy while keeping fairness and interpretability in mind. The worked examples can also be used as templates for you to try H2O with your own datasets.
Session Outline:
1) Identify admissible features from datasets using H2O’s implementation of Infogram
2) Build high-quality models with admissible features using H2O AutoML (steps 2 and 3 are sketched after this outline)
3) Explain predictions using H2O’s model explainability toolkit
4) Try it with your own datasets
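Under the same assumptions as the infogram sketch above (reusing its train/test frames, response y, and admissible feature list), steps 2 and 3 might look like this:

```python
from h2o.automl import H2OAutoML

# Step 2: train a leaderboard of models using only the admissible features.
aml = H2OAutoML(max_models=10, seed=1)
aml.train(x=admissible, y=y, training_frame=train)
print(aml.leaderboard.head())

# Step 3: generate explanations (variable importance, SHAP summary,
# partial dependence plots, ...) on held-out data.
aml.explain(test)          # explains the leaderboard as a whole
aml.leader.explain(test)   # or drill into the leader model
```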
Background Knowledge:
Basic Python or R
Bio: Jo-fai (or Joe) has multiple roles (data scientist / evangelist / community manager) at H2O.ai. Since joining the company in 2016, Joe has delivered H2O talks and workshops in 40+ cities across Europe, the US, and Asia. Nowadays, he is best known as the H2O #360Selfie guy. He is also the co-organiser of H2O's EMEA meetup groups, including London Artificial Intelligence & Deep Learning, one of the biggest data science communities in the world with more than 11,000 members (https://www.meetup.com/London-Artificial-Intelligence-Deep-Learning/).

Jo-fai Chow, PhD
Senior Data Science Evangelist | H2O.ai
