Abstract: Machine learning models are increasingly used to inform high-stakes decisions about people. Discrimination in machine learning becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. We have developed AI Fairness 360 (AIF360), a comprehensive Python package (https://github.com/ibm/aif360) that contains nine different algorithms, developed by the broader algorithmic fairness research community, to mitigate such unwanted bias. AIF360 also provides an interactive experience (http://aif360.mybluemix.net/data) as a gentle introduction to the capabilities of the toolkit for people unfamiliar with Python programming. Compared to existing open source efforts on AI fairness, AIF360 takes a step forward in that it focuses on bias mitigation (as well as bias checking), industrial usability, and software engineering. In our proposed hands-on tutorial, we will teach participants to use and contribute to AIF360, enabling them to become some of the first members of its community. Toward this goal, all participants in this tutorial will experience first-hand: 1) how to use the metrics provided in the toolkit to check the fairness of an AI application, and 2) how to mitigate any bias they discover. Our goal in creating a vibrant community, centered around the toolkit and its applications, is to contribute to efforts to engender trust in AI and make the world more equitable for all.
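To give a flavor of the kind of fairness check the tutorial covers, the sketch below computes the statistical parity difference, one of the standard group-fairness metrics AIF360 exposes. This is a plain-Python illustration of the underlying calculation, not the toolkit's own implementation; the group encoding (1 = privileged, 0 = unprivileged) and the toy data are assumptions for the example.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged).

    Zero indicates parity; a negative value means the unprivileged
    group receives the favorable outcome less often.
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = y_pred[protected == 0].mean()  # favorable-outcome rate, unprivileged
    rate_priv = y_pred[protected == 1].mean()    # favorable-outcome rate, privileged
    return rate_unpriv - rate_priv

# Toy example: protected = 1 marks the privileged group.
y_pred    = [1, 0, 0, 1, 1, 1, 1, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, protected))  # -0.25
```

In the toolkit itself, metrics like this are computed by metric classes over dataset objects rather than raw arrays, so that protected attributes, favorable labels, and group definitions are declared once and reused across metrics and mitigation algorithms.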
Bio: Dr. Yunfeng Zhang is a research staff member at the IBM T. J. Watson Research Center. His research interest is in human-computer interaction (HCI). His recent research projects have involved designing intelligent conversational agents, creating multimodal interactive systems, modeling social interactions, and understanding and remediating cognitive biases. Previously, his PhD research focused on modeling and predicting human performance using cognitive architectures such as ACT-R, EPIC, and Soar.