
Abstract: In both research and industry, discussion of "fair machine learning" has exploded in the past few years. Yet there is often a gap between what is available in academia and the constraints and needs of a real-world organization. In this talk, co-presented with Humana, we discuss Humana's journey toward informed, responsible use of machine learning to improve health outcomes. First, Humana implemented organizational and process-based tools for governance. Having set the stage for actively improving models, however, Humana's data scientists realized that none of the popular, published approaches to achieving fairness were applicable to their goals: the way Humana deployed and used machine learning violated assumptions made by many available "fair ML" methods. In the latter half of this talk, we show how these constraints motivated novel research questions and guided the development of an academic research project; explain and demonstrate the method we developed; and discuss considerations in folding this research back into the product so that it is ultimately usable in a real-world production setting.
Bio: Jessica Dai is a Machine Learning Engineer at Arthur AI, where she works on research and development for fairness-related features. Previously, she conducted research with collaborators from CMU, Harvard, and Brown.

Jessica Dai
Machine Learning Engineer | Arthur AI
