Abstract: Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law
The European Commission recently published its proposal for the Artificial Intelligence Act – the world’s first comprehensive framework to regulate AI. The proposal contains several provisions that require bias testing and monitoring. But is Europe ready for this task?
I will discuss how the normative idea and aim behind EU non-discrimination law is “substantive equality” – that is, to actively dismantle inequality. As we show, most fairness metrics clash with this aim because they freeze the status quo. But the status quo is not neutral.
We analysed 20 bias tests for their compatibility with EU non-discrimination law and developed a classification system distinguishing ‘bias preserving’ from ‘bias transforming’ fairness metrics. We argue that the use of ‘bias preserving’ fairness metrics requires legal justification when they inform decisions about people in Europe, and we recommend using ‘bias transforming’ fairness metrics instead. ‘Bias preserving’ metrics can still be used under certain circumstances, but the choice of metric is not a neutral act: we have to be aware of the normative assumptions behind these metrics when deploying them in practice.
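The distinction can be illustrated with a small, hypothetical sketch (all names and data below are invented for illustration and are not from the talk): a metric that conditions on the historical outcome label, such as equal opportunity, inherits whatever bias went into producing those labels, whereas demographic parity compares groups directly without deferring to the labels.

```python
# Illustrative sketch (hypothetical data): two common fairness metrics
# computed on binary decisions for two groups. Demographic parity compares
# positive-decision rates directly, while equal opportunity (a component of
# equalized odds) conditions on the historical "ground truth" label y --
# which may itself encode past discrimination.

def positive_rate(decisions):
    """Share of positive (1) decisions."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, labels):
    """Share of positive decisions among cases labelled 1 historically."""
    positives = [d for d, y in zip(decisions, labels) if y == 1]
    return sum(positives) / len(positives)

# Hypothetical loan decisions (1 = approve) and historical repayment labels.
group_a = {"decisions": [1, 1, 1, 0], "labels": [1, 1, 0, 0]}
group_b = {"decisions": [1, 0, 0, 0], "labels": [1, 0, 0, 0]}

# Demographic parity gap: compares groups without trusting historical labels.
dp_gap = positive_rate(group_a["decisions"]) - positive_rate(group_b["decisions"])

# Equal-opportunity gap: conditions on the labels, so any bias in how the
# labels were produced is carried forward into a model deemed "fair".
eo_gap = (true_positive_rate(group_a["decisions"], group_a["labels"])
          - true_positive_rate(group_b["decisions"], group_b["labels"]))

print(dp_gap, eo_gap)  # → 0.5 0.0
```

On this toy data the model is perfectly “fair” by equal opportunity (gap of 0.0) while approving one group at triple the rate of the other (demographic parity gap of 0.5): the label-conditioned metric preserves whatever inequality the historical labels reflect.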
We provide concrete recommendations including a user-friendly checklist for choosing the most appropriate fairness metric for uses of machine learning and AI under EU non-discrimination law.
Bio: Professor Sandra Wachter is an Associate Professor and Senior Research Fellow at the Oxford Internet Institute at the University of Oxford, focusing on the law and ethics of AI, Big Data, and robotics, as well as Internet regulation. Professor Wachter specialises in technology, IP, data protection, and non-discrimination law, as well as European, international, (online) human rights, and medical law. Her current research focuses on the legal and ethical implications of AI, Big Data, and robotics, as well as profiling, inferential analytics, explainable AI, algorithmic bias, diversity and fairness, governmental surveillance, predictive policing, and human rights online. At the OII, Professor Wachter also coordinates the Governance of Emerging Technologies (GET) Research Programme, which investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies. Professor Wachter is also a Fellow at the Alan Turing Institute in London, a Fellow of the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, an Academic Affiliate at the Bonavero Institute of Human Rights at Oxford’s Law Faculty, a Member of the European Commission’s Expert Group on Autonomous Cars, a Member of the Law Committee of the IEEE, and a Member of the World Bank’s task force on access to justice and technology.