
Abstract: We shouldn’t fear bias in AI; instead, we should see it as a massive opportunity.
The media leaps on every public exposure of AI bias and turns it into a full-blown scandal. And the scandals are beginning to pile up: Amazon’s recruiting tool, Apple’s credit card limits, Google’s facial recognition, and dozens more from large and small companies alike. With each scandal, reporters and corporate leaders point fingers at the algorithm and its designers. However, neither represents the actual source of the bias, and this kind of retroactive moral accounting does little to improve the underlying process.
While corporate accountability is critical, AI does not create bias on its own; it exposes the latent bias already present in the system it was designed to imitate. We need to reframe the conversation around bias in AI so that exposing it is recognized as the first step in building a more ethical system.
One of the benefits of machine learning is that it makes the implicit bias of a human institution explicit. Bias becomes diagnosable, correctable, and ultimately preventable in a way that cannot be replicated in human decision-making, which is opaque and difficult to change. Bias is not new, but AI represents a new toolset to measure and change it.
The question is not whether your institution has bias, but how you plan to handle it. The real power of AI isn’t just to create a more efficient version of the past; it’s to define and build the future that we want to see.
Bio: Jett Oristaglio is the Data Science and Product Lead of Trusted AI at DataRobot. He has a background in Cognitive Science, focusing on computer vision, neuro-ethics, and transcendent states of consciousness. His primary mission at DataRobot is to answer the questions: "What is everything you need in order to trust a decision-making system with your life? And what tools can we build to automate that process as comprehensively as possible?"