Abstract: The application of AI algorithms in domains such as criminal justice, credit scoring, and hiring holds enormous promise. At the same time, it raises legitimate concerns about algorithmic fairness, and there is now a growing demand for fairness, accountability, and transparency from machine learning (ML) systems. Training data is not the only possible source of bias and adversarial contamination: they can also be introduced through inappropriate data handling, inappropriate model selection, or incorrect algorithm design.
What we need is a pipeline that is open, transparent, secure, and fair, and that fully integrates into the AI lifecycle. Such a pipeline requires a robust set of bias and adversarial checkers, "de-biasing" and "defense" algorithms, and explanations. In this talk we discuss how to build such a pipeline by leveraging open source projects such as AI Fairness 360 (AIF360), the Adversarial Robustness Toolbox (ART), Fabric for Deep Learning (FfDL), and Seldon.
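To make the idea of a "bias checker" concrete, here is a minimal sketch of the kind of group-fairness metric such a pipeline stage might compute before training. This is an illustration in plain Python, not AIF360's actual API (AIF360 wraps metrics like this in classes such as its dataset-metric objects); the data and function name are made up for the example.

```python
def statistical_parity_difference(labels, groups):
    """P(favorable | unprivileged) - P(favorable | privileged).

    labels: list of 0/1 outcomes (1 = favorable).
    groups: list aligned with labels, 1 = privileged group,
            0 = unprivileged group.
    A value of 0 means both groups receive the favorable outcome
    at the same rate; large negative values indicate the
    unprivileged group is disadvantaged.
    """
    priv = [y for y, g in zip(labels, groups) if g == 1]
    unpriv = [y for y, g in zip(labels, groups) if g == 0]
    rate = lambda ys: sum(ys) / len(ys) if ys else 0.0
    return rate(unpriv) - rate(priv)

# Toy data: 3 of 4 privileged individuals but only 1 of 4
# unprivileged individuals receive the favorable outcome.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(labels, groups))  # -0.5
```

A pipeline would gate on such a metric (for example, flagging datasets where the absolute difference exceeds a threshold) and then apply a de-biasing step, such as reweighing the training examples, before the model is trained.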
Bio: Animesh Singh is an STSM and lead for IBM Watson and Cloud Platform, where he leads machine learning and deep learning initiatives on IBM Cloud and works with communities and customers to design and implement deep learning, machine learning, and cloud computing frameworks. He has a proven track record of driving the design and implementation of private and public cloud solutions from concept to production. In his decade-plus at IBM, Animesh has worked on cutting-edge projects for IBM enterprise customers in the telco, banking, and healthcare industries, particularly focusing on cloud and virtualization technologies, and led the design and development of the first IBM public cloud offering.