How to Make Machine Learning Fair and Accountable


The suitability of Machine Learning models is traditionally measured by accuracy. Metrics like RMSE, MAPE, AUC-ROC and Gini largely decide the ‘fate’ of Machine Learning models. However, if one digs deeper, the ‘fate’ of a Machine Learning model goes beyond a few accuracy-driven metrics to its capability of being Fair, Accountable, Transparent and Explainable, a.k.a. FATE.
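To ground the accuracy-driven view, here is a minimal sketch of two of the metrics named above, RMSE and MAPE, using NumPy; the actuals and predictions are made-up illustrative numbers, not from any real model:

```python
import numpy as np

# Hypothetical actuals and model predictions (illustrative values only)
y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 330.0])

# Root Mean Squared Error: penalises large errors quadratically
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))

# Mean Absolute Percentage Error: scale-free, expressed in percent
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100

print(f"RMSE: {rmse:.2f}")   # ~19.15
print(f"MAPE: {mape:.2f}%")  # ~8.33%
```

These two numbers alone say nothing about who the model is wrong for, which is exactly the gap FATE is meant to close.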
Machine Learning, as the name implies, learns whatever it is taught. It is a ramification of what it is fed. It is a fallacy that ML has no perspective; it has the same perspective as the data that was used to teach it what it preaches today. In simple words, algorithms can echo the prejudices that data explicitly or implicitly carries.
Today, when everything around us is affected by Machine Learning driven decisions, a few metrics derived from a black box can be highly disturbing. From getting admission to a college (ML based selection), to getting a job (ML based job recommendation), to getting a financial service (ML based credit scoring), to even getting prosecuted (ML based recidivism prediction), every stage of life is being heavily governed by a few algorithms. But the question today is: how fair and accountable are these algorithms that are so widely used? This first article in the series discusses whether ML models are fair and, if not, how they can be made fair.
Recently it was found that open-source face recognition algorithms have lower accuracy on the faces of darker-skinned women than on those of lighter-skinned men. In another instance, research by CMU showed how Google ads displayed high-income jobs to men more often than to women.
Let's start with a very infamous example, where courts in America use the COMPAS algorithm for recidivism prediction. The algorithm generated a score that was used by US courts to decide ‘the likelihood of a future crime’. Interestingly, months later it was found that the algorithm was biased against a particular race and generally gave people of that race a higher risk score. These kinds of algorithms have become increasingly common in the courtroom, the boardroom and at banking desks.

In this regard, we will see how to make models Fair and Accountable:
1. Fairness Metrics (statistical parity difference, equal opportunity difference, disparate impact)
2. Model Fairness (Prejudice remover, meta-classifier)
3. Prediction Fairness (equalized odds, calibrated equalized odds)
4. Accountability of features (sensitivity, weight of evidence, lift/gains, accuracy drifts)
5. Transparency of Models (sensitivity, residuals, accumulated local effects)
6. Explainability of Modelling results
7. Stability indices (Population and Category Stability Index)
8. Accuracy changes (Gini, lift, RMSE, MAPE, Linear four rates, distance measures, Lorenz curve, KS stat)
9. Sensitivity (Individual Conditional Expectation and Partial Dependency Plots)
10. Decile / Score pattern to see the distribution of predictions / scores
11. Drift detections (Early drift detection)
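To make item 1 concrete, here is a minimal sketch of two of the listed fairness metrics, statistical parity difference and disparate impact, in plain NumPy. The predictions and the `group` array (a hypothetical protected attribute, 1 = privileged, 0 = unprivileged) are illustrative assumptions, not real data:

```python
import numpy as np

# Hypothetical binary model predictions and protected-group membership
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 1])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = privileged, 0 = unprivileged

rate_priv   = y_pred[group == 1].mean()  # P(y_hat = 1 | privileged)
rate_unpriv = y_pred[group == 0].mean()  # P(y_hat = 1 | unprivileged)

# Statistical parity difference: 0 means both groups get the
# favourable outcome at the same rate
spd = rate_unpriv - rate_priv

# Disparate impact: the "four-fifths rule" flags ratios below 0.8
di = rate_unpriv / rate_priv

print(f"Statistical parity difference: {spd:.2f}")  # -0.25
print(f"Disparate impact: {di:.2f}")                # 0.67
```

Here the unprivileged group receives the favourable prediction two-thirds as often as the privileged group, below the 0.8 threshold, so this toy model would be flagged as unfair on disparate impact.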
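Similarly, item 7's Population Stability Index can be sketched in a few lines; the two binned score distributions below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

def psi(expected_pct, actual_pct):
    """Population Stability Index between two binned distributions.

    expected_pct / actual_pct: per-bin proportions, each summing to 1.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 major population shift.
    """
    expected_pct = np.asarray(expected_pct, dtype=float)
    actual_pct = np.asarray(actual_pct, dtype=float)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical score-bin proportions at model build time vs. today
baseline = [0.5, 0.5]
current  = [0.6, 0.4]
print(f"PSI: {psi(baseline, current):.4f}")  # ~0.0405 -> stable
```

The same formula applied per category of a categorical feature gives the Category Stability Index mentioned alongside it.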


Sray is currently working for Publicis Sapient as a Data Scientist and is based out of London. His expertise lies in Predictive Modelling, Forecasting and advanced Machine Learning, and he possesses a deep understanding of algorithms and advanced statistics. He has a background in management and economics and has completed a master's-equivalent program in Data Science and Analytics. His current areas of interest are Fair and Explainable ML.
