Abstract: With the rise of AI, computers can take on more tasks traditionally done by humans, increasing efficiency and productivity. From manufacturing to finance, industries recognize the importance of AI and are exploring how best to adopt it into their workstreams. However, a key inhibiting factor is a lack of trust in AI. We’ve seen several AI deployments rolled back due to negative publicity over bias and trustworthiness issues. Recognizing these risks, governments are introducing regulations to help consumers understand AI-made decisions. Enterprises need an explainable approach to AI – one that lets them better manage the business risks of deploying AI in use cases ranging from loan underwriting and fraud detection to automated diagnostics and content moderation.
This session will look at how ‘Explainable AI’ fills a critical gap in operationalizing AI. Examples include explaining ML-flagged fraud transactions, policy underwriting decisions, loan denials by ML models, and business intelligence such as customer churn and regional marketing campaigns. Adopting an explainable approach to AI and integrating it into the end-to-end ML workflow, from training to production, offers benefits such as early identification of biased data and greater confidence in model outputs.
People will walk away with a better understanding of:
- Questions to ask about AI systems to better manage bias
- Why AI makes certain predictions and the contributing factors behind those decisions
- How to anticipate customer concerns and best answer them
- The importance of explainability in AI
Bio: Krishna is the co-founder and CEO of Fiddler Labs, an enterprise startup building an Explainable AI Engine to address problems of bias, fairness, and transparency in AI. At Facebook, he led the team that built Facebook’s explainability feature ‘Why am I seeing this?’. He’s an entrepreneur with a technical background, with experience creating scalable platforms and expertise in converting data into intelligence. Having held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft, he has seen the effects of bias on AI and machine learning decision-making processes, and with Fiddler, his goal is to enable enterprises across the globe to solve this problem.