Abstract: With Generative AI buzzing all around, adopting LLMs has become imperative to any business's growth. Despite their efficacy, these models often resemble black boxes: the inner workings are opaque and the decision-making process is obscured. This lack of transparency not only hampers trust but also poses legal and ethical challenges. This session, "Why Did My AI Do That? Decoding Decision-Making in Machine Learning", delves into state-of-the-art techniques for AI explainability, providing a robust framework for understanding, evaluating, and enhancing model interpretability.
We will start by exploring the landscape of model explainability, distinguishing between "intrinsic" and "post-hoc" methods. Intrinsic methods, like decision trees and linear models, are naturally interpretable but may fall short in performance. Post-hoc methods, applicable to complex models like neural networks, are the focus of our session. We'll dive into techniques such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Integrated Gradients. Through hands-on examples, we'll demonstrate how these methods dissect a model's decision boundary, feature importances, and decision paths.
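To give a flavor of the hands-on portion: the core idea behind LIME is to fit a simple, weighted linear surrogate around a single prediction of a black-box model. The sketch below implements that idea from scratch with scikit-learn only (function and variable names are illustrative, not the lime library's API); the session itself will use the actual libraries.

```python
# Minimal LIME-style local surrogate (illustrative sketch, not the lime
# library's API). Assumes a fitted black-box classifier with predict_proba.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(x, model, n_samples=1000, kernel_width=1.0, seed=0):
    """Fit a distance-weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)
    # Perturb the instance with Gaussian noise scaled per feature
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Kernel weights: perturbations closer to x matter more
    d = np.linalg.norm((Z - x) / scale, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Surrogate targets: the black box's probability of the positive class
    p = model.predict_proba(Z)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_  # local feature attributions

attributions = explain_locally(X[0], black_box)
top = np.argsort(np.abs(attributions))[::-1][:3]
print("top local features:", top)
```

The surrogate's coefficients answer "which features drove *this* prediction?" rather than describing the model globally, which is exactly the local/global distinction we'll unpack in the session.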
The session will also cover practical implementation using Python libraries such as scikit-learn, lime, and shap, empowering attendees to apply these methods to their own models. We'll further discuss the trade-offs between accuracy and explainability, helping practitioners strike a responsible balance.
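As a preview of what the shap library computes under the hood, Shapley values can be calculated exactly for a small number of features by enumerating feature coalitions. The sketch below does this with NumPy only, using the common choice of replacing "absent" features with a background mean (the function names are illustrative, not the shap library's API).

```python
# Exact Shapley values by coalition enumeration (what SHAP approximates
# efficiently). Illustrative sketch; feasible only for few features.
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(f, x, background):
    """Attribute f(x) to features; absent features take the background mean."""
    n = x.shape[0]
    base = background.mean(axis=0)

    def value(subset):
        z = base.copy()
        z[list(subset)] = x[list(subset)]
        return f(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy linear model: for linear f, Shapley values equal coef * (x - mean)
coef = np.array([2.0, -1.0, 0.5])
f = lambda z: float(coef @ z)
background = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
x = np.array([1.0, 3.0, 0.0])
phi = shapley_values(f, x, background)
print(phi)
```

The attributions sum to f(x) minus the baseline prediction, the "additive" property that makes SHAP explanations easy to sanity-check.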
Finally, I'll touch on the ethical considerations that arise from model opacity, such as algorithmic bias and the potential for unjust decision-making, and showcase how explainability techniques can serve as diagnostic tools for uncovering such issues.
Prerequisites: Intermediate Python; familiarity with Jupyter Notebook, scikit-learn, and pandas
Bio: Swagata is a data professional with over six years of experience in the healthcare, retail, and platform-integration industries. She is an avid blogger and writes about state-of-the-art developments in the AI space. She is particularly interested in Natural Language Processing and focuses on researching how to make NLP models work in practical settings. In her spare time, she loves to play her guitar, sip masala chai, and find new spots for yoga. Connect with her here – https://www.linkedin.com/in/swagata-ashwani/