Abstract: Enabling responsible development of artificial intelligence technologies is one of the major challenges we face as the field moves from research to practice. Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learning in many current and future real-world applications. There are now calls from across the field — academia, government, and industry leaders — for technology creators to ensure that AI is used only in ways that benefit people and "to engineer responsibility into the very fabric of the technology." Overcoming these challenges and enabling responsible development is essential to ensure a future where AI and machine learning can be widely used. In this talk we will discuss Responsible AI best practices you can apply across your machine learning lifecycle and share state-of-the-art open source tools you can incorporate to implement Responsible AI in practice.
Bio: Mehrnoosh Sameki is a principal PM manager at Microsoft, where she leads emerging Responsible AI technology and tools for the Azure Machine Learning platform. She cofounded Error Analysis, Fairlearn, and the Responsible AI Toolbox, and has been a contributor to the InterpretML offering. She earned her PhD in computer science at Boston University, where she currently serves as an adjunct assistant professor, offering courses in responsible AI. Previously, she was a data scientist in the retail space, applying data science and machine learning to enhance customers' personalized shopping experiences.