Responsible AI – State of the Art and Future Directions

Abstract: 

Enabling responsible development of artificial intelligence technologies is one of the major challenges we face as the field moves from research to practice. Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learning in many current and future real-world applications. There are now calls from across academia, government, and industry for technology creators to ensure that AI is used only in ways that benefit people and “to engineer responsibility into the very fabric of the technology.” Overcoming these challenges and enabling responsible development is essential to ensure a future where AI and machine learning can be widely used. In this talk we will cover six principles for the development and deployment of trustworthy AI systems: four core principles of fairness, reliability/safety, privacy/security, and inclusiveness, underpinned by two foundational principles of transparency and accountability. We present how each principle plays a key role in responsible AI and what it means to take these principles from theory to practice. We will cover open source products across different areas of the responsible AI umbrella, particularly transparency and interpretability for tabular and text data and AI fairness, that aim to empower researchers, data scientists, and machine learning developers to take a significant step forward in this space, building trust between users and AI systems.

Responsible AI is an umbrella term for many themes associated with the intersection of ethics and AI. One reasonable enumeration is Microsoft’s six principles for AI development: four core principles of fairness, reliability/safety, privacy/security, and inclusiveness, underpinned by two foundational principles of transparency and accountability. For this presentation, we focus on Transparency (Interpretability), Fairness and Inclusiveness, and Privacy as major principles of responsible AI, and we cover best practices and state-of-the-art open source toolkits and offerings that help researchers, data scientists, machine learning developers, and business stakeholders build trustworthy, more transparent AI systems; a brief interpretability sketch follows.
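
To make the interpretability piece concrete, here is a minimal sketch of what inspecting a tabular model’s explanations can look like. It assumes the open-source InterpretML package and synthetic data; the talk itself does not prescribe this particular toolkit or dataset, so treat the names below as illustrative.

# Minimal interpretability sketch, assuming the open-source InterpretML package
# (pip install interpret) and synthetic tabular data for illustration.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the attendee's own tabular dataset.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A glassbox model whose per-feature contributions can be read off directly.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global view: which features drive the model's predictions overall.
global_explanation = ebm.explain_global()
# Local view: why the model scored these particular rows the way it did.
local_explanation = ebm.explain_local(X_test[:5], y_test[:5])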

Attendees will leave the session with a basic understanding of responsible AI principles, best practices, and open source tools for the responsible development and deployment of AI systems. They will be able to incorporate the introduced tools and products into their machine learning life cycle, running them on previously trained models to understand the factors that went into model predictions, verify model fairness across protected attributes, and mitigate existing bias, along the lines of the sketch below.
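
As a hedged illustration of the fairness workflow described above (assess across a protected attribute, then mitigate), the sketch below uses the open-source Fairlearn package with a synthetic sensitive feature and a plain logistic-regression model; these choices are assumptions for illustration, not the talk’s prescribed setup.

# Minimal fairness-assessment sketch, assuming the open-source Fairlearn package
# (pip install fairlearn); the data, sensitive feature, and model are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic data with a binary sensitive feature standing in for a protected attribute.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
sensitive = np.random.RandomState(0).randint(0, 2, size=len(y))

# A previously trained model whose behavior we want to audit.
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Disaggregate accuracy and selection rate across the protected attribute.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y, y_pred=pred, sensitive_features=sensitive,
)
print(mf.by_group)      # per-group metrics
print(mf.difference())  # largest between-group gap for each metric

# Mitigate: retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
mitigated_pred = mitigator.predict(X)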

Bio: 

Ehi Nosakhare is a Data and Applied Scientist in the AI development and acceleration program at Microsoft. She designs, develops, and leads the implementation of machine learning (ML) solutions in application projects across Microsoft’s products and services. She is currently focused on developing a toolkit that enables text interpretability and machine learning transparency more broadly. Prior to Microsoft, she earned a Ph.D. in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology (MIT). She is very passionate about using ML to solve real-world problems and studying the ethical implications of ML/AI. In her spare time, she enjoys reading and re-learning to play the cello.
