Hands-On AI Risk Management: Utilizing the NIST AI RMF and LLMs

Abstract: 

As artificial intelligence (AI) systems evolve and integrate into diverse sectors, it becomes imperative to assess and address their associated risks. This workshop offers a comprehensive introduction to AI risk, presenting it in an accessible manner. We'll delve into how emerging standards and LLM tools are pivotal in identifying and managing these risks.

We'll explore the development of AI risk profiles, which are standardized pre-deployment disclosures highlighting key risks such as fairness, security, and societal impact. I will introduce a proposed risk taxonomy, encompassing vital risk categories relevant across various industries. These profiles play a significant role in AI governance across the entire value chain. For instance:

AI developers benefit from detailed risk profiling, which guides their technical safety work and documentation.
Those in procurement and deployment can leverage these profiles for more informed decision-making regarding acquisition and governance.
For regulators and policymakers, these standardized disclosures streamline the process of establishing accountability standards and overseeing compliance.
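
To make the disclosure format concrete, here is a minimal sketch of how such a risk profile might be represented in code; the field names, categories, and example content are illustrative assumptions, not an official Credo AI or NIST schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the fields and categories below are assumptions
# for this workshop, not an official risk-profile schema.
@dataclass
class RiskEntry:
    category: str              # e.g., "fairness", "security", "societal impact"
    description: str           # the concrete risk being disclosed
    severity: str              # e.g., "low" / "medium" / "high"
    mitigations: list[str] = field(default_factory=list)

@dataclass
class RiskProfile:
    system_name: str
    intended_use: str
    risks: list[RiskEntry] = field(default_factory=list)

profile = RiskProfile(
    system_name="Resume-screening assistant",
    intended_use="Rank job applicants for human review",
    risks=[
        RiskEntry(
            category="fairness",
            description="Rankings may differ systematically across demographic groups",
            severity="high",
            mitigations=["Disparate-impact testing before deployment"],
        )
    ],
)
```

Even this skeletal form shows how a shared taxonomy lets each audience above read the same disclosure for its own purposes.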
To provide a practical perspective, I will demonstrate the process of crafting risk profiles. We'll employ an approach centered on identifying risk scenarios, especially concerning fairness and other dimensions of responsible AI. This method is grounded in NIST's AI Risk Management Framework (AI RMF), supplemented by LLM tools and external documentation. Participants will engage in a hands-on session, delving into a pertinent use case, drafting a risk scenario, and pinpointing potential mitigation strategies.
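
As a sketch of what the scenario-centered method produces, the record below ties one drafted risk scenario to a NIST AI RMF core function (the framework's four functions are Govern, Map, Measure, and Manage); the fields and example content are again illustrative.

```python
# Hypothetical risk-scenario record for the hands-on exercise.
# The NIST AI RMF core functions (Govern, Map, Measure, Manage) are real;
# everything else here is illustrative workshop content.
risk_scenario = {
    "use_case": "LLM-powered customer-support chatbot",
    "scenario": "Chatbot gives incorrect refund-policy answers to non-English speakers",
    "risk_category": "fairness",
    "rmf_function": "MAP",  # where in the RMF lifecycle the scenario is surfaced
    "affected_stakeholders": ["non-English-speaking customers"],
    "candidate_mitigations": ["multilingual evaluation set", "human escalation path"],
}
```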

Session Outline:

Lesson 1: Introduction to AI Risk Management

Introduction to the concept of AI risk in the modern technological landscape.
Overview of a proposed risk taxonomy, with a dive into specific areas such as fairness, security, and societal impact.
Explore the significance of understanding and categorizing AI risks across different sectors.

Objective: By the end of this lesson, participants will have a comprehensive understanding of the AI risk taxonomy and its implications across various industries.

Lesson 2: Introduction to Standards and the NIST AI RMF
Understand the role of standards in AI risk management.
Introduction to the NIST AI RMF and its relevance in AI risk management.
Explore how standards, including the NIST AI RMF, guide the creation and application of AI risk profiles.
Brief look at Article 9 of the EU AI Act, which requires risk management practices for high-risk AI systems.
Objective: Participants will gain insights into the importance of standards in AI risk management and will be introduced to the NIST AI RMF.

Lesson 3: Practical Application with the NIST AI RMF and LLMs

Recap of the NIST AI RMF in preparation for hands-on application.
Explore how LLM support and external research aid risk profiling.
Workshop activity:
Read and discuss a few controls of the NIST AI RMF
Create a first-pass identification of risk scenarios based on a participant-selected use case
Support individual work by collaboratively discovering risk scenarios using an LLM. Starter prompts will be provided (an example sketch follows below).
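
For example, a starter prompt might look like the sketch below, which assumes the OpenAI Python client; any chat-capable LLM works equally well, and the prompt wording is an illustration rather than the workshop's official material.

```python
# Minimal sketch of LLM-assisted risk-scenario discovery. Assumes the OpenAI
# Python client (pip install openai) with OPENAI_API_KEY set; the prompt text
# and model choice are illustrative.
from openai import OpenAI

client = OpenAI()

STARTER_PROMPT = """You are assisting with AI risk management under the NIST AI RMF.
Use case: {use_case}
List five plausible risk scenarios covering fairness, security, and societal
impact. For each, name the affected stakeholders and give a one-line rationale."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": STARTER_PROMPT.format(
        use_case="An LLM that drafts replies for a bank's customer-support team"
    )}],
)
print(response.choices[0].message.content)
```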

Objective: Participants will gain hands-on experience with the NIST AI RMF and will be equipped with introductory knowledge for discovering AI risk scenarios.


Lesson 4: Mitigating AI Risk Scenarios with LLMs
Workshop activity:
Identify potential mitigation approaches (see the prompt sketch after this list)
Discuss the degree to which the mitigation is successful
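
One way to run this activity with an LLM is a follow-up prompt like the sketch below, under the same assumptions as the Lesson 3 sketch: the OpenAI Python client, with illustrative prompt wording and model choice.

```python
# Hypothetical follow-up prompt for the mitigation exercise; assumptions as in
# the Lesson 3 sketch (OpenAI Python client, illustrative prompt and model).
from openai import OpenAI

client = OpenAI()

MITIGATION_PROMPT = """For the risk scenario below, propose two or three mitigations.
For each, state what residual risk remains and how you would measure whether
the mitigation is working.

Scenario: {scenario}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": MITIGATION_PROMPT.format(
        scenario="Chatbot gives incorrect refund-policy answers to non-English speakers"
    )}],
)
print(response.choices[0].message.content)
```

Asking how a mitigation's effect would be measured, rather than stopping at a list of fixes, is what makes the loop iterative and risk-sensitive.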

Objective: By the end of this lesson, participants will have begun thinking about mitigation approaches and about developing an AI system in an iterative, risk-sensitive manner.

By the end of this workshop, participants will have begun to build the knowledge and tools needed to identify, assess, and mitigate AI risks using standardized profiles, emerging standards, and practical approaches grounded in the NIST AI RMF and LLM tools.

Background Knowledge:

Experience building and/or interacting with AI systems will be helpful.

Bio: 

Ian Eisenberg is Head of AI Governance Research at Credo AI, where he advances best practices in AI governance to support Credo AI's product and policy strategy. He is also the founder of the AI Salon, an SF-based group supporting conversations on the meaning and impact of AI. His interest in AI began as a cognitive neuroscientist at Stanford and developed into a focus on the sociotechnical challenges of AI technologies and reducing AI risk. Ian has been a researcher at Stanford, the NIH, Columbia, and Brown. He received his PhD from Stanford University and his BS from Brown University.
