Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks

06/17/2022
by Anthony M. Barrett, et al.

Artificial intelligence (AI) systems can provide many beneficial capabilities, but they also pose risks of adverse events. Some AI systems could present risks of events with very high or catastrophic consequences at societal scale. The US National Institute of Standards and Technology (NIST) is developing the NIST Artificial Intelligence Risk Management Framework (AI RMF) as voluntary guidance on AI risk assessment and management for AI developers and others. For addressing risks of events with catastrophic consequences, NIST has indicated a need to translate from high-level principles to actionable risk management guidance. In this document, we provide detailed actionable-guidance recommendations focused on identifying and managing risks of events with very high or catastrophic consequences, intended as a risk management practices resource for NIST for AI RMF version 1.0 (scheduled for release in early 2023), for AI RMF users, or for other AI risk management guidance and standards as appropriate. We also describe the methodology underlying our recommendations. We provide actionable-guidance recommendations for AI RMF 1.0 on: identifying risks from potential unintended uses and misuses of AI systems; including catastrophic-risk factors within the scope of risk assessments and impact assessments; identifying and mitigating human rights harms; and reporting information on AI risk factors, including catastrophic-risk factors. In addition, we provide recommendations on additional issues for a roadmap for later versions of the AI RMF or supplementary publications, including an AI RMF Profile with supplementary guidance for cutting-edge, increasingly multi-purpose or general-purpose AI. We aim for this work to be a concrete risk management practices contribution, and to stimulate constructive dialogue on how to address catastrophic risks and associated issues in AI standards.


