Indexing AI Risks with Incidents, Issues, and Variants

11/18/2022
by Sean McGregor, et al.

Two years after publicly launching the AI Incident Database (AIID) as a collection of harms or near harms produced by AI in the world, a backlog of "issues" that do not meet its incident ingestion criteria has accumulated in its review queue. Despite not passing the database's current criteria for incidents, these issues advance human understanding of where AI presents the potential for harm. Similar to databases in aviation and computer security, the AIID proposes to adopt a two-tiered system for indexing AI incidents (i.e., a harm or near harm event) and issues (i.e., a risk of a harm event). Further, because some machine learning-based systems will sometimes produce a large number of incidents, the notion of an incident "variant" is introduced. These proposed changes mark the transition of the AIID to a new version in response to lessons learned from editing 2,000+ incident reports and additional reports that fall under the new category of "issue."
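As a rough illustration only, the sketch below models the proposed record kinds (incident, issue, and incident variant) as a minimal Python data structure. The names `RecordType`, `Report`, and `triage` are hypothetical and do not reflect the AIID's actual schema or ingestion code; they simply restate the taxonomy described in the abstract.

```python
# Illustrative sketch, not the AIID schema: a minimal data model for the
# proposed two-tier taxonomy (incidents vs. issues) plus incident variants.
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RecordType(Enum):
    INCIDENT = "incident"  # a realized harm or near-harm event
    ISSUE = "issue"        # a risk of a harm event that has not been realized
    VARIANT = "variant"    # a further occurrence tied to an existing incident


@dataclass
class Report:
    record_type: RecordType
    title: str
    summary: str
    # Variants reference the incident they are grouped under.
    parent_incident_id: Optional[int] = None


def triage(report: Report, index: dict[RecordType, list[Report]]) -> None:
    """Route a submitted report into the matching index bucket."""
    if report.record_type is RecordType.VARIANT and report.parent_incident_id is None:
        raise ValueError("A variant must reference an existing incident.")
    index.setdefault(report.record_type, []).append(report)
```

Under this sketch, a submission documenting a realized harm would be filed as an INCIDENT, a report describing only a risk as an ISSUE, and a recurrence of an already indexed incident as a VARIANT linked through `parent_incident_id`.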
