Understanding and Avoiding AI Failures: A Practical Guide

04/22/2021
by Robert M. Williams, et al.

As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Drawing on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. In addition, we use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a single root cause, we identify where safety attention should be directed for current-generation AI systems.
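To make the abstract's two lenses concrete, here is a minimal Python sketch of how a deployed system might be indexed along an accident-proneness axis (coupling and complexity, in the spirit of normal accident theory) and an AI-specific axis (intelligence and human-likeness). The 0-10 scales, field names, and combination rule are illustrative assumptions for this sketch, not the paper's method.

from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative profile of a deployed AI system.

    All scores are hypothetical 0-10 ratings; the paper does not
    prescribe this scale or this combination rule.
    """
    name: str
    intelligence: float    # capability / autonomy of the system
    human_likeness: float  # degree of human-like qualities
    coupling: float        # tightness of coupling (normal accident theory)
    complexity: float      # interactive complexity (normal accident theory)

def risk_index(p: AISystemProfile) -> float:
    """Toy composite score: systems that are both tightly coupled and
    interactively complex (the normal-accident regime), and that are
    more intelligent and human-like, score higher."""
    accident_prone = p.coupling * p.complexity       # organizational axis
    ai_specific = p.intelligence + p.human_likeness  # AI-safety axis
    return accident_prone * (1.0 + ai_specific / 20.0)

if __name__ == "__main__":
    chatbot = AISystemProfile("customer-service bot", 4, 8, 3, 5)
    trader = AISystemProfile("algorithmic trader", 6, 1, 9, 8)
    for system in (chatbot, trader):
        print(f"{system.name}: risk index {risk_index(system):.1f}")

The point of the sketch is the shape of the analysis, not the numbers: attention goes to where a system sits relative to others, rather than to diagnosing a root cause after an accident.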

Related research:

06/13/2022
X-Risk Analysis for AI Research
Artificial intelligence (AI) has the potential to greatly improve societ...

10/07/2022
Mutual Theory of Mind for Human-AI Communication
From navigation systems to smart assistants, we communicate with various...

11/18/2022
Indexing AI Risks with Incidents, Issues, and Variants
Two years after publicly launching the AI Incident Database (AIID) as a ...

12/13/2022
Redefining Relationships in Music
AI tools increasingly shape how we discover, make and experience music. ...

05/30/2020
AI Research Considerations for Human Existential Safety (ARCHES)
Framed in positive terms, this report examines how technical AI research...

04/01/2022
Designing AI for Online-to-Offline Safety Risks with Young Women: The Context of Social Matching
In this position paper we draw attention to safety risks against youth a...

05/24/2023
A Game-Theoretic Framework for AI Governance
As a transformative general-purpose technology, AI has empowered various...
