AHA!: Facilitating AI Impact Assessment by Generating Examples of Harms

06/05/2023
by Zana Buçinca, et al.

While demands for change and accountability for harmful AI consequences mount, foreseeing the downstream effects of deploying AI systems remains a challenging task. We developed AHA! (Anticipating Harms of AI), a generative framework to assist AI practitioners and decision-makers in anticipating potential harms and unintended consequences of AI systems prior to development or deployment. Given an AI deployment scenario, AHA! generates descriptions of possible harms for different stakeholders. To do so, AHA! systematically considers the interplay between common problematic AI behaviors and their potential impacts on different stakeholders, and narrates these conditions through vignettes. These vignettes are then filled in with descriptions of possible harms by prompting crowd workers and large language models. By examining 4113 harms surfaced by AHA! for five different AI deployment scenarios, we found that AHA! generates meaningful examples of harms, with different problematic AI behaviors resulting in different types of harms. Prompting both crowds and a large language model with the vignettes resulted in more diverse examples of harms than those generated by either the crowd or the model alone. To gauge AHA!'s potential practical utility, we also conducted semi-structured interviews with responsible AI professionals (N=9). Participants found AHA!'s systematic approach to surfacing harms important for ethical reflection and discovered meaningful stakeholders and harms they believed they would not have thought of otherwise. Participants, however, differed in their opinions about whether AHA! should be used upfront or as a secondary check, and noted that AHA! may shift harm anticipation from an ideation problem to a potentially demanding review problem. Drawing on our results, we discuss design implications of building tools to help practitioners envision possible harms.
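The abstract describes AHA!'s pipeline at a high level: enumerate combinations of problematic AI behaviors and stakeholders for a given deployment scenario, render each combination as a fill-in-the-blank vignette, and then collect harm descriptions from crowd workers and a large language model. The sketch below illustrates only the vignette-generation step; the scenario, behavior list, stakeholder list, and template wording are illustrative assumptions, not the paper's actual taxonomy or prompts.

```python
# Minimal sketch of AHA!-style vignette generation (illustrative only).
# The behaviors, stakeholders, and template below are assumptions made
# for this example, not the paper's actual taxonomy or prompt wording.
from itertools import product

SCENARIO = "A city deploys an AI system to screen applications for public housing."

# Hypothetical examples of common problematic AI behaviors.
BEHAVIORS = [
    "produces systematically less accurate outputs for a minority group",
    "confidently returns wrong answers with no uncertainty estimate",
    "is easily gamed by users who learn its decision rules",
]

# Hypothetical stakeholders for this scenario.
STAKEHOLDERS = ["applicants", "caseworkers", "city officials"]

def make_vignette(scenario: str, behavior: str, stakeholder: str) -> str:
    """Narrate one (behavior, stakeholder) condition as a fill-in-the-blank vignette."""
    return (
        f"{scenario} Suppose the system {behavior}. "
        f"Describe a possible harm this could cause for {stakeholder}: ___"
    )

# Systematically cover the behavior x stakeholder interplay.
vignettes = [
    make_vignette(SCENARIO, b, s) for b, s in product(BEHAVIORS, STAKEHOLDERS)
]

for v in vignettes:
    # In the full pipeline, each vignette would be given to crowd workers
    # and to a large language model, and their completions collected as harms.
    print(v)
```

Crossing every behavior with every stakeholder is what makes the approach systematic: coverage does not depend on which combinations a practitioner happens to think of, though, as the interviewees note, it also produces a large set of candidate harms to review.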


Related research

- 06/01/2023 · AI and the creative realm: A short review of current and future applications. This study explores the concept of creativity and artificial intelligenc...
- 04/02/2023 · Towards Healthy AI: Large Language Models Need Therapists Too. Recent advances in large language models (LLMs) have led to the developm...
- 05/29/2023 · AI Audit: A Card Game to Reflect on Everyday AI Systems. An essential element of K-12 AI literacy is educating learners about the...
- 02/15/2022 · Predictability and Surprise in Large Generative Models. Large-scale pre-training has recently emerged as a technique for creatin...
- 08/09/2023 · Where's the Liability in Harmful AI Speech? Generative AI, in particular text-based "foundation models" (large model...
- 06/06/2022 · Understanding Machine Learning Practitioners' Data Documentation Perceptions, Needs, Challenges, and Desiderata. Data is central to the development and evaluation of machine learning (M...
- 12/21/2022 · Crowd Score: A Method for the Evaluation of Jokes using Large Language Model AI Voters as Judges. This paper presents the Crowd Score, a novel method to assess the funnin...
