Evaluating the Social Impact of Generative AI Systems in Systems and Society

by Irene Solaiman, et al.

Generative AI systems across modalities, spanning text, image, audio, and video, have broad social impacts, but there is no official standard for how those impacts should be evaluated or for which impacts should be evaluated. We move toward a standard approach for evaluating a generative AI system in any modality, in two overarching categories: what can be evaluated in a base system that has no predetermined application, and what can be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations, first in the base technical system and then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities, and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what can be evaluated in society, each with its own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.


From Human-Centered to Social-Centered Artificial Intelligence: Assessing ChatGPT's Impact through Disruptive Events

Large language models (LLMs) and dialogue agents have existed for years,...

Are You Worthy of My Trust?: A Socioethical Perspective on the Impacts of Trustworthy AI Systems on the Environment and Human Society

With ubiquitous exposure of AI systems today, we believe AI development ...

Art and the science of generative AI: A deeper dive

A new class of tools, colloquially called generative AI, can produce hig...

Algorithms as Social-Ecological-Technological Systems: an Environmental Justice Lens on Algorithmic Audits

This paper reframes algorithmic systems as intimately connected to and p...

Perspectives on the Social Impacts of Reinforcement Learning with Human Feedback

Is it possible for machines to think like humans? And if it is, how shou...

Generative AI trial for nonviolent communication mediation

Aiming for a mixbiotic society that combines freedom and solidarity amon...

A Network-centric Framework for Auditing Recommendation Systems

To improve the experience of consumers, all social media, commerce and e...
