DeepAI AI Chat
Log In Sign Up

Ethical and social risks of harm from Language Models

12/08/2021
by   Laura Weidinger, et al.
0

This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, V. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LLMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in-depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.

READ FULL TEXT

page 1

page 2

page 3

page 4

03/25/2019

Caveat emptor: the risks of using big data for human development

Big data revolution promises to be instrumental in facilitating sustaina...
09/15/2020

The Radicalization Risks of GPT-3 and Advanced Neural Language Models

In this paper, we expand on our previous research of the potential for a...
02/06/2023

Concrete Safety for ML Problems: System Safety for ML Development and Assessment

Many stakeholders struggle to make reliances on ML-driven systems due to...
02/16/2023

Auditing large language models: a three-layered approach

The emergence of large language models (LLMs) represents a major advance...
06/22/2020

Stablecoins 2.0: Economic Foundations and Risk-based Models

Stablecoins are one of the most widely capitalized type of cryptocurrenc...
06/11/2022

Toward a Commonly Shared Public Policy Perspective for Analyzing Risk Coping Strategies

The concept of risk has received scholarly attention from a variety of a...
11/30/2022

Risks to Zero Trust in a Federated Mission Partner Environment

Recent cybersecurity events have prompted the federal government to begi...