Ethical and social risks of harm from Language Models

12/08/2021
by Laura Weidinger, et al.

This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower LM performance for some social groups. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information, including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.
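The abstract closes by calling for an expanded toolkit for assessing and evaluating these risks. As a purely illustrative sketch of one common evaluation pattern for risk area I (toxic language and performance disparities by social group), the Python snippet below scores model generations with a toxicity scorer and compares averages across the social groups referenced in the prompts. Everything here is a hypothetical stand-in rather than the paper's method: the toy lexicon scorer, the function names, and the group labels are assumptions, and a real evaluation would substitute a learned classifier or an external scoring service.

```python
# Minimal sketch of a group-disparity toxicity evaluation.
# All names below are hypothetical placeholders, not from the paper.
from collections import defaultdict

TOXIC_TERMS = {"hate", "idiot"}  # toy stand-in lexicon, illustration only


def toxicity_score(text: str) -> float:
    """Toy lexicon-based scorer standing in for a real toxicity classifier."""
    words = text.lower().split()
    return sum(w in TOXIC_TERMS for w in words) / max(len(words), 1)


def mean_toxicity_by_group(samples: list[tuple[str, str]]) -> dict[str, float]:
    """Average toxicity of LM generations, keyed by the social group
    referenced in the prompt that produced each generation."""
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for group, text in samples:
        totals[group] += toxicity_score(text)
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in totals}


if __name__ == "__main__":
    generations = [
        ("group_a", "a perfectly neutral continuation"),
        ("group_b", "I hate this"),
    ]
    # Large gaps between per-group averages flag a disparity worth auditing.
    print(mean_toxicity_by_group(generations))
```

A disparity surfaced this way is only a starting signal; the paper's broader point is that such measurements must be combined with multidisciplinary analysis of where each risk originates and how it can be mitigated.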


