On the Amplification of Linguistic Bias through Unintentional Self-reinforcement Learning by Generative Language Models – A Perspective

06/12/2023
by Minhyeok Lee, et al.

Generative Language Models (GLMs) have the potential to significantly shape our linguistic landscape due to their expanding use across digital applications. However, this widespread adoption might inadvertently trigger a self-reinforcement learning cycle that amplifies existing linguistic biases. This paper explores the possibility of such a phenomenon, where the initial biases in GLMs, reflected in the text they generate, feed into the training material of subsequent models, thereby reinforcing and amplifying those biases. Moreover, the paper highlights how the pervasive nature of GLMs might influence the linguistic and cognitive development of future generations, who may unconsciously learn and reproduce these biases. The implications of this potential self-reinforcement cycle extend beyond the models themselves, affecting human language and discourse. The paper weighs the advantages and disadvantages of this bias amplification, balancing educational benefits and easier training of future GLMs against threats to linguistic diversity and dependence on the initial generation of models. It underscores the need for rigorous research to understand and address these issues, advocating for improved model transparency, bias-aware training techniques, methods to distinguish between human and GLM-generated text, and robust measures for fairness and bias evaluation in GLMs. The aim is to ensure the effective, safe, and equitable use of these powerful technologies while preserving the richness and diversity of human language.
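To make the feedback dynamic concrete, the following is a minimal toy simulation, not taken from the paper, of the cycle the abstract describes: each generation of models trains on the previous generation's output and slightly over-produces the majority linguistic variant. The two-variant setup and the amplification factor are illustrative assumptions standing in for a model's tendency to favor high-probability forms (e.g., via greedy or truncated sampling), not quantities estimated in the paper.

import numpy as np

# Toy model of the self-reinforcement cycle: generation N trains on the
# text produced by generation N-1. The (assumed) amplification factor
# encodes a model's tendency to over-produce high-probability forms.
def train_and_generate(corpus_share, rng, amplification=1.1, n_samples=100_000):
    p = min(1.0, corpus_share * amplification)  # model's skewed output probability
    generated = rng.random(n_samples) < p       # sample generated "text"
    return generated.mean()                     # majority-variant share in the output

rng = np.random.default_rng(0)
share = 0.60  # generation 0 trains on human text: the majority variant holds 60%
for gen in range(1, 9):
    share = train_and_generate(share, rng)
    print(f"generation {gen}: majority-variant share = {share:.3f}")

Under these assumptions the minority variant's share shrinks geometrically and vanishes within a handful of generations, which is exactly the loss of linguistic diversity the paper warns about.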
