The Radicalization Risks of GPT-3 and Advanced Neural Language Models

09/15/2020
by   Kris McGuffie, et al.

In this paper, we expand on our previous research into the potential for abuse of generative language models by assessing GPT-3. Experimenting with prompts representative of different types of extremist narratives, structures of social interaction, and radical ideologies, we find that GPT-3 demonstrates significant improvement over its predecessor, GPT-2, in generating extremist texts. We also show GPT-3's strength in generating text that accurately emulates interactive, informational, and influential content that could be utilized for radicalizing individuals into violent far-right extremist ideologies and behaviors. While OpenAI's preventative measures are strong, the possibility of unregulated copycat technology represents significant risk for large-scale online radicalization and recruitment; thus, in the absence of safeguards, successful and efficient weaponization requiring little experimentation is likely. AI stakeholders, the policymaking community, and governments should begin investing as soon as possible in building social norms, public policy, and educational initiatives to preempt an influx of machine-generated disinformation and propaganda. Mitigation will require effective policy and partnerships across industry, government, and civil society.

