The Radicalization Risks of GPT-3 and Advanced Neural Language Models

09/15/2020
by Kris McGuffie, et al.

In this paper, we expand on our previous research on the potential for abuse of generative language models by assessing GPT-3. Experimenting with prompts representative of different types of extremist narratives, structures of social interaction, and radical ideologies, we find that GPT-3 demonstrates significant improvement over its predecessor, GPT-2, in generating extremist texts. We also show GPT-3's strength in generating text that accurately emulates interactive, informational, and influential content that could be used to radicalize individuals into violent far-right extremist ideologies and behaviors. While OpenAI's preventative measures are strong, the possibility of unregulated copycat technology represents a significant risk of large-scale online radicalization and recruitment; in the absence of safeguards, successful and efficient weaponization requiring little experimentation is likely. AI stakeholders, the policymaking community, and governments should begin investing as soon as possible in building social norms, public policy, and educational initiatives to preempt an influx of machine-generated disinformation and propaganda. Mitigation will require effective policy and partnerships across industry, government, and civil society.
