A Cost Analysis of Generative Language Models and Influence Operations

08/07/2023
by Micah Musser, et al.

Despite speculation that recent large language models (LLMs) are likely to be used maliciously to improve the quality or scale of influence operations, uncertainty persists regarding the economic value that LLMs offer propagandists. This research constructs a model of the costs facing propagandists for content generation at scale and analyzes (1) the potential savings that LLMs could offer propagandists, (2) the potential deterrent effect of monitoring controls on API-accessible LLMs, and (3) the optimal strategy for propagandists choosing between multiple private and/or open source LLMs when conducting influence operations. Primary results suggest that LLMs need only produce usable outputs with relatively low reliability (roughly 25%) to offer cost savings to propagandists, that the potential reduction in content generation costs can be quite high (up to 70% for some models), and that monitoring capabilities have sharply limited cost imposition effects when alternative open source models are available. In addition, these results suggest that nation-states – even those conducting many large-scale influence operations per year – are unlikely to benefit economically from training custom LLMs specifically for use in influence operations.
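The kind of cost comparison the abstract describes can be sketched with a toy calculation. This is not the paper's actual model; the functions and all dollar figures below are hypothetical assumptions chosen only to illustrate how output reliability drives the break-even point between LLM-assisted and fully human content generation.

```python
# Illustrative sketch (hypothetical parameters, not the paper's model):
# expected per-post cost of LLM-assisted generation vs. human authorship.

def llm_cost_per_usable_post(infer_cost: float, review_cost: float,
                             reliability: float) -> float:
    """Expected cost to obtain one usable post when each LLM output is
    usable with probability `reliability` and every output must still be
    screened by a human reviewer."""
    return (infer_cost + review_cost) / reliability

def savings_vs_human(human_cost: float, infer_cost: float,
                     review_cost: float, reliability: float) -> float:
    """Fractional cost savings relative to paying a human author per post."""
    llm = llm_cost_per_usable_post(infer_cost, review_cost, reliability)
    return 1 - llm / human_cost

# Hypothetical numbers: human author $2.00/post, inference $0.002/output,
# human review $0.25/output.
for r in (0.10, 0.25, 0.50, 0.90):
    s = savings_vs_human(2.00, 0.002, 0.25, r)
    print(f"reliability={r:.2f}  savings={s:+.0%}")
```

Under these assumed numbers, savings turn positive well below perfect reliability, which mirrors the abstract's qualitative point that even unreliable models can be economical; the specific thresholds depend entirely on the assumed review and authorship costs.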


Related research:

- Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations (01/10/2023)
- Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance (05/26/2023)
- Automatic Generation of Programming Exercises and Code Explanations using Large Language Models (06/03/2022)
- Social Media Influence Operations (09/07/2023)
- Can Large Language Models assist in Hazard Analysis? (03/25/2023)
- Quantifying Association Capabilities of Large Language Models and Its Implications on Privacy Leakage (05/22/2023)
- Deception and the Strategy of Influence (11/02/2020)
