Release Strategies and the Social Impacts of Language Models

08/24/2019
by   Irene Solaiman, et al.

Large language models have a range of beneficial uses: they can assist in prose, poetry, and programming; analyze dataset biases; and more. However, their flexibility and generative capabilities also raise misuse concerns. This report discusses OpenAI's work related to the release of its GPT-2 language model. It discusses staged release, which allowed time between the release of successively larger models to conduct risk and benefit analyses. It also discusses ongoing partnership-based research and provides recommendations for better coordination and responsible publication in AI.

Related research

04/07/2023 · Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
As the capabilities of generative language models continue to advance, t...

11/18/2022 · Metadata Might Make Language Models Better
This paper discusses the benefits of including metadata when training la...

02/05/2023 · FineDeb: A Debiasing Framework for Language Models
As language models are increasingly included in human-facing machine lea...

02/08/2021 · How True is GPT-2? An Empirical Analysis of Intersectional Occupational Biases
The capabilities of natural language models trained on large-scale data ...

11/28/2022 · The Myth of Culturally Agnostic AI Models
The paper discusses the potential of large vision-language models as obj...

01/28/2023 · Truth Machines: Synthesizing Veracity in AI Language Models
As AI technologies are rolled out into healthcare, academia, human resou...

07/09/2023 · Shaping the Emerging Norms of Using Large Language Models in Social Computing Research
The emergence of Large Language Models (LLMs) has brought both excitemen...
