Release Strategies and the Social Impacts of Language Models

08/24/2019
by Irene Solaiman, et al.

Large language models have a range of beneficial uses: they can assist in prose, poetry, and programming; analyze dataset biases; and more. However, their flexibility and generative capabilities also raise misuse concerns. This report discusses OpenAI's work related to the release of its GPT-2 language model. It discusses staged release, which allows time between model releases to conduct risk and benefit analyses as model sizes increase. It also discusses ongoing partnership-based research and provides recommendations for better coordination and responsible publication in AI.
