OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs

09/07/2023
by Patrick Haller, et al.

Instruction-tuned Large Language Models (LLMs) have recently showcased a remarkable ability to generate fitting responses to natural language instructions. However, an open research question concerns the inherent biases of trained models and their responses. For instance, if the data used to tune an LLM is predominantly written by persons with a specific political bias, we might expect generated answers to share this bias. Current research seeks to de-bias such models or to suppress potentially biased answers. With this demonstration, we take a different view on biases in instruction-tuning: rather than aiming to suppress them, we aim to make them explicit and transparent. To this end, we present OpinionGPT, a web demo in which users can ask questions and select all biases they wish to investigate. The demo answers each question using a model fine-tuned on text representing each of the selected biases, allowing side-by-side comparison. To train the underlying model, we identified 11 different biases (political, geographic, gender, and age) and derived an instruction-tuning corpus in which each answer was written by members of one of these demographics. This paper presents OpinionGPT, illustrates how we trained the bias-aware model, and showcases the web application (available at https://opiniongpt.informatik.hu-berlin.de).
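To make the bias conditioning concrete, below is a minimal sketch of how such side-by-side querying could look. It assumes the bias is encoded as a tag prepended to the instruction template and uses a hypothetical checkpoint name (my-org/opiniongpt-sketch); the paper's actual prompt format, fine-tuning setup, and model weights may differ.

```python
# Minimal sketch of bias-conditioned generation with a fine-tuned causal LM.
# The bias tag, prompt template, and checkpoint name are illustrative
# assumptions, not OpinionGPT's actual implementation.
from transformers import AutoModelForCausalLM, AutoTokenizer

def build_prompt(bias: str, instruction: str) -> str:
    # Prepend an explicit bias tag so a model tuned on tagged data
    # conditions its answer on the selected demographic.
    return f"<bias: {bias}>\n### Instruction:\n{instruction}\n### Response:\n"

# Hypothetical fine-tuned checkpoint; substitute a real one.
tok = AutoTokenizer.from_pretrained("my-org/opiniongpt-sketch")
model = AutoModelForCausalLM.from_pretrained("my-org/opiniongpt-sketch")

question = "Should college education be free?"
for bias in ["liberal", "conservative"]:  # two of the 11 biases, for comparison
    inputs = tok(build_prompt(bias, question), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
    answer = tok.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(f"[{bias}] {answer}\n")
```

Under the same assumption, training would wrap each corpus answer in this template with the tag of the demographic that wrote it, so that the tag alone selects the bias at inference time.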
