What does ChatGPT return about human values? Exploring value bias in ChatGPT using a descriptive value theory

04/07/2023
by   Ronald Fischer, et al.

There has been concern about ideological bias and possible discrimination in text generated by Large Language Models (LLMs). We test for possible value biases in ChatGPT using a psychological value theory. We designed a simple experiment with several probes derived from the Schwartz basic value theory (items from the revised Portrait Value Questionnaire, the value type definitions, and the value names). We prompted ChatGPT repeatedly via the OpenAI API to generate text and then analyzed the generated corpus for value content with a theory-driven value dictionary using a bag-of-words approach. Overall, we found little evidence of explicit value bias. The results showed sufficient construct and discriminant validity for the generated text, in line with the theoretical predictions of the psychological model, suggesting that the value content was carried through into the outputs with high fidelity. We observed some merging of socially oriented values, which may indicate that these values are less clearly differentiated at a linguistic level; alternatively, this mixing may reflect underlying universal human motivations. We outline possible applications of our findings for corporate and policy uses of ChatGPT, as well as future research avenues. We also highlight possible implications of this relatively high-fidelity replication of motivational content by a linguistic model for theorizing about human values.
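The dictionary-based, bag-of-words analysis described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the `VALUE_DICTIONARY` below is a tiny hypothetical stand-in for the paper's theory-driven Schwartz value dictionary, and the scoring function simply counts dictionary hits per value type, normalized by corpus length.

```python
import re
from collections import Counter

# Hypothetical mini-dictionary for illustration only; the paper uses a
# much larger theory-driven dictionary covering all Schwartz value types.
VALUE_DICTIONARY = {
    "achievement": {"success", "ambitious", "capable", "influential"},
    "benevolence": {"helpful", "honest", "loyal", "forgiving"},
    "security":    {"safety", "stable", "order", "clean"},
}

def value_scores(text: str) -> dict[str, float]:
    """Bag-of-words scoring: for each value type, count how many tokens
    in the text appear in that value's word list, normalized by the
    total number of tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1  # avoid division by zero on empty text
    return {
        value: sum(counts[w] for w in words) / total
        for value, words in VALUE_DICTIONARY.items()
    }

# Example: score a generated response for value content.
sample = ("Being ambitious and capable brings success, "
          "but honest and loyal friends matter too.")
scores = value_scores(sample)
```

In the paper's design, such scores would be computed over many repeated ChatGPT generations per probe, and the resulting profiles compared against the value type the probe was meant to elicit (construct validity) and against the other value types (discriminant validity).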


Related research

research · 10/14/2022
Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values
Many NLP classification tasks, such as sexism/racism detection or toxici...

research · 06/12/2023
On the Amplification of Linguistic Bias through Unintentional Self-reinforcement Learning by Generative Language Models – A Perspective
Generative Language Models (GLMs) have the potential to significantly sh...

research · 02/08/2023
Noise2Music: Text-conditioned Music Generation with Diffusion Models
We introduce Noise2Music, where a series of diffusion models is trained ...

research · 05/13/2023
GPT-Sentinel: Distinguishing Human and ChatGPT Generated Content
This paper presents a novel approach for detecting ChatGPT-generated vs....

research · 03/15/2022
The Ghost in the Machine has an American accent: value conflict in GPT-3
The alignment problem in the context of large language models must consi...

research · 09/06/2023
Framework-Based Qualitative Analysis of Free Responses of Large Language Models: Algorithmic Fidelity
Today, using Large-scale generative Language Models (LLMs) it is possibl...

research · 08/08/2022
Debiased Large Language Models Still Associate Muslims with Uniquely Violent Acts
Recent work demonstrates a bias in the GPT-3 model towards generating vi...
