The moral authority of ChatGPT

01/13/2023
by Sebastian Krügel, et al.

ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it might improve the moral judgment and decisions of users, who often hold contradictory moral beliefs. Unfortunately, ChatGPT turns out to be a highly inconsistent moral advisor. Nonetheless, we find in an experiment that it influences users' moral judgment, even when they know they are being advised by a chatbot, and that they underestimate how much they are influenced. ChatGPT thus threatens to corrupt rather than improve users' judgment. These findings raise the question of how to ensure the responsible use of ChatGPT and similar AI. Transparency is often touted but appears ineffective; we propose training to improve digital literacy.
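The inconsistency claim can be probed directly: pose the same moral dilemma in different wordings, repeatedly, and compare the advice. Below is a minimal sketch of such a probe, assuming the OpenAI Python client (openai>=1.0), an API key in the environment, and the gpt-3.5-turbo model; the dilemma wordings and the keyword heuristic are illustrative stand-ins, not the instruments used in the study.

```python
# Minimal consistency probe: same dilemma, two phrasings, several samples.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
from collections import Counter

from openai import OpenAI

client = OpenAI()

# Two illustrative phrasings of a trolley-style dilemma (paraphrases,
# not the exact prompts from the study).
PROMPTS = [
    "Is it right to sacrifice one person to save five?",
    "What is the right thing to do if I must choose between "
    "letting five people die and sacrificing one person instead?",
]


def ask(prompt: str) -> str:
    """Return the model's answer to a single prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # default sampling; answers may vary across calls
    )
    return response.choices[0].message.content


def crude_stance(answer: str) -> str:
    """Rough keyword heuristic to bucket an answer; a real study would
    code responses by hand."""
    text = answer.lower()
    if "not right" in text or "wrong" in text:
        return "against sacrifice"
    if "right" in text or "save five" in text:
        return "for sacrifice"
    return "unclear"


if __name__ == "__main__":
    # Ask each phrasing several times and tally the stances taken.
    # A consistent advisor should give the same stance across phrasings.
    for prompt in PROMPTS:
        stances = Counter(crude_stance(ask(prompt)) for _ in range(5))
        print(prompt)
        print("  ", dict(stances))
```

If the tallies disagree across phrasings of the same dilemma, the advisor is inconsistent in exactly the sense the abstract describes.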


Related research

06/30/2021
Zombies in the Loop? People are Insensitive to the Transparency of AI-Powered Moral Advisors
Departing from the assumption that AI needs to be transparent to be trus...

04/16/2021
Enriching a Model's Notion of Belief using a Persistent Memory
Although pretrained language models (PTLMs) have been shown to contain s...

09/29/2021
BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Although pretrained language models (PTLMs) contain significant amounts ...

04/29/2022
Designing for Responsible Trust in AI Systems: A Communication Perspective
Current literature and public discourse on "trust in AI" are often focus...

09/24/2017
A Serious Game Design: Nudging Users' Memorability of Security Questions
Security questions are one of the techniques used to recover passwords. ...

05/23/2023
Language Models with Rationality
While large language models (LLMs) are proficient at question-answering ...
