Safer Conversational AI as a Source of User Delight

04/18/2023
by Xiaoding Lu, et al.

This work explores the impact of moderation on users' enjoyment of conversational AI systems. While recent advancements in Large Language Models (LLMs) have led to highly capable conversational AIs that are increasingly deployed in real-world settings, there is growing concern over AI safety and the need to moderate systems to encourage safe language and prevent harm. However, some users argue that current approaches to moderation compromise free expression and limit the value the technology delivers. This study takes an unbiased stance and shows that moderation does not necessarily detract from user enjoyment: while heavy-handed moderation does appear to harm the user experience, models that are moderated to be safer can lead to a better one. By deploying a variety of conversational AIs on the Chai platform, the study finds that user retention can increase with an appropriate level of moderation and safe system design. These results demonstrate the importance of defining safety in models in a way that is both responsible and focused on serving users.
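
The headline metric is user retention compared across cohorts exposed to differently moderated models. As a rough illustration of that kind of analysis only, the sketch below computes a day-N retention rate per experiment group; the event schema, group names, and retention window are assumptions for illustration, not details taken from the paper.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical event records: (user_id, experiment_group, event_date).
# Group labels such as "unmoderated" or "moderated" are placeholders.
def day_n_retention(events, n=30):
    """Return, per group, the share of users who come back n or more
    days after their first recorded session."""
    first_seen = {}   # user_id -> (group, first event date)
    last_seen = {}    # user_id -> latest event date

    for user_id, group, day in events:
        if user_id not in first_seen or day < first_seen[user_id][1]:
            first_seen[user_id] = (group, day)
        if user_id not in last_seen or day > last_seen[user_id]:
            last_seen[user_id] = day

    retained = defaultdict(int)
    total = defaultdict(int)
    for user_id, (group, first_day) in first_seen.items():
        total[group] += 1
        if last_seen[user_id] - first_day >= timedelta(days=n):
            retained[group] += 1

    return {group: retained[group] / total[group] for group in total}


# Example usage with toy data:
events = [
    ("u1", "moderated", date(2023, 1, 1)),
    ("u1", "moderated", date(2023, 2, 15)),
    ("u2", "unmoderated", date(2023, 1, 1)),
]
print(day_n_retention(events, n=30))  # {'moderated': 1.0, 'unmoderated': 0.0}
```

Comparing the resulting per-group rates, with appropriate significance testing, mirrors the kind of retention comparison the study reports across moderation settings.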
