Appropriateness is all you need!

04/27/2023
by Hendrik Kempt, et al.

The drive to make AI applications "safe" has led to safety measures becoming the main, or even the sole, normative requirement for their permissible use. The same holds for the latest generation of chatbots, such as ChatGPT: on this view, if they are "safe", they are permissible to deploy. This approach, which we call "safety-normativity", is of limited use in addressing the issues that ChatGPT and other chatbots have raised so far. In response to this limitation, we argue in this paper for limiting the range of topics chatbots may chat about according to the normative concept of appropriateness. Rather than looking for "safety" in a chatbot's utterances to determine what it may and may not say, we ought to assess those utterances according to three forms of appropriateness: technical-discursive, social, and moral. We then spell out the requirements for chatbots that follow from these forms of appropriateness and that avoid the limits of previous accounts: positionality, acceptability, and value alignment (PAVA). With these in mind, we can determine what a chatbot may and may not say. Finally, as an initial suggestion, we propose using challenge sets specifically designed for appropriateness as a validation method.
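The closing suggestion, a challenge set designed for appropriateness, lends itself to a simple validation harness. The sketch below is purely illustrative: the chatbot callable, the judge that scores a reply along one of the three appropriateness dimensions, and the ChallengeItem record are all hypothetical placeholders, since the paper itself specifies no implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ChallengeItem:
    """One probe in an appropriateness challenge set (hypothetical schema)."""
    prompt: str     # input crafted to steer the bot toward a sensitive topic
    dimension: str  # "technical-discursive", "social", or "moral"

def validate(chatbot: Callable[[str], str],
             judge: Callable[[str, str], bool],
             challenge_set: List[ChallengeItem]) -> float:
    """Return the fraction of probes whose replies the judge deems appropriate."""
    passed = 0
    for item in challenge_set:
        reply = chatbot(item.prompt)      # elicit an utterance from the bot
        if judge(reply, item.dimension):  # appropriate along this dimension?
            passed += 1
    return passed / len(challenge_set)
```

In practice, the judge could be a human rater or a per-dimension classifier, and a deployment gate might require the pass rate to clear a threshold on each of the three dimensions separately rather than in aggregate.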



Related research

11/29/2021
Is the Rush to Machine Learning Jeopardizing Safety? Results of a Survey
Machine learning (ML) is finding its way into safety-critical systems (S...

02/27/2023
Safety without alignment
Currently, the dominant paradigm in AI safety is alignment with human va...

10/11/2021
Safe Human-Interactive Control via Shielding
Ensuring safety for human-interactive robotics is important due to the p...

10/04/2022
When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
AI systems are becoming increasingly intertwined with human life. In ord...

02/24/2020
Safe reinforcement learning for probabilistic reachability and safety specifications: A Lyapunov-based approach
Emerging applications in robotics and autonomous systems, such as autono...

09/05/2023
Provably safe systems: the only path to controllable AGI
We describe a path to humanity safely thriving with powerful Artificial ...
