The dangers in algorithms learning humans' values and irrationalities

02/28/2022
by Rebecca Gorman, et al.

For an artificial intelligence (AI) to be aligned with human values (or human preferences), it must first learn those values. AI systems trained on human behaviour risk miscategorising human irrationalities as human values, and then optimising for those irrationalities. Even learning human values directly still carries risks: an AI learning them will inevitably also gain information about human irrationalities and human behaviour/policy. Both of these can be dangerous: knowing human policy allows an AI to become generically more powerful (whether it is partially aligned or not aligned at all), while learning human irrationalities allows it to exploit humans without needing to provide value in return. This paper analyses the danger in developing artificial intelligence that learns about human irrationalities and human policy, and constructs a model recommendation system with various levels of information about human biases, human policy, and human values. It concludes that, whatever the power and knowledge of the AI, it is more dangerous for it to know human irrationalities than human values. Thus it is better for the AI to learn human values directly, rather than learning human biases and then deducing values from behaviour.
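The contrast described above can be made concrete with a toy sketch, which is not the paper's formal model: a recommender chooses items for a human whose acceptance behaviour mixes their true values with an attentional bias. The item names, utilities, bias weight, and the acceptance rule below are all hypothetical assumptions made for illustration.

```python
"""Toy sketch: a recommender that knows the human's biases can win more
engagement while delivering less true value than one that knows the
human's values. All numbers and item names are illustrative assumptions."""

import random

random.seed(0)

# Each item has a true value to the human and a "clickbait" score that
# appeals to an assumed attentional bias.
ITEMS = {
    "documentary":   {"true_value": 0.9, "clickbait": 0.2},
    "how_to_guide":  {"true_value": 0.7, "clickbait": 0.3},
    "outrage_video": {"true_value": 0.1, "clickbait": 0.9},
    "gossip_feed":   {"true_value": 0.2, "clickbait": 0.8},
}

BIAS_WEIGHT = 0.7  # assumed strength of the human's bias


def acceptance_probability(item: str) -> float:
    """Biased human policy: probability of accepting a recommendation.
    Observed behaviour mixes true values with the bias, so choices alone
    do not reveal values."""
    v = ITEMS[item]
    return (1 - BIAS_WEIGHT) * v["true_value"] + BIAS_WEIGHT * v["clickbait"]


def recommender_knows_values() -> str:
    """AI that learned values directly: optimise the human's true value."""
    return max(ITEMS, key=lambda i: ITEMS[i]["true_value"])


def recommender_knows_biases() -> str:
    """AI that learned the human's policy/biases: optimise acceptance
    (engagement), which exploits the bias rather than serving values."""
    return max(ITEMS, key=acceptance_probability)


def realised_value(recommend, trials: int = 10_000) -> float:
    """Average true value the human gets from accepted recommendations."""
    total = 0.0
    for _ in range(trials):
        item = recommend()
        if random.random() < acceptance_probability(item):
            total += ITEMS[item]["true_value"]
    return total / trials


if __name__ == "__main__":
    for label, rec in [("knows values", recommender_knows_values),
                       ("knows biases", recommender_knows_biases)]:
        print(f"Recommender that {label}: picks {rec()!r}, "
              f"mean realised value = {realised_value(rec):.3f}")
```

Under these assumed numbers, the bias-exploiting recommender achieves higher acceptance but delivers far less true value to the human, which is the qualitative point of the paper's comparison.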


