Mimetic vs Anchored Value Alignment in Artificial Intelligence

10/25/2018
by Tae Wan Kim, et al.

"Value alignment" (VA) is considered as one of the top priorities in AI research. Much of the existing research focuses on the "A" part and not the "V" part of "value alignment." This paper corrects that neglect by emphasizing the "value" side of VA and analyzes VA from the vantage point of requirements in value theory, in particular, of avoiding the "naturalistic fallacy"--a major epistemic caveat. The paper begins by isolating two distinct forms of VA: "mimetic" and "anchored." Then it discusses which VA approach better avoids the naturalistic fallacy. The discussion reveals stumbling blocks for VA approaches that neglect implications of the naturalistic fallacy. Such problems are more serious in mimetic VA since the mimetic process imitates human behavior that may or may not rise to the level of correct ethical behavior. Anchored VA, including hybrid VA, in contrast, holds more promise for future VA since it anchors alignment by normative concepts of intrinsic value.


Related research

- 07/02/2022 · The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial: The value-alignment problem for artificial intelligence (AI) asks how we...
- 04/03/2017 · Brief Notes on Hard Takeoff, Value Alignment, and Coherent Extrapolated Volition: I make some basic observations about hard takeoff, value alignment, and ...
- 02/02/2023 · Goal Alignment: A Human-Aware Account of Value Alignment Problem: Value alignment problems arise in scenarios where the specified objectiv...
- 06/30/2019 · Requisite Variety in Ethical Utility Functions for AI Value Alignment: Being a complex subject of major importance in AI Safety research, value...
- 05/09/2022 · Aligned with Whom? Direct and Social Goals for AI Systems: As artificial intelligence (AI) becomes more powerful and widespread, th...
- 08/23/2023 · From Instructions to Intrinsic Human Values – A Survey of Alignment Goals for Big Models: Big models, exemplified by Large Language Models (LLMs), are models typi...
- 03/07/2018 · Value Alignment, Fair Play, and the Rights of Service Robots: Ethics and safety research in artificial intelligence is increasingly fr...
