The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial

07/02/2022
by Travis LaCroix

The value-alignment problem for artificial intelligence (AI) asks how we can ensure that the 'values' (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems, or, more loftily, to design robustly beneficial or ethical artificial agents.

Related research:

04/11/2022
Metaethical Perspectives on 'Benchmarking' AI Ethics
Benchmarks are seen as the cornerstone for measuring technical progress ...

03/26/2021
Alignment of Language Agents
For artificial intelligence to be beneficial to humans the behaviour of ...

10/25/2018
Mimetic vs Anchored Value Alignment in Artificial Intelligence
"Value alignment" (VA) is considered as one of the top priorities in AI ...

06/26/2019
Norms for Beneficial A.I.: A Computational Analysis of the Societal Value Alignment Problem
The rise of artificial intelligence (A.I.) based systems has the potenti...

12/08/2017
AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values
We propose the creation of a systematic effort to identify and replicate...

11/16/2017
From Algorithmic Black Boxes to Adaptive White Boxes: Declarative Decision-Theoretic Ethical Programs as Codes of Ethics
Ethics of algorithms is an emerging topic in various disciplines such as...

09/16/2020
Value Alignment Equilibrium in Multiagent Systems
Value alignment has emerged in recent years as a basic principle to prod...
