Mammalian Value Systems

07/28/2016
by Gopal P. Sarma, et al.

Characterizing human values is a topic deeply interwoven with the sciences, humanities, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research suggests that sophisticated AI systems may become widespread and responsible for managing many aspects of the modern world, from preemptively planning users' travel schedules and logistics, to fully autonomous vehicles, to domestic robots assisting in daily living. The extrapolation of these trends has been most forcefully described in the context of a hypothetical "intelligence explosion," in which the capabilities of an intelligent software agent would rapidly increase due to the presence of feedback loops unavailable to biological organisms. The possibility of superintelligent agents, or simply the widespread deployment of sophisticated, autonomous AI systems, highlights an important theoretical problem: the need to separate the cognitive and rational capacities of an agent from its fundamental goal structure, or value system, which constrains and guides the agent's actions. The "value alignment problem" is to specify a goal structure for autonomous agents that is compatible with human values. In this brief article, we suggest that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian kingdom provide important conceptual foundations relevant to describing human values. We argue that the notion of "mammalian value systems" points to a potential avenue for fundamental research in AI safety and AI ethics.


