Brief Notes on Hard Takeoff, Value Alignment, and Coherent Extrapolated Volition

04/03/2017
by Gopal P. Sarma

I make some basic observations about hard takeoff, value alignment, and coherent extrapolated volition, concepts which have been central in analyses of superintelligent AI systems.
