
Universal Empathy and Ethical Bias for Artificial General Intelligence

08/03/2013
by Alexey Potapov, et al.

Rational agents are usually built to maximize rewards. However, an AGI agent can find undesirable ways of maximizing any predefined reward function, so value learning is crucial for safe AGI. We assume that generalized states of the world, not rewards themselves, are what is valuable, and we propose an extension of AIXI in which rewards are used only to bootstrap hierarchical value learning. The modified AIXI agent is considered in a multi-agent environment, where the other agents can be humans or other "mature" agents, whose values the "infant" AGI agent should reveal and adopt. A general framework for designing such an empathic agent with an ethical bias is also proposed as an extension of the universal intelligence model. Finally, we perform experiments in a simple Markov environment that demonstrate the feasibility of our approach to value learning for safe AGI.
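
For context, the standard AIXI agent (Hutter's universal intelligence model) selects actions by expectimax over a Solomonoff-weighted mixture of environments, summing future rewards. One hedged reading of the proposal above is that, once value learning has been bootstrapped, those reward terms are replaced by a learned value function over generalized world states; the symbols V and s_i below are our notation, not the paper's.

```latex
% Standard AIXI expectimax (Hutter): at step k, pick the action maximizing
% total reward up to horizon m under the universal prior 2^{-\ell(q)}.
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left[ r_k + \cdots + r_m \right]
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

% Hedged sketch of the modification described in the abstract: rewards are
% used only to bootstrap a value function V over generalized states s_i,
% which then replaces them in the objective (V and s_i are assumed notation).
a_k = \arg\max_{a_k} \sum_{o_k} \cdots \max_{a_m} \sum_{o_m}
      \left[ V(s_k) + \cdots + V(s_m) \right]
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 \ldots o_m} 2^{-\ell(q)}
```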

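The value-revelation setting can be made concrete with a toy model. The sketch below is not the paper's algorithm: it shows an "infant" agent recovering the hidden state values of a "mature" agent from observed behaviour in a simple Markov environment, under an assumed softmax behaviour model (the chain environment, beta, and all names are illustrative).

```python
# Minimal sketch (not the paper's algorithm): an "infant" agent infers the
# hidden state values of a "mature" agent from observed behaviour in a small
# deterministic Markov environment, by maximum-likelihood fitting under an
# assumed softmax behaviour model. All names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 6
ACTIONS = (-1, +1)                      # move left / right along a chain

def step(s, a):
    """Deterministic chain dynamics, clipped at both ends."""
    return min(max(s + a, 0), N_STATES - 1)

true_values = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # hidden from infant

def softmax_policy(values, s, beta=3.0):
    """Mature agent: softmax over the values of successor states."""
    prefs = np.array([values[step(s, a)] for a in ACTIONS])
    p = np.exp(beta * prefs)
    return p / p.sum()

# 1. Collect demonstrations from the mature agent.
demos = []
s = rng.integers(N_STATES)
for _ in range(2000):
    p = softmax_policy(true_values, s)
    a_idx = rng.choice(len(ACTIONS), p=p)
    demos.append((s, a_idx))
    s = step(s, ACTIONS[a_idx])
    if rng.random() < 0.1:              # occasional reset for state coverage
        s = rng.integers(N_STATES)

# 2. Infant agent: gradient ascent on the softmax log-likelihood of the
#    demonstrations with respect to an estimated value vector.
est = np.zeros(N_STATES)
beta, lr = 3.0, 0.05
for _ in range(300):
    grad = np.zeros(N_STATES)
    for s, a_idx in demos:
        p = softmax_policy(est, s, beta)
        for j, a in enumerate(ACTIONS):
            chosen = 1.0 if j == a_idx else 0.0
            grad[step(s, a)] += beta * (chosen - p[j])
    est += lr * grad / len(demos)

est -= est.min()                        # softmax only identifies values up
if est.max() > 0:                       # to an additive shift, so normalize
    est /= est.max()                    # before comparing with the truth
print("true     :", np.round(true_values, 2))
print("recovered:", np.round(est, 2))
```

Under a softmax model the values are identifiable only up to an additive shift, hence the normalization before comparison; an AIXI-style agent would carry this uncertainty in its mixture over value hypotheses rather than fitting a point estimate.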

Related research

06/10/2020 · Learning to Incentivize Other Learning Agents
The challenge of developing powerful and general Reinforcement Learning ...

03/29/2021 · Shaping Advice in Deep Multi-Agent Reinforcement Learning
Multi-agent reinforcement learning involves multiple agents interacting ...

04/21/2022 · Path-Specific Objectives for Safer Agent Incentives
We present a general framework for training safe agents whose naive ince...

12/04/2017 · A path to AI
To build a safe system that would replicate and perhaps transcend human-...

07/27/2011 · Time Consistent Discounting
A possibly immortal agent tries to maximise its summed discounted reward...

07/03/2021 · QKSA: Quantum Knowledge Seeking Agent
In this article we present the motivation and the core thesis towards th...

01/13/2016 · Analysis of Algorithms and Partial Algorithms
We present an alternative methodology for the analysis of algorithms, ba...