Abstractions of General Reinforcement Learning

12/26/2021
by Sultan J. Majeed, et al.

The field of artificial intelligence (AI) is devoted to the creation of artificial decision-makers that can perform (at least) on par with their human counterparts on a domain of interest. Unlike the agents of traditional AI, an artificial general intelligence (AGI) agent is required to replicate human intelligence in almost every domain of interest, and it should be able to do so without (virtually) any further changes, retraining, or fine-tuning of its parameters. The real world is non-stationary, non-ergodic, and non-Markovian: we humans can neither revisit our past, nor are our most recent observations sufficient statistics. Yet we excel at a variety of complex tasks, many of which require long-term planning. We can attribute this success to our natural faculty for abstracting away task-irrelevant information from our overwhelming sensory experience: we build task-specific mental models of the world with little effort, and thanks to this ability to abstract, we can plan on a significantly more compact representation of a task without much loss of performance. Moreover, we also abstract our actions to produce high-level plans; the level of action abstraction can lie anywhere between small muscle movements and a mental notion of "doing an action". It is natural to assume that any AGI agent competing with humans (in every plausible domain) should also have these abilities to abstract its experiences and actions. This thesis is an inquiry into the existence of such abstractions, which aid efficient planning across a wide range of domains and, most importantly, come with optimality guarantees.
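To make the notion of abstraction slightly more concrete, one common formalisation in general reinforcement learning (a rough sketch only, not necessarily the exact definitions used in the thesis; the symbols φ, Q*, ε and the value-uniformity condition are introduced here for illustration) maps agent histories to a compact state space and asks that optimal action values be approximately preserved across histories mapped to the same abstract state:

    \varphi : \mathcal{H} \to \mathcal{S}, \qquad
    \bigl| Q^{*}(h, a) - Q^{*}(h', a) \bigr| \le \varepsilon
    \quad \text{whenever } \varphi(h) = \varphi(h'), \; \forall a \in \mathcal{A}.

Under a condition of roughly this kind, an agent that plans on the abstract state space S, rather than on the full, ever-growing history space H, can lose at most an ε-dependent amount of value; making such guarantees precise for general (non-Markovian, non-ergodic) environments is the kind of result the thesis pursues.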

