Thinking Fast and Slow in AI: the Role of Metacognition

AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, and prediction. Moreover, while these successes can be credited to improved algorithms and techniques, they are also tightly linked to the availability of huge datasets and computational power. State-of-the-art AI still lacks many capabilities that would naturally be included in a notion of (human) intelligence. We argue that a better study of the mechanisms that allow humans to have these capabilities can help us understand how to imbue AI systems with these competencies. We focus especially on Daniel Kahneman's theory of thinking fast and slow, and we propose a multi-agent AI architecture where incoming problems are solved either by System 1 (or "fast") agents, which react by exploiting only past experience, or by System 2 (or "slow") agents, which are deliberately activated when there is a need to reason and search for optimal solutions beyond what is expected from the System 1 agent. Both kinds of agents are supported by a model of the world, containing domain knowledge about the environment, and a model of "self", containing information about the system's past actions and the solvers' skills.
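
The routing idea described above lends itself to a short illustration. The snippet below is a minimal, hypothetical Python sketch of such an architecture, not the authors' implementation: the class names (MetacognitiveController, System1Agent, System2Agent, ModelOfSelf), the confidence threshold, and the lookup-based "experience" are all assumptions introduced here only to show how a metacognitive layer might first try a fast, experience-based solver and fall back to a deliberate, search-based solver when confidence is low.

```python
# Minimal sketch (illustrative only, not the paper's implementation) of routing
# problems between a "fast" System 1 agent and a "slow" System 2 agent.
# All names and thresholds below are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ModelOfSelf:
    """Records past decisions and per-solver skill estimates (illustrative)."""
    history: list = field(default_factory=list)
    skill: dict = field(default_factory=lambda: {"system1": 0.5, "system2": 0.9})


class System1Agent:
    """'Fast' solver: reacts using only past experience (here, a lookup of seen cases)."""
    def __init__(self):
        self.experience = {}

    def solve(self, problem):
        # Return a remembered solution with high confidence; low confidence if unseen.
        if problem in self.experience:
            return self.experience[problem], 0.9
        return None, 0.1


class System2Agent:
    """'Slow' solver: deliberately reasons/searches for a solution (stubbed here)."""
    def solve(self, problem, world_model):
        # Placeholder for explicit search over the model of the world.
        solution = f"searched-solution({problem})"
        return solution, 0.95


class MetacognitiveController:
    """Decides whether the System 1 answer is good enough or System 2 must be engaged."""
    def __init__(self, world_model, confidence_threshold=0.7):
        self.s1 = System1Agent()
        self.s2 = System2Agent()
        self.world_model = world_model
        self.self_model = ModelOfSelf()
        self.threshold = confidence_threshold

    def solve(self, problem):
        solution, confidence = self.s1.solve(problem)
        # Activate the slow agent only when the fast answer is not trusted enough.
        if solution is None or confidence < self.threshold:
            solution, confidence = self.s2.solve(problem, self.world_model)
            # Let System 1 learn from System 2's deliberation for future reuse.
            self.s1.experience[problem] = solution
        self.self_model.history.append((problem, solution, confidence))
        return solution


if __name__ == "__main__":
    controller = MetacognitiveController(world_model={"domain": "toy"})
    print(controller.solve("route A->B"))   # first call: falls back to System 2
    print(controller.solve("route A->B"))   # second call: answered by System 1
```

One design choice reflected in this sketch is letting System 1 cache System 2's solutions so that repeated problems are answered quickly; in the architecture described above, the model of "self" would additionally track each solver's skills over time to inform that routing decision.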
