Artificial Open World for Evaluating AGI: a Conceptual Design

06/02/2022
by Bowen Xu, et al.

How to evaluate Artificial General Intelligence (AGI) is a critical problem that has been discussed for a long time yet remains unsolved. In narrow AI research this does not seem to be a severe issue, since researchers in that field focus on specific problems and on one or a few aspects of cognition, and the criteria for evaluation are explicitly defined. By contrast, an AGI agent should solve problems that neither the agent nor its developers have encountered before. However, once a developer tests and debugs the agent on a problem, the never-encountered problem becomes an encountered one; as a result, the problem is, to some extent, solved by the developers exploiting their own experience rather than by the agent. This conflict, which we call the trap of developers' experience, makes it hard for this kind of problem to become an acknowledged evaluation criterion. In this paper, we propose an evaluation method named the Artificial Open World, aiming to escape this trap. The intuition is that most experience from the actual world should not be applicable to the artificial world, and the world should be open in some sense, so that developers are unable to perceive the world and solve its problems by themselves before testing, although afterwards they are allowed to check all the data. The world is generated in a way analogous to the actual world, a general form of problems is proposed, and a metric is proposed to quantify research progress. This paper describes the conceptual design of the Artificial Open World; the formalization and the implementation are left to future work.
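To make the intended evaluation protocol more concrete, the sketch below gives one possible reading of it in Python: a toy world is procedurally generated from a seed that the developer does not see before testing, the agent is scored on problems drawn from that world, and a single metric is aggregated across several independently generated worlds. This is a minimal sketch under assumptions of ours; all names (Problem, generate_world, evaluate_agent, the scoring formula) are hypothetical illustrations, not the paper's formalization, which is explicitly left to future work.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of the evaluation loop suggested by the abstract:
# worlds are generated from seeds hidden from the developer before testing,
# so the agent, not the developer's experience, has to solve the problems.

@dataclass
class Problem:
    observation: List[float]  # what the agent perceives
    target: float             # hidden answer, used only for scoring

def generate_world(seed: int, n_problems: int = 10) -> List[Problem]:
    """Procedurally generate a toy 'world' as a set of problems."""
    rng = random.Random(seed)
    problems = []
    for _ in range(n_problems):
        obs = [rng.uniform(-1.0, 1.0) for _ in range(4)]
        target = sum(obs)  # stand-in for a world-specific regularity
        problems.append(Problem(obs, target))
    return problems

def evaluate_agent(agent: Callable[[List[float]], float],
                   seeds: List[int]) -> float:
    """Average score over independently generated worlds (a toy 'metric')."""
    scores = []
    for seed in seeds:
        world = generate_world(seed)
        errors = [abs(agent(p.observation) - p.target) for p in world]
        scores.append(1.0 / (1.0 + sum(errors) / len(errors)))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Seeds would only be revealed to developers after the run; here we sample them.
    hidden_seeds = [random.randrange(10**9) for _ in range(5)]
    baseline = lambda obs: 0.0  # agent that ignores the world entirely
    print("baseline score:", evaluate_agent(baseline, hidden_seeds))
```

The key design point the sketch tries to capture is that the developer interacts only with the aggregate score, never with a fixed benchmark instance they could tune against beforehand.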

Related research

- Open Ended Intelligence: The individuation of Intelligent Agents (05/23/2015). Artificial General Intelligence is a field of research aiming to distill...
- Dynamic Models Applied to Value Learning in Artificial Intelligence ("Modelos dinâmicos aplicados à aprendizagem de valores em inteligência artificial", 07/30/2020). Experts in Artificial Intelligence (AI) development predict that advance...
- Agents for Automated User Experience Testing (04/13/2021). The automation of functional testing in software has allowed developers...
- Problems in AI research and how the SP System may help to solve them (09/02/2020). This paper describes problems in AI research and how the SP System may h...
- Polycraft World AI Lab (PAL): An Extensible Platform for Evaluating Artificial Intelligence Agents (01/27/2023). As artificial intelligence research advances, the platforms used to eval...
- A path to AI (12/04/2017). To build a safe system that would replicate and perhaps transcend human-...
- Bridging Trustworthiness and Open-World Learning: An Exploratory Neural Approach for Enhancing Interpretability, Generalization, and Robustness (08/07/2023). As researchers strive to narrow the gap between machine intelligence and...
