OMNI: Open-endedness via Models of human Notions of Interestingness

06/02/2023
by Jenny Zhang, et al.

Open-ended algorithms aim to learn new, interesting behaviors forever. That requires a vast environment search space, which thus contains infinitely many possible tasks. Even after filtering for tasks the current agent can learn (i.e., learning progress), countless learnable yet uninteresting tasks remain (e.g., minor variations of previously learned tasks). An Achilles' heel of open-endedness research is the inability to quantify (and thus prioritize) tasks that are not just learnable, but also interesting (e.g., worthwhile and novel). We propose solving this problem with Open-endedness via Models of human Notions of Interestingness (OMNI). The insight is that we can use large (language) models (LMs) as a model of interestingness (MoI), because they already internalize human concepts of interestingness from training on vast amounts of human-generated data, in which humans naturally write about what they find interesting or boring. We show that LM-based MoIs improve open-ended learning by focusing on tasks that are both learnable and interesting, outperforming baselines based on uniform task sampling or learning progress alone. This approach has the potential to dramatically advance the ability to intelligently select which tasks to focus on next (i.e., auto-curricula), and could be seen as AI selecting its own next task to learn, facilitating self-improving AI and AI-Generating Algorithms.
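The selection mechanism the abstract describes can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's implementation: `learning_progress` is a simple success-rate-change proxy, `interestingness` is a hard-coded stand-in for the LM-based MoI (the actual system prompts a language model with the agent's task history), and the task names and histories are invented for the example.

```python
def learning_progress(successes):
    """Learning-progress proxy: absolute change between the older and
    newer halves of a task's recent success history (0/1 outcomes)."""
    if len(successes) < 4:
        return 0.0
    mid = len(successes) // 2
    old = sum(successes[:mid]) / mid
    new = sum(successes[mid:]) / (len(successes) - mid)
    return abs(new - old)

def interestingness(task, boring_tasks):
    """Hypothetical stand-in for the LM-based model of interestingness:
    tasks the MoI has flagged as minor variants of already-learned
    tasks score 0, everything else scores 1."""
    return 0.0 if task in boring_tasks else 1.0

def next_task(histories, boring_tasks):
    """Auto-curriculum step in the spirit of OMNI: prioritize tasks that
    are both learnable (high learning progress) and interesting (per
    the MoI)."""
    scores = {task: learning_progress(hist) * interestingness(task, boring_tasks)
              for task, hist in histories.items()}
    return max(scores, key=scores.get)

histories = {
    "chop tree": [1, 1, 1, 1],               # mastered: no learning progress
    "chop tree, facing north": [0, 0, 1, 1], # learnable, but a boring variant
    "mine coal": [0, 0, 1, 1],               # learnable and novel
}
print(next_task(histories, {"chop tree, facing north"}))  # → mine coal
```

The key design point is the product of the two signals: learning progress alone would rate the boring variant as highly as the novel task, while the MoI alone would favor novel tasks the agent cannot yet make progress on.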


