Intrinsically motivated graph exploration using network theories of human curiosity

07/11/2023
by Shubhankar P. Patankar, et al.

Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards. When the environment is naturally represented as a graph, how best to guide exploration remains an open question. In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: information gap theory and compression progress theory. These theories view curiosity as an intrinsic motivation to optimize for topological features of the subgraphs induced by the visited nodes in the environment. We use the proposed features as rewards for graph neural-network-based reinforcement learning. On multiple classes of synthetically generated graphs, we find that trained agents generalize to larger environments and to longer exploratory walks than those seen during training. Our method is also more computationally efficient than greedy evaluation of the relevant topological properties. The proposed intrinsic motivations bear particular relevance for recommender systems. We demonstrate that curiosity-based recommendations are more predictive of human behavior than PageRank centrality on several real-world graph datasets, including MovieLens, Amazon Books, and Wikispeedia.
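The core idea above is that, as an agent walks a graph, each newly visited node induces a subgraph whose topology can be scored and used as an intrinsic reward. The sketch below illustrates this loop with NetworkX; the edge density of the induced subgraph is used as an illustrative stand-in for a topological feature, not the curiosity-theoretic rewards the paper actually proposes.

```python
import networkx as nx


def intrinsic_reward(graph: nx.Graph, visited: list[int]) -> float:
    """Score the subgraph induced by the visited nodes.

    Edge density is a placeholder topological feature; the paper's
    rewards are derived from information gap and compression
    progress theories instead.
    """
    sub = graph.subgraph(visited)
    n = sub.number_of_nodes()
    if n < 2:
        return 0.0
    return sub.number_of_edges() / (n * (n - 1) / 2)


# Toy exploratory walk on a 6-node cycle graph.
G = nx.cycle_graph(6)
walk = [0, 1, 2]  # nodes visited so far
reward = intrinsic_reward(G, walk)  # induced subgraph has 2 of 3 possible edges
```

In the full method, a reward of this form would be fed to a graph-neural-network policy at each step of the walk, so the agent learns to choose next nodes that improve the topological feature rather than maximizing any extrinsic signal.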

