Exploration and Incentives in Reinforcement Learning

02/28/2021
by Max Simchowitz, et al.

How do you incentivize self-interested agents to explore when they prefer to exploit? We consider complex exploration problems in which each agent faces the same (but unknown) MDP. In contrast with traditional formulations of reinforcement learning, agents control the choice of policies, while the algorithm can only issue recommendations. However, the algorithm controls the flow of information and can incentivize the agents to explore via information asymmetry. We design an algorithm that explores all reachable states in the MDP, and we achieve provable guarantees similar to those for incentivizing exploration in the static, stateless exploration problems studied previously.
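As a rough illustration of this recommendation-based setup (not the paper's algorithm), the sketch below mixes occasional exploratory recommendations into mostly exploitative ones in a toy tabular MDP; agents only ever see the recommendation itself, which is the information asymmetry the abstract refers to. The names TabularMDP and Recommender, the chain environment, and the explore_prob mixing rate are all illustrative assumptions, not anything taken from the paper.

```python
import random
from collections import defaultdict


class TabularMDP:
    """Tiny chain MDP: states 0..n-1, actions {0: move left, 1: move right}.
    An illustrative toy environment, not the paper's setting."""

    def __init__(self, n_states=5, horizon=10):
        self.n_states, self.horizon = n_states, horizon

    def step(self, state, action):
        if action == 1 and state < self.n_states - 1:
            next_state = state + 1
        else:
            next_state = max(state - 1, 0)
        reward = 1.0 if next_state == self.n_states - 1 else 0.0
        return next_state, reward


class Recommender:
    """Issues per-state action recommendations. Because agents only see the
    recommendation, and not whether it was exploratory, a small exploration
    probability can be mixed into otherwise exploitative recommendations."""

    def __init__(self, n_states, n_actions=2, explore_prob=0.1):
        self.visits = defaultdict(int)    # (state, action) visit counts
        self.value = defaultdict(float)   # crude running reward estimates
        self.n_actions = n_actions
        self.explore_prob = explore_prob  # illustrative mixing rate

    def recommend(self, state):
        if random.random() < self.explore_prob:
            # Exploration: point the agent at the least-tried action.
            return min(range(self.n_actions), key=lambda a: self.visits[(state, a)])
        # Exploitation: recommend the empirically best action so far.
        return max(range(self.n_actions), key=lambda a: self.value[(state, a)])

    def update(self, state, action, reward):
        self.visits[(state, action)] += 1
        n = self.visits[(state, action)]
        self.value[(state, action)] += (reward - self.value[(state, action)]) / n


def run_episodes(n_episodes=200):
    mdp, rec = TabularMDP(), Recommender(n_states=5)
    reached = set()
    for _ in range(n_episodes):
        state = 0
        for _ in range(mdp.horizon):
            action = rec.recommend(state)  # agent follows the recommendation
            next_state, reward = mdp.step(state, action)
            rec.update(state, action, reward)
            reached.add(next_state)
            state = next_state
    return reached


if __name__ == "__main__":
    print("states reached:", sorted(run_episodes()))
```

Running the sketch prints the set of states visited; with enough episodes the occasional exploratory recommendations drive visits to every reachable state of the toy chain, which mirrors, in a very simplified way, the reachability guarantee described in the abstract.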


Related research

09/01/2021 - A Survey of Exploration Methods in Reinforcement Learning
Exploration is an essential component of reinforcement learning algorith...

11/29/2016 - Exploration for Multi-task Reinforcement Learning with Deep Generative Models
Exploration in multi-task reinforcement learning is critical in training...

09/07/2021 - On the impact of MDP design for Reinforcement Learning agents in Resource Management
The recent progress in Reinforcement Learning applications to Resource M...

03/10/2020 - Explore and Exploit with Heterotic Line Bundle Models
We use deep reinforcement learning to explore a class of heterotic SU(5)...

07/01/2019 - Designing Deep Reinforcement Learning for Human Parameter Exploration
Software tools for generating digital sound often present users with hig...

06/14/2021 - Targeted Data Acquisition for Evolving Negotiation Agents
Successful negotiators must learn how to balance optimizing for self-int...

03/25/2021 - Improving Playtesting Coverage via Curiosity Driven Reinforcement Learning Agents
As modern games continue growing both in size and complexity, it has bec...
