Reinforcement Learning of Simple Indirect Mechanisms

by Gianluca Brero, et al.

We introduce the use of reinforcement learning for indirect mechanisms, working with the existing class of sequential price mechanisms, which generalizes both serial dictatorship and posted price mechanisms and essentially characterizes all strongly obviously strategyproof mechanisms. Learning an optimal mechanism within this class can be formulated as a partially observable Markov decision process. We provide rigorous conditions under which this class of mechanisms is more powerful than simpler static mechanisms, under which observation statistics are sufficient or insufficient for learning, and under which complex (deep) policies are necessary. We show that our approach can learn optimal or near-optimal mechanisms in several experimental settings.
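To make the setting concrete, here is a minimal sketch of a single-item sequential price mechanism as an episodic decision process. All names (`run_sequential_price_mechanism`, `fixed_price_policy`) and the specific interface are hypothetical illustrations, not the paper's implementation: a policy observes only the public history of offers and accept/reject responses, chooses which agent to approach next and what price to post, and the episode ends when an agent accepts or no agents remain.

```python
def run_sequential_price_mechanism(values, policy):
    """Run one episode of a (hypothetical) single-item sequential price
    mechanism: the policy picks which agent to approach next and what
    price to post; the first accepting agent buys the item.

    values : dict agent -> private value (hidden from the policy)
    policy : function(history, remaining_agents) -> (agent, price)
    """
    history = []          # public observations: (agent, price, accepted)
    remaining = set(values)
    while remaining:
        agent, price = policy(history, remaining)
        # Accepting a posted price iff value >= price is a dominant
        # strategy, which is why these mechanisms are strategyproof.
        accepted = values[agent] >= price
        history.append((agent, price, accepted))
        remaining.discard(agent)
        if accepted:
            return agent, price, history    # item sold
    return None, 0.0, history               # item unsold

# Example: a static policy that visits agents in a fixed order
# and always posts the same price.
def fixed_price_policy(history, remaining):
    return min(remaining), 0.5

winner, revenue, hist = run_sequential_price_mechanism(
    {1: 0.3, 2: 0.7, 3: 0.9}, fixed_price_policy)
# agent 1 rejects (0.3 < 0.5); agent 2 accepts at price 0.5
```

A learned policy would replace `fixed_price_policy` with one that conditions the next offer on the rejections observed so far, which is exactly where the partial-observability structure enters: the policy never sees `values`, only `history`.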



