
Settling the Sample Complexity of Single-parameter Revenue Maximization

04/10/2019
by Chenghao Guo et al.

This paper settles the sample complexity of single-parameter revenue maximization by showing matching upper and lower bounds, up to a poly-logarithmic factor, for all families of value distributions that have been considered in the literature. The upper bounds are unified under a novel framework, which builds on the strong revenue monotonicity of Devanur, Huang, and Psomas (STOC 2016) and an information-theoretic argument. This is fundamentally different from previous approaches, which rely on either constructing an ϵ-net of the mechanism space, explicitly or implicitly via statistical learning theory, or learning an approximately accurate version of the virtual values. To our knowledge, this is the first time information-theoretic arguments have been used to show sample complexity upper bounds rather than lower bounds. Our lower bounds are also unified under a meta construction of hard instances.
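To make the sample-based setting concrete, the sketch below shows the simplest single-parameter instance: one bidder whose value is drawn i.i.d. from an unknown distribution, and a seller who picks a posted (reserve) price from samples by empirical revenue maximization. This is only an illustration of the learning problem; the function name `empirical_reserve_price` and the exponential value distribution are assumptions made here for the example, and this is not the framework or the mechanism developed in the paper.

```python
import numpy as np

def empirical_reserve_price(samples):
    """Empirical Revenue Maximization (ERM) for a single bidder.

    Among the observed sample values, pick the posted price that maximizes
    price * (empirical probability that the bidder's value is >= price).
    Illustrative only; not the paper's mechanism or analysis.
    """
    values = np.sort(np.asarray(samples, dtype=float))
    n = len(values)
    # If the price equals values[i] (sorted ascending), then n - i samples
    # are at least that price, so the empirical sale probability is (n - i)/n.
    survival = (n - np.arange(n)) / n
    revenues = values * survival
    best = int(np.argmax(revenues))
    return values[best], revenues[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical example: values drawn i.i.d. from an Exp(1) distribution,
    # for which the true optimal posted price is 1 with revenue 1/e ≈ 0.368.
    samples = rng.exponential(scale=1.0, size=10_000)
    price, emp_rev = empirical_reserve_price(samples)
    print(f"empirical reserve price ≈ {price:.3f}, empirical revenue ≈ {emp_rev:.3f}")
```

Sample complexity questions of the kind settled in the paper ask how many such samples are needed before the learned mechanism's expected revenue is within a (1 - ϵ) factor of the optimum, uniformly over a family of value distributions.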
