
Comments on the Du-Kakade-Wang-Yang Lower Bounds

11/18/2019
by Benjamin Van Roy, et al.

Du, Kakade, Wang, and Yang recently established intriguing lower bounds on sample complexity, which suggest that reinforcement learning with a misspecified representation is intractable. Another line of work, which centers around a statistic called the eluder dimension, establishes tractability of problems similar to those considered in the Du-Kakade-Wang-Yang paper. We compare these results and reconcile interpretations.
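
The abstract refers to the eluder dimension without defining it. For context, a standard definition is sketched below, following Russo and Van Roy (2013), where the statistic was introduced; the notation ($\mathcal{F}$, $\mathcal{X}$, $\epsilon$) is supplied here and does not appear in the abstract.

Given a class $\mathcal{F}$ of functions from $\mathcal{X}$ to $\mathbb{R}$, a point $x \in \mathcal{X}$ is $\epsilon$-dependent on $x_1, \dots, x_n \in \mathcal{X}$ with respect to $\mathcal{F}$ if every pair $f, \tilde{f} \in \mathcal{F}$ satisfying
$$
\sqrt{\sum_{i=1}^{n} \big(f(x_i) - \tilde{f}(x_i)\big)^2} \;\le\; \epsilon
$$
also satisfies $|f(x) - \tilde{f}(x)| \le \epsilon$; otherwise $x$ is $\epsilon$-independent of $x_1, \dots, x_n$. The eluder dimension $\dim_E(\mathcal{F}, \epsilon)$ is the length of the longest sequence of elements of $\mathcal{X}$ in which every element is $\epsilon'$-independent of its predecessors for some $\epsilon' \ge \epsilon$. Roughly, it bounds how many times a learner can be "eluded": how often the value at a new point can remain far from determined even after observations that tightly constrain the function class elsewhere.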
