Comments on the Du-Kakade-Wang-Yang Lower Bounds

11/18/2019
by Benjamin Van Roy, et al.

Du, Kakade, Wang, and Yang recently established intriguing lower bounds on sample complexity, which suggest that reinforcement learning with a misspecified representation is intractable. Another line of work, which centers around a statistic called the eluder dimension, establishes tractability of problems similar to those considered in the Du-Kakade-Wang-Yang paper. We compare these results and reconcile interpretations.
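
For readers unfamiliar with the statistic, here is a minimal sketch of the eluder dimension, following the definition of Russo and Van Roy (2013); the notation below is illustrative rather than a verbatim restatement. Given a function class $\mathcal{F}$ over a domain $\mathcal{X}$, an element $x \in \mathcal{X}$ is said to be $\epsilon$-dependent on $x_1, \ldots, x_n \in \mathcal{X}$ if every pair $f, \tilde{f} \in \mathcal{F}$ satisfies

\[
\sqrt{\sum_{i=1}^{n} \bigl(f(x_i) - \tilde{f}(x_i)\bigr)^2} \le \epsilon
\quad\Longrightarrow\quad
\bigl|f(x) - \tilde{f}(x)\bigr| \le \epsilon,
\]

and $\epsilon$-independent otherwise. The $\epsilon$-eluder dimension $\dim_E(\mathcal{F}, \epsilon)$ is the length of the longest sequence in $\mathcal{X}$ such that each element is $\epsilon'$-independent of its predecessors for some $\epsilon' \ge \epsilon$. The line of work referenced above bounds regret in terms of this quantity; for $d$-dimensional linear function classes it grows as $O(d \log(1/\epsilon))$, which is the sense in which problems resembling those in the Du-Kakade-Wang-Yang paper can nonetheless be tractable.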
