Value function approximation is important in modern reinforcement learni...
We study regret minimization for reinforcement learning (RL) in Latent M...
It is well-known that the worst-case minimax regret for sparse linear ba...
Recently, there has been a surge of interest in understanding the horizon-depen...
Designing provably efficient algorithms with general function approximat...
A fundamental question in the theory of reinforcement learning is: suppo...
This work introduces Bilinear Classes, a new structural framework, which...
In offline reinforcement learning (RL), we seek to utilize offline data ...
Offline reinforcement learning seeks to utilize offline (observational) ...
We study planning with submodular objective functions, where instead of ...
Reward-free reinforcement learning (RL) is a framework which is suitable...
Preference-based Reinforcement Learning (PbRL) replaces reward values in...
We give a row sampling algorithm for the quantile loss function with sam...
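For context on the quantile loss named in the truncated entry above, a minimal statement of the standard pinball loss, assuming quantile level τ ∈ (0, 1) and residual r; the regression form with data (A, b) and parameter x is an illustrative assumption, not taken from the paper:
\[
\rho_\tau(r) = \begin{cases} \tau\, r, & r \ge 0,\\ (\tau - 1)\, r, & r < 0, \end{cases}
\qquad
\min_{x} \sum_{i=1}^{n} \rho_\tau\bigl(b_i - a_i^\top x\bigr).
\]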
Value function approximation has demonstrated phenomenal empirical succe...
Learning to plan for long horizons is a central challenge in episodic re...
We study how to use unsupervised learning for efficient exploration in r...
The current paper studies the problem of agnostic Q-learning with functi...
We design a new provably efficient algorithm for episodic reinforcement ...
Recent research shows that for training with ℓ_2 loss, convolutional neu...
A fundamental challenge in artificial intelligence is to build an agent ...
Modern deep learning methods provide an effective means to learn good re...
We provide efficient algorithms for overconstrained linear regression pr...
Recent research shows that the following two models are equivalent: (a) ...
Q-learning with function approximation is one of the most popular method...
We consider the communication complexity of a number of distributed opti...
While graph kernels (GKs) are easy to train and enjoy provable theoretic...
We give the first dimensionality reduction methods for the overconstrain...
How well does a classic deep net architecture like AlexNet or VGG19 clas...
In the subspace sketch problem one is given an n × d matrix A with O((nd)...
Recent works have cast some light on the mystery of why deep nets fit an...
The polynomial method from circuit complexity has been applied to severa...
An ℓ_p oblivious subspace embedding is a distribution over r × n matrice...
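As hedged background for the truncated definition above (the standard formulation, not necessarily the paper's exact statement): a distribution over r × n matrices Π is an ℓ_p oblivious subspace embedding with distortion 1 ± ε if, for every fixed n × d matrix A,
\[
\Pr_{\Pi}\Bigl[\,(1-\varepsilon)\,\|Ax\|_p \le \|\Pi A x\|_p \le (1+\varepsilon)\,\|Ax\|_p \ \text{ for all } x \in \mathbb{R}^d \,\Bigr] \ge 2/3,
\]
where Π is drawn without seeing A (hence "oblivious"); the constant 2/3 is one common convention for the success probability.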
We study the combinatorial pure exploration problem Best-Set in stochast...