Optimal Gradient-based Algorithms for Non-concave Bandit Optimization

07/09/2021
by Baihe Huang, et al.

Bandit problems with linear or concave rewards have been extensively studied, but relatively few works have studied bandits with non-concave rewards. This work considers a large family of bandit problems in which the unknown underlying reward function is non-concave, including the low-rank generalized linear bandit problem and the two-layer neural network bandit problem with polynomial activations. For the low-rank generalized linear bandit problem, we provide an algorithm that is minimax-optimal in the dimension, refuting conjectures in both [LMT21] and [JWWN19]. Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality and attains optimal rates (in the dimension) in several structured polynomial settings. We further demonstrate the applicability of our algorithms to RL in the generative model setting, resulting in improved sample complexity over prior approaches. Finally, we show that standard optimistic algorithms (e.g., UCB) are sub-optimal by dimension factors. In the neural network setting (with polynomial activation functions) with noiseless rewards, we provide a bandit algorithm whose sample complexity equals the intrinsic algebraic dimension. Again, we show that optimistic approaches have worse sample complexity, polynomial in the extrinsic dimension (which can be exponentially worse in the polynomial degree).
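To illustrate the general idea behind zeroth-order optimization with bandit (function-value) feedback, the sketch below implements a standard two-point gradient estimator along random Gaussian probe directions. This is a generic textbook construction, not the paper's algorithm: the function names, step sizes, and the toy reward are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def zeroth_order_ascent(reward, x0, steps=3000, mu=1e-2, lr=1e-2):
    """Generic two-point zeroth-order gradient ascent.

    Only function values of `reward` are queried (bandit feedback);
    the gradient is estimated by a finite difference along a random
    Gaussian direction u, since E[(u . grad) u] = grad.
    """
    x = np.array(x0, dtype=float)
    d = x.size
    for _ in range(steps):
        u = rng.standard_normal(d)  # random probe direction
        # two bandit queries, finite difference along u
        g = (reward(x + mu * u) - reward(x - mu * u)) / (2 * mu) * u
        x += lr * g  # ascend the estimated gradient
    return x

# Toy reward with a known maximizer (for illustration only):
target = np.array([1.0, -0.5, 0.3, 0.7, -0.2])
reward = lambda x: -np.sum((x - target) ** 2)

x_hat = zeroth_order_ascent(reward, np.zeros(5))
```

Here `x_hat` converges to `target` using only reward queries, never explicit gradients; the paper's contribution is making this style of gradient-based exploration provably optimal in structured non-concave settings.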


