
Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers

11/09/2021
by Julian Katz-Samuels, et al.

We consider interactive learning in the realizable setting and develop a general framework to handle problems ranging from best arm identification to active classification. We begin our investigation with the observation that agnostic algorithms cannot be minimax-optimal in the realizable setting. Hence, we design novel computationally efficient algorithms for the realizable setting that match the minimax lower bound up to logarithmic factors and are general-purpose, accommodating a wide variety of function classes including kernel methods, Hölder smooth functions, and convex functions. The sample complexities of our algorithms can be quantified in terms of well-known quantities like the extended teaching dimension and haystack dimension. However, unlike algorithms based directly on those combinatorial quantities, our algorithms are computationally efficient. To achieve computational efficiency, our algorithms sample from the version space using Monte Carlo "hit-and-run" algorithms instead of maintaining the version space explicitly. Our approach has two key strengths. First, it is simple, consisting of two unifying, greedy algorithms. Second, our algorithms have the capability to seamlessly leverage prior knowledge that is often available and useful in practice. In addition to our new theoretical results, we demonstrate empirically that our algorithms are competitive with Gaussian process UCB methods.
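The abstract's key computational idea is to sample from the version space with a Monte Carlo "hit-and-run" walk rather than maintain the version space explicitly. The sketch below is an illustration of the generic hit-and-run sampler on a version space represented as a convex polytope {w : Aw <= b}; it is not the paper's implementation, and all names (`hit_and_run`, `A`, `b`, `n_steps`) are hypothetical.

```python
# Illustrative sketch only: hit-and-run sampling from a bounded convex
# version space {w : A w <= b}, started from a strictly feasible point.
import numpy as np

def hit_and_run(A, b, w0, n_steps, rng=None):
    """Return the final state of an n_steps hit-and-run walk."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float)
    for _ in range(n_steps):
        # 1. Draw a uniformly random direction on the unit sphere.
        d = rng.standard_normal(w.shape)
        d /= np.linalg.norm(d)
        # 2. Find the chord {w + t*d : t_lo <= t <= t_hi} inside the
        #    polytope: each row a_i of A bounds t on one side via
        #    a_i . (w + t d) <= b_i.
        slack = b - A @ w            # nonnegative while w is feasible
        rates = A @ d
        pos, neg = rates > 1e-12, rates < -1e-12
        t_hi = np.min(slack[pos] / rates[pos], initial=np.inf)
        t_lo = np.max(slack[neg] / rates[neg], initial=-np.inf)
        # 3. Jump to a uniform point on the chord (requires a bounded
        #    polytope so that t_lo and t_hi are finite).
        w = w + rng.uniform(t_lo, t_hi) * d
    return w
```

For example, with `A` and `b` encoding the box [-1, 1]^2 and `w0 = (0, 0)`, every iterate remains inside the box, and the chain mixes toward the uniform distribution over it. An actual algorithm in the realizable setting would use such samples to score candidate queries greedily instead of enumerating the version space.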


Related research:

An Empirical Process Approach to the Union Bound: Practical Algorithms for Combinatorial and Linear Bandits (06/21/2020)
This paper proposes near-optimal algorithms for the pure-exploration lin...

Efficient Minimax Optimal Estimators For Multivariate Convex Regression (05/06/2022)
We study the computational aspects of the task of multivariate convex re...

Improved Algorithms for Agnostic Pool-based Active Classification (05/13/2021)
We consider active learning for binary classification in the agnostic po...

Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits (02/01/2020)
We consider an adversarial variant of the classic K-armed linear context...

Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms (02/23/2022)
We consider the problem of quantizing a linear model learned from measur...

Robust Estimators in High Dimensions without the Computational Intractability (04/21/2016)
We study high-dimensional distribution learning in an agnostic setting w...

Dropout Q-Functions for Doubly Efficient Reinforcement Learning (10/05/2021)
Randomized ensemble double Q-learning (REDQ) has recently achieved state...