Learning to Actively Learn: A Robust Approach

by Jifan Zhang et al.

This work proposes a procedure for designing algorithms for specific adaptive data collection tasks such as active learning and pure-exploration multi-armed bandits. Unlike traditional adaptive algorithms, whose correctness and sample complexity are justified through concentration of measure and careful analysis, our adaptive algorithm is learned via adversarial training over equivalence classes of problems derived from information-theoretic lower bounds. In particular, a single adaptive learning algorithm is learned that competes with the best adaptive algorithm learned for each equivalence class. Our procedure takes as input only the available queries, the set of hypotheses, the loss function, and the total query budget. This contrasts with existing meta-learning work, which learns an adaptive algorithm relative to an explicit, user-defined subset of problems or prior distribution over problems that can be challenging to define and may be mismatched with the instance encountered at test time. This work focuses on the regime in which the total query budget is very small, such as a few dozen, which is much smaller than the budgets typically considered by theoretically derived algorithms. We perform synthetic experiments to demonstrate the stability and effectiveness of the training procedure, and then evaluate the method on tasks derived from real data, including a noisy 20 Questions game and a joke recommendation task.
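The core idea of "adversarial training over an equivalence class" can be illustrated with a toy min-max game: a policy player mixes over candidate query strategies while an adversary mixes over problem instances within one equivalence class, each updated by multiplicative weights. This is a hedged, illustrative sketch, not the paper's actual training procedure; the loss matrix, names, and hyperparameters below are invented for illustration.

```python
import numpy as np

# Toy min-max "training": a policy player picks among candidate query
# strategies; an adversary picks the hardest problem instance within one
# equivalence class. The loss matrix L is synthetic and purely illustrative.
rng = np.random.default_rng(0)
L = rng.uniform(size=(4, 5))  # L[i, j]: loss of strategy i on instance j


def minimax_train(L, steps=2000, eta=0.1):
    """Multiplicative-weights self-play approximating the game value
    min_policy max_instance E[loss]."""
    n, m = L.shape
    p = np.ones(n) / n   # policy mixture over query strategies
    q = np.ones(m) / m   # adversary mixture over problem instances
    p_avg = np.zeros(n)
    q_avg = np.zeros(m)
    for _ in range(steps):
        p = p * np.exp(-eta * (L @ q))   # policy downweights high-loss strategies
        p /= p.sum()
        q = q * np.exp(eta * (L.T @ p))  # adversary upweights hard instances
        q /= q.sum()
        p_avg += p
        q_avg += q
    p_avg /= steps
    q_avg /= steps
    return p_avg, q_avg, p_avg @ L @ q_avg


policy, adversary, value = minimax_train(L)
```

In the paper's setting the policy is an adaptive (sequential) query-selection algorithm rather than a fixed mixture, and the adversary ranges over instances grouped by information-theoretic difficulty, but the min-max structure is the same.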



