An Information-Theoretic Framework for Unifying Active Learning Problems

12/19/2020
by Quoc Phong Nguyen, et al.

This paper presents an information-theoretic framework for unifying active learning problems: level set estimation (LSE), Bayesian optimization (BO), and their generalized variant. We first introduce a novel active learning criterion that subsumes an existing LSE algorithm and achieves state-of-the-art performance in LSE problems with a continuous input domain. Then, by exploiting the relationship between LSE and BO, we design a competitive information-theoretic acquisition function for BO that has interesting connections to the upper confidence bound (UCB) and max-value entropy search (MES). The latter connection reveals a drawback of MES that has important implications not only for MES but also for other MES-based acquisition functions. Finally, our unifying information-theoretic framework can be applied to solve a generalized problem of LSE and BO involving multiple level sets in a data-efficient manner. We empirically evaluate the performance of the proposed algorithms on synthetic benchmark functions, a real-world dataset, and hyperparameter tuning of machine learning models.
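For context, the sketch below is a minimal illustration, not the paper's proposed criterion: it shows two standard Gaussian-process acquisition rules that the framework is related to, the UCB rule for BO and the straddle rule commonly used for LSE. The toy objective, the RBF kernel, and the parameters `beta` and `threshold` are illustrative assumptions.

```python
# Minimal sketch of two baseline GP acquisition rules (UCB for BO, straddle
# for LSE). This is NOT the paper's information-theoretic criterion; the toy
# objective, kernel, beta, and threshold are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x            # toy unknown objective
X_train = rng.uniform(0, 3, size=(8, 1))         # initial queries
y_train = f(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(X_train, y_train)

X_cand = np.linspace(0, 3, 200).reshape(-1, 1)   # candidate inputs
mu, sigma = gp.predict(X_cand, return_std=True)  # posterior mean and std

beta = 2.0                                       # exploration weight
ucb = mu + np.sqrt(beta) * sigma                 # BO: maximize UCB
threshold = 1.0                                  # level set of interest
straddle = np.sqrt(beta) * sigma - np.abs(mu - threshold)  # LSE: ambiguity near the level

x_next_bo = X_cand[np.argmax(ucb)]               # next query for BO
x_next_lse = X_cand[np.argmax(straddle)]         # next query for LSE
print("next query (BO):", x_next_bo, "| next query (LSE):", x_next_lse)
```

An information-theoretic criterion of the kind described in the abstract would replace these heuristic scores with a quantity measuring how informative each candidate query is, while keeping the same sequential query loop.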
