Robustness of Bayesian Pool-based Active Learning Against Prior Misspecification

03/30/2016
by   Nguyen Viet Cuong, et al.

We study the robustness of active learning (AL) algorithms against prior misspecification: whether an algorithm achieves similar performance using a perturbed prior as compared to using the true prior. In both the average and worst cases of the maximum coverage setting, we prove that all α-approximate algorithms are robust (i.e., near α-approximate) if the utility is Lipschitz continuous in the prior. We further show that robustness may not be achieved if the utility is non-Lipschitz. This suggests using a Lipschitz utility for AL when robustness is required. For the minimum cost setting, we also obtain a robustness result for approximate AL algorithms. Our results imply that many commonly used AL algorithms are robust against perturbed priors. We then propose the use of a mixture prior to alleviate the problem of prior misspecification. We analyze the robustness of the uniform mixture prior and show experimentally that it performs reasonably well in practice.
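To make the Lipschitz argument concrete, here is a minimal illustrative sketch (not the paper's code; all names and numbers are hypothetical). It shows why a prior-weighted utility is Lipschitz in the prior under the L1 norm, so a perturbed prior can only change the expected utility by a bounded amount, and it builds the uniform mixture prior the paper proposes as a hedge against misspecification.

```python
import numpy as np

def expected_utility(prior, utilities):
    # Prior-weighted utility E_p[u]. It is Lipschitz in the prior:
    # |E_p[u] - E_q[u]| <= max|u| * ||p - q||_1.
    return float(np.dot(prior, utilities))

def uniform_mixture(priors):
    # Uniform mixture of candidate priors: one simple hedge against
    # not knowing which candidate is the true prior.
    return np.mean(np.asarray(priors), axis=0)

# Hypothetical setup: 3 hypotheses, a utility value per hypothesis.
u = np.array([0.9, 0.5, 0.2])
p_true = np.array([0.6, 0.3, 0.1])    # "true" prior (assumed)
p_pert = np.array([0.5, 0.35, 0.15])  # perturbed prior

gap = abs(expected_utility(p_true, u) - expected_utility(p_pert, u))
bound = np.max(np.abs(u)) * np.abs(p_true - p_pert).sum()
assert gap <= bound  # the Lipschitz bound holds

p_mix = uniform_mixture([p_true, p_pert])
```

Because the bound scales with the L1 distance between priors, any α-approximate algorithm run with the perturbed prior loses at most a controlled amount of utility, which is the intuition behind the robustness results above.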


Related research

06/04/2019  Bayesian Active Learning With Abstention Feedbacks
We study pool-based active learning with abstention feedbacks where a la...

05/23/2017  Bayesian Pool-based Active Learning With Abstention Feedbacks
We study pool-based active learning with abstention feedbacks, where a l...

10/10/2018  Combining Bayesian Optimization and Lipschitz Optimization
Bayesian optimization and Lipschitz optimization have developed alternat...

01/07/2021  The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks
It is desirable, and often a necessity, for machine learning models to b...

10/24/2022  Active Learning for Single Neuron Models with Lipschitz Non-Linearities
We consider the problem of active learning for single neuron models, als...

02/11/2020  Generalised Lipschitz Regularisation Equals Distributional Robustness
The problem of adversarial examples has highlighted the need for a theor...

02/21/2020  Towards Robust and Reproducible Active Learning Using Neural Networks
Active learning (AL) is a promising ML paradigm that has the potential t...
