Interpreting Active Learning Methods Through Information Losses

02/25/2019
by Brandon Foggo, et al.

We propose a new way of interpreting active learning methods by analyzing the information 'lost' upon sampling a random variable. Using recent analytical developments of these losses, we formally prove that facility location methods reduce them under mild assumptions, and we derive a new data-dependent bound on information losses that can be used to evaluate other active learning methods. We show that this new bound is very tight against experimental results, and that it has decent predictive power for classification accuracy.
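Facility location methods of the kind analyzed in the abstract are commonly instantiated in active learning as greedy k-center selection over feature embeddings: each new query is the unlabeled point farthest from the set already selected. The sketch below illustrates that selection rule under this assumption; it is not the authors' exact procedure, and the function name is ours.

```python
import numpy as np

def greedy_k_center(X, k, seed=0):
    """Greedy facility-location (k-center) selection.

    Repeatedly picks the point farthest from the currently selected
    'facilities', so the selected set covers the data as evenly as
    possible. Returns the indices of the k selected points.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    first = int(rng.integers(n))
    selected = [first]
    # Distance from every point to its nearest selected facility.
    min_dist = np.linalg.norm(X - X[first], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(min_dist))       # farthest uncovered point
        selected.append(nxt)
        # Update nearest-facility distances with the new selection.
        min_dist = np.minimum(min_dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected
```

On two well-separated clusters, the second selected point always lands in the cluster opposite the first, which is the covering behavior that drives the information-loss reduction discussed above.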


Related research

05/27/2016
Asymptotic Analysis of Objectives based on Fisher Information in Active Learning
Obtaining labels can be costly and time-consuming. Active learning allow...

12/10/2016
Active Learning for Speech Recognition: the Power of Gradients
In training speech recognition systems, labeling audio clips can be expe...

07/27/2020
Deep Active Learning for Solvability Prediction in Power Systems
Traditional methods for solvability region analysis can only have inner ...

06/08/2017
Nuclear Discrepancy for Active Learning
Active learning algorithms propose which unlabeled objects should be que...

07/23/2021
MCDAL: Maximum Classifier Discrepancy for Active Learning
Recent state-of-the-art active learning methods have mostly leveraged Ge...

01/02/2023
Using Active Learning Methods to Strategically Select Essays for Automated Scoring
Research on automated essay scoring has become increasingly important beca...

11/15/2021
Reducing the Long Tail Losses in Scientific Emulations with Active Learning
Deep-learning-based models are increasingly used to emulate scientific s...
