Exponentiated Gradient Exploration for Active Learning

08/10/2014
by Djallel Bouneffouf, et al.

Active learning strategies address the cost of labelling in supervised classification by selecting the unlabelled examples that are most useful for training a predictive model. Many conventional active learning algorithms focus on refining the decision boundary rather than exploring new regions of the input space that may be more informative. In this setting, we propose a sequential algorithm named EG-Active that can improve any active learning algorithm through optimal random exploration. Experimental results show a statistically significant and appreciable improvement in the performance of our approach over existing active feedback methods.
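The abstract does not spell out the algorithm, but the title and description suggest an epsilon-greedy exploration layer on top of a base query strategy, with the exploration rate tuned by exponentiated-gradient weight updates. The Python sketch below is one illustrative reading under that assumption; the class name EGActiveSketch, the candidate rates, and the reward signal are placeholders, not the authors' implementation.

import numpy as np

class EGActiveSketch:
    """Hypothetical sketch: wrap a base active-learning query strategy with
    epsilon-greedy random exploration, where epsilon is drawn from a candidate
    set whose weights are adapted by exponentiated-gradient updates."""

    def __init__(self, epsilons=(0.0, 0.01, 0.05, 0.1, 0.2), eta=0.1, seed=0):
        self.epsilons = np.asarray(epsilons)   # candidate exploration rates (assumed)
        self.weights = np.ones(len(epsilons))  # EG weights, start uniform
        self.eta = eta                         # EG learning rate (assumed)
        self.rng = np.random.default_rng(seed)
        self._last_arm = None

    def select(self, unlabeled_idx, base_scores):
        """Pick the index of the next example to query for a label.

        unlabeled_idx : indices of the unlabelled pool
        base_scores   : informativeness scores from the base strategy
                        (e.g. uncertainty), aligned with unlabeled_idx
        """
        probs = self.weights / self.weights.sum()
        self._last_arm = self.rng.choice(len(self.epsilons), p=probs)
        eps = self.epsilons[self._last_arm]
        if self.rng.random() < eps:
            # explore: query a uniformly random unlabelled example
            return self.rng.choice(unlabeled_idx)
        # exploit: query the example the base strategy ranks most informative
        return unlabeled_idx[int(np.argmax(base_scores))]

    def update(self, reward):
        """Exponentiated-gradient update with an importance-weighted reward,
        e.g. the change in validation accuracy after adding the new label."""
        probs = self.weights / self.weights.sum()
        estimate = reward / probs[self._last_arm]  # unbiased reward estimate
        self.weights[self._last_arm] *= np.exp(self.eta * estimate)

In use, one would call select() each round to choose the next example to label, retrain the model, and pass a reward such as the resulting change in validation accuracy to update(), so that exploration rates which pay off receive larger weight over time.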



Related research

12/15/2019  Disentanglement based Active Learning
04/23/2020  Active Learning for Gaussian Process Considering Uncertainties with Application to Shape Control of Composite Fuselage
05/27/2016  Asymptotic Analysis of Objectives based on Fisher Information in Active Learning
11/02/2022  Neural Active Learning on Heteroskedastic Distributions
08/02/2016  Can Active Learning Experience Be Transferred?
11/16/2020  Sampling Approach Matters: Active Learning for Robotic Language Acquisition
11/19/2020  Finding the Homology of Decision Boundaries with Active Learning
