Active Model Aggregation via Stochastic Mirror Descent

03/28/2015
by   Ravi Ganti, et al.

We consider the problem of learning a convex aggregation of models that is as good as the best convex aggregation, for the binary classification problem. Working in the stream-based active learning setting, where the active learner must decide on the fly whether to query the label of the point currently seen in the stream, we propose a stochastic mirror descent algorithm with entropy regularization, called SMD-AMA. We establish an excess risk bound for the loss of the convex aggregate returned by SMD-AMA of order O(√(M/T^(1−μ))), where μ ∈ [0,1) is an algorithm-dependent parameter that trades off the number of labels queried against the excess risk.
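The core ingredients described above, a convex aggregation weight vector on the probability simplex, an entropic (exponentiated-gradient) mirror descent update, and an on-the-fly label-query decision with importance weighting, can be sketched as follows. This is a minimal illustrative sketch, not the paper's SMD-AMA algorithm: the margin-based query probability, the logistic surrogate loss, the synthetic linear models, and all constants are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: M fixed base models (linear scorers) over d features,
# labels generated by an unknown linear rule. All of this is synthetic.
M, T, d = 5, 2000, 10
base_models = rng.normal(size=(M, d))
true_w = rng.normal(size=d)

w = np.full(M, 1.0 / M)   # aggregation weights, start at the simplex center
eta = 0.1                 # mirror-descent step size (assumed constant)
queries = 0

for t in range(T):
    x = rng.normal(size=d)
    preds = base_models @ x          # scores of the M base models
    agg = float(w @ preds)           # score of the convex aggregate
    # Stand-in query rule: query more often when the aggregate is uncertain
    # (small margin). The paper's sampling scheme differs in its details.
    p_query = min(1.0, 1.0 / (1.0 + abs(agg)))
    if rng.random() < p_query:
        queries += 1
        y = 1.0 if true_w @ x >= 0 else -1.0     # queried label
        # Gradient of the logistic loss in w, importance-weighted by 1/p_query
        # so the update is unbiased; clipped for numerical stability.
        sig = 1.0 / (1.0 + np.exp(np.clip(y * agg, -30.0, 30.0)))
        g = np.clip(-y * sig * preds / p_query, -10.0, 10.0)
        # Entropic mirror descent step: multiplicative (exponentiated-gradient)
        # update followed by renormalization back onto the simplex.
        w = w * np.exp(-eta * g / np.sqrt(t + 1))
        w /= w.sum()

print(f"queried {queries}/{T} labels, final weights {np.round(w, 3)}")
```

The entropy regularizer is what makes the mirror step multiplicative: the Bregman projection under the negative-entropy mirror map reduces to a simple renormalization, which keeps the weights on the simplex at every round without a Euclidean projection.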


research
06/30/2016

Multi-class classification: mirror descent approach

We consider the problem of multi-class classification and a stochastic o...
research
06/01/2019

Active Learning for Binary Classification with Abstention

We construct and analyze active learning algorithms for the problem of b...
research
05/29/2019

The Label Complexity of Active Learning from Observational Data

Counterfactual learning from observational data involves learning a clas...
research
11/15/2021

Margin-Independent Online Multiclass Learning via Convex Geometry

We consider the problem of multi-class classification, where a stream of...
research
11/19/2013

Beating the Minimax Rate of Active Learning with Prior Knowledge

Active learning refers to the learning protocol where the learner is all...
research
11/08/2011

UPAL: Unbiased Pool Based Active Learning

In this paper we address the problem of pool based active learning, and ...
research
09/10/2018

Learning Time Dependent Choice

We explore questions dealing with the learnability of models of choice o...
