Algorithmic Connections Between Active Learning and Stochastic Convex Optimization

05/15/2015
by   Aaditya Ramdas, et al.

Recent papers have established interesting theoretical connections between active learning and stochastic convex optimization, owing to the common role of feedback in their sequential querying mechanisms. In this paper, we continue this thread in two parts, exploiting these relations for the first time to yield novel algorithms in both fields and further motivating the study of their intersection. First, inspired by a recent optimization algorithm that adapts to unknown uniform convexity parameters, we present a new active learning algorithm for one-dimensional thresholds that achieves minimax rates by adapting to unknown noise parameters. Next, we show that d-dimensional stochastic minimization of smooth uniformly convex functions is possible when the oracle returns only noisy gradient signs along any coordinate, rather than real-valued gradients: a simple randomized coordinate descent procedure, in which each line search is solved by one-dimensional active learning, provably achieves the same error convergence rate as having access to the full real-valued gradient. Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs, by repeatedly performing active learning; it achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.
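The second part lends itself to a short sketch. Below is a minimal Python illustration (not the paper's exact procedure) of randomized coordinate descent in which each coordinate update is a line search driven only by noisy gradient signs, solved here by bisection with majority voting, a simple stand-in for one-dimensional active learning. The function names (`noisy_sign_line_search`, `sign_only_coordinate_descent`), the fixed batch of 15 sign queries per bisection step, and the toy quadratic oracle are all illustrative assumptions; the paper's algorithm additionally adapts its query schedule to unknown noise and convexity parameters.

```python
import numpy as np

def noisy_sign_line_search(grad_sign, x, coord, lo, hi, n_steps, n_votes=15):
    """Bisection along one coordinate using only noisy gradient signs.

    Each step queries the sign oracle n_votes times and takes a majority
    vote (a crude form of 1-d active learning) before halving the
    search interval [lo, hi].
    """
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        probe = x.copy()
        probe[coord] = mid
        votes = sum(grad_sign(probe, coord) for _ in range(n_votes))
        if votes > 0:
            hi = mid   # gradient likely positive: minimizer lies to the left
        else:
            lo = mid   # gradient likely negative: minimizer lies to the right
    return 0.5 * (lo + hi)

def sign_only_coordinate_descent(grad_sign, x0, lo, hi, n_epochs, n_steps, seed=0):
    """Randomized coordinate descent with a sign-only oracle.

    Picks a coordinate uniformly at random and replaces it with the
    result of a noisy-sign line search over [lo, hi].
    """
    x = np.array(x0, dtype=float)
    rng = np.random.default_rng(seed)
    for _ in range(n_epochs):
        coord = int(rng.integers(len(x)))
        x[coord] = noisy_sign_line_search(grad_sign, x, coord, lo, hi, n_steps)
    return x

# Toy demo on f(x) = ||x - x_star||^2 with a sign oracle corrupted by
# Gaussian noise (illustrative; any bounded-noise model would do).
if __name__ == "__main__":
    x_star = np.array([0.3, -0.7, 1.2])
    noise = np.random.default_rng(1)

    def grad_sign(x, i):
        g = 2.0 * (x[i] - x_star[i])          # true partial derivative
        return int(np.sign(g + noise.normal(scale=0.5))) or 1

    x_hat = sign_only_coordinate_descent(grad_sign, np.zeros(3),
                                         lo=-2.0, hi=2.0,
                                         n_epochs=60, n_steps=20)
    print("estimate:", x_hat, "error:", np.linalg.norm(x_hat - x_star))
```

An odd number of votes avoids ties; in the paper, the number of sign queries per step is instead chosen adaptively so that each bisection decision is correct with high probability, which is what preserves the full-gradient convergence rate.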


