Approximation Vector Machines for Large-scale Online Learning

04/22/2016
by   Trung Le, et al.

One of the most challenging problems in kernel online learning is to bound the model size and promote model sparsity. Sparse models not only improve computation and memory usage but also enhance generalization, a principle that concurs with the law of parsimony. However, inappropriate sparsification may significantly degrade performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that simultaneously encourages sparsity and safeguards against the risk of compromising performance. When an incoming instance arrives, we approximate it by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that, since the newly seen instance is expressed by a nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis characterizing the gap between the approximate and optimal solutions; this gap crucially depends on the frequency of approximation and the predefined threshold. We perform convergence analysis for a wide spectrum of loss functions, including Hinge, smooth Hinge, and Logistic for the classification task, and l_1, l_2, and ϵ-insensitive for the regression task. We conducted extensive experiments over several benchmark datasets for classification in batch and online modes and for regression in online mode. The results show that the proposed AVM achieves predictive performance comparable to current state-of-the-art methods while attaining significant computational speed-up owing to its ability to bound the model size.
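To make the core mechanism concrete, below is a minimal, hypothetical Python sketch of the approximation step described above: an online kernel classifier with hinge loss and SGD updates in which each incoming instance is snapped to its nearest stored core point whenever that point lies within a threshold delta, so the model grows only when no close neighbor exists. The class name, hyperparameters, and update rule are illustrative assumptions for exposition, not the authors' reference implementation.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class AVMSketch:
    """Illustrative sketch of the AVM idea: online kernel SGD (hinge loss)
    where incoming points are approximated by the nearest stored core point
    within distance `delta`, bounding the model size."""

    def __init__(self, delta=0.5, gamma=1.0, lam=0.01):
        self.delta, self.gamma, self.lam = delta, gamma, lam
        self.cores = []   # stored core points (the bounded "model")
        self.alphas = []  # one coefficient per core point

    def decision(self, x):
        return sum(a * rbf(c, x, self.gamma)
                   for c, a in zip(self.cores, self.alphas))

    def partial_fit(self, x, y, t):
        """One online update on instance x with label y in {-1, +1}."""
        eta = 1.0 / (self.lam * (t + 1))  # standard decaying SGD step size
        # Shrink all coefficients (gradient of the L2 regularizer).
        self.alphas = [(1.0 - eta * self.lam) * a for a in self.alphas]
        if y * self.decision(x) < 1:      # hinge loss is active
            # Approximate x by its nearest core point within distance delta.
            if self.cores:
                dists = [np.linalg.norm(x - c) for c in self.cores]
                j = int(np.argmin(dists))
                if dists[j] <= self.delta:
                    self.alphas[j] += eta * y  # reuse the existing core point
                    return
            # No sufficiently close neighbor: grow the model by one point.
            self.cores.append(np.asarray(x, dtype=float))
            self.alphas.append(eta * y)
```

Under this sketch, larger values of delta trade accuracy for a smaller model, mirroring the abstract's point that the gap between the approximate and optimal solutions depends on the threshold and on how often approximation occurs; a quick check such as streaming a few thousand synthetic points through `partial_fit` and printing `len(model.cores)` shows the model size staying far below the number of instances seen.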



