Iterative Regularization for Learning with Convex Loss Functions

03/31/2015
by Junhong Lin, et al.

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, iterative regularization imposes no explicit constraint or penalty; generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove finite sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and offers insight into the interplay between statistics and optimization in machine learning.
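The idea described in the abstract can be illustrated with a small sketch. The code below is not the paper's exact algorithm, step-size schedule, or stopping rule; it is a minimal, assumed instance of the general recipe: run a kernelized subgradient iteration on a convex loss (here the hinge loss) with no penalty term, and regularize only by early stopping, selecting the iterate that performs best on a held-out validation set.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.5):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def subgradient_early_stopping(X, y, X_val, y_val, eta=1.0, max_iter=200):
    # Represent f(.) = sum_i alpha_i K(x_i, .) in the RKHS.
    n = len(y)
    K = gaussian_kernel(X, X)          # training Gram matrix
    K_val = gaussian_kernel(X_val, X)  # validation-vs-training kernel
    alpha = np.zeros(n)
    best_alpha, best_err = alpha.copy(), np.inf
    for t in range(1, max_iter + 1):
        margins = y * (K @ alpha)
        # Hinge-loss subgradient: only points with margin < 1 contribute,
        # each pushing f toward its label y_i.
        active = margins < 1
        alpha += (eta / (n * np.sqrt(t))) * y * active
        # Early stopping as the only regularizer: keep the iterate with
        # the lowest held-out classification error.
        val_err = np.mean(np.sign(K_val @ alpha) != y_val)
        if val_err < best_err:
            best_err, best_alpha = val_err, alpha.copy()
    return best_alpha, best_err

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)
X_val = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y_val = np.array([-1] * 20 + [1] * 20)
alpha, err = subgradient_early_stopping(X, y, X_val, y_val)
```

Note the absence of any projection or penalty: the number of iterations plays the role of the regularization parameter, trading optimization accuracy against statistical stability.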


Related research

04/12/2023 · Localisation of Regularised and Multiview Support Vector Machine Learning
We prove a few representer theorems for a localised version of the regul...

05/24/2016 · A Consistent Regularization Approach for Structured Prediction
We propose and analyze a regularization approach for structured predicti...

04/30/2014 · Learning with incremental iterative regularization
Within a statistical learning setting, we propose and study an iterative...

06/16/2021 · Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization
The theory of spectral filtering is a remarkable tool to understand the ...

08/10/2022 · Moreau–Yosida regularization in DFT
Moreau-Yosida regularization is introduced into the framework of exact D...

10/09/2019 · Learning Near-optimal Convex Combinations of Basis Models with Generalization Guarantees
The problem of learning an optimal convex combination of basis models ha...

07/05/2017 · Early stopping for kernel boosting algorithms: A general analysis with localized complexities
Early stopping of iterative algorithms is a widely-used form of regulari...
