
Iterative Regularization for Learning with Convex Loss Functions

by Junhong Lin et al.

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove finite sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and gives insights on the interplay between statistics and optimization in machine learning.
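The idea of regularization by early stopping can be sketched in code. Below is a minimal, hypothetical illustration (not the authors' implementation): a plain subgradient iteration on the empirical hinge-loss risk over a Gaussian RKHS, parameterized by coefficients on the training points, where the number of iterations plays the role of the regularization parameter. The kernel choice, loss, and step size are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix; an illustrative kernel choice.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_subgradient_iterates(K, y, t_max, eta=1.0):
    """Run t_max subgradient steps on the empirical risk
    (1/n) * sum_i max(0, 1 - y_i * (K @ alpha)_i)
    with no penalty or constraint; return all iterates so that the
    stopping time t can be selected afterwards (e.g. by validation)."""
    n = K.shape[0]
    alpha = np.zeros(n)
    iterates = [alpha.copy()]
    for _ in range(t_max):
        margins = y * (K @ alpha)
        # A subgradient of the hinge loss: -y_i where the margin is violated.
        g = -(y * (margins < 1.0)) / n
        alpha = alpha - eta * g
        iterates.append(alpha.copy())
    return iterates
```

In this sketch, generalization is controlled not by a penalty term but by which iterate along the path one returns: early iterates are heavily regularized, late iterates fit the data more closely.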




Localisation of Regularised and Multiview Support Vector Machine Learning

We prove a few representer theorems for a localised version of the regul...

A Consistent Regularization Approach for Structured Prediction

We propose and analyze a regularization approach for structured predicti...

Learning with incremental iterative regularization

Within a statistical learning setting, we propose and study an iterative...

Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization

The theory of spectral filtering is a remarkable tool to understand the ...

Moreau–Yosida regularization in DFT

Moreau-Yosida regularization is introduced into the framework of exact D...

Learning Near-optimal Convex Combinations of Basis Models with Generalization Guarantees

The problem of learning an optimal convex combination of basis models ha...

Early stopping for kernel boosting algorithms: A general analysis with localized complexities

Early stopping of iterative algorithms is a widely-used form of regulari...