Self-Paced Learning: an Implicit Regularization Perspective

06/01/2016
by Yanbo Fan, et al.

Self-paced learning (SPL) mimics the cognitive mechanism of humans and animals, gradually learning from easy to hard samples. A key issue in SPL is obtaining a better weighting strategy, which is determined by the minimizer function. Existing methods usually pursue this by artificially designing the explicit form of the SPL regularizer. In this paper, we focus on the minimizer function and study a group of new regularizers, named self-paced implicit regularizers, that are derived from robust loss functions. Based on convex conjugacy theory, the minimizer function for a self-paced implicit regularizer can be learned directly from the latent loss function, even when the analytic form of the regularizer itself is unknown. A general SPL framework (named SPL-IR) is developed accordingly. We demonstrate that the learning procedure of SPL-IR is associated with latent robust loss functions, which offers theoretical insight into its working mechanism. We further analyze the relation between SPL-IR and half-quadratic optimization. Finally, we apply SPL-IR to both supervised and unsupervised tasks; the experimental results corroborate our ideas and demonstrate the correctness and effectiveness of implicit regularizers.
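To make the alternating scheme concrete, the sketch below is a minimal illustration, not the paper's reference implementation. It assumes the latent robust loss is the Welsch M-estimator F(l) = lam * (1 - exp(-l / lam)), for which convex conjugacy yields the closed-form minimizer function v*(l) = exp(-l / lam); the least-squares model, the growing pace schedule for lam, and all names (welsch_minimizer, spl_ir_least_squares, growth) are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def welsch_minimizer(losses, lam):
    """Closed-form minimizer function v*(l) = exp(-l / lam).

    Assumed here: the latent robust loss is the Welsch M-estimator
    F(l) = lam * (1 - exp(-l / lam)). By convex conjugacy,
    F(l) = min_v [v * l + g(v)] for some implicit regularizer g, and
    the minimizer is v*(l) = F'(l) = exp(-l / lam), so the weights can
    be computed without ever writing g down explicitly.
    """
    return np.exp(-losses / lam)

def spl_ir_least_squares(X, y, lam0=1.0, growth=1.3, n_rounds=10):
    """Sketch of SPL-IR-style alternating optimization for least squares.

    Alternates between (1) computing per-sample weights via the
    minimizer function and (2) re-fitting the model on the weighted
    samples, while the pace parameter lam grows so that harder
    (higher-loss) samples are gradually admitted.
    """
    n, d = X.shape
    w = np.zeros(d)
    lam = lam0
    for _ in range(n_rounds):
        losses = (X @ w - y) ** 2           # per-sample losses
        v = welsch_minimizer(losses, lam)   # step 1: weights from minimizer function
        Xv = X * v[:, None]                 # step 2: weighted least-squares refit
        w = np.linalg.solve(X.T @ Xv + 1e-8 * np.eye(d), Xv.T @ y)
        lam *= growth                       # relax the pace: admit harder samples
    return w
```

In each round the minimizer function down-weights high-loss (hard) samples; as lam grows, their weights approach 1 and they are gradually taken into account, mirroring the easy-to-hard curriculum described above.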


