
On Convergence Property of Implicit Self-paced Objective

by   Zilu Ma, et al.
Xi'an Jiaotong University

Self-paced learning (SPL) is a methodology that simulates the learning principle of humans and animals: start with the easier aspects of a learning task, then gradually bring more complex examples into training. This learning regime has been empirically shown to be effective in various computer vision and pattern recognition tasks. Recently, it has been shown that the SPL regime is closely related to an implicit self-paced objective function. While this implicit objective offers helpful interpretations of the effectiveness, and especially the robustness, of the SPL paradigm, no theoretical results have rigorously verified this relationship. To address this issue, in this paper we provide convergence results for the implicit objective of SPL. Specifically, we prove that the learning process of SPL always converges to critical points of this implicit objective under mild conditions. This result verifies the intrinsic relationship between SPL and the implicit objective, and makes previous robustness analyses of SPL complete and theoretically sound.
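For readers unfamiliar with the SPL regime described above, the standard formulation (with the commonly used hard weighting scheme) alternates between assigning binary sample weights v_i = 1[ℓ_i(w) < λ] and refitting the model on the currently "easy" samples, while the age parameter λ grows so that harder examples are gradually admitted. The sketch below is a minimal illustration on toy least-squares data with outliers, not the paper's algorithm; all names (`lam`, `mu`, the growth schedule) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D least-squares data: y = 3x + noise, plus a few gross outliers.
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)
y[:5] += 10.0  # outliers with large loss under the true model

w = np.zeros(1)
lam = 0.5   # age parameter: samples with loss < lam count as "easy"
mu = 1.5    # growth factor for lam each round (illustrative schedule)

for _ in range(10):
    # Weight step: hard self-paced weights v_i = 1 if loss_i < lam else 0.
    losses = (X @ w - y) ** 2
    v = losses < lam
    # Model step: refit on the currently selected "easy" samples only.
    if v.any():
        w = np.linalg.lstsq(X[v], y[v], rcond=None)[0]
    lam *= mu  # admit harder samples in the next round
```

Because the outliers keep a loss far above λ throughout this schedule, they are never selected, and the fit stays near the clean slope of 3. Under hard weighting, this alternating scheme is known in the SPL literature to minimize an implicit objective of the form Σ_i min(ℓ_i(w), λ), which caps each sample's contribution at λ and is the usual explanation of SPL's robustness to outliers.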



