Annealing Optimization for Progressive Learning with Stochastic Approximation

09/06/2022
by Christos Mavridis, et al.

In this work, we introduce a learning model designed for applications in which computational resources are limited and robustness and interpretability are prioritized. Learning problems can be formulated as constrained stochastic optimization problems, with the constraints originating mainly from model assumptions that define a trade-off between complexity and performance. This trade-off is closely related to over-fitting, generalization capacity, and robustness to noise and adversarial attacks, and depends both on the structure and complexity of the model and on the properties of the optimization methods used. We develop an online prototype-based learning algorithm based on annealing optimization, formulated as a gradient-free stochastic approximation algorithm. The learning model can be viewed as an interpretable and progressively growing competitive-learning neural network to be used for supervised, unsupervised, and reinforcement learning. The annealing nature of the algorithm contributes to minimal hyper-parameter tuning requirements, avoidance of poor local minima, and robustness with respect to initial conditions. At the same time, it provides online control over the performance-complexity trade-off by progressively increasing the complexity of the learning model as needed, through an intuitive bifurcation phenomenon. Finally, the use of stochastic approximation enables the study of the convergence of the learning algorithm through mathematical tools from dynamical systems and control, and allows for its integration with reinforcement learning algorithms, yielding an adaptive state-action aggregation scheme.
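To make the combination of annealing and gradient-free stochastic approximation concrete, the following is a minimal sketch in the spirit of the authors' online deterministic annealing: samples are associated with prototypes through a Gibbs distribution at temperature T, prototype priors and weighted means are updated by stochastic approximation, and the model grows through bifurcation as T is lowered. The squared-Euclidean distortion, geometric cooling schedule, perturbation size, and merge threshold below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gibbs_probs(x, mu, p, T):
    """Association probabilities of sample x with each prototype mu_i,
    given by a Gibbs distribution over the distortions at temperature T."""
    d = np.sum((mu - x) ** 2, axis=1)        # squared-Euclidean distortion
    w = p * np.exp(-(d - d.min()) / T)       # shift by d.min() for stability
    return w / w.sum()

def oda_step(x, mu, sigma, p, T, alpha):
    """One gradient-free stochastic-approximation update: priors p_i and
    weighted sums sigma_i move with step size alpha, and each prototype
    is recovered as the ratio mu_i = sigma_i / p_i."""
    q = gibbs_probs(x, mu, p, T)
    p = p + alpha * (q - p)
    sigma = sigma + alpha * (q[:, None] * x - sigma)
    return sigma / p[:, None], sigma, p

# Toy usage: a two-cluster dataset. The model starts with a single prototype;
# at each temperature the prototypes are duplicated and perturbed, and pairs
# that fail to separate (no bifurcation yet) are merged back afterwards.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (500, 2)),
               rng.normal(+2.0, 0.5, (500, 2))])

mu = X.mean(axis=0, keepdims=True)
p = np.ones(1)
T = 10.0
while T > 0.05:
    # duplicate-and-perturb: copies only drift apart below a critical T
    mu = np.vstack([mu, mu + 1e-2 * rng.standard_normal(mu.shape)])
    p = np.concatenate([p, p]) / 2.0
    sigma = mu * p[:, None]
    for n, x in enumerate(rng.permutation(X)):
        mu, sigma, p = oda_step(x, mu, sigma, p, T, alpha=1.0 / (n + 10))
    # merge prototypes that stayed together at this temperature
    keep = []
    for i in range(len(mu)):
        if all(np.linalg.norm(mu[i] - mu[j]) > 0.1 for j in keep):
            keep.append(i)
    mu, p = mu[keep], p[keep] / p[keep].sum()
    T *= 0.8

print(f"final T={T:.3f}, prototypes:\n{mu}")
```

Tracking sigma_i = p_i * mu_i rather than mu_i directly keeps both recursions in the standard Robbins-Monro form, which is the setting in which the dynamical-systems (ODE-method) convergence arguments mentioned in the abstract apply.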

