Controlled Descent Training

03/16/2023
by Viktor Andersson et al.

In this work, we develop a novel, model-based training method for artificial neural networks (ANNs) grounded in optimal control theory. The method augments the training labels in order to robustly guarantee convergence of the training loss and to improve the convergence rate. Dynamic label augmentation is proposed within the framework of gradient descent training, where the convergence of the training loss is explicitly controlled. First, we capture the training behavior with empirical Neural Tangent Kernels (NTKs) and borrow tools from systems and control theory to analyze both the local and the global training dynamics (e.g., stability and reachability). Second, we dynamically alter the gradient descent training mechanism via fictitious labels acting as control inputs, together with an optimal state feedback policy. In this way, we enforce locally ℋ_2-optimal and convergent training behavior. The resulting algorithm, Controlled Descent Training (CDT), guarantees local convergence and opens new possibilities in the analysis, interpretation, and design of ANN architectures. The applicability of the method is demonstrated on standard regression and classification problems.
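
The control-theoretic view in the abstract can be made concrete with a minimal sketch (not the paper's implementation). Under an NTK linearization, gradient descent on the squared loss drives the training residual e_t = f(x) - y according to e_{t+1} = (I - ηK)e_t, where K is the empirical NTK Gram matrix; treating fictitious label shifts u_t as control inputs gives the linear system e_{t+1} = (I - ηK)e_t + ηK u_t, which a state-feedback policy can drive to zero. The discrete-time LQR gain below is a stand-in for the paper's ℋ_2-optimal feedback, and all matrices and constants are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of NTK-linearized training dynamics with label control.
# K: toy empirical NTK Gram matrix (a random positive-definite stand-in
# for a real network's kernel); e: training residual f(x) - y;
# u: fictitious label shift used as the control input.
rng = np.random.default_rng(0)
n = 5
G = rng.standard_normal((n, n))
K = G @ G.T / n + 0.1 * np.eye(n)    # symmetric, strictly positive definite

eta = 0.05                           # learning rate
A = np.eye(n) - eta * K              # open-loop residual dynamics: e' = A e
B = eta * K                          # label shifts enter through the kernel

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point (value) iteration on the
    Riccati equation; returns F such that u = -F e is optimal."""
    P = Q.copy()
    for _ in range(iters):
        F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ F)
    return F

F = dlqr_gain(A, B, Q=np.eye(n), R=1e-3 * np.eye(n))

e = rng.standard_normal(n)           # initial training residual
for _ in range(200):
    u = -F @ e                       # state feedback via fictitious labels
    e = A @ e + B @ u                # controlled descent step

print(np.linalg.norm(e))             # residual driven (near) to zero
```

The closed-loop matrix A - BF has a spectral radius well below that of A alone, so the controlled residual converges markedly faster than plain gradient descent; the paper replaces this LQR stand-in with an ℋ_2-optimal feedback and proves local convergence.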

Related research

- A Policy Gradient Framework for Stochastic Optimal Control Problems with Global Convergence Guarantee (02/11/2023)
- Convergence Rates for Multi-class Logistic Regression Near Minimum (12/08/2020)
- A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions (02/19/2021)
- Are training trajectories of deep single-spike and deep ReLU network equivalent? (06/14/2023)
- Why does CTC result in peaky behavior? (05/31/2021)
- When and why PINNs fail to train: A neural tangent kernel perspective (07/28/2020)
- Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time Algorithms and Adversarial Training (01/06/2022)
