
A Unified Framework for Training Neural Networks

05/23/2018
by Hadi Ghauch et al.

The lack of mathematical tractability of Deep Neural Networks (DNNs) has hindered progress towards a unified convergence analysis of training algorithms in the general setting. We propose a unified optimization framework for training different types of DNNs, and establish its convergence for arbitrary loss, activation, and regularization functions, assumed to be smooth. We show that the framework generalizes well-known first- and second-order training methods, and thus allows us to establish the convergence of these methods for various DNN architectures and learning tasks as special cases of our approach. We discuss some of its applications in training various DNN architectures (e.g., feed-forward, convolutional, and linear networks) for regression and classification tasks.
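The smooth setting described above (smooth loss, activation, and regularization) can be illustrated with a minimal first-order training loop. The sketch below is not the paper's unified algorithm; it only shows gradient descent on a one-hidden-layer network whose components (tanh activation, squared loss, L2 regularizer) all satisfy the smoothness assumption:

```python
import numpy as np

# Minimal sketch, assuming a smooth setting: tanh activation, squared
# loss, and L2 regularization. Plain gradient descent is one of the
# first-order methods the abstract says the framework generalizes.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                # inputs
y = np.sin(X.sum(axis=1, keepdims=True))    # smooth regression target

W1 = rng.normal(scale=0.5, size=(3, 8))     # hidden-layer weights
W2 = rng.normal(scale=0.5, size=(8, 1))     # output-layer weights
lam, lr = 1e-3, 0.1                         # regularization weight, step size

def loss(W1, W2):
    H = np.tanh(X @ W1)                     # smooth activation
    mse = np.mean((H @ W2 - y) ** 2)        # smooth loss
    reg = lam * (np.sum(W1**2) + np.sum(W2**2))  # smooth regularizer
    return mse + reg

losses = []
for _ in range(200):
    H = np.tanh(X @ W1)
    err = 2 * (H @ W2 - y) / len(y)         # d(loss)/d(prediction)
    gW2 = H.T @ err + 2 * lam * W2
    gW1 = X.T @ ((err @ W2.T) * (1 - H**2)) + 2 * lam * W1
    W1 -= lr * gW1
    W2 -= lr * gW2
    losses.append(loss(W1, W2))

print(losses[0], losses[-1])
```

Under the smoothness assumption, a sufficiently small step size guarantees that the objective decreases monotonically, which is the kind of property a unified convergence analysis makes precise.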

