Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection

07/26/2020
by   Elizabeth Newman, et al.

Deep neural networks (DNNs) have achieved state-of-the-art performance across a variety of traditional machine learning tasks, e.g., speech recognition, image classification, and segmentation. The ability of DNNs to efficiently approximate high-dimensional functions has also motivated their use in scientific applications, e.g., to solve partial differential equations (PDEs) and to generate surrogate models. In this paper, we consider the supervised training of DNNs, which arises in many of the above applications. We focus on the central problem of optimizing the weights of the given DNN such that it accurately approximates the relation between observed input and target data. Devising effective solvers for this optimization problem is notoriously challenging due to the large number of weights, non-convexity, data-sparsity, and non-trivial choice of hyperparameters. To solve the optimization problem more efficiently, we propose the use of variable projection (VarPro), a method originally designed for separable nonlinear least-squares problems. Our main contribution is the Gauss-Newton VarPro method (GNvpro) that extends the reach of the VarPro idea to non-quadratic objective functions, most notably, cross-entropy loss functions arising in classification. These extensions make GNvpro applicable to all training problems that involve a DNN whose last layer is an affine mapping, which is common in many state-of-the-art architectures. In numerical experiments from classification and surrogate modeling, GNvpro not only solves the optimization problem more efficiently but also yields DNNs that generalize better than commonly used optimization schemes.
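To make the separability that GNvpro exploits concrete, here is a minimal sketch of classical VarPro on a toy separable nonlinear least-squares problem, the setting the method was originally designed for. The exponential model, the names basis/theta, and the use of SciPy's general-purpose least-squares solver are illustrative assumptions, not the paper's GNvpro implementation; the point is the reduction itself: for fixed nonlinear weights theta, the optimal linear (last-layer) weights w(theta) = Phi(theta)^+ y solve a linear least-squares problem in closed form, so the outer solver only sees the projected residual in theta.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy data from y = 3*exp(-0.5*t) - 2*exp(-2.0*t) + noise: linear in the
# amplitudes (3, -2), nonlinear in the decay rates (0.5, 2.0).
t = np.linspace(0.0, 4.0, 100)
y = 3 * np.exp(-0.5 * t) - 2 * np.exp(-2.0 * t) + 0.01 * rng.standard_normal(t.size)

def basis(theta):
    # Columns play the role of the features produced by the nonlinear part
    # of the model (for a DNN: all layers before the affine last layer).
    return np.stack([np.exp(-theta[0] * t), np.exp(-theta[1] * t)], axis=1)

def projected_residual(theta):
    # VarPro elimination: solve the inner linear least-squares problem for
    # the optimal weights w(theta), then return the residual of the reduced
    # problem, which depends on theta alone.
    Phi = basis(theta)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ w - y

# Optimize only the nonlinear parameters with a Gauss-Newton-type solver.
res = least_squares(projected_residual, x0=np.array([0.3, 3.0]))
w_opt, *_ = np.linalg.lstsq(basis(res.x), y, rcond=None)
print("decay rates:", res.x, "amplitudes:", w_opt)

For a quadratic loss the inner solve is exactly the lstsq call above; the paper's contribution is to extend this elimination, within a Gauss-Newton framework, to non-quadratic objectives such as the cross-entropy loss, where the inner problem is no longer solvable by a single linear least-squares step.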


Related research

09/28/2021 · slimTrain – A Stochastic Approximation Method for Training Separable Deep Neural Networks
Deep neural networks (DNNs) have shown their success as high-dimensional...

05/16/2019 · An Information Theoretic Interpretation to Deep Neural Networks
It is commonly believed that the hidden layers of deep neural networks (...

09/14/2023 · Multi-Grade Deep Learning for Partial Differential Equations with Applications to the Burgers Equation
We develop in this paper a multi-grade deep learning method for solving ...

12/11/2020 · Deep Neural Networks Are Effective At Learning High-Dimensional Hilbert-Valued Functions From Limited Data
The accurate approximation of scalar-valued functions from sample points...

12/19/2018 · Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks
In recent years, deep neural networks (DNNs) have been applied to variou...

11/01/2019 · Review: Ordinary Differential Equations For Deep Learning
To better understand and improve the behavior of neural networks, a rece...

11/21/2021 · Differentiable Projection for Constrained Deep Learning
Deep neural networks (DNNs) have achieved extraordinary performance in s...
