Training neural networks using monotone variational inequality

02/17/2022
by Xiuyuan Cheng, et al.

Despite the vast empirical success of neural networks, theoretical understanding of their training procedures remains limited, especially in providing guarantees on test performance, due to the non-convex nature of the optimization problem. Inspired by the recent work of Juditsky and Nemirovski (2019), instead of the traditional loss-minimization approach we reduce the training of the network parameters to a problem with convex structure: solving a monotone variational inequality (MVI). The solution to the MVI can be found by computationally efficient procedures and, importantly, yields performance guarantees in the form of ℓ_2 and ℓ_∞ bounds on model recovery and prediction accuracy in the theoretical setting of training a one-layer linear neural network. In addition, we study the use of MVI for training multi-layer neural networks and propose a practical algorithm called stochastic variational inequality (SVI). We demonstrate its applicability in training fully-connected neural networks and graph neural networks (GNN); SVI is completely general and can be used to train other types of neural networks. SVI achieves competitive or better performance than stochastic gradient descent (SGD) on both synthetic and real network-data prediction tasks across various performance metrics.
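To make the contrast with loss minimization concrete, the sketch below illustrates the MVI idea in the simplest setting the abstract mentions: a one-layer model with a monotone link, trained by stochastic approximation of the monotone operator F(θ) = E[x (φ(xᵀθ) − y)]. This is a minimal illustration, not the paper's SVI algorithm; the sigmoid link, synthetic data, step-size schedule, and iteration count are assumptions made for the example only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic one-layer model (illustrative assumption): y = sigmoid(x @ theta_true) + noise.
rng = np.random.default_rng(0)
d, n = 20, 5000
theta_true = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = sigmoid(X @ theta_true) + 0.05 * rng.normal(size=n)

# Stochastic approximation for the MVI: step along -F(theta) estimated on one sample.
# The update is theta <- theta - eta * x_i * (sigmoid(x_i @ theta) - y_i).
# Note this is NOT SGD on the squared loss, which would carry an extra
# sigmoid'(x_i @ theta) factor; the MVI operator stays monotone because
# the link function is non-decreasing.
theta = np.zeros(d)
for k in range(20000):
    i = rng.integers(n)
    eta = 0.5 / np.sqrt(k + 1)                  # diminishing step size (assumed schedule)
    residual = sigmoid(X[i] @ theta) - y[i]
    theta -= eta * residual * X[i]

print("l2 recovery error:", np.linalg.norm(theta - theta_true))
```

In this toy setting the iterate approaches the root of the monotone operator, i.e. the true parameter, which is the kind of ℓ_2 recovery guarantee the abstract refers to; the multi-layer SVI algorithm in the paper generalizes this beyond the one-layer case.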


