A Mixed Integer Programming Approach to Training Dense Neural Networks

01/03/2022
by   Vrishabh Patil, et al.

Artificial Neural Networks (ANNs) are prevalent machine learning models that have been applied across various real-world classification tasks. ANNs require a large amount of data to achieve strong out-of-sample performance, and most algorithms for training ANN parameters are based on stochastic gradient descent (SGD). However, the SGD-trained ANNs that tend to perform best on prediction tasks are trained in an end-to-end manner, which requires a large number of model parameters and random initialization. As a result, training ANNs is very time consuming, and the resulting models require a lot of memory to deploy. In order to train more parsimonious ANN models, we propose alternative methods from the constrained optimization literature for ANN training and pretraining. In particular, we propose novel mixed integer programming (MIP) formulations for training fully-connected ANNs. Our formulations can account for both binary-activation and rectified linear unit (ReLU) activation ANNs, and for the use of a log-likelihood loss. We also develop a layer-wise greedy approach, a technique for reducing the number of layers in the ANN, for model pretraining using our MIP formulations. We then present numerical experiments comparing our MIP-based methods against existing SGD-based approaches, and we show that we obtain models with competitive out-of-sample performance that are significantly more parsimonious.
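To give a flavor of how network training can be posed as a MIP, the sketch below trains a single binary-activation neuron (a perceptron) by minimizing the number of misclassified samples with big-M constraints, using SciPy's MILP interface. This is an illustrative toy, not the paper's formulation: the dataset (the AND gate), the margin `eps`, the big-M constant `M`, and the variable bounds are all assumed values chosen for this example.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Tiny illustrative dataset: the AND gate, with labels in {-1, +1}.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., -1., -1., 1.])
n, d = X.shape

eps, M = 0.1, 10.0  # classification margin and big-M constant (assumed values)

# Decision variables, stacked as [w_1, w_2, b, z_1, ..., z_n],
# where z_i ∈ {0, 1} indicates that sample i may be misclassified.
c = np.concatenate([np.zeros(d + 1), np.ones(n)])  # objective: minimize errors

# Big-M constraints: y_i * (w . x_i + b) + M * z_i >= eps for every sample i.
# If z_i = 0 the sample must be correctly classified with margin eps;
# if z_i = 1 the constraint is relaxed (the sample is counted as an error).
A = np.hstack([y[:, None] * X, y[:, None], M * np.eye(n)])
cons = LinearConstraint(A, lb=eps, ub=np.inf)

integrality = np.concatenate([np.zeros(d + 1), np.ones(n)])  # z_i are binary
bounds = Bounds(np.concatenate([-2.0 * np.ones(d + 1), np.zeros(n)]),
                np.concatenate([2.0 * np.ones(d + 1), np.ones(n)]))

res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
w, b, z = res.x[:d], res.x[d], res.x[d + 1:]
print("misclassified:", int(round(z.sum())))
```

The paper's formulations extend well beyond this: multiple fully-connected layers introduce bilinear weight-activation products that must themselves be linearized, and the ReLU and log-likelihood variants require further modeling. The toy above only shows the basic big-M device that makes a discrete activation decision expressible as linear constraints.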


