Acceleration techniques for optimization over trained neural network ensembles

12/13/2021
by Keliang Wang, et al.

We study optimization problems in which the objective function is modeled by feedforward neural networks with rectified linear unit (ReLU) activation. Recent literature has explored the use of a single neural network to model either uncertain or complex elements within an objective function. However, it is well known that ensembles of neural networks produce more stable predictions and generalize better than single networks, which motivates using neural network ensembles in a decision-making pipeline. We study how to incorporate a neural network ensemble as the objective function of an optimization model and explore computational approaches for the resulting problem. We present a mixed-integer linear program based on existing popular big-M formulations for optimizing over a single neural network. We develop two acceleration techniques for our model: the first is a preprocessing procedure that tightens bounds on critical neurons in the neural network, and the second is a set of valid inequalities derived from Benders decomposition. Experimental evaluations of our solution methods are conducted on one global optimization problem and two real-world data sets; the results suggest that our optimization algorithm outperforms the adaptation of a state-of-the-art approach in terms of computational time and optimality gaps.
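The abstract mentions two ingredients that can be illustrated concretely: the standard big-M mixed-integer encoding of a ReLU neuron, and bound tightening via interval propagation (one common way to compute the neuron bounds the big-M constraints require; the paper's own preprocessing may differ). The sketch below is not the authors' code; the function names and the two-variable example are illustrative assumptions.

```python
def interval_bounds(W, b, lo, up):
    """Propagate box bounds [lo, up] on the inputs through one affine
    layer a = W x + b using interval arithmetic, yielding per-neuron
    pre-activation bounds [L_i, U_i]."""
    L, U = [], []
    for w_row, b_i in zip(W, b):
        l = b_i + sum(w * (lo[j] if w >= 0 else up[j]) for j, w in enumerate(w_row))
        u = b_i + sum(w * (up[j] if w >= 0 else lo[j]) for j, w in enumerate(w_row))
        L.append(l)
        U.append(u)
    return L, U

def bigm_constraints_hold(a, y, z, L, U, tol=1e-9):
    """Check the big-M inequalities for one ReLU neuron with
    pre-activation a in [L, U] (L < 0 < U), output y, and binary
    activation indicator z:
        y >= a,   y >= 0,   y <= a - L*(1 - z),   y <= U*z.
    Together these force y = max(0, a) at integer-feasible points."""
    return (y >= a - tol and y >= -tol
            and y <= a - L * (1 - z) + tol
            and y <= U * z + tol)

# The true ReLU point (y = max(0, a), z = 1 iff a > 0) is always feasible:
L, U = -3.0, 5.0
for a in (-2.0, 0.0, 1.5):
    y = max(0.0, a)
    z = 1 if a > 0 else 0
    assert bigm_constraints_hold(a, y, z, L, U)
```

Tighter values of `L` and `U` shrink the feasible region of the linear relaxation, which is why bound-tightening preprocessing of the kind the paper proposes can substantially speed up the branch-and-bound search.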


Related research

- Classification by Ensembles of Neural Networks (02/19/2012)
- Modeling Design and Control Problems Involving Neural Network Surrogates (11/20/2021)
- Scaling Up Exact Neural Network Compression by ReLU Stability (02/15/2021)
- Optimizing Objective Functions from Trained ReLU Neural Networks via Sampling (05/27/2022)
- An Integer Programming Approach to Deep Neural Networks with Binary Activation Functions (07/07/2020)
- Physics Informed Piecewise Linear Neural Networks for Process Optimization (02/02/2023)
- Fast Design Space Exploration of Nonlinear Systems: Part II (04/05/2021)
