Neural Networks in Evolutionary Dynamic Constrained Optimization: Computational Cost and Benefits

01/22/2020
by Maryam Hasani-Shoreh, et al.

Neural networks (NNs) have recently been applied together with evolutionary algorithms (EAs) to solve dynamic optimization problems. The NN estimates the position of the next optimum based on the best solutions found at previous time steps. After a change is detected, the predicted solution can be used to move the EA's population to a promising region of the solution space, accelerating convergence and improving accuracy in tracking the optimum. While previous works report improved results, they neglect the overhead created by the NN. In this work, we include the time spent training the NN in the overall optimization time and compare the results with a baseline EA. We explore whether, once this overhead is accounted for, the NN can still improve the results, and under which conditions it can do so. The main difficulties in training the NN are: 1) gathering enough samples to generalize predictions to new data, and 2) obtaining reliable samples. Since the NN collects data at each time step, a short time horizon leaves too few samples for training. To alleviate this, we propose collecting several individuals at each change to speed up sample collection under shorter time steps. In environments with a high frequency of changes, the solutions produced by the EA are likely to be far from the real optimum, and training the NN on such unreliable data will, in consequence, produce unreliable predictions. Moreover, as the time spent on the NN stays fixed regardless of the change frequency, a higher frequency of change means a higher NN overhead in proportion to the time available to the EA. Overall, after accounting for the generated overhead, we conclude that the NN is not suitable in environments with a high frequency of changes and/or short time horizons. However, it is promising for low frequencies of change, and especially for environments in which the changes follow a pattern.
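
To make the scheme concrete, below is a minimal Python/NumPy sketch of the loop the abstract describes. A linear least-squares predictor stands in for the NN, a drifting sphere function stands in for the dynamic benchmark, and every name and parameter (ea_step, fit_predictor, the 5-second budget, the population size) is an illustrative assumption rather than the paper's implementation. The key point it demonstrates is that the predictor is trained inside the timed loop, so its cost is charged against the same budget as the EA.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def sphere(x, shift):
    """Toy dynamic objective: a sphere whose optimum drifts each time step."""
    return np.sum((x - shift) ** 2)

def ea_step(pop, fitness_fn):
    """One (mu + lambda) generation: Gaussian mutation, truncation selection."""
    offspring = pop + rng.normal(scale=0.1, size=pop.shape)
    combined = np.vstack([pop, offspring])
    scores = np.array([fitness_fn(ind) for ind in combined])
    order = np.argsort(scores)           # ascending: minimization
    return combined[order[: len(pop)]]   # survivors, best first

def fit_predictor(history):
    """Least-squares linear map from one optimum to the next.

    Stands in for the paper's NN: each consecutive pair of recorded
    bests is one (previous optimum -> next optimum) training sample.
    """
    X = np.hstack([np.vstack(history[:-1]), np.ones((len(history) - 1, 1))])
    Y = np.vstack(history[1:])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda x: np.hstack([x, 1.0]) @ W

budget_s = 5.0          # wall-clock budget; EA *and* predictor training count
dim, pop_size, gens_per_change = 2, 20, 30
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
shift = np.zeros(dim)   # current optimum position (hidden from the optimizer)
history = []            # best solution recorded at each time step

start = time.perf_counter()
while time.perf_counter() - start < budget_s:
    fitness = lambda x: sphere(x, shift)
    for _ in range(gens_per_change):
        pop = ea_step(pop, fitness)
    history.append(pop[0].copy())  # the paper's variant harvests several
                                   # top individuals here to gather samples
                                   # faster when time steps are short
    if len(history) >= 4:
        # Training time is charged against the same budget: this is the
        # overhead the paper weighs against the prediction's benefit.
        predict = fit_predictor(history)
        pop[-1] = predict(history[-1])  # seed one individual at the prediction
    shift = shift + rng.normal(scale=0.5, size=dim)  # environment change
```

Because fit_predictor runs inside the timed budget, raising the change frequency increases the predictor's share of the total cost while also feeding it lower-quality samples, which is exactly the regime in which the paper finds the NN stops paying off.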
