Deep Optimisation: Transitioning the Scale of Evolutionary Search by Inducing and Searching in Deep Representations

05/16/2022
by Joshua Knowles, et al.

We investigate the optimisation capabilities of an algorithm inspired by the Evolutionary Transitions in Individuality. In these transitions, the natural evolutionary process is repeatedly rescaled through successive levels of biological organisation. Each transition creates new higher-level evolutionary units that combine multiple units from the level below. We call the algorithm Deep Optimisation (DO) to recognise both its use of deep learning methods and the multi-level rescaling of biological evolutionary processes. The evolutionary model used in DO is a simple hill-climber, but, as higher-level representations are learned, the hill-climbing process is repeatedly rescaled to operate in successively higher-level representations. The transition process is based on a deep learning neural network (NN), specifically a deep auto-encoder. Our experiments with DO start with a study of the NP-hard multiple knapsack problem (MKP). Comparing against state-of-the-art model-building optimisation algorithms (MBOAs), we show that DO finds better solutions to MKP instances and does so without using a problem-specific repair operator. A second, much more in-depth investigation uses a class of configurable problems to understand more precisely the distinct problem characteristics that DO can solve and other MBOAs cannot. Specifically, we observe a polynomial versus exponential scaling distinction, with DO being the only algorithm to show polynomial scaling on all problems. We also demonstrate that some problem characteristics require a deep network in DO. In sum, our findings suggest that the use of deep learning principles has significant untapped potential in combinatorial optimisation. Moreover, we argue that natural evolution could be implementing something like DO, and that the evolutionary transitions in individuality are the observable result.
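To make the core loop concrete, the following is a minimal, illustrative sketch of the idea described above: run a hill-climber in the original solution representation, train an auto-encoder on the locally optimal solutions it finds, and then continue hill-climbing by perturbing and decoding the learned latent representation. The toy fitness function, network sizes, and Gaussian latent-perturbation operator are assumptions made purely for illustration; they are not the paper's exact operators or experimental setup, and the sketch uses a single shallow auto-encoder rather than the deep, repeatedly rescaled stack that DO builds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fitness (an assumption for illustration): the bit string is split into
# small modules, and a module scores 1 only when all of its bits agree.
MODULE = 4
N_BITS = 32

def fitness(x):
    blocks = x.reshape(-1, MODULE)
    return float(np.sum(np.all(blocks == blocks[:, :1], axis=1)))

def hill_climb(x, steps=200):
    """Plain bit-flip hill climber in the original (low-level) representation."""
    x, f = x.copy(), fitness(x)
    for _ in range(steps):
        y = x.copy()
        y[rng.integers(N_BITS)] ^= 1
        fy = fitness(y)
        if fy >= f:
            x, f = y, fy
    return x, f

class AutoEncoder:
    """Single hidden-layer auto-encoder trained by gradient descent; a shallow
    stand-in for the deep auto-encoder used to induce higher-level representations."""
    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.lr = lr

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + np.exp(-z))

    def encode(self, X):
        return self._sig(X @ self.W1)

    def decode(self, H):
        return self._sig(H @ self.W2)

    def train(self, X, epochs=500):
        for _ in range(epochs):
            H = self.encode(X)
            Y = self.decode(H)
            dY = (Y - X) * Y * (1 - Y)          # MSE gradient through output sigmoid
            dH = (dY @ self.W2.T) * H * (1 - H)  # backpropagated to the hidden layer
            self.W2 -= self.lr * H.T @ dY / len(X)
            self.W1 -= self.lr * X.T @ dH / len(X)

def latent_hill_climb(ae, x, steps=200, sigma=0.5):
    """Hill climbing rescaled to the learned representation: perturb the latent
    code, decode, threshold back to bits, and accept non-worsening moves."""
    f = fitness(x)
    for _ in range(steps):
        h = ae.encode(x[None, :].astype(float))
        h_new = np.clip(h + rng.normal(0, sigma, h.shape), 0, 1)
        y = (ae.decode(h_new)[0] > 0.5).astype(int)
        fy = fitness(y)
        if fy >= f:
            x, f = y, fy
    return x, f

# Outer loop: low-level search, model induction, then rescaled search.
solutions = [hill_climb(rng.integers(0, 2, N_BITS))[0] for _ in range(64)]
ae = AutoEncoder(N_BITS, n_hidden=8)
ae.train(np.array(solutions, dtype=float))

best = max(solutions, key=fitness)
best, f = latent_hill_climb(ae, best)
print("fitness after rescaled search:", f, "/", N_BITS // MODULE)
```

In this sketch a single latent perturbation can rewrite a whole module at once, which is the point of rescaling the search: variation is applied in the learned higher-level representation rather than bit by bit. DO repeats this induce-and-rescale step across multiple network layers, which this one-layer example does not attempt to reproduce.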


