Machine learning for improving performance in an evolutionary algorithm for minimum path with uncertain costs given by massively simulated scenarios
In this work we introduce an implementation in which machine learning techniques helped improve the overall performance of an evolutionary algorithm for an optimization problem, namely a variation of the robust minimum-cost path problem in graphs. In this big data optimization problem, a path achieving a good cost in most scenarios from an available set of scenarios (generated by a simulation process) must be obtained. The most expensive task of our evolutionary algorithm, in terms of computational resources, is the evaluation of candidate paths: the fitness function must calculate the cost of the candidate path in every generated scenario. Given the large number of scenarios, this task must be implemented in a distributed environment. We implemented gradient boosting decision trees to classify candidate paths and identify good candidates; the cost of the not-so-good candidates is simply forecasted. We studied the training process, performance gains, accuracy, and other variables. Our computational experiments show that computational performance was significantly improved at the expense of a limited loss of accuracy.
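The filtering scheme the abstract describes can be sketched as follows. This is a minimal, hedged illustration using scikit-learn's `GradientBoostingClassifier` as the GBDT; the random edge-cost scenarios, the cheap proxy features, and the "good = below median cost" labeling rule are all assumptions made for the example, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's setup).
N_EDGES, N_SCENARIOS, N_PATHS, PATH_LEN = 50, 200, 300, 10
N_SAMPLE, N_TRAIN = 10, 100

# Each simulated scenario assigns a random cost to every edge.
edge_costs = rng.uniform(1.0, 10.0, size=(N_SCENARIOS, N_EDGES))

# A candidate path is modeled simply as a set of edge indices.
paths = np.array([rng.choice(N_EDGES, size=PATH_LEN, replace=False)
                  for _ in range(N_PATHS)])

def full_fitness(path):
    """Expensive evaluation: mean path cost over ALL scenarios."""
    return edge_costs[:, path].sum(axis=1).mean()

def cheap_features(path):
    """Cheap proxy features from a small scenario sample."""
    sample = edge_costs[:N_SAMPLE, path].sum(axis=1)
    return [sample.mean(), sample.std(), sample.min(), sample.max()]

X = np.array([cheap_features(p) for p in paths])

# Fully evaluate a training subset; label a path "good" (1) when its
# true cost is at or below the subset median (hypothetical rule).
train_idx = rng.choice(N_PATHS, size=N_TRAIN, replace=False)
train_cost = np.array([full_fitness(paths[i]) for i in train_idx])
y_train = (train_cost <= np.median(train_cost)).astype(int)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X[train_idx], y_train)

# Only candidates the GBDT classifies as promising receive the expensive
# full evaluation; the rest keep the cheap sample mean as a forecast.
pred_good = clf.predict(X).astype(bool)
fitness = np.where(pred_good, 0.0, X[:, 0])
for i in np.flatnonzero(pred_good):
    fitness[i] = full_fitness(paths[i])
```

The trade-off the abstract reports shows up directly here: full evaluations are performed only for the subset the classifier flags, while the forecast (the sample mean feature) introduces a bounded error for the remaining candidates.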