
Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks
Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the Tree of Parzen Estimators (TPE), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.
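The coarse-to-fine idea in the abstract can be sketched as a two-phase random search: run many cheap evaluations on a downscaled input first, then re-evaluate only the most promising configurations at the original resolution. The sketch below uses a synthetic objective function and made-up hyperparameter ranges as stand-ins for an actual CNN training run (the function name `validation_error` and all constants are assumptions, not the paper's setup):

```python
import random

def validation_error(lr, batch_size, resolution):
    """Hypothetical stand-in for the validation error of a CNN trained with
    the given hyperparameters on data at the given resolution. Lower-resolution
    evaluations see a noisier version of the same error landscape."""
    noise = abs(random.gauss(0, 1.0 / resolution))
    return abs(lr - 0.01) * 50 + abs(batch_size - 64) / 64 + noise

def random_config():
    # Log-uniform learning rate, categorical batch size (illustrative ranges).
    return 10 ** random.uniform(-4, -1), random.choice([16, 32, 64, 128])

random.seed(0)

# Phase 1: many cheap evaluations on a lower-dimensional (downscaled) input.
coarse = [(validation_error(lr, bs, resolution=8), lr, bs)
          for lr, bs in (random_config() for _ in range(50))]
coarse.sort()

# Phase 2: re-evaluate only the top candidates on the original,
# higher-dimensional input, i.e. use phase 1 to initialize the search.
finalists = [(validation_error(lr, bs, resolution=32), lr, bs)
             for _, lr, bs in coarse[:5]]
best_error, best_lr, best_bs = min(finalists)
print(best_lr, best_bs)
```

Because phase 1 only ranks configurations, the same scheme can wrap any of the optimizers the paper tests (random search, TPE, SMAC, GA) by seeding them with the coarse-phase results.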