An effective algorithm for hyperparameter optimization of neural networks
A major challenge in designing neural network (NN) systems is to determine the best structure and parameters for the network given the data for the machine learning problem at hand. Examples of parameters are the number of layers and nodes, the learning rates, and the dropout rates. Typically, these parameters are chosen based on heuristic rules and manually fine-tuned, which may be very time-consuming, because evaluating the performance of a single parametrization of the NN may require several hours. This paper addresses the problem of choosing appropriate parameters for the NN by formulating it as a box-constrained mathematical optimization problem, and applying a derivative-free optimization tool that automatically and effectively searches the parameter space. The optimization tool employs a radial basis function model of the objective function (the prediction accuracy of the NN) to accelerate the discovery of configurations yielding high accuracy. Candidate configurations explored by the algorithm are trained to a small number of epochs, and only the most promising candidates receive full training. The performance of the proposed methodology is assessed on benchmark sets and in the context of predicting drug-drug interactions, showing promising results. The optimization tool used in this paper is open-source.
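The core loop the abstract describes can be sketched in a few lines: fit a radial basis function surrogate to all configurations evaluated so far, screen many candidates cheaply on the surrogate, and spend a real (expensive) evaluation only on the most promising one. The sketch below is a minimal illustration of that idea, not the paper's actual open-source tool; the toy `objective` stands in for the validation error of a partially trained network, and the parameter names and bounds are assumptions for the example. The two-stage short/full training of candidates is not modeled here.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical stand-in for "validation error of a NN trained a few
# epochs" over a 2D box of hyperparameters [learning rate, dropout].
def objective(x):
    lr, dropout = x
    return (np.log10(lr) + 2.0) ** 2 + (dropout - 0.3) ** 2

rng = np.random.default_rng(0)
lb = np.array([1e-4, 0.0])  # box constraints (lower bounds)
ub = np.array([1e-1, 0.9])  # box constraints (upper bounds)

# Initial design: a handful of random configurations, evaluated for real.
X = lb + rng.random((6, 2)) * (ub - lb)
y = np.array([objective(x) for x in X])

for _ in range(20):
    # Fit an RBF surrogate of the objective on all evaluated points.
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    # Cheaply screen many random candidates on the surrogate ...
    cand = lb + rng.random((500, 2)) * (ub - lb)
    best = cand[np.argmin(surrogate(cand))]
    # ... and run the expensive true evaluation only on the best one.
    X = np.vstack([X, best])
    y = np.append(y, objective(best))

print("best config:", X[np.argmin(y)], "value:", y.min())
```

A production tool would replace the random candidate screen with a proper search over the surrogate and balance exploration against exploitation, but the savings come from the same source shown here: most candidate configurations are judged by the cheap model rather than by training a network.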