Metaheuristic Algorithms for Convolution Neural Network

10/06/2016 ∙ by L. M. Rasdi Rere, et al. ∙ University of Indonesia

Abstract

A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in science, engineering, and industry. However, the implementation strategy of metaheuristics for improving the accuracy of the convolution neural network (CNN), a well-known deep learning method, is still rarely investigated. Deep learning is a class of machine learning techniques whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristics, i.e. simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase the computation time, they also improve the accuracy (up to 7.14 percent).

Keywords— metaheuristic, convolution neural network, deep learning, simulated annealing, differential evolution, harmony search

Introduction

Deep learning (DL) is mainly motivated by research in artificial intelligence, whose general goal is to imitate the ability of the human brain to observe, analyze, learn, and make decisions, especially for complex problems [13]. The technique lies at the intersection of the research areas of signal processing, neural networks, graphical modeling, optimization, and pattern recognition. The current reputation of DL is largely due to drastic improvements in chip processing power, the significantly decreased cost of computing hardware, and advances in machine learning and signal processing research [10].

In general, DL models can be classified into discriminative, generative, and hybrid models [10]. Discriminative models include, for instance, CNN, deep neural networks, and recurrent neural networks. Examples of generative models are deep belief networks (DBN), restricted Boltzmann machines, regularized autoencoders, and deep Boltzmann machines. A hybrid model, on the other hand, refers to a deep architecture that combines a discriminative and a generative model; an example is using a DBN to pre-train a deep CNN, which can improve the performance of the deep CNN over random initialization. Among all of these DL techniques, this paper focuses on metaheuristic optimization for training a CNN.

Despite the sound track record of DL in solving a variety of learning tasks, training it is difficult [18][3][17]. Some successful methods for training DL are stochastic gradient descent, conjugate gradient, Hessian-free optimization, and Krylov subspace descent.

Stochastic gradient descent is easy to implement and fast for problems with many training samples. However, it requires considerable manual tuning to make its parameters optimal, and its process is inherently sequential, which makes it hard to parallelize on GPUs. Conjugate gradient (CG), on the other hand, is easier to check for convergence and more stable to train. Nevertheless, CG is slow, so it needs multicore CPUs and a large amount of RAM [9].

Hessian-free optimization (HFO) has been applied to train deep auto-encoders [12]; it is proficient in handling under-fitting and more efficient than the pre-training plus fine-tuning approach proposed by Hinton and Salakhutdinov [4]. Krylov subspace descent (KSD), in turn, is more robust and simpler than HFO and appears to work better in terms of classification performance and optimization speed. However, KSD needs more memory than HFO [15].

In fact, modern optimization techniques are heuristic or metaheuristic. These techniques have been applied to solve many optimization problems in science, engineering, and even industry [20]. However, research on metaheuristics for optimizing deep learning methods is rarely conducted. One example is the combination of a genetic algorithm (GA) and CNN proposed by You Zhining and Pu Yunming [22]. Their model selects CNN characteristics through the recombination and mutation processes of GA, in which a CNN model exists as an individual in the GA. Moreover, in the recombination process, only the weights and threshold values of layers C1 (convolution in the first layer) and C3 (convolution in the third layer) of the CNN model are changed.

In this paper, we compare the performance of three metaheuristic algorithms, i.e. simulated annealing (SA), differential evolution (DE), and harmony search (HS), for optimizing CNN. The strategy is to search for the best value of the fitness function on the last layer using a metaheuristic algorithm; the result is then used to recompute the weights and biases of the previous layers. To test the performance of the proposed methods, we use the MNIST dataset. This dataset consists of images of handwritten digits, containing 60,000 training samples and 10,000 test samples. All images have been centered and size-normalized to 28 x 28 pixels. Each pixel of an image is represented by 0 for black, 255 for white, and shades of gray in between [11].

This paper is organized as follows: Section 1 explains the metaheuristic algorithms used, Section 2 describes the convolution neural network, Section 3 describes the proposed methods, Section 4 presents the simulation results, and Section 5 concludes the paper.

1 Metaheuristic algorithms

Metaheuristics are well known as efficient methods for hard optimization problems, i.e. problems that cannot be solved optimally by deterministic approaches within a reasonable time limit. Metaheuristic methods serve three main purposes: solving problems faster, solving large problems, and obtaining more robust algorithms. They are also simple to design, as well as flexible and easy to implement [2].

In general, metaheuristic algorithms use combinations of rules and randomization to mimic natural phenomena. Metaheuristics imitating biological systems include, for instance, evolution strategies, GA, and DE. Those based on ethological phenomena include particle swarm optimization (PSO), bee colony optimization (BCO), bacterial foraging optimization algorithms (BFOA), and ant colony optimization (ACO). Those based on physical phenomena include SA, microcanonical annealing, and the threshold accepting method [1]. Another form of metaheuristic is inspired by music, such as the HS algorithm [8].

Metaheuristic algorithms can also be classified into single-solution-based and population-based methods. Examples of single-solution-based metaheuristics are the noising method, tabu search, SA, threshold accepting (TA), and guided local search. Population-based metaheuristics can in turn be classified into swarm intelligence and evolutionary computation. Swarm intelligence is generally inspired by the collective behavior of social insect colonies or animal societies; examples of these algorithms are PSO, BCO, ACO, and BFOA. Evolutionary computation, on the other hand, takes its inspiration from Darwinian principles of adaptation to the environment; examples include GP, GA, ES, and DE [1]. Among all of these metaheuristic algorithms, SA, DE, and HS are used in this paper.

1.1 Simulated Annealing algorithm

SA is a random-search technique for global optimization problems. It mimics the annealing process used in material processing [20]. The technique was first proposed in 1983 by Kirkpatrick, Gelatt, and Vecchi [5].

The principal idea of SA is a random search which not only accepts changes that improve the fitness function but also keeps some changes that are not ideal. For example, in a minimization problem, any change that decreases the fitness function value is accepted, but a change that increases it is also accepted with a transition probability $p$ given by

$$p = \exp\left(-\frac{\Delta E}{k_B T}\right) \qquad (1)$$

where $\Delta E$ is the change in energy level, $k_B$ is Boltzmann's constant, and $T$ is the temperature controlling the annealing process. This equation is based on the Boltzmann distribution in physics [20]. The standard SA procedure for optimization problems is as follows:

  1. Generate the solution vector:

    The initial solution vector is randomly selected, and then the fitness function is calculated.

  2. Initialize the temperature: If the temperature value is too high, it will take a long time to reach convergence, whereas a value that is too small may cause the system to miss the global optimum.

  3. Select a new solution: A new solution is randomly selected from the neighborhood of the current solution.

  4. Evaluate a new solution: A new solution is accepted as a new current solution depending on its fitness function.

  5. Decrease the temperature: During the search process, the temperature is periodically decreased.

  6. Stop or repeat: The computation is stopped when the termination criterion is satisfied. Otherwise, steps 2 to 6 are repeated.
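To make the procedure concrete, the following minimal Python sketch (not the authors' code; the neighborhood function, cooling factor, and the toy objective are illustrative assumptions) implements the acceptance rule of Eq. (1) for a generic minimization problem:

```python
import math
import random

def simulated_annealing(fitness, init_solution, neighbor,
                        T0=1.0, cooling=0.5, max_iter=100):
    """Generic SA loop: always accept better neighbors, and accept worse
    ones with probability exp(-dE / T) (Boltzmann-style acceptance)."""
    x = init_solution
    fx = fitness(x)
    T = T0
    for _ in range(max_iter):
        x_new = neighbor(x)               # step 3: pick a random neighbor
        f_new = fitness(x_new)
        dE = f_new - fx
        # step 4: accept improvements, sometimes accept worse moves
        if dE <= 0 or random.random() < math.exp(-dE / T):
            x, fx = x_new, f_new
        T *= cooling                      # step 5: decrease the temperature
    return x, fx

# usage: minimize f(x) = x^2 over the real line
best, best_f = simulated_annealing(
    fitness=lambda x: x * x,
    init_solution=random.uniform(-10, 10),
    neighbor=lambda x: x + random.gauss(0, 1.0),
)
print(best, best_f)
```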

1.2 Differential Evolution algorithm

Differential evolution was first proposed by Price and Storn in 1995 to solve the Chebyshev polynomial problem [1]. The algorithm is built on differences between individuals, exploits random search in the solution space, and finally applies mutation, crossover, and selection operations to obtain suitable individuals [14].

There are several variants of DE; the classical form is DE/rand/1/bin, which indicates that in the mutation process the target vector is randomly selected and only a single difference vector is used. The acronym "bin" indicates that the crossover process is organized by a binomial decision rule. The DE algorithm proceeds in the following steps:

  1. Determine the parameter settings: The population size is the number of individuals. The mutation factor (F) controls the magnification of the difference between two individuals so as to avoid search stagnation. The crossover rate (CR) decides how many consecutive genes of the mutated vector are copied to the offspring.

  2. Initialize the population: The population is produced by randomly generating vectors within the suitable search range.

  3. Evaluate the individuals: Each individual is evaluated by calculating its objective function.

  4. Mutation operation: Mutation adds a scaled difference of randomly selected vectors to a base vector. In this operation, three auxiliary parents are selected randomly to create a mutated individual as follows:

    $$v_{i}^{(G+1)} = x_{r_{1}}^{(G)} + F\left(x_{r_{2}}^{(G)} - x_{r_{3}}^{(G)}\right) \qquad (2)$$

    where $r_{1}, r_{2}, r_{3} \in \{1, 2, \dots, NP\}$ are mutually different random indices that also differ from the running index $i$.

  5. Recombination operation: Recombination (crossover) is applied after the mutation operation.

  6. Selection operation: This operation determines whether the offspring becomes a member of the population in the next generation.

  7. Stopping criterion: The current generation is replaced by the new generation until the termination criterion is satisfied.
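As an illustration of the DE/rand/1/bin scheme described above, here is a minimal Python sketch; the default F = 0.8 and CR = 0.3 mirror the values used later in the experiments, while the bounds and the sphere objective are illustrative assumptions:

```python
import random

def differential_evolution(objective, bounds, pop_size=10, F=0.8, CR=0.3,
                           max_gen=100):
    """Minimal DE/rand/1/bin: mutation with three random parents,
    binomial crossover, and greedy selection."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(max_gen):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            # mutation: v = x_r1 + F * (x_r2 - x_r3)
            v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            # binomial crossover with a guaranteed gene from the mutant
            j_rand = random.randrange(dim)
            u = [v[d] if (random.random() < CR or d == j_rand) else pop[i][d]
                 for d in range(dim)]
            # greedy selection: keep the better of parent and offspring
            fu = objective(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# usage: minimize the sphere function in 5 dimensions
x_best, f_best = differential_evolution(
    objective=lambda x: sum(v * v for v in x),
    bounds=[(-5.0, 5.0)] * 5,
)
print(x_best, f_best)
```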

1.3 Harmony Search algorithm

The harmony search algorithm was proposed by Geem et al. in 2001 [8]. It is inspired by the musical process of searching for a perfect state of harmony. The quality of harmony in music is analogous to the solution vector in optimization, and the musicians' improvisations are analogous to the local and global search schemes of optimization techniques.

When improvising music, the players sound pitches within the possible range, together creating one harmony vector. If the pitches create a good harmony, this experience is stored in each player's memory, and the chance of creating a better harmony next time is increased [8]. There are three possible choices when a musician improvises one pitch: playing any pitch from memory, playing a pitch adjacent to one in memory, or playing an entirely random pitch from the range of possible sounds. When these options are used for optimization, they correspond to three components: the use of harmony memory, pitch adjusting, and randomization. In the HS algorithm, these rules are controlled by two relevant parameters, i.e. the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR). The HS procedure can be summarized in the following five steps [8]:

  1. Initialize the problem and parameters: In this algorithm, the problem can be maximum or minimum optimization, and the relevant parameters are HMCR, PAR, size of harmony memory and termination criterion.

  2. Initialize the harmony memory: The harmony memory (HM) is usually initialized as a matrix of randomly created solution vectors, sorted by the value of the objective function.

  3. Improvise a new harmony: A new harmony vector is produced from HM based on HMCR, PAR, and randomization. Whether a new value is selected from HM is decided by the HMCR parameter, which lies between 0 and 1. The new harmony vector is then examined to decide whether it should be pitch-adjusted using the PAR parameter; pitch adjusting is performed only on values selected from HM.

  4. Update the harmony memory: The new harmony replaces the worst harmony in HM, in terms of the fitness function value, if its fitness is better than that of the worst harmony.

  5. Repeat (3) and (4) until the termination criterion is satisfied: When the termination criterion is met, the computation ends; otherwise, steps (3) and (4) are repeated. Finally, the best harmony vector in HM is selected and regarded as the best solution to the problem.
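A compact Python sketch of these five steps follows; HMCR = 0.8 and PAR = 0.3 mirror the values used later in the experiments, while the bandwidth bw, bounds, and objective are illustrative assumptions:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.8, par=0.3,
                   bw=0.1, max_iter=1000):
    """Minimal HS: build a harmony memory, improvise new harmonies with
    memory consideration (HMCR), pitch adjustment (PAR), and randomization,
    then replace the worst harmony when the new one is better."""
    dim = len(bounds)
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [objective(x) for x in hm]
    for _ in range(max_iter):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                value = random.choice(hm)[d]          # memory consideration
                if random.random() < par:
                    value += random.uniform(-bw, bw)  # pitch adjustment
            else:
                value = random.uniform(lo, hi)        # randomization
            new.append(min(max(value, lo), hi))
        f_new = objective(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if f_new < fit[worst]:                        # update harmony memory
            hm[worst], fit[worst] = new, f_new
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]

# usage: minimize the sphere function in 3 dimensions
print(harmony_search(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3))
```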

2 Convolution Neural Network

Convolution neural network is a variant of the standard multilayer perceptron (MLP). A substantial advantage of this method, especially for pattern recognition, compared with conventional approaches is its capability of reducing the dimension of the data, extracting features sequentially, and classifying within one network structure [21]. The basic architecture of CNN was inspired in 1962 by the visual cortex model proposed by Hubel and Wiesel.

In 1980, Fukushima's Neocognitron provided the first computational realization of this model, and then in 1989, following Fukushima's idea, LeCun et al. achieved state-of-the-art performance on a number of pattern recognition tasks using an error gradient method [7].

The classical CNN of LeCun et al. is an extension of the traditional MLP based on three ideas: local receptive fields, weight sharing, and spatial/temporal sub-sampling. These ideas can be organized into two types of layers: convolution layers and sub-sampling layers. As shown in Fig. 1, the processing layers comprise three convolution layers C1, C3, and C5, interleaved with two sub-sampling layers S2 and S4, and an output layer F6. These convolution and sub-sampling layers are organized into planes called feature maps.

In a convolution layer, each neuron is linked locally to a small input region (local receptive field) in the preceding layer. All neurons within the same feature map receive data from different input regions until the whole input plane is scanned, but they share the same set of weights (weight sharing).

Figure 1: Architecture of the CNN by LeCun et al. (LeNet-5)

In a sub-sampling layer, the feature maps are spatially down-sampled, i.e. the size of each map is reduced by a factor of 2. For example, the feature map of size 10x10 in layer C3 is sub-sampled to a corresponding feature map of size 5x5 in the subsequent layer S4. The last layer, F6, performs the classification [7].

Principally, a convolution layer is associated with a number of feature maps, a kernel size, and connections to the previous layer. Each feature map is the result of a sum of convolutions of the maps of the previous layer with their corresponding kernels (a linear filter), to which a bias term is added and a non-linear function applied. The $k$-th feature map $h^{k}$, with weights $W^{k}$ and bias $b_{k}$, is obtained as follows:

$$h^{k}_{ij} = \tanh\!\left(\left(W^{k} * x\right)_{ij} + b_{k}\right) \qquad (3)$$

The purpose of a sub-sampling layer is to achieve spatial invariance by reducing the resolution of the feature maps, where each pooled feature map corresponds to one feature map of the preceding layer. The sub-sampling function, where $a^{n \times n}_{i}$ are the inputs within an $n \times n$ block, $\beta$ is a trainable scalar, and $b$ is a trainable bias, is given by:

$$a_{j} = \tanh\!\left(\beta \sum_{N \times N} a^{n \times n}_{i} + b\right) \qquad (4)$$

After several convolution and sub-sampling layers, the last structure is the classification layer. This layer works as the input to a series of fully connected layers that execute the classification task. It has one output neuron per class label, so for the MNIST dataset this layer contains ten neurons corresponding to the ten classes.
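The following NumPy sketch illustrates Eqs. (3) and (4) for a single feature map; the kernel values, tanh non-linearity, and 2x2 pooling block follow the description above, but the code is an illustration rather than the LeNet-5 implementation:

```python
import numpy as np

def conv_feature_map(x, W_k, b_k):
    """Eq. (3) style feature map: valid 2-D convolution of the input x with
    kernel W_k, plus bias, passed through tanh."""
    kh, kw = W_k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * W_k) + b_k
    return np.tanh(out)

def subsample(a, beta, b, n=2):
    """Eq. (4) style sub-sampling: sum each n x n block, scale by the
    trainable beta, add the trainable bias b, and apply tanh."""
    h, w = a.shape[0] // n, a.shape[1] // n
    blocks = a[:h * n, :w * n].reshape(h, n, w, n).sum(axis=(1, 3))
    return np.tanh(beta * blocks + b)

# usage: one 5x5 kernel on a 28x28 input, then 2x2 sub-sampling
x = np.random.rand(28, 28)
fmap = conv_feature_map(x, W_k=np.random.randn(5, 5) * 0.1, b_k=0.0)  # 24x24
pooled = subsample(fmap, beta=1.0, b=0.0)                             # 12x12
print(fmap.shape, pooled.shape)
```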

3 Design of proposed methods

The architecture of the proposed method refers to a simple CNN structure (LeNet-5), not a complex structure such as AlexNet [6]. We use two design variations. The first is i-6c-2s-12c-2s, in which the number of feature maps in C1 is 6 and in C2 is 12. The second is i-8c-2s-16c-2s, in which the number of feature maps in C1 is 8 and in C2 is 16. The kernel size of all convolution layers is 5x5, and the sub-sampling scale is 2. These architectures are designed for recognizing handwritten digits from the MNIST dataset.

In the proposed methods, the SA, DE, and HS algorithms are used to train the CNN (CNNSA, CNNDE, and CNNHS) to find the condition of best accuracy and also to minimize the estimated error and the indicator of network complexity. This objective is realized by computing the loss function of the vector solution, i.e. the standard error on the training set. The loss function used in this paper is

$$L = \frac{1}{2N}\sum_{i=1}^{N} \left\| y_{i} - o_{i} \right\|^{2} \qquad (5)$$

where $y_{i}$ is the expected output, $o_{i}$ is the real output, and $N$ is the number of training samples. For the termination criterion, two conditions are used in these methods: the first is when the maximum number of iterations has been reached, and the second is when the loss function is less than a certain constant. Either condition is taken to mean that the most optimal state has been achieved.
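A small Python sketch of the loss and the two termination criteria is given below; the exact normalization of Eq. (5) and the threshold eps are assumptions, since the text does not state them:

```python
import numpy as np

def loss(y_expected, o_real):
    """Squared-error loss over N training samples, in the spirit of Eq. (5);
    the 1/(2N) normalization is an assumed choice."""
    y_expected = np.asarray(y_expected, dtype=float)
    o_real = np.asarray(o_real, dtype=float)
    n = y_expected.shape[0]
    return 0.5 * np.sum((y_expected - o_real) ** 2) / n

def terminated(loss_value, iteration, max_iter=10, eps=1e-3):
    """Both termination criteria used by the proposed methods: the maximum
    number of iterations is reached, or the loss falls below a constant."""
    return iteration >= max_iter or loss_value < eps

# usage with one-hot targets for 3 samples and 10 classes
y = np.eye(10)[[3, 1, 7]]
o = y + 0.05 * np.random.randn(3, 10)
print(loss(y, o), terminated(loss(y, o), iteration=2))
```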

3.1 Design of CNNSA method

Principally, the CNN algorithm computes the values of the weights and biases, and those on the last layer are used to calculate the loss function. The weight and bias values on the last layer are taken as the solution vector to be optimized by the SA algorithm, by adding a small random perturbation.

This random perturbation is an essential aspect of the proposed method: selecting a proper scale for it significantly increases the accuracy. For example, in CNNSA with one epoch, one choice of the perturbation gives an accuracy of 88.12, which is 5.73 higher than the original CNN (82.39), whereas another choice gives an accuracy of only 85.79, i.e. 3.40 higher than the original CNN.

The solution vector is then updated according to the SA algorithm. When the termination criterion is satisfied, all weights and biases are updated for all layers in the system. The CNNSA algorithm of the proposed method is given below.

Result: accuracy, time
initialization and set-up: i-6c-2s-12c-2s;
calculation process: weights (W), biases (b), loss function f(x);
solution vector (x'): W and b on the last layer;
while termination criterion is not satisfied do
       for number of x' do
             x' <- x + rand;
             if f(x') <= f(x) then
                    x <- x';
             else
                    x <- x' with a transition probability (p);
             end if
       end for
       decrease the temperature: T <- c * T;
       update W and b for all layers;
end while
Algorithm 1 CNNSA
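The outer loop of Algorithm 1 can be sketched in Python as follows; cnn_loss, the perturbation scale sigma, and the flattened last-layer vector x_last are hypothetical names standing in for the CNN forward pass and its last-layer parameters (they are not part of DeepLearnToolbox):

```python
import math
import random
import numpy as np

def cnnsa_train_step(cnn_loss, x_last, T0=1.0, c=0.5, neighborhood=10,
                     max_iter=10, sigma=0.01):
    """SA loop of Algorithm 1 over the last-layer parameter vector x_last
    (weights and biases flattened together). cnn_loss(x) is assumed to run
    the CNN forward pass with x plugged into the last layer and return the
    loss; both names are hypothetical stand-ins."""
    x, fx, T = x_last.copy(), cnn_loss(x_last), T0
    for _ in range(max_iter):
        for _ in range(neighborhood):
            x_new = x + sigma * np.random.randn(*x.shape)  # x' = x + rand
            f_new = cnn_loss(x_new)
            dE = f_new - fx
            if dE <= 0 or random.random() < math.exp(-dE / T):
                x, fx = x_new, f_new
        T *= c                                             # cool down
    # the caller scatters x back into W, b and updates the earlier layers
    return x

# usage with a stand-in quadratic "loss" over a 10-unit output layer
target = np.zeros(10)
print(cnnsa_train_step(lambda v: float(np.sum((v - target) ** 2)),
                       x_last=np.random.randn(10)))
```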

3.2 Design of CNNDE method

Initially, this method computes all the values of the weights and biases. The weight and bias values on the last layer are used to calculate the loss function, and, after adding a small random perturbation, these new values are used to initialize the individuals in the population.

Result: accuracy, time
initialization and set-up: i-6c-2s-12c-2s;
calculation process: weights (W), biases (b), loss function f(x);
individuals in the population: W and b on the last layer;
while termination criterion is not satisfied do
       for each individual x_i in the population do
             select auxiliary parents x_r1, x_r2, x_r3;
             create offspring x_i' using mutation and recombination;
             x_i <- Best(x_i', x_i);
       end for
       M <- M + 1;
       update W and b for all layers;
end while
Algorithm 2 CNNDE

Similar to the CNNSA method, selecting a proper random perturbation significantly increases the accuracy. In the case of one epoch in CNNDE, for example, one choice of the perturbation gives an accuracy of 86.30, which is 3.91 higher than the original CNN (82.39), whereas another choice gives an accuracy of only 85.51.

The individuals in the population are then updated according to the DE algorithm. When the termination criterion is satisfied, all weights and biases are updated for all layers in the system. The CNNDE procedure is summarized in Algorithm 2 above.
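A possible way to seed the population (or, analogously, the harmony memory of CNNHS) from the CNN's last-layer parameters is sketched below; the perturbation scale sigma and the helper name are assumptions for illustration:

```python
import numpy as np

def init_population_from_last_layer(w_last, b_last, pop_size=10, sigma=0.01):
    """Seed a DE population (or an HS harmony memory) around the CNN's
    current last-layer weights and biases by adding small random
    perturbations; sigma is an assumed perturbation scale."""
    base = np.concatenate([w_last.ravel(), b_last.ravel()])
    return [base + sigma * np.random.randn(base.size) for _ in range(pop_size)]

# usage: 10 individuals around a 12x10 weight matrix and 10 biases
pop = init_population_from_last_layer(np.random.randn(12, 10), np.zeros(10))
print(len(pop), pop[0].shape)
```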

3.3 Design of CNNHS method

As with CNNSA and CNNDE, this method first computes all the values of the weights and biases. The weight and bias values on the last layer are used to calculate the loss function, and, after adding a small random perturbation, these new values are used to initialize the harmony memory.

In this method, the random perturbation is also an important aspect, since selecting a proper value for it significantly increases the accuracy. For example, in one epoch of CNNHS (i-8c-2s-16c-2s), one choice of the perturbation gives an accuracy of 87.23, which is 7.14 higher than the original CNN (80.09), whereas another choice gives an accuracy of only 80.23, i.e. just 0.14 higher than the CNN.

The harmony memory is then updated according to the HS algorithm. When the termination criterion is satisfied, all weights and biases are updated for all layers in the system. The CNNHS algorithm of the proposed method is given below.

Result: accuracy, time
initialization and set-up: i-6c-2s-12c-2s;
calculation process: weights (W), biases (b), loss function f(x);
harmony memory (HM): W and b on the last layer;
while termination criterion is not satisfied do
       for number of search do
             if rand < HMCR then
                    x' <- x from HM;
             else
                    if rand < PAR then
                           x' <- x + rand;
                    else
                           x' <- random value;
                    end if
             end if
       end for
       update HM, W and b for all layers;
end while
Algorithm 3 CNNHS

4 Simulation and results

In this paper, the primary goal is to improve the accuracy of the original CNN by using the SA, DE, and HS algorithms. This is done by minimizing the classification error on the MNIST dataset. Some example images from the MNIST dataset are shown in Fig. 2.

Figure 2: Examples of some images from the MNIST dataset

In the CNNSA experiment, the neighborhood size was set to 10 and the maximum number of iterations (maxit) to 10. In CNNDE, the population size was 10 and maxit was 10. In CNNHS, the harmony memory size was 10 and maxit was 10. Since it is difficult to ensure optimal control parameters, in all experiments we set c = 0.5 for SA, F = 0.8 and CR = 0.3 for DE, as well as HMCR = 0.8 and PAR = 0.3 for HS. We also set the CNN parameters, i.e. the learning rate and the batch size (100).

As for the epoch parameter, the number of epochs was varied from 1 to 10 in every experiment. All experiments were implemented in MATLAB R2011a, on a personal computer with an Intel Core i7-4500U processor and 8 GB of RAM, running Windows 10, with five separate runs per setting. The original program for this simulation is the DeepLearnToolbox from Palm [16].
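For reference, the experimental settings reported above can be collected into a single configuration structure; this is a hypothetical consolidation for readability, and the learning rate is left unset because its value is not given in the text:

```python
# Hypothetical consolidation of the experimental settings reported above.
EXPERIMENT_CONFIG = {
    "cnn_designs": ["i-6c-2s-12c-2s", "i-8c-2s-16c-2s"],
    "kernel_size": (5, 5),
    "subsampling_scale": 2,
    "batch_size": 100,
    "learning_rate": None,          # elided in the source text
    "epochs": range(1, 11),
    "runs_per_setting": 5,
    "CNNSA": {"neighborhood_size": 10, "max_iter": 10, "c": 0.5},
    "CNNDE": {"population_size": 10, "max_iter": 10, "F": 0.8, "CR": 0.3},
    "CNNHS": {"harmony_memory_size": 10, "max_iter": 10, "HMCR": 0.8, "PAR": 0.3},
}
```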

Epoch CNN CNNSA CNNDE CNNHS
Acc. Std.Dev. Acc. Std.Dev. Acc. Std.Dev. Acc. Std.Dev.
1 82.39 n/a 88.12 0.39 86.30 0.33 87.23 0.95
2 89.06 n/a 92.77 0.43 91.33 0.19 91.20 0.33
3 91.13 n/a 94.61 0.31 93.45 0.28 93.24 0.40
4 92.33 n/a 95.57 0.16 94.63 0.44 93.77 0.12
5 93.11 n/a 96.29 0.14 95.15 0.15 94.89 0.33
6 93.67 n/a 96.61 0.18 95.67 0.20 95.17 0.43
7 94.25 n/a 96.72 0.12 96.28 0.20 95.65 0.20
8 94.77 n/a 96.99 0.11 96.59 0.11 96.08 0.24
9 95.37 n/a 97.11 0.06 96.68 0.17 96.16 0.11
10 95.45 n/a 97.37 0.14 96.86 0.10 96.98 0.06
Table 1: Accuracy (Acc.) and its standard deviation (Std.Dev.) for design i-6c-2s-12c-2s

Epoch CNN CNNSA CNNDE CNNHS
Time Std.Dev. Time Std.Dev. Time Std.Dev. Time Std.Dev.
1 93.21 n/a 117.48 1.12 138.58 0.90 160.92 0.85
2 225.05 n/a 243.43 9.90 278.08 1.66 370.59 5.87
3 318.84 n/a 356.49 1.96 414.64 2.43 414.13 0.63
4 379.44 n/a 479.83 1.95 551.39 2.28 554.51 0.73
5 479.04 n/a 596.35 4.08 533.21 1.42 692.90 2.90
6 576.38 n/a 721.48 1.48 640.30 6.19 829.56 1.95
7 676.57 n/a 839.55 1.19 744.89 3.98 968.18 1.97
8 768.24 n/a 960.69 1.74 852.74 4.48 1105.2 1.39
9 855.85 n/a 1082.18 2.54 957.89 5.78 1245.54 4.96
10 954.54 n/a 1202.52 2.08 1373.1 1.51 1623.13 4.36
Table 2: Computation time and its Std.Dev. for design i-6c-2s-12c-2s
Figure 3: Error and its Std.Dev. (i-6c-2s-12c-2s)
Figure 4: Computation time and its Std.Dev. (i-6c-2s-12c-2s)
Epoch CNN CNNSA CNNDE CNNHS
Acc. Std.Dev. Acc. Std.Dev. Acc. Std.Dev. Acc. Std.Dev.
1 80.09 n/a 86.36 0.76 84.78 1.24 87.23 0.57
2 89.04 n/a 91.18 0.25 91.63 0.30 92.15 0.55
3 90.98 n/a 93.56 0.20 93.67 0.17 93.69 0.31
4 92.27 n/a 94.69 0.16 94.86 0.43 94.63 0.20
5 93.17 n/a 95.51 0.12 95.57 0.04 95.30 0.16
6 93.79 n/a 96.23 0.08 96.20 0.14 95.80 0.25
7 94.74 n/a 96.52 0.08 96.52 0.32 95.71 0.24
8 95.22 n/a 96.95 0.07 96.68 0.19 96.40 0.13
9 95.54 n/a 97.18 0.08 97.10 0.00 96.84 0.27
10 96.05 n/a 97.35 0.02 97.32 0.04 96.77 0.04
Table 3: Accuracy and its Std.Dev. for design i-8c-2s-16c-2s
Figure 5: Error and its Std.Dev. (i-8c-2s-16c-2s)
Figure 6: Computation time and its Std.Dev. (i-8c-2s-16c-2s)
Epoch CNN CNNSA CNNDE CNNHS
Time Std.Dev. Time Std.Dev. Time Std.Dev. Time Std.Dev.
1 145.02 n/a 175.08 0.64 289.54 0.78 196.10 1.35
2 323.62 n/a 353.55 2.69 586.96 12.56 395.43 0.60
3 520.16 n/a 614.71 12.10 868.82 4.02 597.391 1.83
4 692.80 n/a 718.53 31.05 1185.49 34.95 794.43 1.70
5 729.05 n/a 885.64 12.53 1451.64 4.99 1023.72 12.51
6 879.17 n/a 1051.30 1.30 1045.26 40.62 1255.93 32.54
7 1308.21 n/a 1271.03 25.55 1554.67 7.86 1627.30 64.56
8 1455.06 n/a 1533.30 8.55 15.39 106.75 1773.92 2251
9 1392.62 n/a 1726.50 32.31 1573.52 18.42 2123.32 95.76
10 1511.74 n/a 2054.40 35.85 2619.62 37.37 2354.90 87.68
Table 4: Computation time and its Std.Dev. for design i-8c-2s-16c-2s
Figure 7: Error vs computation time for 100 epochs

All experimental results of the proposed methods are compared with the results of the original CNN. The results for the design i-6c-2s-12c-2s are summarized in Table 1 for accuracy and Table 2 for computation time, as well as in Fig. 3 for the error and its standard deviation and Fig. 4 for the computation time and its standard deviation. The results for the design i-8c-2s-16c-2s are summarized in Table 3 for accuracy and Table 4 for computation time, as well as in Fig. 5 for the error and its standard deviation and Fig. 6 for the computation time and its standard deviation.

The original CNN experiments were conducted only once for each epoch setting, because their accuracy does not change if the experiment is repeated under the same conditions. In general, the tests show that the higher the epoch value, the better the accuracy. For example, at one epoch, compared to CNN (82.39), the accuracy increases by 5.73 for CNNSA (88.12), by 3.91 for CNNDE (86.30), and by 4.84 for CNNHS (87.23). At 5 epochs, compared to CNN (93.11), the increase in accuracy is 3.18 for CNNSA (96.29), 2.04 for CNNDE (95.15), and 1.78 for CNNHS (94.89). In the case of 100 epochs, as shown in Fig. 7, the increase in accuracy compared to CNN (98.65) is only 0.16 for CNNSA (98.81), 0.13 for CNNDE (98.78), and 0.09 for CNNHS (98.74).

The experimental results show that CNNSA gives the best accuracy for all epochs. The accuracy improvement of CNNSA compared with the original CNN varies per epoch, with values ranging from 1.74 (9 epochs) up to 5.73 (1 epoch). The computation time of the proposed methods, compared with the original CNN, ranges from 1.01 times (CNNSA, two epochs: 246/244) up to 1.70 times (CNNHS, nine epochs: 1246/856).

In addition, we also tested our proposed method on the CIFAR-10 (Canadian Institute For Advanced Research) dataset. This dataset consists of 60,000 color images of size 32x32. There are five training batches, comprising 50,000 images in total, and one batch of 10,000 test images. The CIFAR-10 dataset is divided into ten classes, each with 6,000 images. Some example images from this dataset are shown in Fig. 8.

Figure 8: Examples of some images from the CIFAR-10 dataset

The CIFAR-10 experiments were conducted in MATLAB R2014a, with the number of epochs varied from 1 to 15. The original program is MatConvNet from [19]; in this paper the program was modified to include the SA algorithm. The results can be seen in Fig. 9 for the objective, Fig. 10 for the top-1 error, and Fig. 11 for the top-5 error. In general, these results show that CNNSA works better than the original CNN on the CIFAR-10 dataset.

Epoch CNN CNNSA
Objective Top-1 error Top-5 error Objective Top-1 error Top-5 error
1 1.4835700 0.10676 0.52868 0.009493 0.00092 0.00168
2 1.0443820 0.03664 0.36148 0.013218 0.00094 0.00188
3 0.9158232 0.02686 0.31518 0.010585 0.00094 0.0017
4 0.8279042 0.02176 0.28358 0.008023 0.00096 0.00188
5 0.7749367 0.01966 0.26404 0.009285 0.00106 0.00186
6 0.7314783 0.01750 0.25076 0.013674 0.00102 0.00175
7 0.6968027 0.01566 0.23968 0.117740 0.0009 0.00168
8 0.6654411 0.01398 0.22774 0.011239 0.0011 0.0018
9 0.6440073 0.01320 0.21978 0.011338 0.00106 0.0018
10 0.6213060 0.01312 0.20990 0.009957 0.00116 0.0019
11 0.6024042 0.01184 0.20716 0.008434 0.00096 0.00176
12 0.5786811 0.01090 0.19954 0.009425 0.0011 0.00192
13 0.5684009 0.01068 0.19548 0.012485 0.00082 0.0018
14 0.5486258 0.00994 0.18914 0.012108 0.00098 0.00184
15 0.5347288 0.00986 0.18446 0.009675 0.0011 0.00186
Table 5: Comparison of CNN and CNNSA on the training set
Epoch CNN CNNSA
Objective Top-1 error Top-5 error Objective Top-1 error Top-5 error
1 1.148227 0.0466 0.3959 0.034091 0.0039 0.0087
2 0.985902 0.0300 0.3422 0.061806 0.0044 0.0091
3 0.873938 0.0255 0.2997 0.054007 0.0050 0.0091
4 0.908667 0.0273 0.3053 0.054711 0.0051 0.0091
5 0.799778 0.0226 0.2669 0.043632 0.0044 0.0091
6 0.772151 0.0209 0.2614 0.071143 0.0057 0.0091
7 0.784206 0.0210 0.2593 0.065040 0.0050 0.0095
8 0.732094 0.0170 0.2474 0.048466 0.0061 0.0095
9 0.761574 0.0217 0.2532 0.056708 0.0056 0.0091
10 0.763323 0.0207 0.2515 0.044423 0.0048 0.0086
11 0.720129 0.0165 0.2352 0.047963 0.0041 0.0087
12 0.700847 0.0167 0.2338 0.063033 0.0055 0.0087
13 0.729708 0.0194 0.2389 0.068989 0.0052 0.0096
14 0.747789 0.0192 0.2431 0.056425 0.0049 0.0091
15 0.723088 0.0182 0.2355 0.052753 0.0052 0.0095
Table 6: Comparison of CNN and CNNSA on the validation set
Figure 9: CNN vs CNNSA for Objective
Figure 10: CNN vs CNNSA for Top-1 error
Figure 11: CNN vs CNNSA for Top-5 error

5 Conclusion

This paper shows that the SA, DE, and HS algorithms improve the accuracy of the CNN. Although they increase the computation time, the error of the proposed methods is smaller than that of the original CNN for all epoch variations.

It would be possible to validate the performance of the proposed methods on other benchmark datasets such as ORL, INRIA, Hollywood II, and ImageNet. The strategy can also be developed for other metaheuristic algorithms, such as ACO, PSO, and BCO, to optimize CNN.

In future work, metaheuristic algorithms applied to other DL methods, such as recurrent neural networks, deep belief networks, and AlexNet (a newer variant of CNN), need to be explored.

References

  •  1. I. Boussaid, J. Lepagnot, and P. Siarry. A survey on optimization metaheuristics. Information Sciences, 237:82–117, 2013.
  •  2. E.-G. Talbi. Metaheuristics: From Design to Implementation. John Wiley & Sons, Hoboken, New Jersey, 2009.
  •  3. P. O. Glauner. Comparison of training methods for deep neural networks. Master's thesis, 2015.
  •  4. G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.
  •  5. S. Kirkpatrick, C. Gelatt, and M. Vecchi. Optimization by simulated annealing. Science, New Series, 220(4598):671–680, 1983.
  •  6. A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems 25, Lake Tahoe, Nevada, 2012.
  •  7. Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Proc. IEEE International Symposium on Circuits and Systems, pages 253–256, 2010.
  •  8. K. S. Lee and Z. W. Geem. A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput. Methods Appl. Mech. Engrg., 194:3902–3933, 2005.
  •  9. Q. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Ng. On optimization methods for deep learning. In Proc. the 28th International Conference on Machine Learning, Bellevue, WA, USA, 2011.
  •  10. L. Deng and D. Yu. Deep Learning: Methods and Applications. Foundations and Trends in Signal Processing, Redmond, WA, USA, 2013.
  •  11. LISA lab. Deep Learning Tutorial, Release 0.1. University of Montreal, Canada, 2014.
  •  12. J. Martens. Deep learning via Hessian-free optimization. In Proc. the 27th International Conference on Machine Learning, Haifa, Israel, 2010.
  •  13. M. M. Najafabadi et al. Deep learning applications and challenges in big data analytics. Journal of Big Data, pages 1–21, 2015.
  •  14. N. Noman, D. Bollegala, and H. Iba. An adaptive differential evolution algorithm. In Proc. IEEE Congress on Evolutionary Computation, pages 2229–2236, 2011.
  •  15. O. Vinyals and D. Povey. Krylov subspace descent for deep learning. In Proc. the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), La Palma, Canary Islands, Spain, 2012.
  •  16. R. Palm. Prediction as a candidate for learning deep hierarchical models of data. Master's thesis, 2012.
  •  17. L. M. R. Rere, M. I. Fanany, and A. M. Arymurthy. Simulated annealing algorithm for deep learning. Procedia Computer Science, 72:137–144, 2015.
  •  18. J. L. Sweeney. Deep learning using genetic algorithms. Master's thesis, 2012.
  •  19. A. Vedaldi and K. Lenc. MatConvNet: convolutional neural networks for MATLAB.
  •  20. X.-S. Yang. Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley & Sons, Hoboken, New Jersey, 2010.
  •  21. Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2(1), 2009.
  •  22. Y. Zhining and P. Yunming. The genetic convolutional neural network model based on random sample. International Journal of u- and e-Service, Science and Technology, 8(11):317–326, 2015.