
Genetic Programming and Gradient Descent: A Memetic Approach to Binary Image Classification

09/28/2019
by Benjamin Patrick Evans, et al.

Image classification is an essential task in computer vision, which aims to categorise a set of images into different groups based on some visual criteria. Existing methods, such as convolutional neural networks, have been successfully utilised to perform image classification. However, such methods often require human intervention to design a model. Furthermore, such models are difficult to interpret and it is challenging to analyse the patterns of different classes. This paper presents a hybrid (memetic) approach combining genetic programming (GP) and Gradient-based optimisation for image classification to overcome the limitations mentioned. The performance of the proposed method is compared to a baseline version (without local search) on four binary classification image datasets to provide an insight into the usefulness of local search mechanisms for enhancing the performance of GP.



Code repository: ConvGP (Convolutional Genetic Programming method for image classification).

1 Introduction

Image classification is an important area of computer vision, with a wide range of applications. Existing approaches to image classification tend to suffer from at least one of the following three drawbacks: low accuracy, low interpretability, or a requirement for human input (for example, a manually defined architecture).

Convolutional neural networks (CNNs) are the current state-of-the-art method for image classification. They were first introduced in 1998 [lecun1998gradient], but saw a resurgence in popularity after the success of AlexNet [krizhevsky2012imagenet] at the 2012 ImageNet Large Scale Visual Recognition Competition [ILSVRC15]. Since the release of AlexNet, every winner of the ImageNet challenge has used a CNN of some sort [ILSVRC15]: the 2013 winner was Clarifai; 2014, VGGNet [simonyan2014very]; 2015, ResNet [he2016deep]; 2016, CUImage; and 2017, BDAT. From 2015, CNNs began outperforming human professionals. The key idea behind CNNs is the shared-weight architecture, where filter coefficients are learnt through backpropagation and convolved across an image. Other key developments which made this possible are pooling layers for reducing dimensionality (although these remain controversial [hinton2011transforming]), and new activation functions, with rectified linear units (ReLU) being the most common, used to avoid the vanishing gradient problem as networks become deeper. The benefits of CNNs are clear from their high-performing results.

There are, however, some limitations with CNNs. The main limitations are that architectures are often manually crafted, and models typically have low interpretability. Various methods have been proposed to overcome these limitations, such as evolving CNN architectures [desell2017large, suganuma2017genetic]; however, this carries a huge computational cost. Likewise, steps have been taken to improve the interpretability of CNNs, such as DeconvNets [zeiler2014visualizing] that allow visualisation of intermediary layers, and saliency maps [simonyan2013deep] that visualise the gradient of an output class with respect to an input image. However, the fully connected layers (and thus the feature relations) still remain relatively uninterpretable.

Genetic programming (GP) is another method which can be used for image classification and can help overcome some of the aforementioned limitations. GP is an evolutionary computation (EC) technique that mimics the principles of natural selection and survival of the fittest to automatically search the solution space for a user-defined problem. One key benefit here is the architecture of solutions is determined automatically, removing the need for human intervention.

In our previous work [previous], we proposed a novel method for binary image classification which utilised principles from both GP and CNNs. We found the results to be competitive with general classification algorithms; however, there was still room for improvement, as the method did not outperform CNNs. In this work, we look at the effects of incorporating a local search mechanism into the evolutionary process, similar to the updating scheme used in CNNs (backpropagation with gradient descent), as a method of fine-tuning individuals.

Goals

Specifically, the goal is to assess whether local search can improve the quality of solutions found with evolution alone. The objectives are to:

  • Integrate local search (as gradient descent) into the evolutionary process as a fine-tuning operation

  • Investigate whether local search combined with evolution can help to improve the solutions over utilising evolution alone

  • Analyse a well-performing solution to gain an understanding of its performance

2 Background

The general area of combining EC methods with local search is referred to as memetic algorithms, and the idea of combining local search mechanisms specifically with GP has been looked at in [trujillolocal].

While EC methods such as GP are known as gradient-free optimisation methods, gradient information, when available, can be incorporated as a supplement to the evolutionary process. This has been looked at in [zhang2004genetic, smart2004applying], where gradient descent is used as a local search mechanism in GP for classification. However, the function set is limited to the four basic arithmetic operators (+, −, ×, and ÷) and the method is not tailored for image classification. It was found that GP with local search outperformed standard GP on all datasets trialled.

A different strategy is tested in [emigdio2015local]. While [zhang2004genetic] evolved numeric terminals with gradient descent, [emigdio2015local] assigns a weighted coefficient to each node in a tree. Hence, rather than the output of a node n being o(n), the output is now w·o(n), where the weights w are periodically optimised using the Trust Region algorithm. Again, it was shown that in most cases GP with local search significantly outperformed standard GP. It was also noted that the local search did not decrease the interpretability of the evolved models and, in fact, resulted in smaller trees overall than standard GP.

The process of local search within GP is referred to as “lifetime learning” in [azad2014simple], analogous to the continual development an individual undergoes in nature. The method (named Chameleon) works on the functions/nodes in the tree. However, rather than associating a weight with each node as in [emigdio2015local], Chameleon searches for replacement nodes from the function set (nodes with matching input and output types and arity) for each node in the tree (starting at the root node). This works similarly to an “exhaustive mutation” for a node, where all the allowable variations are trialled. Again, the proposed method (with local search) was found to outperform standard GP on the datasets trialled.

The focus of this work is to incorporate gradient descent as a supplement to the evolutionary process in our work from [previous] and to analyse how this local search affects the resulting individuals.

3 New Method

3.1 Program Structure

While the structure remains the same as proposed in [previous], it is briefly outlined here for completeness. There are three tiers in the program structure. Tier 1 (the bottom of the tree) contains the raw image as input and optional convolution and pooling operations. Tier 2 is the aggregation tier; this works over a region of the image and applies a statistical measure, such as the minimum, maximum, mean or standard deviation, to this region. Tier 3 is a classification tier, which consists of the basic arithmetic operators +, −, × and protected ÷ (returns 0 if the denominator is 0). The only compulsory tier is the aggregation tier, as this converts an image to a numeric output, whereas tiers 1 and 3 are flexible and can be of varying height, restricted only by the overall tree size. An example tree highlighting the structure is given in Figure 1.
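To illustrate how such a tree evaluates, here is a minimal sketch of our own (function names and window coordinates are ours, not taken from the ConvGP implementation) of an aggregation node feeding a classification-tier protected division:

```python
def protected_div(a, b):
    """Classification-tier division: returns 0 if the denominator is 0."""
    return 0.0 if b == 0 else a / b

def aggregate_min(image, top, left, height, width):
    """Aggregation tier: applies a statistical measure (here, the minimum)
    over a rectangular window, converting an image into a single number."""
    return min(min(row[left:left + width]) for row in image[top:top + height])

# A tiny tree: one classification node over two aggregation nodes.
image = [[0.2, 0.8, 0.4],
         [0.6, 0.1, 0.9],
         [0.3, 0.5, 0.7]]
out = protected_div(aggregate_min(image, 0, 0, 2, 2),
                    aggregate_min(image, 1, 1, 2, 2) + 1.0)
```

Only the aggregation node is compulsory: it is what bridges the image-valued lower tier and the number-valued upper tier.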

Figure 1: Example tree showing the proposed tiered architecture.

Table 1 summarises the content of the function set, where the input, output and a brief description are provided for each function. The terminals are the raw input image, size and position information for the aggregation window (both specified as a percentage of the image width and height), the shape of the aggregation window (either rectangle, column, row or ellipse), a kernel coefficient selected uniformly from the range −1..+1 (to compose the initial values for the 3×3 filters), and an optional random number for the classification tier. A full explanation of these values is given in [previous].

Function      Input                           Output   Description
+, −, ×, ÷    (double, double)                double   Performs the corresponding arithmetic operator: addition, subtraction, multiplication or protected division.
aggregation   (image, shape, size, position)  double   Applies a statistical measure (minimum, maximum, mean or standard deviation) over a region (the aggregation window) of the image.
convolve      (image, filter)                 image    Applies the filter over the input image (uses “valid” convolution), and then applies ReLU activation.
pool          image                           image    Applies 2×2 max pooling to the input image.

Table 1: Function set
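The convolve node in Table 1 can be sketched as follows (a minimal pure-Python version of our own; the paper's implementation details may differ). It performs a "valid" sliding-window operation, implemented as cross-correlation as is common in CNN libraries, and then applies ReLU:

```python
def convolve_relu(image, kernel):
    """Sketch of the convolve node: 'valid' convolution followed by ReLU."""
    h, w = len(image), len(image[0])
    k = len(kernel)
    out = []
    for i in range(h - k + 1):          # 'valid': output shrinks by k-1
        row = []
        for j in range(w - k + 1):
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(k) for b in range(k))
            row.append(max(0.0, s))     # ReLU activation
        out.append(row)
    return out
```

With a 1×1 identity kernel this reduces to element-wise ReLU, which makes the behaviour easy to check by hand.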

3.2 Fine Tuning with Gradient-Based Optimisation

One nice property of the tree structure used is that individuals can be directly represented as nested function calls. For example, a tree can be represented by

    f(x)    (1)

where x (an image) is the only input to the tree/function f. However, if we also consider the filters as parameters to this function, the problem can be represented as

    L(σ(f(X, θ)), Y)    (2)

where X represents the input images, θ represents the filter values/coefficients, Y represents the true class labels for each x ∈ X, σ is the activation function used (in this case sigmoid), and L is some differentiable loss function. Since the loss function and each function used in the tree are differentiable, this can now be represented as a differentiable optimisation problem, where we are trying to find the θ which minimises L by computing ∇θ L.

The loss function measures how well we are performing, similar to the fitness function for GP. Classification accuracy was used as the fitness function for GP. However, classification accuracy is notoriously poor as a loss function for gradient optimisation, so we instead use the cross-entropy (CE) loss. Mathematically, CE (for binary classification) is defined as

    CE = −(1/N) Σ_{i=1}^{N} [ y_i log(p_i) + (1 − y_i) log(1 − p_i) ]    (3)

where i indexes the training examples, N is the total number of such examples, y_i represents the target output, and p_i the predicted output from the tree.

The loss is averaged over the training examples because it is computed in batches (discussed below), and the size of the batch should not affect the magnitude of the loss.
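Equation 3 transcribes directly into code (our own sketch):

```python
import math

def cross_entropy(preds, targets):
    """Binary cross-entropy, averaged over the batch so that batch size
    does not change the magnitude of the loss."""
    total = 0.0
    for p, y in zip(preds, targets):
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(preds)

# Perfectly uncertain predictions give the maximum-uncertainty loss, log 2.
loss = cross_entropy([0.5, 0.5], [1, 0])
```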

To compute ∇θ L, the chain rule can be applied repeatedly in a manner similar to backpropagation in neural networks [bprop]. In order to do this, it is necessary that each function be differentiable with respect to the parameters of interest. This is the case for all functions in the tree, and the partial derivatives for each function are given in Appendix 0.A.

3.3 Local Search Integration

To find the values of θ which minimise L, gradient descent was used. As the process is run several times throughout the evolutionary process, efficiency is important. Therefore, stochastic gradient descent (SGD) with mini-batches was used in favour of vanilla gradient descent. The algorithm is simple, as shown in pseudocode in Algorithm 1.

1: function SGD(θ, η)
2:     while epochs remain do
3:         for each mini-batch B do
4:             g ← ∇θ L_B(θ)
5:             θ ← θ − η·g
6:         end for
7:     end while
8: end function
Algorithm 1 Batch SGD Function
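Algorithm 1 can be sketched in Python as follows (our own illustration; `grad_fn` is a hypothetical callable standing in for backpropagation through a tree, and the toy gradient below is ours):

```python
import random

def sgd(theta, grad_fn, data, lr=0.5, epochs=10, batch_frac=0.1, seed=0):
    """Mini-batch SGD as in Algorithm 1: each epoch, shuffle the training
    data, split it into batches, and step theta against the batch gradient."""
    rng = random.Random(seed)
    batch_size = max(1, int(len(data) * batch_frac))
    for _ in range(epochs):
        shuffled = data[:]
        rng.shuffle(shuffled)
        for start in range(0, len(shuffled), batch_size):
            batch = shuffled[start:start + batch_size]
            g = grad_fn(theta, batch)                         # g = grad of L_B
            theta = [t - lr * gi for t, gi in zip(theta, g)]  # theta -= lr * g
    return theta

# Toy check: minimising mean squared distance to the data drives theta[0]
# towards the data mean (3.0 here).
data = [2.0, 3.0, 4.0]
grad_fn = lambda th, b: [sum(2 * (th[0] - x) for x in b) / len(b)]
theta = sgd([0.0], grad_fn, data, lr=0.1, epochs=200, batch_frac=1.0)
```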

Even with mini-batches, the process of gradient descent is relatively slow, as is the overall evolutionary process of GP. Hence, it becomes too large a computational burden to apply SGD to every generation or to every individual in the population. Three separate methods were trialled: a method without local search (referred to as ConvGP throughout); a method with local search on the 25 fittest individuals every 10th generation, starting from the first generation (called ConvGP+LS); and a method (ConvGP+LSE) which does the same as ConvGP+LS, except that on the final generation SGD is run for 100 epochs on the top performer as a “fine-tuning” operation (rather than the standard 10 epochs). The reason for this extended training is that, since this is the final generation, the top evolved program will be used as the final model; therefore, the trade-off of extra computational time will likely be worth the improved filter values.

The evolutionary process works to explore the large search space of potential models (exploration of architectures), the local search is then used to improve upon the top performers found (exploitation of the filter coefficients to minimise ) every 10 generations. ConvGP+LSE exploits this further, by running an extended local search on the final generation.

4 Experiment Design

Four widely used image datasets have been used for comparison, from a variety of domains with varying complexities.

  • Hands. The hand posture dataset from [triesch1996robust] was used. The classes a (closed fists) and b (open hands) were used. Only images with the “light” background were used.

  • JAFFE. The Japanese female facial expression dataset from [cheng2010facial] was used. The happy and sad classes were chosen for comparison.

  • Office [gopalan2011domain]. The office dataset features several classes, from several different “domains”. For this work, the mobile phone and calculator classes from the webcam domain were used, as the two can look similar, making the task of binary classification relatively complex. Images were scaled to 64×64 pixels.

  • CMU Faces [mitchell1997machine]. For this work, the happy and sad classes were used. Each image also comes in three sizes; only the “small” sized images were used (despite the method working for variable-sized images). While featuring similar classes to the JAFFE dataset, the two are quite different in both image quality and classification complexity. For example, in the CMU Faces dataset individuals can be wearing sunglasses or facing various directions.

To ensure a fair comparison, the exact same training/test splits were used for each of the methods. Three separate seeds were used to shuffle the data, and 30 evolutionary seeds were used for each shuffle (for a total of 90 runs).
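The split protocol can be sketched as follows (our own illustration; the 30% test fraction is an assumed value for the example, not stated in the text):

```python
import random

def make_split(items, shuffle_seed, test_frac=0.3):
    """Deterministic train/test split: identical for every method given the
    same shuffle seed, so all methods see exactly the same data.
    test_frac=0.3 is an assumption for illustration only."""
    shuffled = items[:]
    random.Random(shuffle_seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

# 3 data-shuffle seeds x 30 evolutionary seeds = 90 runs per method.
runs = [(s, e) for s in range(3) for e in range(30)]
train, test = make_split(list(range(100)), shuffle_seed=0)
```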

4.1 Parameters

Again, to ensure a fair comparison, the parameter settings of the various methods were kept consistent. The values are summarised in Table 2; the parameters in italics are only relevant for the methods with local search. Note: both the local search methods and the base method had the same number of generations; however, the local search methods ran for a longer time due to the inclusion of gradient descent, so training time was not matched.

Parameter        Value    Parameter          Value   Parameter       Value
Population Size  1024     Crossover Rate     75%     Epochs          10
Generations      50       Mutation Rate      20%     Number of best  25
Tree Size        2 - 10   Reproduction Rate  5%      Learning Rate   0.5
Tournament Size  7        Batch Size         10% of training data

Table 2: GP Parameter Values

4.2 Performance Measures

As all datasets used had an equal distribution of images in each class, the raw classification accuracy can be used as a performance measure for evaluation. The classification accuracy formula is given in Equation 4; this was also used as the fitness function of the program. It is important to note that classification accuracy was not used for the local search, for the reasons outlined in Section 3.2.

    Accuracy = (TP + TN) / (TP + TN + FP + FN)    (4)

Here TP = true positives, FP = false positives, TN = true negatives and FN = false negatives.

5 Results and Discussions

                        Accuracy (%)                 Time
Dataset  Method         Training       Testing      Training (m)   Testing (ms)
Hands    ConvGP         100.0±0.00     95.88±5.32   4.39±4.11      85.13±110.24
         ConvGP+LS      100.0±0.00     96.39±5.77   5.40±5.38      77.01±93.68
         ConvGP+LSE     100.0±0.00     94.80±7.00   76.38±95.19    108.97±140.59
JAFFE    ConvGP         99.03±1.96     80.27±8.29   40.92±47.34    142.12±187.70
         ConvGP+LS      98.24±6.50     81.45±8.04   79.24±111.47   145.70±216.87
         ConvGP+LSE     97.38±7.51     81.77±9.36   159.55±110.61  121.07±187.81
Office   ConvGP         96.20±3.43     69.90±9.46   28.55±22.55    66.61±69.17
         ConvGP+LS      92.86±10.1     67.19±10.6   55.92±26.84    38.77±40.70
         ConvGP+LSE     94.77±9.04     67.90±10.4   138.02±187.10  66.92±50.31
Faces    ConvGP         69.02±4.20     44.55±2.91   185.04±194.76  393.41±484.49
         ConvGP+LS      68.02±4.68     44.40±3.28   290.15±286.55  259.41±356.09
         ConvGP+LSE     67.81±3.57     45.37±3.12   226.61±134.79  165.64±205.36

Table 3: Results of the various methods on four datasets.

Table 3 summarises the results on the datasets trialled. Results are presented as mean ± standard deviation. On all four datasets, there was no statistically significant difference in testing accuracy using an unpaired Welch t-test. This shows the base ConvGP method is able to evolve good kernel coefficients through evolution alone. The inclusion of local search did not provide any significant improvements in terms of test accuracy; this could be due to one of several main reasons: the local search not being run for long enough to see improvement, the kernel values being in a local optimum so local search sees no improvement, potential overfitting to the training data, or an inappropriate learning rate.
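For reference, the unpaired Welch t-test used here can be computed as follows (a self-contained sketch of our own; the accuracy lists are made-up illustrative numbers, not the paper's results):

```python
import math

def welch_t(a, b):
    """Welch's unpaired t statistic and Welch-Satterthwaite degrees of
    freedom, for two samples with (possibly) unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                        # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Illustrative only: two made-up sets of per-run test accuracies.
t, df = welch_t([95.9, 96.1, 94.8, 95.5], [96.4, 96.0, 95.2, 96.8])
```

The Welch variant is appropriate here because nothing guarantees the methods have equal variance across runs.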

Hands

The hands dataset is relatively trivial, as the backgrounds are all plain white and the two classes used (open and closed) are relatively distinctive. All methods achieved similar results, reaching peak fitness (1.00 = 100% accuracy) around 5 generations in. Only one round of local search was run (on the first generation), as training stops once an individual reaches the maximum achievable fitness. A small improvement was seen with ConvGP + LS over the base ConvGP, although this was not statistically significant. The increased time taken by the ConvGP + LSE method also makes it a poor candidate for such a simple task, as no benefit was seen from the increased exploitative ability. This shows that for simple tasks, there was no benefit to utilising local search in addition to evolution.

Figure 2: Average fittest individual. (a) Hands, (b) JAFFE.

JAFFE

With the JAFFE dataset, a small (but not statistically significant) improvement is seen with the local search methods over the base method. When analysing the average fittest individuals across the runs, we can see that each time gradient descent is run (indicated by the light blue vertical bars every 10 generations in Figure 2(b)) there is no drastic improvement in subsequent fitness values. This suggests that with a more exploitative local search (a larger number of epochs), we would likely see additional improvements over the base ConvGP method, as currently there is no large impact on the fitness immediately after running.

Office

With the Office dataset, the base ConvGP method actually saw the highest testing accuracy of the three, although again, this was not statistically significant. While the accuracies were statistically equivalent, the base ConvGP method saw drastically faster training times than the two local search methods. Analysing generational fitness showed similar trends for all three methods, so for this particular dataset local search showed no improvement.

CMU Faces

Interestingly, with the CMU Faces dataset, the highest average training accuracy is seen with the baseline ConvGP method. However, for testing accuracy, the ConvGP + LSE method performed the best, although this improvement was not statistically significant (p = 0.09). As expected, the local search methods had slower training times than the base ConvGP method; however, they resulted in faster testing times. The reasoning for this is examined in more detail in Section 6.

6 Further Analysis

Although the differences between the methods were not statistically significant in terms of test accuracy, there were additional benefits to the methods which incorporated local search.

When comparing the average tree size per generation across the runs on the JAFFE dataset (Figure 3(a)) and Faces dataset (Figure 3(b)), we can see that in both cases the local search methods result in smaller trees on average than ConvGP (a similar trend was seen in [emigdio2015local]). This is likely because a single convolution node can now achieve what would previously require stacked convolutions. It is interesting to note that, unlike in [emigdio2015local], there was no preference for selecting smaller trees when performing gradient descent, so this was an unexpected byproduct of the local search mechanism. This can also be seen when comparing testing time: in many cases, the local search methods result in faster testing times than the base method (at the expense of increased training time).

Figure 3: Average size of individuals. (a) JAFFE, (b) CMU Faces.

Finally, to show the interpretability (and often simplicity) of the automatically evolved models, an example from the Hands dataset is given in Figure 4. This tree was able to achieve 100% training accuracy. It also shows the flexibility of the proposed structure: while a classification tier is defined, in this instance no classification tier was required, as the output of the aggregation tier was discriminative enough to fully distinguish between the two classes. We can see a key region was identified (where the fingers are present in the open hand images), and high-contrast filter values have been learnt. This high contrast, along with the pooling of the image, means fingers show up as completely black (pixel intensity of 0), whereas the background is a darker grey. The aggregation function used is “minimum”, so when fingers are in the aggregation window the value will be 0, and when fingers are not present this will be a grey value (in this case 0.1889). The output values are then fed through a sigmoid function, and values ≤ 0.5 are class 0, and values > 0.5 are class 1.

Figure 4: A visualisation of a top performing individual on the Hands dataset

7 Conclusions and Future Work

7.1 Conclusions

In this work, we looked at the effect of incorporating local search to exploit the top performers in a genetic population. It was shown that while the methods which incorporated local search did not see improvement over the raw GP method in terms of classification accuracy, there were other benefits, such as reduced tree size on average. With more exploitative local search mechanisms, it is likely an improvement could also be seen in classification accuracy. This shows the benefits of such hybrid (memetic) algorithms for image classification problems. Furthermore, the proposed method was able to achieve the intended goals of high accuracy (as evidenced by the test results on the datasets trialled) and high interpretability (as demonstrated in the further analysis section), and the architecture of the solution was evolved automatically rather than manually crafted.

7.2 Future Work

As discussed, the local search showed testing accuracy equivalent to the baseline method. With adequate adjustments, local search may be able to significantly improve on the baseline method. Namely, the learning rate is currently a manually specified parameter. There has been work on adaptive learning rates for CNNs, such as Adagrad [duchi2011adaptive] and Adadelta [zeiler2012adadelta]; implementing such methods here would remove the need for a manually specified learning rate and ideally also improve performance. Future work could also perform a more exhaustive search over the number of epochs for each local search process, as this was computationally prohibitive here due to training time; however, it is likely an increase in epochs would show an improvement in classification performance. Finally, to prevent any over-fitting occurring with an increased number of epochs, a validation set should be introduced to track how the loss progresses throughout the epochs and stop when this loss no longer improves.

Another current drawback is the training time. This could be remedied in the future by optimising parts of the program. For example, tree evaluation and local search are both performed sequentially; however, these could be run in parallel, as there is no interaction between the trees. Doing so would result in drastic speedups in training time. Likewise, no GPU was used for training; an updated program could make use of GPU optimisation, for example for the convolutions as shown in [podlozhnyuk2007image], which again would result in a drastic speedup.

References

Appendix 0.A Partial Derivatives

The derivatives are given in terms of a single training example for clarity, to avoid subscripts and summations throughout.

As discussed, the output of the tree and the target output were used to compute the loss with the cross-entropy function. Using this information, the partial derivative w.r.t. the output of the program can be simplified to

    ∂L/∂z = p − y    (5)

where z is the output of the tree (pre-activation), y is the target output, and p is the predicted output (post-activation with sigmoid). To show this, we can compute the derivative of the loss w.r.t. its input as

    ∂L/∂p = −y/p + (1 − y)/(1 − p)    (6)

Since the input goes through a sigmoid activation, we can substitute in the derivative of the sigmoid function as

    ∂p/∂z = σ(z)(1 − σ(z))    (7)

and with the chain rule

    ∂L/∂z = ∂L/∂p · ∂p/∂z = [−y/p + (1 − y)/(1 − p)] · σ(z)(1 − σ(z))    (8)

Since p = σ(z), this simplifies to

    ∂L/∂z = p − y    (9)
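The simplification in Equations 5-9 is easy to sanity-check numerically (our own check, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y):
    """Single-example binary cross-entropy."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Equation 9: dL/dz = p - y, where p = sigmoid(z).
z, y = 0.7, 1.0
analytic = sigmoid(z) - y
eps = 1e-7
numeric = (bce(sigmoid(z + eps), y) - bce(sigmoid(z - eps), y)) / (2 * eps)
assert abs(analytic - numeric) < 1e-6
```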

The convolution tier has two functions, convolution and pooling. These functions behave in the same manner as when used in CNNs, however, the derivative calculations are often abstracted away with most modern CNN frameworks. Hence, the formulae have been outlined below (a complete description is given in [grad-notes]).

Convolution takes two inputs, an image and a filter. We are interested in the partial derivative w.r.t. each of these inputs. When computing the gradient of a convolution, the result is also a convolution. First, consider the derivative w.r.t. a single value/weight w_ab in the filter. If the input image has height m+1 and width n+1, and the filter has height and width k+1, then

    ∂L/∂w_ab = Σ_{i=0}^{m−k} Σ_{j=0}^{n−k} (∂L/∂y_ij)(∂y_ij/∂w_ab)    (10)

where i and j specify a row and a column, w is the weight matrix/filter, and y is the result/output. The summation is over the entire output image, as each weight in the filter affects every output pixel. The first factor in the summation is known from the parent node in the tree, but the second factor must be calculated as

    ∂y_ij/∂w_ab = x_(i+a)(j+b)    (11)

where x is the original/input image. Substituting this back into Equation 10 gives

    ∂L/∂w_ab = Σ_{i=0}^{m−k} Σ_{j=0}^{n−k} (∂L/∂y_ij) x_(i+a)(j+b)    (12)

which is itself a convolution, and therefore can be more succinctly represented as

    ∂L/∂w = x ∗ (∂L/∂y)    (13)

Here, we see the result is a convolution between two matrices, giving the gradient for each individual weight in the filter.

Now, consider the derivative w.r.t. the input image x, recalling the kernel has height and width k+1:

    ∂L/∂x_ij = Σ_{a=0}^{k} Σ_{b=0}^{k} (∂L/∂y_(i−a)(j−b))(∂y_(i−a)(j−b)/∂x_ij)    (14)

where the summation is over all the affected outputs, which are the neighbouring positions within the filter's window (terms with out-of-range indices are treated as zero). Again, the first factor of the sum is given by the parent node, but the second factor must be computed from

    y_(i−a)(j−b) = Σ_{a′=0}^{k} Σ_{b′=0}^{k} w_a′b′ x_(i−a+a′)(j−b+b′)    (15)

which simplifies greatly when differentiated, to

    ∂y_(i−a)(j−b)/∂x_ij = w_ab    (16)

Hence, Equation 14 can now be rewritten as

    ∂L/∂x_ij = Σ_{a=0}^{k} Σ_{b=0}^{k} (∂L/∂y_(i−a)(j−b)) w_ab    (17)

which again can be represented as a convolution, simply by flipping the weight matrix, giving

    ∂L/∂x = (∂L/∂y) ∗ rot180(w)    (18)

where ∗ here denotes a “full” convolution.

Therefore, the partial derivatives for the image and the filter are both able to be computed using convolutions.
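Both convolution gradients (Equations 13 and 18) can be verified against finite differences with a short NumPy sketch (our own check; the sizes are arbitrary):

```python
import numpy as np

def conv_valid(x, w):
    """'Valid' 2-D cross-correlation, matching the convolve node (ReLU omitted)."""
    k = w.shape[0]
    m, n = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            out[i, j] = np.sum(w * x[i:i + k, j:j + k])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 6))   # input image
w = rng.normal(size=(3, 3))   # 3x3 filter
g = rng.normal(size=(4, 4))   # upstream gradient dL/dy from the parent node

# Equation 13: dL/dw is the valid cross-correlation of the image with dL/dy.
dw = conv_valid(x, g)

# Equation 18: dL/dx is the "full" convolution of dL/dy with the flipped
# filter (padding by k-1 turns a valid correlation into a full convolution).
dx = conv_valid(np.pad(g, 2), w[::-1, ::-1])

# Finite-difference check of both, with L = sum(g * conv(x, w)).
loss = lambda xx, ww: np.sum(g * conv_valid(xx, ww))
eps = 1e-6
dw_num = np.zeros_like(w)
for a in range(3):
    for b in range(3):
        wp = w.copy(); wp[a, b] += eps
        dw_num[a, b] = (loss(x, wp) - loss(x, w)) / eps
dx_num = np.zeros_like(x)
for i in range(6):
    for j in range(6):
        xp = x.copy(); xp[i, j] += eps
        dx_num[i, j] = (loss(xp, w) - loss(x, w)) / eps
assert np.allclose(dw, dw_num, atol=1e-4) and np.allclose(dx, dx_num, atol=1e-4)
```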

Pooling is much simpler. Pooling takes a single input (the image), assuming the stride and size are fixed, as is the case here. The derivative w.r.t. this image is then a matrix of the same shape as the image, where non-maximum pixels have a derivative of zero and maximum pixels have a derivative of one (as only max pooling was used in this study). Formally,

    ∂P(x)/∂x_i = 1 if x_i = max(W), else 0    (19)

where P is the pooling function, x_i is the i-th pixel, and W is the pooling window.
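Equation 19 in code (our own sketch): the backward pass of 2×2 max pooling is just a 0/1 mask over the input:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling (stride 2) plus the Equation-19 gradient mask:
    1 where a pixel was its window's maximum, 0 everywhere else."""
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    mask = np.zeros_like(x)
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            window = x[i:i + 2, j:j + 2]
            a, b = np.unravel_index(np.argmax(window), (2, 2))
            out[i // 2, j // 2] = window[a, b]
            mask[i + a, j + b] = 1.0
    return out, mask

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 1.],
              [0., 0., 5., 6.],
              [1., 2., 7., 8.]])
out, mask = max_pool_2x2(x)
# out == [[4, 2], [2, 8]]; mask has exactly one 1 per 2x2 window
```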

The aggregation function takes five parameters: an image, a shape, a position, and a width and height. As we are only interested in updating the filter values, the last four parameters can be safely ignored, as the partial derivatives w.r.t. these terminals will not be used (they represent fixed terminals). The partial derivative w.r.t. the image would be a matrix of the same shape as the input image: all values outside the aggregation window would have a value of 0, with values inside (depending on the aggregation function being used) having a value of 1. However, the convolution operator works over the image as a whole, and we cannot control the filter values only for specific regions of the image. This means that, for efficiency, the prior gradient can in fact just be passed directly through the aggregation function (treating the gradient of the aggregation function as 1, since the gradient is being multiplied) when propagating the gradients down the tree, rather than computing the real gradient.

Remembering that the functions used in the classification tier are just the basic arithmetic functions (+, −, ×, ÷) operating on two floating point inputs, the partial derivative w.r.t. either input is a scalar value. Table 4 summarises these partial derivatives.

Function   ∂f/∂a   ∂f/∂b
a + b      1       1
a − b      1       −1
a × b      b       a
a ÷ b      1/b     −a/b²

Table 4: Arithmetic operator partial derivatives
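These derivatives can be spot-checked with finite differences (our own quick check):

```python
# Spot-check the arithmetic partial derivatives with finite differences.
eps = 1e-7
a, b = 2.0, 3.0

# d(a*b)/da = b and d(a*b)/db = a
assert abs(((a + eps) * b - a * b) / eps - b) < 1e-5
assert abs((a * (b + eps) - a * b) / eps - a) < 1e-5

# d(a/b)/da = 1/b and d(a/b)/db = -a/b**2
assert abs(((a + eps) / b - a / b) / eps - 1 / b) < 1e-5
assert abs((a / (b + eps) - a / b) / eps + a / b ** 2) < 1e-5
```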