It is well known that building robust and efficient supervised deep learning networks requires large amounts of quality data, especially when the involved classes share many visual features. Recent technological advances in camera sensors and the potentially unlimited data provided by the Internet have helped gather large volumes of images imageNet ; coco_dataset ; cifar . However, labelling such amounts of data is still manual and costly. Consequently, many problems still have to deal with very small labeled datasets and, hence, the resulting classification models are usually unable to generalize correctly to new unseen examples; this problem is known as overfitting.
In general, the issue of overfitting in supervised networks is addressed either from the model or from the data point of view. In the former, transfer learning or diverse regularization techniques are employed; in the latter, the volume of the training set is increased using data augmentation strategies alexnet ; intro_dataAug2 ; intro_dataAug3 . For highly sensitive purposes, data augmentation strategies, if not chosen correctly, can potentially change meaningful information, resulting in ill-posed training data. Instead of manually selecting data augmentation techniques, recent works showed that learning these transformations from the data can lead to significant improvements in the generalization of the models.
Some approaches train the generator of a GAN (Generative Adversarial Network) to learn the suitable augmenting techniques from scratch learn_dataAug ; intro_learnAug3 , while others train the generator to learn to find the optimal set of augmenting techniques from an initial space of data augmentation strategies learn_dataAug_gans . The downside of the aforementioned approaches is that the selected transformations are applied to all samples in the training set without taking into account the particularities of each class.
The present work proposes the FuCiTNet approach for learning class-inherent transformations for each class within the dataset under evaluation. FuCiTNet is inspired by GANs: it creates a number, $N$, of generators equal to the number of classes. Each generator, $G_k$, with $k \in \{1, \dots, N\}$, is entrusted to learn the features of a specific $k$-class space. When a sample is fed into the system, it is broadcast to every generator, producing $N$ transformed images, each of which is fed to the classifier, which predicts a label with a certain error. The error is transferred back to the entrusted generator (specified by the input's ground-truth label), indicating how much the class transformation must be altered to meet the classifier's requirements. The final prediction of the trained classification model is calculated by fusing the $N$ output predictions.
The contributions of this work can be summarized as follows:
We propose class-inherent transformation generators for improving the generalization capacity of image classification models, especially appropriate for problems in which the involved classes share many visual features. Our approach, FuCiTNet, creates as many generators as the number of classes $N$. Each generator, $G_k$, learns the inherent transformations that bring the input image from its space to the $k$-class space. We introduce a classification loss in the generators to drive the learning of specific $k$-class transformations.
The final prediction of the classification model is calculated as a fusion of the $N$ output scores. The source code of FuCiTNet will be available on GitHub upon acceptance.
Our experiments demonstrate that class-inherent transformations produce a clearer discrimination for the classifier yielding better generalisation performance in three small datasets from two different fields.
This paper is organized as follows. The works most related to ours are reviewed in Section 2. A description of the FuCiTNet model is provided in Section 3. The experimental framework is presented in Section 4. Results and analysis are given in Section 5 and, finally, conclusions and future work in Section 6.
2 Related Work
Improving the performance of supervised deep neural networks in image classification is still ongoing research. To reach high accuracies, the model needs to generalize robustly to unseen cases and avoid overfitting. This is addressed using several approaches.
The most popular approach is data augmentation. It was first introduced by Y. LeCun et al. first_data_aug by randomly applying the following transformations to the training dataset: shearing, random cropping and rotations. With the revolution of CNNs alexnet , novel transformations appeared, such as horizontal flipping, changes in intensity, zooming, noise injection, etc. With data augmentation, most CNN-based classifiers reduce overfitting and increase their classification accuracy.
Dropout, a well known regularization technique, is also used for improving generalization. It was first introduced by Srivastava et al. dropout . The key idea is to randomly drop units (along with their connections) from the neural network throughout the training process. This prevents units from co-adapting too much to the data. It significantly reduces overfitting and gives major improvements over other regularization methods.
The emergence of GANs gans has led to promising results in image classification. They have proved useful for generating synthetic samples to enlarge small datasets, where the number of samples per class was low, eventually introducing variability that helps the generalization of the classification models. The downside of GANs is that their latent space converges only when a fairly large amount of training images is available. Generating synthetic samples for small and very small datasets is still an open issue.
For small and very small datasets, the problem of overfitting is even greater. Inspired by GANs, the authors in learn_dataAug designed an augmentation network that learns the set of augmentations that best improves the classifier performance: the classifier tells the generator network which configuration of image transformations it prefers when distinguishing samples from different classes. In the same direction, the authors in learn_dataAug_rl ; learn_dataAug_gans address a similar problem using Reinforcement Learning and an adversarial strategy, respectively. The former chooses, through augmentation policies, the best-suited transformation from a list of potential ones; the latter combines the list of transformations to synthesize a total image transformation. The present work differs from all the previously cited works in that it proposes a new approach for learning transformations inherent to each particular class, based on the classifier requirements, eventually forcing the classes to be as distinguishable as possible from each other.
3 FuCiTNet approach
Inspired by GANs, FuCiTNet learns class-inherent transformations. Our aim is to build a generator that improves the discrimination capacity between the different classes. GANs use the so-called adversarial loss, which optimizes a min-max problem. The generator, $G$, tries to minimize the following function while the discriminator, $D$, tries to maximize it:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where $x$ is a real data instance, $\mathbb{E}_{x}$ is the expected value over all real data instances, $G(z)$ is the generator's output given a noise $z$, $D(G(z))$ is the discriminator's estimate of the probability that a fake instance is real, and $\mathbb{E}_{z}$ is the expected value over all random inputs to the generator. The formula derives from the cross-entropy between the real and generated distributions.
In this work, the adversarial concept is slightly changed. Instead of using a discriminator that maximizes the probability of the image belonging to the real or generated distribution, we use a classifier that minimizes the loss of the generated image belonging to a particular dataset class. As the images produced by the generator do not need to be similar to their ground truth, we restrict the generator's latent space in a different way, replacing the adversarial term with the classification loss of the generated image. The generator thus hallucinates samples with specific enhanced features that meet the classifier's requirements, improving the classification accuracy.
Instead of building one single generator, we create an array of $N$ generators, $G_k$ with $k \in \{1, \dots, N\}$, where $N$ equals the number of object classes. Each generator $G_k$ is in charge of learning the inherent features of the specific $k$-class until its output becomes part of the $k$-class space. In other words, each generator learns the transformations that map the input image from its own $i$-class domain to the $k$-class domain. The flowchart diagram of FuCiTNet is depicted in Figure 1.
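As an illustration, the routing of the classification loss to the entrusted generator can be sketched in PyTorch as follows. The helper and all names are hypothetical, not from the paper's released code:

```python
import torch
import torch.nn as nn

def route_classification_loss(generators, classifier, images, labels):
    """For each image in the batch, compute the classification loss only through
    the generator entrusted with that image's ground-truth class."""
    ce = nn.CrossEntropyLoss()
    losses = []
    for k, G in enumerate(generators):
        mask = labels == k            # images whose ground-truth class is k
        if mask.any():
            transformed = G(images[mask])          # G_k hallucinates k-class features
            losses.append(ce(classifier(transformed), labels[mask]))
    return torch.stack(losses).sum()
```

In this way only $G_k$ receives gradients from samples whose ground truth is class $k$, matching the routing described above.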
The architecture of our generators $G_k$, with $k \in \{1, \dots, N\}$, consists of 5 identical residual blocks residual_nets . Each block has two convolutional layers with 64 feature maps, followed by batch-normalization layers batch_norm and ParametricReLU param_relu as activation function. The last residual block is followed by a final convolutional layer which reduces the output image channels to 3 to match the input's dimensions.
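A minimal PyTorch sketch of such a generator follows. The $3 \times 3$ kernels and the initial 3-to-64-channel stem convolution are assumptions (not stated in this excerpt), chosen by analogy with SRGAN, whose loss the paper adopts:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv layers with 64 feature maps, batch norm and PReLU (3x3 kernels assumed)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)   # residual connection

class Generator(nn.Module):
    def __init__(self, n_blocks=5):
        super().__init__()
        # Stem lifting the 3 input channels to 64 feature maps (assumed).
        self.head = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.PReLU())
        self.blocks = nn.Sequential(*[ResidualBlock() for _ in range(n_blocks)])
        # Final conv reducing back to 3 channels, as described in the text.
        self.tail = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))
```

The padding keeps the spatial size unchanged, so the transformed image has the same dimensions as the input.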
Our classifier is a ResNet-18, which consists of an initial convolutional layer with 64 feature maps followed by a max-pooling layer, then 4 blocks of two convolutional layers with 64, 128, 256 and 512 feature maps respectively, followed by an average-pooling layer and a fully connected layer that outputs the class scores relu .
The classifier uses a standard cross-entropy loss ($l_{CE}$). Given a batch of $m$ input images $x^{(i)}$ with their respective ground-truth labels $y^{(i)}$:

$$l_{CE} = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{N} y^{(i)}_k \log \hat{y}^{(i)}_k$$

The true label distribution for the image $x^{(i)}$ is depicted by $y^{(i)}$, and $\hat{y}^{(i)}$ indicates the predicted label distribution for the transformed image $G(x^{(i)})$.
All generators use the same loss, inspired by SRGAN . For the generator $G_k$, the respective loss is given by Formula (3). It is a multi-term loss constituted by a pixel-wise MSE term, a perceptual MSE term and the classifier loss. Adding this classifier loss to the generators is one of the main novelties of our method. The classification loss is added to the generator loss with a weighting factor $\lambda$ indicating how much the generator must change its outcome to suit the classifier. It is worth noting that, for each input image, the classification loss is only transferred to the particular generator indicated by the input's ground-truth label. The similarity terms (pixel-wise and perceptual MSE) keep the generator from changing the image too much, while the classifier loss drives the output away from the input and closer to the $k$-class feature space.
The pixel-wise MSE is obtained by performing a regular L2-norm between each pixel in the input image and the generated image.
The perceptual MSE assesses the similarity between the input and the generated image by feeding each of them into a pretrained VGG-16 vgg . The Euclidean distance between the VGG-16 feature maps defines the perceptual loss.
The cross-entropy loss coming from the classifier is added to the loss of the particular generator in the array specified by the input's ground-truth label, with a weighting factor $\lambda$ controlling the impact of the classifier within the generator:

$$l_{cls} = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{N} y^{(i)}_k \log \hat{y}^{(i)}_k$$

where $m$ is the batch size, $N$ the number of classes and $y^{(i)}_k$ is the element in the true label distribution of the image $x^{(i)}$ belonging to class $k$. Likewise, $\hat{y}^{(i)}_k$ depicts the classifier's predicted probability that the transformed image $G(x^{(i)})$ belongs to class $k$.
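The multi-term generator loss can be sketched as below. `feat_extractor` stands in for the pretrained VGG-16 feature maps, and the default value of `lam` ($\lambda$) is purely illustrative; all names are hypothetical:

```python
import torch
import torch.nn as nn

def generator_loss(x, gx, feat_extractor, cls_logits, labels, lam=0.01):
    """Pixel-wise MSE + perceptual MSE + weighted classification loss.
    x: input images, gx: generated images G_k(x),
    feat_extractor: pretrained feature network (e.g. VGG-16 features),
    cls_logits: classifier output for gx, lam: weighting factor lambda."""
    pixel_mse = nn.functional.mse_loss(gx, x)                  # similarity term
    perc_mse = nn.functional.mse_loss(feat_extractor(gx),      # perceptual term
                                      feat_extractor(x))
    cls_loss = nn.functional.cross_entropy(cls_logits, labels) # classifier feedback
    return pixel_mse + perc_mse + lam * cls_loss
```

The two MSE terms anchor the output to the input, while the $\lambda$-weighted term pulls it towards the $k$-class feature space, as described above.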
In this manner, each generator in the array is able to learn class-inherent features of a specific class, eventually accentuating the differentiation among classes.
At inference time, the general flowchart of the system is modified as shown in Figure 2. The final class prediction is obtained by concatenating the output distribution of the classifier for every generator $G_k$ in the logit domain, taking the argmax and computing the modulo with the number of classes $N$, as indicated in Formula (7).
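A sketch of this fusion rule, assuming the classifier returns one logit vector of length $N$ per transformed image (the function name is illustrative):

```python
import torch

def fused_prediction(classifier, generators, x):
    """Concatenate the classifier logits for every G_k(x), take the argmax
    over the concatenated vector and reduce modulo the number of classes N."""
    n_classes = len(generators)
    logits = torch.cat([classifier(G(x)) for G in generators], dim=1)  # (B, N*N)
    # Position k*N + c in the concatenation corresponds to class c, so mod N recovers c.
    return logits.argmax(dim=1) % n_classes
```

Because position $kN + c$ of the concatenated vector holds the logit of class $c$ under generator $G_k$, the modulo maps the global argmax back to a class index.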
4 Experimental framework
To assess the capacity of FuCiTNet to increase the discrimination among classes, we created three datasets from two different fields using object classes that are frequently confused by the best performing classification model. In particular, from Tiny ImageNet le2015tiny , we created two datasets made of two and three classes respectively, cat-vs-dog and cat-vs-dog-vs-goldfish. From the NWPU-RESISC45 remote sensing dataset, made of aerial ortho-images Cheng_2017 , we created the church-vs-palace dataset, which represents one of the most similar/confusing pairs of classes in this dataset. To further increase the complexity of this dataset, we downsampled the church and palace images. A brief description of the three datasets is provided in Table 1.
| Dataset | # classes | # pixels/image | Train | Test | Total |
| --- | --- | --- | --- | --- | --- |
The generator networks were initialized randomly, while the classifier, ResNet-18, was initialized with the pretrained weights on ImageNet imageNet . As optimization algorithm, we used Adam adam for both the generator and the classifier. The adopted learning rates for the generators and the classifier are different, with a lower value for the generators than for ResNet-18. The reason the generators have a lower learning rate is that they need to be subtle when generating the transformed image, to keep the classifier from falling behind in capturing the rate of change in appearance. For ResNet-18 we used a learning rate decay of 0.1 every 5 epochs. We also used weight decay to avoid overfitting, and applied early stopping based on the validation loss with a patience of 10 epochs.
The weighting factor $\lambda$ in the generator loss in Eq. (3) is a hyperparameter to be tuned. We evaluated a space of 13 values: [1, 0.5, 0.1, 0.075, 0.05, 0.025, 0.01, 0.0075, 0.005, 0.0025, 0.001, 0.00075, 0.0005], assessing the effect from a high contribution to the loss towards a softer impact. For each value of $\lambda$, the system was trained for 100 epochs with a batch size of 32. We alternate updates between the generator and the classifier, i.e., for each batch we first update the classifier and then the generator. We evaluate FuCiTNet on the validation set after each epoch. The chosen weights for both networks are the ones that minimize the classification loss on this set.
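The alternating update scheme can be sketched as follows. For brevity the generator objective here keeps only the pixel-wise MSE and the $\lambda$-weighted classification term (the perceptual term is omitted), and all names are illustrative:

```python
import torch
import torch.nn as nn

def train_step(x, y, generators, classifier, gen_opts, cls_opt, lam=0.01):
    """One batch: update the classifier first, then the entrusted generators."""
    ce = nn.functional.cross_entropy
    # 1) Classifier step on the transformed images (generator outputs detached).
    gx = torch.cat([generators[int(k)](x[i:i + 1]) for i, k in enumerate(y)]).detach()
    cls_opt.zero_grad()
    ce(classifier(gx), y).backward()
    cls_opt.step()
    # 2) Generator steps: each image's loss is routed only to the generator
    #    entrusted with its ground-truth class.
    for k, G in enumerate(generators):
        mask = y == k
        if not mask.any():
            continue
        gen_opts[k].zero_grad()
        out = G(x[mask])
        loss = nn.functional.mse_loss(out, x[mask]) + lam * ce(classifier(out), y[mask])
        loss.backward()
        gen_opts[k].step()
```

One such step per batch reproduces the "classifier first, generator second" alternation described above.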
For a fair comparison, we compare the results of FuCiTNet with the two most related approaches, learn_dataAug and learn_dataAug_gans . As learn_dataAug_gans provides its source code, we analyze and show its results on the three considered datasets, cat-vs-dog, cat-vs-dog-vs-goldfish and church-vs-palace, following the same experimental protocol we used in the rest of the experiments. However, as learn_dataAug does not provide its source code and considered only binary problems, we include only the results reported in the paper using the same experimental protocol on the cat-vs-dog dataset. The results of this approach on cat-vs-dog-vs-goldfish and church-vs-palace are not available to us. Both learn_dataAug and learn_dataAug_gans used a ResNet CNN architecture. In addition, we compare our results with those of the best classification model obtained with the same network architecture by manually selecting the set of optimizations that reaches the highest performance.
5 Results and analysis
This section presents, compares and analyzes the quantitative and qualitative results of FuCiTNet with the state-of-the-art methods, learn_dataAug and learn_dataAug_gans , and with the best classification model.
| Setup | Accuracy | Cat mean | Dog mean |
| --- | --- | --- | --- |
| Data aug, FT | 0.888 | — | — |
| FuCiTNet, Data aug | 0.883 | — | — |
| FuCiTNet, Data aug, FT | 0.912 | 2.734 | 2.101 |
| Setup | Accuracy | Church mean | Palace mean |
| --- | --- | --- | --- |
| No data aug, FT | 0.769 | — | — |
| Data aug, FT | 0.779 | — | — |
| FuCiTNet, Data aug | 0.777 | — | — |
| FuCiTNet, Data aug, FT | 0.795 | 1.868 | 1.580 |
| Setup | Accuracy | Cat mean | Dog mean | Goldfish mean |
| --- | --- | --- | --- | --- |
| Data aug, FT | 0.911 | — | — | — |
| FuCiTNet, Data aug | 0.900 | — | — | — |
| FuCiTNet, Data aug, FT | 0.920 | 3.484 | 3.248 | 6.613 |
The performance, in terms of accuracy, of the best classification model with and without FuCiTNet on the three datasets is presented in Tables 2, 6 and 10. Several configurations were analyzed. 'None' indicates that the network was initialized with ImageNet weights and retrained on the dataset. 'Data aug' indicates that the set of optimal data augmentation techniques was applied. 'FT', for fine-tuning, indicates that only the last fully connected layer was retrained on the dataset, with the remaining layers frozen; if 'FT' is not indicated, all layers of the network were retrained on the dataset. To compare FuCiTNet with the state of the art, we also include the accuracy of the method proposed in
When applying FuCiTNet to the test images, the classification model reaches higher accuracy than the best classification model on cat-vs-dog, church-vs-palace and cat-vs-dog-vs-goldfish. The number of TP and TN also improves in all three datasets, as shown in the confusion matrices in Tables 5, 9 and 13.
With respect to the state-of-the-art, FuCiTNet provides 14%, 2% and 16.4% better accuracy than the approach proposed in learn_dataAug_gans on cat-dog, church-vs-palace and cat-vs-dog-vs-goldfish respectively and 18.44% better accuracy than the approach proposed by learn_dataAug on cat-vs-dog.
These results suggest that FuCiTNet makes the object class in the input images more distinguishable to the model. Our transformation approach can be considered an incremental optimization on top of the rest of the optimization space, since on all datasets the best results were obtained by combining FuCiTNet with fine-tuning and data augmentation.
We have also analyzed the mean confidence of each class in the three problems, computed as:

$$\bar{c}_k = \frac{1}{n_k} \sum_{i=1}^{n_k} z^{(i)}_k$$

where $n_k$ is the number of correctly classified test images with ground-truth class $k$, and $z^{(i)}_k$ is the classifier score for class $k$ on the $i$-th of those images. The mean confidence of the best reference model and of FuCiTNet is shown in the third column of Tables 2, 6 and 10. As we can observe from these results, FuCiTNet clearly improves the mean confidence of the model in all classes of the cat-vs-dog and church-vs-palace datasets. Although FuCiTNet has lowered the mean confidence in the cat-vs-dog-vs-goldfish dataset, the accuracy and the number of true positives and true negatives have improved.
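A minimal sketch of this measurement, assuming mean confidence is the average classifier score of the ground-truth class over the correctly classified test images (an interpretation consistent with the logit-scale values reported in the tables):

```python
def mean_confidence(logits, labels):
    """Per-class mean score of the true class over correctly classified images.
    logits: list of per-image score vectors, labels: ground-truth class indices."""
    per_class = {}
    for z, y in zip(logits, labels):
        pred = max(range(len(z)), key=lambda c: z[c])  # argmax over class scores
        if pred == y:                                   # keep only correct predictions
            per_class.setdefault(y, []).append(z[y])
    return {k: sum(v) / len(v) for k, v in per_class.items()}
```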
Unlike SRGAN, FuCiTNet requires neither paired datasets of high- and low-resolution input images nor a super-resolution network. Unlike learn_dataAug , in which the transformations were learnt regardless of the class of the sample, our results demonstrate that exploiting class-inherent features significantly sharpens the borders between visually similar classes.
Figures 9, 16, 23, 30 and 39 visually show the transformations applied by FuCiTNet to different original input images from different classes. As can be observed from these images, the transformations affect different aspects of the images.
For some images in the church-vs-palace test set, the model does not add any transformation, indicating that the classifier does not need an additional transformation to differentiate the object class in those specific images. In general, in this dataset, most palaces have either a dome or a rectangular roof with an inner courtyard, while churches generally do not include any of these features. In the cases where a church has an inner courtyard, the model needs to add a transformation so that the classifier can better distinguish it from a palace, as can be seen in Figure 30. Likewise, when a palace has a shape very distinguishable from a church, the model does not add any transformation, as can be seen in Figure 23(d), (e) and (f).
This effect does not occur in the cat-vs-dog and cat-vs-dog-vs-goldfish test sets: the background in these images is so diverse (it includes people, furniture, occlusions, etc.) that the model always needs some further transformation to better differentiate between the involved classes.
6 Conclusions and future work
Our aim in this work was to reduce overfitting on very small datasets in which the involved object classes share many visual features. We presented the FuCiTNet model, in which a novel array of generators independently learns the class-inherent transformations of each class of the problem. We introduced a classification loss in the generators to drive the learning of specific $k$-class transformations. The learnt transformations are then applied to the input test images to help the classifier better distinguish between the different classes.
Our experiments demonstrated that FuCiTNet increases the classification generalization capability on three small datasets with very similar classes. With the benchmark datasets, we showed that FuCiTNet behaves robustly on diverse-nature data and properly handles different viewpoints (zenithal and frontal). We conclude that our method yields strong gains as an incremental optimization technique, additional to the standard ones, when searching for better model performance.
As future work, we are planning to explore and adapt FuCiTNet to small medical datasets.
- (2) T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, C. L. Zitnick, Microsoft COCO: common objects in context (2014). arXiv:1405.0312.
- (3) A. Krizhevsky, Learning multiple layers of features from tiny images (2009).
- (4) A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems 25, 2012, pp. 1097–1105.
- (5) P. Y. Simard, D. Steinkraus, J. C. Platt, Best practices for convolutional neural networks applied to visual document analysis, in: Seventh International Conference on Document Analysis and Recognition, 2003, pp. 958–963.
- (6) H. S. Baird, Document image defect models, in: Structured Document Image Analysis, 1992, pp. 546–556.
- (7) L. Perez, J. Wang, The effectiveness of data augmentation in image classification using deep learning (2017). arXiv:1712.04621.
- (8) X. Zhu, Y. Liu, Z. Qin, J. Li, Data augmentation in emotion classification using generative adversarial networks (2017). arXiv:1711.00648.
- (9) A. J. Ratner, H. R. Ehrenberg, Z. Hussain, J. Dunnmon, C. Ré, Learning to compose domain-specific transformations for data augmentation (2017). arXiv:1709.01643.
- (10) Y. Lecun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86 (11) (1998) 2278–2324.
- (11) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting, Journal of Machine Learning Research 15 (56) (2014) 1929–1958.
- (12) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in Neural Information Processing Systems 27, 2014, pp. 2672–2680.
- (13) B. Zoph, E. D. Cubuk, G. Ghiasi, T. Lin, J. Shlens, Q. V. Le, Learning data augmentation strategies for object detection (2019). arXiv:1906.11172.
- (14) K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition (2015). arXiv:1512.03385.
- (15) S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift (2015). arXiv:1502.03167.
- (16) K. He, X. Zhang, S. Ren, J. Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015). arXiv:1502.01852.
- (18) C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, W. Shi, Photo-realistic single image super-resolution using a generative adversarial network (2016). arXiv:1609.04802.
- (19) K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition (2014). arXiv:1409.1556.
- (20) Y. Le, X. Yang, Tiny ImageNet visual recognition challenge (2015).
- (21) G. Cheng, J. Han, X. Lu, Remote sensing image scene classification: Benchmark and state of the art, Proceedings of the IEEE 105 (10) (2017) 1865–1883.
- (22) D. Kingma, J. Ba, Adam: A method for stochastic optimization (2015). arXiv:1412.6980.
- (23) A. Paszke, et al., Pytorch: An imperative style, high-performance deep learning library, in: Advances in Neural Information Processing Systems 32, 2019, pp. 8024–8035.