, natural language processing (NLP) [Torfi et al.2020], and so on. The development of these research fields is mainly driven by three factors: progress in deep network architectures, powerful computing resources, and access to big data.
Firstly, the scale of a network architecture is often proportional to its generalization ability; for example, the 152-layer ResNet [He et al.2016], compared with shallower networks, gains significant accuracy from its increased depth. Secondly, the development of computing power has a significant impact on deep learning: with stronger computing power, it is possible to design deeper architectures. Finally, sufficient open datasets such as ImageNet [Russakovsky et al.2015], MS-COCO [Lin et al.2014], and PASCAL VOC [Everingham et al.2015] are crucial to the development of deep learning models.
However, we observe some imbalance among these three aspects. While various network architectures have been proposed for different CV tasks and the computing power of graphics processing units (GPUs) has rapidly increased, less attention has been paid to data augmentation methods for generating qualified training data. The core idea of data augmentation is to improve the sufficiency and diversity of training data by generating synthetic samples. The augmented data can be regarded as drawn from a distribution close to the real one, so the augmented dataset can represent more comprehensive characteristics. However, several research challenges remain in image data augmentation. First, image data augmentation techniques can be applied to various CV tasks, such as object detection [Liu et al.2020], semantic segmentation [Minaee et al.2021], and image classification [Algan and Ulusoy2021]. The challenge is that data augmentation methods are task-specific: because the operations are performed on the image data and the labels at the same time, and the label types differ across tasks, augmentation methods designed for object detection cannot be directly applied to semantic segmentation. This results in inefficiency and low scalability.
Second, there is little theoretical research on data augmentation. For example, there is no quantitative standard for the size of a sufficient training dataset; the amount of generated training data is usually chosen according to personal experience and extensive experiments. In addition, a paradox may arise when the original dataset is very small: we face the challenge of generating qualified data from very little data.
To the best of our knowledge, existing surveys of image data augmentation have not reviewed augmentation methods in terms of CV tasks. One work [Wang et al.2017] explores and compares multiple solutions to the problem of data augmentation in image classification, but it covers only the image classification task and experiments only with traditional transformations and GANs. [Wang et al.2020] reviews existing face data augmentation works from the perspectives of transformation types and deep learning; however, that survey targets face recognition tasks only. Another work mainly focuses on data augmentation techniques based on data warping and oversampling [Khosla and Saini2020], but it does not provide a systematic review of different approaches. The work most closely related to ours is [Shorten and Khoshgoftaar2019], which presents existing methods and promising developments of data augmentation. However, it does not evaluate the effectiveness of data augmentation on various actual tasks, and it lacks some newly proposed methods, such as CutMix [Yun et al.2019], AugMix [Hendrycks et al.2019], and GridMask [Chen et al.2020].
In this paper, we aim to fill the aforementioned gaps by summarizing existing novel image data augmentation methods. To this end, we propose a taxonomy of image data augmentation methods, as illustrated in Fig. 1. Based on this taxonomy, we systematically review the data augmentation techniques from the perspectives of common CV tasks, including object detection, semantic segmentation and image classification. Furthermore, we also conduct experiments from the perspectives of these three CV tasks. Based on experiment results, we compare the performance of different kinds of data augmentation methods and their combinations on various deep learning models with open image datasets. We will also discuss future directions for image data augmentation research.
The remainder of this paper is organized as follows. We first present basic data augmentation methods, namely traditional image manipulation, image erasing based methods, and image mix based methods. Then we discuss some advanced techniques, including auto augmentation based methods, feature augmentation techniques, and deep generative models. To evaluate the effect of various kinds of data augmentation methods, we conduct experiments on three typical CV tasks with common public image datasets. Finally, we highlight some promising directions for future research.
2 Basic Data Augmentation Methods
2.1 Image Manipulation
Basic image manipulations focus on image transformations, such as rotation, flipping, and cropping. Most of these techniques operate on the images directly and are easy to implement. The methods considered are summarized with concise descriptions in Table 1.
|Method|Description|
|---|---|
|Flipping|Flip the image horizontally, vertically, or both.|
|Rotation|Rotate the image at an angle.|
|Scaling ratio|Increase or reduce the image size.|
|Noise injection|Add noise into the image.|
|Color space|Change the image color channels.|
|Contrast|Change the image contrast.|
|Sharpening|Modify the image sharpness.|
|Translation|Move the image horizontally, vertically, or both.|
|Cropping|Crop a sub-region of the image.|
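As a minimal illustration of the manipulations in Table 1, a few of them can be sketched in NumPy (a sketch assuming `uint8` images with pixel values in [0, 255]; the function names are ours, not from any particular library):

```python
import numpy as np

def flip(img, horizontal=True):
    # Reverse the width axis (horizontal flip) or the height axis (vertical flip).
    return img[:, ::-1] if horizontal else img[::-1]

def crop(img, top, left, height, width):
    # Extract a sub-region of the image.
    return img[top:top + height, left:left + width]

def inject_noise(img, sigma=10.0, rng=None):
    # Add zero-mean Gaussian noise, then clip back to the valid pixel range.
    rng = rng or np.random.default_rng(0)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Label handling is what makes such operations task-specific: a flip, for instance, must also mirror the bounding boxes in detection or the mask in segmentation.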
However, drawbacks exist. First of all, basic image manipulations are meaningful only under the assumption that the existing data follow a distribution close to the actual data distribution. Secondly, some basic manipulations, such as translation and rotation, suffer from the padding effect: after the operation, some regions of the image are moved outside the boundary and lost, so interpolation methods are applied to fill in the blank part. Generally, the region outside the image boundary is assumed to be a constant 0, which appears black after the manipulation. Moreover, regardless of the CV task, the object of interest should not be moved out of the frame.
2.2 Image Erasing
Image augmentation approaches based on image erasing typically delete one or more sub-regions in the image. The main idea is to replace the pixel values of these sub-regions with constant values or random values.
In [DeVries and Taylor2017b], authors considered a simple regularization technique that randomly masks out square regions of the input during the training of convolutional neural networks (CNNs), known as Cutout. This method improves the robustness and overall performance of CNNs. [Singh et al.2018] proposed Hide-and-Seek (HaS), which randomly hides patches in a training image, forcing the network to seek other relevant content when the most discriminative content is hidden. [Zhong et al.2020] proposed random erasing, which randomly selects a rectangular region in an image and replaces its pixels with random values. This method is simple but brings significant improvements. Recently, in [Chen et al.2020], authors analyzed the requirements of information dropping and proposed a structured method, GridMask, also based on deleting regions of the input images. Unlike Cutout and HaS, GridMask neither removes a single continuous region nor randomly selects squares; the deleted regions are a set of spatially uniformly distributed squares, which can be controlled in density and size. Furthermore, to balance object occlusion and information retention, FenceMask [Li et al.2020] was proposed, based on a simulated object-occlusion strategy.
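The erasing strategies above can be sketched as follows (a simplified NumPy illustration; the published GridMask additionally randomizes grid offsets and rotation, and the function names here are ours):

```python
import numpy as np

def cutout(img, size, rng=None):
    # Cutout-style erasing: zero out one randomly placed square region.
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y1, y2 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x1, x2 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = img.copy()
    out[y1:y2, x1:x2] = 0
    return out

def gridmask(img, unit=8, ratio=0.5):
    # GridMask-style erasing: zero out a spatially uniform grid of squares.
    # `unit` is the grid period; `ratio` controls the deleted fraction.
    out = img.copy()
    d = max(int(unit * ratio), 1)
    for y in range(0, img.shape[0], unit):
        for x in range(0, img.shape[1], unit):
            out[y:y + d, x:x + d] = 0
    return out
```

The structural difference is visible in the code: Cutout deletes one contiguous block, whereas GridMask spreads many small deletions uniformly, reducing the chance of erasing an entire object.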
2.3 Image Mix
Image mix data augmentation has received increasing attention in recent years. These methods mix two or more images, or sub-regions of images, into one.
In [Inoue2018], authors enlarge the dataset by synthesizing each new image from two images randomly selected from the training set, known as pairing samples. The synthesis averages the intensities of the two images at each pixel. [Zhang et al.2017] discusses a more general synthesis method, Mixup. Rather than simply averaging the intensities of two images, Mixup takes convex combinations of sample pairs and their labels. Mixup thus establishes a linear relationship between the data augmentation and the supervision signal, regularizing the neural network to favor simple linear behavior in between training samples. Similar to pairing samples and Mixup, [Yun et al.2019] proposes CutMix. Instead of simply removing pixels or mixing whole images from the training set, CutMix replaces the removed region with a patch from another image and can generate more natural images than Mixup. [Harris et al.2020] proposes FMix, which uses random binary masks obtained by thresholding low-frequency images sampled from Fourier space. FMix supports a wide range of random mask shapes and improves performance over Mixup and CutMix. Instead of mixing multiple samples, AugMix [Hendrycks et al.2019] first composes multiple augmentation operations into three augmentation chains and then mixes the results of these chains with convex combinations; the whole process therefore mixes the results generated from the same image by different augmentation pipelines. In ManifoldMix [Verma et al.2019], authors improve the hidden representations and decision boundaries of neural networks at multiple layers by mixing hidden representations rather than input samples.
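Mixup and CutMix can be sketched for a single pair of samples as follows (a NumPy sketch assuming one-hot label vectors; it simplifies the published formulations, e.g. it clamps the CutMix patch to fit inside the image):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    # Mixup: convex combination of two samples and their one-hot labels,
    # with the mixing weight lam drawn from Beta(alpha, alpha).
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def cutmix(x1, y1, x2, y2, rng=None):
    # CutMix: paste a random rectangle from x2 into x1; the labels are
    # mixed in proportion to the pasted area.
    rng = rng or np.random.default_rng(0)
    h, w = x1.shape[:2]
    lam = rng.beta(1.0, 1.0)
    ch, cw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    top, left = int(rng.integers(h - ch + 1)), int(rng.integers(w - cw + 1))
    out = x1.copy()
    out[top:top + ch, left:left + cw] = x2[top:top + ch, left:left + cw]
    kept = 1 - ch * cw / (h * w)
    return out, kept * y1 + (1 - kept) * y2
```

In both cases the label weights sum to one, which is what makes the supervision signal consistent with the mixed input.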
3 Advanced Approaches
3.1 Auto Augment
Instead of manually designing data augmentation methods, researchers have tried to automatically search for augmentation policies to obtain improved performance. Auto augmentation has been at the frontier of deep learning research and has been extensively studied. It is based on the observation that different data have different characteristics, so different augmentation methods bring different benefits, and automatically searching for augmentation methods can yield more benefit than manual design. [Cubuk et al.2019] describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies. Specifically, AutoAugment consists of two parts: a search algorithm and a search space. The search algorithm is designed to find the policy with the highest validation accuracy. The search space contains many policies detailing the augmentation operations and the magnitudes with which they are applied. However, a key challenge of auto augmentation methods is choosing an effective policy from a large search space of candidate operations. The search algorithm usually uses reinforcement learning [Sutton and Barto2018], which brings a high time cost. Therefore, to reduce the time cost of AutoAugment, [Lim et al.2019] proposes Fast AutoAugment, which finds effective augmentation policies via a more efficient search strategy based on density matching, speeding up the search considerably. Meanwhile, [Ho et al.2019] proposes Population Based Augmentation (PBA), which reduces the cost of AutoAugment by generating nonstationary augmentation policy schedules instead of a fixed policy; PBA matches the performance of AutoAugment on multiple datasets with less computation time. Recently, [Cubuk et al.2020] proposed RandAugment, surpassing previous automated augmentation techniques, including AutoAugment and PBA. RandAugment dramatically reduces the search space by removing the separate, computationally expensive search phase, and further improves on the performance of AutoAugment and PBA.
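The reduction in search cost can be illustrated with a RandAugment-style sketch, where the whole "policy" collapses to two integers, the number of operations n and a global magnitude m (the operations below are hypothetical placeholders; the real operation pool contains transformations such as rotate, shear, and solarize):

```python
import random

def identity(img, m):
    # Placeholder no-op transformation.
    return img

def invert(img, m):
    # Placeholder transformation: invert pixel intensities.
    return [255 - p for p in img]

OPS = [identity, invert]

def rand_augment(img, ops=OPS, n=2, m=9, rng=random):
    # RandAugment: draw n operations uniformly at random (with replacement)
    # and apply each at the single global magnitude m. No policy search is
    # needed; n and m can be tuned with a small grid search.
    for op in rng.choices(ops, k=n):
        img = op(img, m)
    return img
```

Compared with AutoAugment's reinforcement-learning search over thousands of candidate policies, tuning just (n, m) is what removes the expensive separate search phase.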
However, data augmentation may introduce noisy augmented examples and negatively influence inference. Therefore, [Gong et al.2021] proposed KeepAugment, which uses saliency maps to detect important regions of the original images and preserves these informative regions during augmentation; KeepAugment automatically improves augmentation schemes such as AutoAugment. In [Tian et al.2020], authors observed that augmentation operations in the later training period are more influential and proposed an Augmentation-wise Weight Sharing strategy. Compared with AutoAugment, this work improves efficiency significantly and makes it affordable to search directly on large-scale datasets. Unlike auto augmentation methods that search strategies offline, [Lin et al.2019] formulates the augmentation policy as a parameterized probability distribution whose parameters can be optimized jointly with the network parameters, known as OHL-Auto-Aug.
3.2 Feature Augmentation
Rather than conducting augmentation only in the input space, feature augmentation performs transformations in a learned feature space. In [DeVries and Taylor2017a], authors argued that when traversing along a manifold, one is more likely to encounter realistic samples in feature space than in input space. They therefore investigated augmentation methods that manipulate the vector representations of data within a learned feature space, including adding noise, nearest-neighbor interpolation, and extrapolation. Recently, [Kuo et al.2020] proposed FeatMatch, a learned feature-based refinement and augmentation method that produces a varied set of complex transformations and can exploit information from both within-class and across-class prototypical representations. More recently, authors in [Li et al.2021] proposed an implicit data augmentation method, known as Moment Exchange, which encourages models to utilize the moment information of latent features: the moments of the learned features of one training image are replaced by those of another.
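A channel-wise sketch of this moment replacement (our simplified NumPy rendering of the idea; the published Moment Exchange operates on intermediate network features during training and also interpolates the labels):

```python
import numpy as np

def moment_exchange(feat_a, feat_b, eps=1e-5):
    # Replace the channel-wise mean and standard deviation (the moments)
    # of feat_a with those of feat_b. Features have shape (C, H, W).
    mu_a = feat_a.mean(axis=(1, 2), keepdims=True)
    sd_a = feat_a.std(axis=(1, 2), keepdims=True)
    mu_b = feat_b.mean(axis=(1, 2), keepdims=True)
    sd_b = feat_b.std(axis=(1, 2), keepdims=True)
    normalized = (feat_a - mu_a) / (sd_a + eps)  # strip feat_a's moments
    return normalized * sd_b + mu_b              # inject feat_b's moments
```

The augmented feature keeps the spatial structure of one sample while carrying the first- and second-order statistics of another, which is the "implicit" mixing the method relies on.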
3.3 Deep Generative Models
The ultimate goal of data augmentation is to draw samples from the distribution that represents the generating mechanism of the dataset; hence, the distribution we generate data from should not differ from the original one. This is the core idea of deep generative models. Among deep generative models, generative adversarial networks (GANs) [Goodfellow et al.2014] are the most representative. On the one hand, the generator produces new images; on the other hand, the discriminator ensures that the gap between the newly generated images and the original images is not too large. Although GANs have proven a powerful technique for unsupervised generation to augment data [Perez and Wang2017], how to generate high-quality data and how to evaluate them remain challenging problems. In this subsection, we introduce some image data augmentation techniques based on GANs.
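This adversarial interplay between the two networks is captured by the minimax objective of [Goodfellow et al.2014]:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Here the generator G maps noise z to images while the discriminator D learns to distinguish generated from real images, which pushes the generated distribution toward the data distribution.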
In [Isola et al.2017], authors proposed Pix2Pix to learn a mapping from input images to output images. However, training Pix2Pix requires a large amount of paired data, which is challenging to collect. Therefore, unlike Pix2Pix, [Zhu et al.2017] proposed CycleGAN to learn the translation of an image from a source domain to a target domain in the absence of paired samples. As the number of source and target domains increases, CycleGAN has to train a separate model for each pair of domains: to translate among several domains, a model is needed between every two of them. To address this issue, [Choi et al.2018] proposed StarGAN to improve scalability and robustness when handling more than two domains. StarGAN builds a single model to perform image-to-image translation among multiple domains; in the generation phase, the generator is simply given the source image and an attribute label indicating the target domain. However, StarGAN takes the domain label as an additional input and learns a deterministic mapping per domain, which may produce the same output for every input image of a given domain. To address this problem, [Choi et al.2020] proposed StarGAN v2, a scalable approach that can generate diverse images across multiple domains. In that work, the domain of an image is defined as a visually distinct category group and its style as the specific appearance of the image. For example, dog can be a domain, but there are many breeds of dogs, such as Labrador and Husky, so the specific breed can be viewed as the style of an image. In this way, StarGAN v2 can translate an image of one domain into diverse images of a target domain and supports multiple domains.
4 Evaluation
In this section, based on our taxonomy, we conduct extensive evaluations in three typical CV tasks (semantic segmentation, image classification, and object detection) to show the effectiveness of data augmentation in improving performance. For fairness, we use the most commonly used public datasets for each task.
4.1 Semantic Segmentation
In this subsection, we conduct semantic segmentation experiments on the PASCAL VOC dataset. In Table 2, we report the improvement in the Intersection over Union (IoU) metric for several semantic segmentation models: DeepLabv3+ [Liu et al.2021b], PSPNet [Zhao et al.2017], GCNet [Cao et al.2019], and ISANet [Huang et al.2019].
In our experiments, we apply different data augmentation methods based on our taxonomy in Fig. 1. Specifically, the applied image manipulation methods include flipping, scaling ratio, rotation, noise injection, cropping, translation, and sharpening; the image erasing methods include random erasing, GridMask, FenceMask, Cutout, and HaS; and the image mix methods include Mosaic, Mixup, CutMix, and FMix. Table 2 presents the mean IoU with and without data augmentation for each semantic segmentation model. We observe that data augmentation brings IoU improvements for all models.
4.2 Image Classification
In this experiment, we compare classification accuracy with and without augmentation for several widely used image classification models, including Wide-ResNet [Zagoruyko and Komodakis2016], DenseNet [Huang et al.2017], and Shake-ResNet [Gastaldi2017]. These models are evaluated on several public image classification datasets, including CIFAR-10 [Krizhevsky and Hinton2010], CIFAR-100 [Krizhevsky et al.], and SVHN [Netzer et al.2011]. The data augmentation methods applied are the same as those in Section 4.1, covering image manipulation, image erasing, and image mix methods.
Table 3 summarizes the image classification results with and without data augmentation. It can be observed that data augmentation leads to an average accuracy improvement (AAI).
4.3 Object Detection
In this subsection, we compare the effectiveness of various image data augmentation methods on the widely used COCO2017 dataset for object detection. We report results with and without data augmentation for two popular object detection models, Faster R-CNN [Ren et al.2016] and CenterNet [Duan et al.2019]. The data augmentation methods are the same as those in Section 4.1, covering image manipulation, image erasing, and image mix methods. In Table 4, we report the improvement in mean average precision (mAP), AP50, and AP75, summarizing these metrics over all methods on average. We observe that data augmentation brings promising performance improvements.
5 Discussion for Future Directions
Despite extensive efforts on image data augmentation to improve the performance of deep learning models, several open problems remain to be solved, summarized as follows.
Theoretical Research on Data Augmentation. There is a lack of theoretical research on data augmentation, which is mostly regarded as an auxiliary tool to improve performance. Some methods, such as pairing samples and Mixup, improve accuracy even though we do not fully understand why; to human eyes, the images they produce are visually meaningless. Furthermore, there is no theory on the size of a sufficient training dataset. The dataset size suitable for a task and model is usually chosen based on personal experience and extensive experiments; for example, researchers determine it according to the specific model, the training objectives, and the difficulty of data collection. Rigorous and thorough interpretability could not only explain why some augmentation techniques are useful but also guide the choice or design of the most applicable and effective methods for enlarging a dataset. A critical future direction is thus to develop theoretical support for data augmentation.
The Evaluation of Data Augmentation Methods. The quantity and diversity of training data are of great importance to a model's generalization ability. However, since there are no unified metrics, evaluating the quality of synthesized images remains an open problem [Salimans et al.2016]. At this stage, researchers evaluate synthetic data in the following ways. First, the synthetic data are often judged by human eyes, which is time-consuming, labor-intensive, and subjective. Amazon Mechanical Turk (AMT) is often used to evaluate the realism of outputs: it assesses the quality and authenticity of the generated images by asking participants to vote among images synthesized with different methods. Second, some studies tie the evaluation to specific tasks, measuring augmentation methods by their effect on task metrics with and without augmentation, such as accuracy for classification and mask IoU for semantic segmentation. However, there is no evaluation index for the synthetic data itself. Generally, such metrics should capture the diversity of individual samples and the consistency of the overall data distribution, regardless of the task; data quality analysis can help design such evaluation metrics.
Class Imbalance. Class imbalance or very little data can severely skew the data distribution [Sun et al.2009]. This occurs because the learning process is often biased toward the majority-class examples, so minority classes are not well modeled. The Synthetic Minority Oversampling Technique (SMOTE) [Chawla et al.2002] oversamples the minority class; however, plain oversampling repeatedly draws from the current dataset, which may saturate the minority class and cause overfitting. Ultimately, we expect generated data to follow a distribution similar to the training data without losing diversity.
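The core interpolation step of SMOTE can be sketched in a few lines (a NumPy illustration; the full method first finds the k nearest minority-class neighbors of each sample):

```python
import numpy as np

def smote_sample(x, neighbors, rng=None):
    # SMOTE-style synthesis: pick one of x's minority-class nearest
    # neighbors and interpolate at a random point on the segment between
    # them, rather than duplicating x itself.
    rng = rng or np.random.default_rng(0)
    nb = neighbors[int(rng.integers(len(neighbors)))]
    lam = rng.random()  # interpolation weight in [0, 1)
    return x + lam * (nb - x)
```

Because the new sample lies between existing minority points instead of on top of one, this adds some diversity, though it still cannot create variation beyond the convex hull of the minority class.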
The Number of Generated Data. An interesting observation is that the increase in training data is not exactly proportional to the increase in performance: beyond a certain amount, adding more data no longer improves results. This may be partly because, despite the increased quantity, the diversity of the data remains unchanged. Thus, how much generated data is enough to improve model performance remains to be explored.
The Selection and Combination of Data Augmentation. Since various data augmentation techniques can be combined to generate new image data, their selection and combination are critical. Image recognition experiments in [Pawara et al.2017] show that combined methods often outperform single methods, so how to choose and combine methods is a key point when performing data augmentation. However, our evaluation shows that the methods suitable for different datasets and tasks are not the same; therefore, the set of augmentation methods must be carefully designed, implemented, and tested for every new task and dataset.
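At its simplest, such a combination strategy amounts to sampling a pipeline from a candidate pool and validating it per task (a schematic Python sketch; the pool and the pipeline length k are hypothetical tuning choices):

```python
import random

def sample_pipeline(candidates, k=2, rng=random):
    # Build one augmentation pipeline by sampling k techniques from a
    # candidate pool; the pool and k are tuned per task and dataset.
    chosen = rng.sample(candidates, k)
    def apply(img):
        for aug in chosen:
            img = aug(img)
        return img
    return apply
```

In practice each sampled pipeline must be evaluated against the task metric (accuracy, IoU, mAP) before adoption, since a combination that helps one dataset may hurt another.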
6 Conclusion
With the development of deep learning, the requirements on training datasets are becoming increasingly stringent; we thus emphasize that data augmentation is an effective solution to the shortage of labeled image data. In this paper, we present a comprehensive review of image data augmentation methods across various CV tasks. We propose a taxonomy, summarize representative approaches in each category, and compare the methods empirically on various CV tasks. Finally, we discuss the challenges and highlight future perspectives.
- [Algan and Ulusoy2021] Görkem Algan and Ilkay Ulusoy. Image classification with deep learning in the presence of noisy labels: A survey. Knowledge-Based Systems, 215:106771, 2021.
- [Cao et al.2019] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, and Han Hu. Gcnet: Non-local networks meet squeeze-excitation networks and beyond. arXiv preprint arXiv:1904.11492, 2019.
- [Chawla et al.2002] Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16:321–357, 2002.
- [Chen et al.2020] Pengguang Chen, Shu Liu, Hengshuang Zhao, and Jiaya Jia. Gridmask data augmentation. arXiv preprint arXiv:2001.04086, 2020.
- [Choi et al.2018] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 8789–8797, 2018.
- [Choi et al.2020] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 8185–8194. IEEE, 2020.
- [Cubuk et al.2019] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 113–123, 2019.
- [Cubuk et al.2020] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702–703, 2020.
- [DeVries and Taylor2017a] Terrance DeVries and Graham W Taylor. Dataset augmentation in feature space. arXiv preprint arXiv:1702.05538, 2017.
- [DeVries and Taylor2017b] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
- [Duan et al.2019] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qingming Huang, and Qi Tian. Centernet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6569–6578, 2019.
- [Everingham et al.2015] Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. Int. J. Comput. Vis., 111(1):98–136, Jan 2015.
- [Gastaldi2017] Xavier Gastaldi. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
- [Gong et al.2021] Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, and Qiang Liu. Keepaugment: A simple information-preserving data augmentation approach. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1055–1064, 2021.
- [Goodfellow et al.2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proc. NIPS, pages 2672–2680, 2014.
- [Harris et al.2020] Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, and Jonathon Hare. Fmix: Enhancing mixed sample data augmentation. arXiv preprint arXiv:2002.12047, 2020.
- [Hassaballah and Awad2020] Mahmoud Hassaballah and Ali Ismail Awad. Deep learning in computer vision: principles and applications. CRC Press, 2020.
- [He et al.2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 770–778, 2016.
- [Hendrycks et al.2019] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781, 2019.
- [Ho et al.2019] Daniel Ho, Eric Liang, Xi Chen, Ion Stoica, and Pieter Abbeel. Population based augmentation: Efficient learning of augmentation policy schedules. In International Conference on Machine Learning, pages 2731–2741. PMLR, 2019.
- [Huang et al.2017] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
- [Huang et al.2019] Lang Huang, Yuhui Yuan, Jianyuan Guo, Chao Zhang, Xilin Chen, and Jingdong Wang. Interlaced sparse self-attention for semantic segmentation. arXiv preprint arXiv:1907.12273, 2019.
- [Inoue2018] Hiroshi Inoue. Data augmentation by pairing samples for images classification. CoRR, abs/1801.02929, 2018.
- [Isola et al.2017] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 1125–1134, 2017.
- [Khosla and Saini2020] Cherry Khosla and Baljit Singh Saini. Enhancing performance of deep learning models with different data augmentation techniques: A survey. In 2020 International Conference on Intelligent Engineering and Management (ICIEM), pages 79–85, 2020.
- [Krizhevsky and Hinton2010] Alex Krizhevsky and Geoff Hinton. Convolutional deep belief networks on cifar-10. Unpublished manuscript, 40(7):1–9, 2010.
- [Krizhevsky et al.] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-100 (canadian institute for advanced research).
- [Kuo et al.2020] Chia-Wen Kuo, Chih-Yao Ma, Jia-Bin Huang, and Zsolt Kira. Featmatch: Feature-based augmentation for semi-supervised learning. In European Conference on Computer Vision, pages 479–495. Springer, 2020.
- [Li et al.2020] Pu Li, Xiangyang Li, and Xiang Long. Fencemask: A data augmentation approach for pre-extracted image features. arXiv preprint arXiv:2006.07877, 2020.
- [Li et al.2021] Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, and Kilian Q. Weinberger. On feature normalization and data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12383–12392, June 2021.
- [Lim et al.2019] Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast autoaugment. Advances in Neural Information Processing Systems, 32:6665–6675, 2019.
- [Lin et al.2014] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context. In Proc. ECCV, volume 8693 of Lecture Notes in Computer Science, pages 740–755. Springer, 2014.
- [Lin et al.2019] Chen Lin, Minghao Guo, Chuming Li, Xin Yuan, Wei Wu, Junjie Yan, Dahua Lin, and Wanli Ouyang. Online hyper-parameter learning for auto-augmentation strategy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6579–6588, 2019.
- [Liu et al.2020] Li Liu, Wanli Ouyang, Xiaogang Wang, Paul Fieguth, Jie Chen, Xinwang Liu, and Matti Pietikäinen. Deep learning for generic object detection: A survey. International journal of computer vision, 128(2):261–318, 2020.
- [Liu et al.2021a] Baichuan Liu, Qingtao Zeng, Likun Lu, Yeli Li, and Fucheng You. A survey of recommendation systems based on deep learning. Journal of Physics: Conference Series, 1754(1):012148, feb 2021.
- [Minaee et al.2021] Shervin Minaee, Yuri Y Boykov, Fatih Porikli, Antonio J Plaza, Nasser Kehtarnavaz, and Demetri Terzopoulos. Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
- [Mirza and Osindero2014] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.
- [Netzer et al.2011] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
- [Pawara et al.2017] Pornntiwa Pawara, Emmanuel Okafor, Lambert Schomaker, and Marco Wiering. Data augmentation for plant classification. In Proc. International Conference on Advanced Concepts for Intelligent Vision Systems, pages 615–626. Springer, 2017.
- [Perez and Wang2017] Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning. CoRR, abs/1712.04621, 2017.
- [Ren et al.2016] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell., 39(6):1137–1149, Jan 2016.
- [Russakovsky et al.2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis., 115(3):211–252, 2015.
- [Salimans et al.2016] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Proc. NIPS, pages 2234–2242, 2016.
- [Shorten and Khoshgoftaar2019] Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1–48, 2019.
- [Singh et al.2018] Krishna Kumar Singh, Hao Yu, Aron Sarmasi, Gautam Pradeep, and Yong Jae Lee. Hide-and-seek: A data augmentation technique for weakly-supervised localization and beyond. arXiv preprint arXiv:1811.02545, 2018.
- [Sun et al.2009] Yanmin Sun, Andrew KC Wong, and Mohamed S Kamel. Classification of imbalanced data: A review. International journal of pattern recognition and artificial intelligence, 23(04):687–719, 2009.
- [Sutton and Barto2018] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
- [Tian et al.2020] Keyu Tian, Chen Lin, Ming Sun, Luping Zhou, Junjie Yan, and Wanli Ouyang. Improving auto-augment via augmentation-wise weight sharing. arXiv preprint arXiv:2009.14737, 2020.
- [Torfi et al.2020] Amirsina Torfi, Rouzbeh A Shirvani, Yaser Keneshloo, Nader Tavaf, and Edward A Fox. Natural language processing advancements by deep learning: A survey. arXiv preprint arXiv:2003.01200, 2020.
- [Verma et al.2019] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning, pages 6438–6447. PMLR, 2019.
- [Wang et al.2017] Jason Wang, Luis Perez, et al. The effectiveness of data augmentation in image classification using deep learning. Convolutional Neural Networks Vis. Recognit, 11:1–8, 2017.
- [Wang et al.2020] Xiang Wang, Kai Wang, and Shiguo Lian. A survey on face data augmentation for the training of deep neural networks. Neural computing and applications, pages 1–29, 2020.
- [Yun et al.2019] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6023–6032, 2019.
- [Zagoruyko and Komodakis2016] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
- [Zhang et al.2017] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
- [Zhao et al.2017] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017.
- [Zhong et al.2020] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proc. The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI, pages 13001–13008. AAAI Press, 2020.
- [Zhu et al.2017] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2223–2232, 2017.