Although deep neural network based models have achieved remarkable success in various computer vision tasks (Krizhevsky et al., 2017; Simonyan & Zisserman, 2014; Russakovsky et al., 2015; He et al., 2016), vast amounts of annotated training data are usually required for superior performance. For the object classification task, the requirement for a large training set can be partially explained by the fact that many latent variables (e.g., the positions and postures of the objects, the brightness and contrast of the image, and the illumination conditions) cause significant changes in the appearance of objects. Although collecting a large training set that covers all possible values of these latent variables could improve recognition performance, for rare latent values such as extreme illumination conditions it could be prohibitively time-consuming and expensive to collect enough training images.
In this paper we restrict our attention to illumination conditions. For many real-world computer vision applications (e.g., autonomous driving and video surveillance) it is essential to recognize objects under extreme illumination conditions such as backlighting, overexposure and other complicated cast shadows. Thus, we argue that it is desirable to improve recognition models' generalization ability under different illumination conditions in order to deploy robust models in real-world applications.
We propose a novel neural network architecture named Separating-Illumination Network (Sill-Net) to deal with such problems. The key idea of our approach is to separate the illumination features from the semantic features in images, and then blend the separated illumination features into other training samples (hereinafter referred to as "support samples") to construct a more extensive feature set for subsequent training (see Figure 1). Specifically, our approach consists of three steps. In the first step, we separate the illumination and semantic features for all images in the existing dataset via a disentanglement method, and use the separated illumination features to build an illumination repository. Then, we transplant the illumination repository onto the support samples to construct an augmented training set and use it to train a recognition model. Finally, test images are fed into the trained model for classification. Our approach improves robustness to illumination conditions because the support samples used for training are blended with many different illumination features; after training, the obtained model naturally generalizes better under various illumination conditions.
Our contributions are summarized as follows:
1) We develop an algorithm to separate the illumination features from semantic features for natural images. The separated illumination features can be used to construct an illumination feature repository.
2) We propose an augmentation method to blend support samples with the illumination feature repository, which could effortlessly enhance the illumination variety of the training set and thus improve the robustness to illumination conditions of the trained deep model.
3) We evaluate Sill-Net on several object classification benchmarks, i.e., two traffic sign datasets (GTSRB and TT100K) and three logo datasets (BelgaLogos, FlickrLogos-32, and TopLogo-10). Sill-Net outperforms the state-of-the-art (SOTA) methods by a large margin.
2 Proposed Method
In this section, we introduce our Separating-Illumination Network (Sill-Net). Sill-Net first learns to separate the semantic and illumination features of training images. Then the illumination features are blended with the semantic feature of each support sample to construct an augmented feature set. Finally, we train again on the illumination-augmented feature set for classification. The architecture of our method is illustrated in Figure 2.
Sill-Net mainly consists of the following modules: the separation module, the matching and reconstruction module, and the augmentation module. In detail, we implement the method in three steps:
1) The separation module is trained to separate the features into semantic parts and illumination parts for all training images. The matching and reconstruction module promotes the learning of better semantic feature representation. The learned illumination features are stored into an illumination repository. The details are illustrated in Section 2.1.
2) The semantic feature of each support image is combined with all illumination features in the repository to build an augmented feature set to train the classifier. The augmentation module is illustrated in Section 2.2.
3) Test images are fed into the trained model for end-to-end prediction, as described in Section 2.3.
This approach assumes that the illumination distribution learned from training data is similar to that of test data. Thus the illumination features can be used as feature augmentation for sufficient training.
We can choose different support samples in different visual tasks. For instance, in conventional classification tasks, we use the real training images as support samples; in one-shot classification tasks, we construct the support set with template images (i.e., graphic symbols visually and abstractly representing semantic information).
2.1 Separate semantic and illumination features
Let $\mathcal{D} = \{(x_i, y_i, t_i)\}_{i=1}^{N}$ represent the labeled dataset of $C$ training classes with $N$ images, where $x_i$ denotes the $i$-th training image, $y_i$ is the one-hot label, and $t_i$ denotes the corresponding template image (or any image of the object without much deformation).
A feature extractor denoted by $F$ learns the separated features $z_i = F(x_i)$ from images $x_i$, where $z_i$ can be split along the channels: $z_i = [\,z_i^{sem};\, z_i^{illu}\,]$. Here, $z_i^{sem}$ is called the semantic feature, which represents the consistent information of the same category, while $z_i^{illu}$ is called the illumination feature.
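This channel-wise split can be sketched in a few lines of NumPy. The half-and-half split along the first axis and the (C, H, W) feature layout are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def split_features(z):
    """Split an extracted feature map z of shape (C, H, W) along the
    channel axis: the first half is taken as the semantic feature,
    the second half as the illumination feature."""
    c = z.shape[0]
    assert c % 2 == 0, "channel count must be even to split in half"
    return z[: c // 2], z[c // 2:]

# Example: an 8-channel feature map yields two 4-channel halves.
z = np.random.rand(8, 5, 5)
z_sem, z_illu = split_features(z)
```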
Here we specify what "illumination" means in this paper. In the narrow sense, illumination is one of the environmental factors that change an object's appearance without changing its label. We call the features related to all such environmental factors, i.e., those that are not category-specific, illumination features. Technically, we divide the object feature into two groups of channels: one half, which determines the category label, is defined as the semantic feature; the other half, which is unrelated to the category label, is defined as the illumination feature. Thus, the following three conditions should be satisfied:
1) The semantic feature is informative to reconstruct the corresponding template image.
2) The semantic feature can predict the label while the illumination feature can not.
3) The illumination feature should not contain the semantic information.
To satisfy the above conditions, we build the following modules.
Matching and reconstruction module. We first construct a matching module (as shown in Figure 2) to make the semantic feature informative, as required by the first condition. Since we design the extractor without downsampling operations to maintain the spatial information of the object, the semantic feature of a real image should be similar to that of its corresponding template image. However, the real image is usually deformed compared to the regular template image. Therefore, we use a spatial transformer (Jaderberg et al., 2015) to correct the deformation. We constrain the transformed semantic feature to be consistent with the template feature by the mean square error (MSE), that is:
$$\mathcal{L}_{match} = \frac{1}{N}\sum_{i=1}^{N} \big\|\,\mathrm{ST}(z_i^{sem}) - \tilde{z}_i^{sem}\,\big\|_2^2,$$
where $\mathrm{ST}(\cdot)$ denotes the spatial transformer and $\tilde{z}_i^{sem}$ denotes the semantic feature of the template image $t_i$.
Besides, we design a reconstructor (as shown in Figure 2) to retrieve the template image from the semantic feature, to ensure that it is informative enough. We constrain the reconstructed template image by the binary cross-entropy (BCE) loss:
$$\mathcal{L}_{rec} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j} \big[\, t_{ij}\log \hat{t}_{ij} + (1 - t_{ij})\log(1 - \hat{t}_{ij}) \,\big],$$
where $t_{ij}$ represents the $j$-th pixel of the $i$-th template image $t_i$, and $\hat{t}_{ij}$ the corresponding pixel of the reconstructed template. Since the template images are composed of primary colors within the range of $[0, 1]$, the BCE loss is sufficiently efficient for the retrieval (Kim et al., 2019). So far, the semantic feature is constrained to be consistent with the template feature and informative enough to be reconstructed into its template image.
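The two losses above can be sketched concretely in NumPy. The tensor shapes and the clipping constant `eps` are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def matching_loss(z_sem_warped, z_sem_template):
    """L_match: mean square error between the spatially transformed
    semantic feature of a real image and its template's semantic feature."""
    return np.mean((z_sem_warped - z_sem_template) ** 2)

def reconstruction_loss(t_hat, t, eps=1e-7):
    """L_rec: pixel-wise binary cross-entropy between the reconstructed
    template t_hat and the ground-truth template t, both in [0, 1].
    Clipping avoids log(0) for saturated predictions."""
    t_hat = np.clip(t_hat, eps, 1.0 - eps)
    return -np.mean(t * np.log(t_hat) + (1.0 - t) * np.log(1.0 - t_hat))
```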
Exchange mechanism. To ensure that the semantic feature can predict the label while the illumination feature cannot, we utilize a feature exchange mechanism enlightened by (Xiao et al., 2018) to separate the features. As shown in Figure 3, the semantic feature of one image is blended with the illumination feature of another image to form a new feature through feature mixup (Zhang et al., 2017):
$$\hat{z}_{ij} = (1 - \lambda)\, z_i^{sem} + \lambda\, z_j^{illu},$$
where the proportion $\lambda \in [0, 1]$. As required by the second condition, the blended feature $\hat{z}_{ij}$ retains the same label $y_i$ as the semantic feature. Hence, through training, the semantic feature learns the information needed to predict the label while the illumination feature does not.
We implement the exchange process for random pairs of images, building a new exchanged feature set: $\hat{\mathcal{Z}} = \{(\hat{z}_{ij},\, y_i)\}$.
The mixed features are then input into a classifier for label prediction. We denote the distribution of the predicted label given the mixed feature by $p(y \mid \hat{z})$. Then we minimize the cross-entropy loss:
$$\mathcal{L}_{cls} = -\frac{1}{M}\sum_{i=1}^{M}\sum_{c=1}^{C} y_{ic}\,\log p_c(\hat{z}_i),$$
where $M$ denotes the number of recombined features in the exchanged feature set, $C$ represents the number of classes of all images for training and test, and $y_{ic}$ is the $c$-th element of the one-hot label $y_i$.
The semantic feature retains the information needed to predict the label after training on the exchanged feature set. Meanwhile, the semantic information in the illumination feature is reduced, because otherwise it would impair the prediction when blended with the semantic features of other images.
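A minimal NumPy sketch of the exchange-and-classify step. The convex-combination form of the mixup and the probability-vector input to the loss are assumptions for illustration:

```python
import numpy as np

def exchange_mix(z_sem_i, z_illu_j, lam=0.5):
    """Blend the semantic feature of image i with the illumination feature
    of image j by feature mixup; the blended feature keeps label y_i."""
    return (1.0 - lam) * z_sem_i + lam * z_illu_j

def cross_entropy(probs, y_onehot, eps=1e-12):
    """L_cls: cross-entropy between predicted class probabilities and
    one-hot labels, averaged over the batch of mixed features."""
    return -np.mean(np.sum(y_onehot * np.log(probs + eps), axis=-1))
```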
Constraints on illumination features. As required by the third condition, it is essential to impose additional constraints on illumination features to reduce the semantic information. However, it is difficult to find suitable restrictions since the generally used datasets have no illumination labels.
Enlightened by the disentanglement metric proposed in (Suter et al., 2019), we design a constraint on illumination features via the negative Post Interventional Disagreement (PIDA). Given a subset $\mathcal{D}_c$ containing the images of the same label $c$, we write the loss as follows:
$$\mathcal{L}_{PIDA} = -\,\mathrm{PIDA} = -\frac{1}{N_c}\sum_{x_i \in \mathcal{D}_c} d\Big(z_i^{illu},\; \mathbb{E}_{x_k \in \mathcal{D}_c}\big[z_k^{illu}\big]\Big), \qquad (6)$$
where $d(\cdot,\cdot)$ is a proper distance function (e.g., the $\ell_2$-norm), $z_i^{illu}$ is the illumination feature of image $x_i$ with label $c$, $\mathbb{E}[\cdot]$ is the expectation, and $N_c$ is the number of images in class $c$.
According to Eq. (6), PIDA quantifies the distances between the illumination features of same-labeled images and their expectation as the illumination conditions change. In the subset $\mathcal{D}_c$, the semantic information of each image is similar while the illumination information differs. Consider the undesirable situation in which the illumination features capture much semantic information rather than illumination information. The expectation would then strengthen the common semantic component and weaken the distinct illumination components, so PIDA would be small. In other words, the smaller PIDA is, the more semantic information the illumination features capture relative to illumination information. By maximizing PIDA (i.e., minimizing $\mathcal{L}_{PIDA}$), we can effectively reduce the common semantic information remaining in the illumination features.
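The negative-PIDA constraint for a single class can be sketched as follows, assuming, for illustration, flattened illumination features of shape (n, d) and the $\ell_2$-norm as the distance function:

```python
import numpy as np

def pida_loss(illu_feats):
    """Negative PIDA for one class. illu_feats has shape (n, d): the
    illumination features of n same-labeled images. PIDA is the mean L2
    distance of each feature to the class-wise expectation; minimizing
    its negative pushes illumination features apart, squeezing out any
    shared (semantic) component."""
    mean = illu_feats.mean(axis=0, keepdims=True)
    pida = np.mean(np.linalg.norm(illu_feats - mean, axis=1))
    return -pida
```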
In summary, the overall loss function in the training phase can be written as:
$$\mathcal{L} = \mathcal{L}_{cls} + \mathcal{L}_{match} + \mathcal{L}_{rec} + \mathcal{L}_{PIDA}.$$
Through the above training, the model learns to split the features into semantic and illumination features.
2.2 Augment samples by illumination repository
After the first training step, the illumination feature of each training image can be separated. These features are collected to construct an illumination repository, expressed as follows:
$$\mathcal{R} = \big\{\, z_i^{illu} \,\big\}_{i=1}^{N}.$$
We then use the illumination features to augment the support samples, enlarging the support set by a multiple of the repository size $N$. Here we assume that the template images constitute the support set, with one template per label $c$. We combine all illumination features in the repository with the semantic feature of each template by feature mixup, building an augmented feature set as follows:
$$\mathcal{A} = \Big\{\big((1-\lambda)\,\tilde{z}_c^{sem} + \lambda\, z_i^{illu},\; c\big) \;\Big|\; z_i^{illu} \in \mathcal{R},\; c = 1, \dots, C\Big\},$$
where $\tilde{z}_c^{sem}$ denotes the semantic feature of the template of class $c$.
We train the model again on the feature set $\mathcal{A}$. Hence, even if only a few support samples are provided, the model can be trained on the augmented feature set blended with real illumination features, making it generalizable to the test data. The classification loss of augmented training is expressed as follows:
$$\mathcal{L}_{aug} = -\frac{1}{M'}\sum_{i=1}^{M'}\sum_{c=1}^{C} y_{ic}\,\log p_c(z_i), \qquad (z_i, y_i) \in \mathcal{A},$$
where $M'$ denotes the number of all recombined features in the augmented feature set.
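The augmentation step, in which every template's semantic feature is mixed with every stored illumination feature, can be sketched as follows (the flattened feature shapes and the mixup form are illustrative assumptions):

```python
import numpy as np

def augment_templates(template_sems, illu_repository, lam=0.5):
    """Combine the semantic feature of every template with every
    illumination feature in the repository via feature mixup.
    Returns features of shape (n_templates * repo_size, d) and the
    matching class labels (each template's label is preserved)."""
    feats, labels = [], []
    for label, z_sem in enumerate(template_sems):
        for z_illu in illu_repository:
            feats.append((1.0 - lam) * z_sem + lam * z_illu)
            labels.append(label)
    return np.stack(feats), np.array(labels)
```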
The model is now trained to generalize to the test data.
2.3 Inference
The feature extractor and classifier are fully trained after the first two phases. Given the $i$-th test image, the feature extractor first splits its semantic feature $z_i^{sem}$ and illumination feature $z_i^{illu}$. Subsequently, the two features are re-blended, and the classifier outputs the category label $\hat{y}_i$, formulated as:
$$\hat{y}_i = \arg\max_{c}\; p_c\big((1-\lambda)\, z_i^{sem} + \lambda\, z_i^{illu}\big).$$
The inference is achieved in an end-to-end manner.
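The end-to-end inference step can be sketched as follows. Re-blending a test image's own semantic and illumination halves and the callable-classifier interface are assumptions for illustration:

```python
import numpy as np

def predict(x_feat, classifier, lam=0.5):
    """Inference sketch: split the extracted test feature into its
    semantic and illumination halves, re-blend them with the same mixup
    proportion used in training, and return the argmax class of the
    classifier's probability output."""
    c = x_feat.shape[0]
    z_sem, z_illu = x_feat[: c // 2], x_feat[c // 2:]
    z = (1.0 - lam) * z_sem + lam * z_illu
    return int(np.argmax(classifier(z)))
```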
3 Experiments
3.1 Datasets and experimental settings
Datasets. We validate the effectiveness of our method on two traffic sign datasets, GTSRB (Stallkamp et al., 2012) and Tsinghua-Tencent 100K (TT100K) (Zhu et al., 2016), and three logo datasets, BelgaLogos (Joly & Buisson, 2009; Letessier et al., 2012), FlickrLogos-32 (Romberg et al., 2011) and TopLogo-10 (Su et al., 2017), because these datasets contain images under diverse illumination conditions. Table 1 shows the size and number of classes of each dataset (we use the datasets provided in (Kim et al., 2019)). More details about the datasets are described in the supplementary material.
Evaluation tasks. Generally, we evaluate our model by the following steps. 1) Use the training dataset (or a subset) to separate out the illumination features. 2) Augment the support samples with the illumination features to form an augmented feature set. 3) Train a classifier on the augmented feature set. 4) Predict on the test dataset.
We validate our model on the following classification tasks.
1) One-shot classification
In this type of task, the training phase requires no real images of the test classes, only one template image per category; the setting is thus analogous to one-shot classification.
We set up two scenarios for traffic sign classification. In the first scenario, we split GTSRB into a training subset with 22 classes and a test subset with the other 21 classes, where the template images constitute the support set. In the second scenario, we train on GTSRB and test on TT100K for cross-dataset evaluation. We exclude the four common classes shared by GTSRB and TT100K from the test set. For convenience, we denote the first scenario by GTSRB→GTSRB and the second by GTSRB→TT100K, where the training set is on the left side of the arrow and the test set on the right.
For logo classification, we use the largest dataset, BelgaLogos, as the training set and the remaining two as test sets, denoted by Belga→Flickr32 and Belga→Toplogos respectively. As above, we remove the four classes in FlickrLogos-32 and the five in Toplogo-10 that are shared with BelgaLogos.
2) Cross-domain one-shot classification
To further validate the generalization of our method, we perform a cross-domain one-shot evaluation with another two experiments, where the model is trained on traffic sign datasets and tested on logo datasets. Specifically, we train the model on GTSRB and test on FlickrLogos-32 and Toplogo-10. We denote these two scenarios as GTSRB→Flickr32 and GTSRB→Toplogos. This setup is more challenging than the previous scenarios, since the model is trained in the domain of traffic signs but tested in the entirely different domain of logos.
Architecture and parameter settings. We construct the extractor with six convolution layers to separate the semantic and illumination features (see the separation module in Figure 2). The reconstructor is built with the layers of the extractor in reverse order. The classifiers (see Figure 2 and Figure 3) are built with six convolution layers and three pooling layers. Due to space limitations, more details of the architecture are described in the supplementary material.
The networks are trained using the Adam optimizer. The mixup proportion $\lambda$ is kept fixed throughout the experiments. Limited by graphics card memory, we choose a mini-batch size of 16, which can be larger if conditions permit.
The matching and reconstruction loss functions are weighted by proportionality coefficients $\alpha$ and $\beta$ for optimal results. The weighted overall loss function is expressed as follows:
$$\mathcal{L} = \mathcal{L}_{cls} + \alpha\,\mathcal{L}_{match} + \beta\,\mathcal{L}_{rec} + \mathcal{L}_{PIDA}.$$
We choose $\alpha$ and $\beta$ within a moderate range: when they are too large, the model tends to learn degenerate features with values close to zero, while when they are too small, the model cannot learn informative semantic features.
Template image processing. Previous studies (Tabelini et al., 2020) have shown that basic image processing on template images (as support samples) helps the network’s generalization. In our experiment, we diversify the template images themselves using the following methods: geometric transformations, image enhancement (including brightness, color, contrast and sharpness adjustment), and blur. The template images are diversified and thus allow the model to learn more generalizable features. We observe that basic processing on template images improves model performance.
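The basic template processing can be sketched with simple NumPy transforms. The specific parameter ranges and the 3×3 box blur are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def diversify_template(img, rng):
    """Apply one randomly chosen basic transform to a template image
    (an (H, W, 3) float array in [0, 1]): a horizontal flip (geometric),
    a brightness/contrast jitter (enhancement), or a 3x3 box blur."""
    choice = rng.integers(3)
    if choice == 0:                          # geometric: horizontal flip
        return img[:, ::-1]
    if choice == 1:                          # enhancement: gain + bias jitter
        gain = rng.uniform(0.8, 1.2)
        bias = rng.uniform(-0.1, 0.1)
        return np.clip(gain * img + bias, 0.0, 1.0)
    # blur: 3x3 box filter via a padded rolling sum
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0
```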
3.2 One-shot classification
We compare our method with Quadruplet networks (QuadNet) (Kim et al., 2017) and the Variational Prototyping-Encoder (VPE) (Kim et al., 2019) for one-shot classification, as reported in Tables 2 and 3. We quote the accuracies of the compared methods under their optimal settings, that is, VPE is implemented with augmentation and a spatial transformer (the VPE+aug+stn version) and QuadNet is implemented without augmentation. As shown in the tables, our method outperforms the compared methods in all scenarios.
| No. support set | (22+21)-way | 36-way |
| --- | --- | --- |
| Sill-Net w/o aug | 46.25 | 45.94 |
In traffic sign classification, Sill-Net outperforms the second best method, VPE, by a large margin of 13.81 (accuracy improved from 83.79 to 97.60) and 23.79 (from 71.80 to 95.59) in the two scenarios respectively (see Table 2). This indicates that training on features augmented with illumination information does help real-world classification, even though only one template image is provided. Notably, in the cross-dataset scenario GTSRB→TT100K, Sill-Net achieves performance comparable to the intra-dataset scenario GTSRB→GTSRB, while VPE performs much worse in the cross-dataset scenario. We surmise this is because VPE learns latent embeddings generalizable to test classes in the same domain (GTSRB), but the generalization might be discounted when the target domain is slightly shifted (from GTSRB to TT100K). We observe that the illumination conditions in GTSRB are quite similar to those in TT100K; therefore Sill-Net shows better generalization by making full use of the illumination information in GTSRB.
| No. support set | 32-way | 11-way |
| --- | --- | --- |
| Sill-Net w/o aug | 52.38 | 47.95 |
In logo classification, Sill-Net improves the performance over VPE by 11.68 (from 53.53 to 65.21) and 26.68 (from 57.75 to 84.43) in the two scenarios respectively (see Table 3). The accuracy improvements in logo classification are smaller than those in traffic sign classification, which might be due to the lower quality of the training logo dataset. GTSRB is the largest dataset, with various illumination conditions, and its traffic signs are always complete and well localized in the images, so that illumination features can be separated more easily. In contrast, separation is harder for the logo datasets due to incomplete logos, color variations, and non-rigid deformation (e.g., logos on bottles).
We compare our method to the ordinary convolutional model consisting of a feature extractor and a classifier without feature augmentation, denoted as Sill-Net w/o aug (see the last row of Tables 2 and 3). For a fair comparison, the feature extractor and classifier share the same number of convolutional layers with Sill-Net. We train it on a synthetic dataset composed of the template images after basic processing (i.e., geometric transformations, image enhancement, and blur). The number of training samples is set to be the same as other methods. The results are reported as a reference to show how the Sill-Net performs without illumination feature augmentation. The unsatisfactory results show that the illumination feature augmentation does enhance the recognition ability of the model in one-shot classification.
3.3 Cross-domain one-shot classification
Sill-Net achieves the best results among all methods in cross-domain one-shot classification tasks, as shown in Table 4. It outperforms VPE by a large margin of 23.63 (69.75 compared to 46.12) in GTSRB→Flickr32 and 39.86 (69.46 compared to 29.60) in GTSRB→Toplogos.
| No. support set | 32-way | 11-way |
| --- | --- | --- |
| Sill-Net w/o aug | 53.94 | 47.54 |
The results illustrate that our method is still generalizable when the domain is transferred from traffic signs to logos. The unsatisfactory results of VPE are predictable. VPE learns a generalizable similarity embedding space of the semantic information among the same or similar domain (i.e., from traffic signs to traffic signs or from logos to logos). However, the embeddings learned from traffic signs are difficult to generalize to logos. In contrast, our method learns well-separated semantic and illumination representations and augments the illumination features to the template images from novel domains to generalize the model.
3.4 Ablation study
In this section, we examine the contribution of each component of our method. The components under evaluation include the exchange mechanism, the matching and reconstruction module, the illumination constraint, and template image processing, as shown in Table 5. We disable one component at a time and record the performance to assess its importance. The experiments are implemented in the one-shot classification scenario GTSRB→GTSRB.
The results demonstrate that the exchange mechanism and the matching module are the core components of our method. The accuracy of the model drops to 48.10 without the exchange mechanism. This is because the semantic and illumination features cannot be well separated without it. The semantic information remaining in the illumination features is useless, or even interferes with recognition when combined with the semantic features of other objects during feature augmentation, hurting the performance of the model.
Meanwhile, the matching module, cooperating with the separation module, further separates the semantic and illumination features. The matching module corrects the deformation of the object features. It retains the concrete semantic information (e.g., the outline of the object and the semantic details of its contents) under the supervision of the template images. Without the matching module, the semantic features would not be informative enough, so the separation module would have difficulty separating the illumination features from the semantic features. Accordingly, the accuracy of the model drops to 54.27 when the matching module is removed.
| Ablation | Accuracy |
| --- | --- |
| w/o exchange mechanism | 48.10 (-49.50) |
| w/o matching module | 54.27 (-43.33) |
| w/o reconstruction module | 80.74 (-16.86) |
| w/o illumination constraint | 90.73 (-6.87) |
| w/o template processing | 80.19 (-17.41) |
The accuracy of the model decreases by 16.86 without the reconstruction module, which also strives to make the semantic features more informative. The matching module helps the model capture the concrete semantic information at a coarse level, while the reconstruction module prompts it to retain the more delicate details of the object.
The illumination constraint increases the model performance by 6.87. Intuitively, the constraint reduces the semantic information in illumination features and thus enhances their quality. Higher-quality illumination representation can improve our feature augmentation method’s effectiveness, which is consistent with the results.
Furthermore, template image processing improves model performance as expected. The processing methods (i.e., geometric transformations, image enhancement, and blur as introduced before) diversify the template images so that the trained model is more generalizable. Under the combined effect of the proposed illumination augmentation in the feature space and the variation of template images, the full model achieves the best results among the existing methods.
3.5 Visualization of features
Figure 4 shows the separated semantic and illumination features of images from the training and test classes in GTSRB, visualized in the third and fourth rows. Note that the training and test datasets share no common classes. As shown in the figure, the semantic features delicately retain information consistent with the template images for both training and test classes. This is due to three aspects. First, the extractor maintains the size and spatial information of the features. Second, although objects in the input images vary in size and position, the features are corrected to the canonical layout of the template images via the spatial transformer in the matching module. Third, the reconstruction module prompts the semantic feature to retain the details of the objects.
In contrast, the semantic information is effectively reduced in illumination features. These features reflect the illumination conditions in the original images to a certain extent. Intuitively, the pink parts in the features represent the bright illumination while the green parts represent the dark illumination. Such well-separated representations lay the foundation for the good performance of our model.
3.6 Template image reconstruction
While the reconstructor serves to obtain informative semantic features during training, it can also retrieve the template images in the inference phase. As shown in the last row of Figure 4, the reconstructor robustly generates the template images of both the training and test samples, regardless of illumination variance, object deformation, blur, and low image resolution. Not only the outlines of the symbol contents but also the fine details are well restored in the generated template images, improving on the reconstruction results of VPE. These results further demonstrate that the proposed model has learned good representations of the semantic information for both classification and reconstruction.
4 Discussion
So far, our studies have validated the feasibility and effectiveness of illumination-based feature augmentation. The idea of learning good semantic and illumination features before training a classifier is consistent with the idea of decoupling representation and classifier (Zhang & Yao, 2020). Compared to the existing approaches (Kim et al., 2017, 2019), our method not only achieves the best results on traffic sign and logo classification, but also learns intuitively interpretable semantic and illumination representations and produces better reconstructions.
Our method can be widely applied to a range of training scenarios. When training samples under certain illumination conditions are scarce in a dataset, we can augment them with that type of illumination feature separated from other images (or simply use the illumination features in our repository). Besides, we can use the method to expand a few support samples, or even a single one (e.g., a template image), into a large training set, alleviating the lack of annotated real data. Overall, imbalance in both the size and the illumination conditions of a dataset can be alleviated, since we can transplant illumination information onto training samples of limited number and illumination diversity.
One may ask why we do not simply classify a test sample by its semantic feature after separation. In fact, we have to train the classifier with the augmented samples because there are generally few support samples in few-shot or one-shot scenarios. If we trained the classifier with only a few support samples, it would generalize poorly due to the memorization behavior of deep networks (Arpit et al., 2017). When we instead extend the volume and diversity of the feature set by illumination augmentation, the trained model becomes more generalizable.
Our work can be improved from the following aspects.
First, it should be noted that the illumination features learned by our model seem to reflect relative illumination intensity rather than fine details, limited by the lack of illumination supervision. The constraint used in our method improves the quality of the illumination features to some extent and thus enhances the model performance. However, alternative disentanglement methods with more stringent constraints, or pretraining on illumination-supervised data, could be applied to obtain a more refined illumination representation.
Second, the spatial transformer network (STN) in the matching module can be substituted with other networks. In traffic sign classification, the STN can well correct the semantic features to be consistent with those of the templates. However, it sometimes struggles with the non-rigid deformations in the logo datasets. Furthermore, general objects might differ from their templates in many aspects, such as color variation and changes in viewing angle. Two remedies can be considered. First, we can choose several different templates for different types of variation. Second, we can develop general networks to deal with such transformations; for instance, we can translate the objects toward the templates along directions (e.g., color and viewing angle) in the feature space via semantic transformations (Wang et al., 2020).
5 Related Works
Data augmentation is an effective data-space solution to the problem of limited data (Shorten & Khoshgoftaar, 2019). Augmentations based on data warping transform existing images while preserving the original labels (LeCun et al., 1998; Zheng et al., 2019). Oversampling augmentations enhance datasets by generating synthetic training samples (Inoue, 2018; Bowles et al., 2018).
In this work, we propose a feature-space augmentation method. This kind of augmentation implements the transformation in a learned feature space rather than in the input space (DeVries & Taylor, 2017). Recently, augmentation methods in semantic feature space have been proposed to regularize deep networks (Wang et al., 2020; Bai et al., 2020). Unlike these methods, we augment the samples with an interpretable illumination representation in a simpler way.
Few-shot learning. Early efforts in few-shot learning were based on generative models that sought to build a Bayesian probabilistic framework (Fei-Fei et al., 2006). Recently, more attention has been paid to meta-learning, which can generally be summarized into five sub-categories: learn-to-measure (e.g., MatchNets (Vinyals et al., 2016), ProtoNets (Snell et al., 2017)), learn-to-finetune (e.g., MAML (Finn et al., 2017)), learn-to-remember (e.g., SNAIL (Mishra et al., 2018)), learn-to-adjust (e.g., MetaNets (Munkhdalai & Yu, 2017)) and learn-to-parameterize (e.g., DynamicNets (Gidaris & Komodakis, 2018)). In this work, we use tasks similar to one-shot learning to evaluate our method.
6 Conclusion
In this paper, we develop a novel neural network architecture named Separating-Illumination Network (Sill-Net). Sill-Net separates illumination features from training images, and these features can then be used to augment the support samples. Our method outperforms the state-of-the-art (SOTA) methods by a large margin on several benchmarks. Beyond these improvements in visual applications, the results demonstrate the feasibility of illumination-based augmentation in the feature space for object recognition, which is a promising research direction for data augmentation.
- Arpit et al. (2017) Arpit, D., Jastrzębski, S., Ballas, N., Krueger, D., Bengio, E., Kanwal, M. S., Maharaj, T., Fischer, A., Courville, A., Bengio, Y., et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pp. 233–242. PMLR, 2017.
- Bai et al. (2020) Bai, H., Sun, R., Hong, L., Zhou, F., Ye, N., Ye, H.-J., Chan, S.-H. G., and Li, Z. Decaug: Out-of-distribution generalization via decomposed feature representation and semantic augmentation. arXiv preprint arXiv:2012.09382, 2020.
- Bowles et al. (2018) Bowles, C., Chen, L., Guerrero, R., Bentley, P., Gunn, R., Hammers, A., Dickie, D. A., Hernández, M. V., Wardlaw, J., and Rueckert, D. Gan augmentation: Augmenting training data using generative adversarial networks. arXiv preprint arXiv:1810.10863, 2018.
- DeVries & Taylor (2017) DeVries, T. and Taylor, G. W. Dataset augmentation in feature space. arXiv preprint arXiv:1702.05538, 2017.
- Fei-Fei et al. (2006) Fei-Fei, L., Fergus, R., and Perona, P. One-shot learning of object categories. TPAMI, 28(4):594–611, 2006.
- Finn et al. (2017) Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
- Gidaris & Komodakis (2018) Gidaris, S. and Komodakis, N. Dynamic few-shot visual learning without forgetting. In CVPR, 2018.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
- Inoue (2018) Inoue, H. Data augmentation by pairing samples for images classification. arXiv preprint arXiv:1801.02929, 2018.
- Jaderberg et al. (2015) Jaderberg, M., Simonyan, K., Zisserman, A., et al. Spatial transformer networks. In Advances in neural information processing systems, pp. 2017–2025, 2015.
- Joly & Buisson (2009) Joly, A. and Buisson, O. Logo retrieval with a contrario visual query expansion. In Proceedings of the 17th ACM international conference on Multimedia, pp. 581–584, 2009.
- Kim et al. (2017) Kim, J., Lee, S., Oh, T.-H., and Kweon, I. S. Co-domain embedding using deep quadruplet networks for unseen traffic sign recognition. arXiv preprint arXiv:1712.01907, 2017.
- Kim et al. (2019) Kim, J., Oh, T.-H., Lee, S., Pan, F., and Kweon, I. S. Variational prototyping-encoder: One-shot learning with prototypical images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9462–9470, 2019.
- Krizhevsky et al. (2017) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
- LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- Letessier et al. (2012) Letessier, P., Buisson, O., and Joly, A. Scalable mining of small visual objects. In Proceedings of the 20th ACM international conference on Multimedia, pp. 599–608, 2012.
- Mishra et al. (2018) Mishra, N., Rohaninejad, M., Chen, X., and Abbeel, P. A simple neural attentive meta-learner. In ICLR, 2018.
- Munkhdalai & Yu (2017) Munkhdalai, T. and Yu, H. Meta networks. In ICML, 2017.
- Romberg et al. (2011) Romberg, S., Pueyo, L. G., Lienhart, R., and Van Zwol, R. Scalable logo recognition in real-world images. In Proceedings of the 1st ACM International Conference on Multimedia Retrieval, pp. 1–8, 2011.
- Russakovsky et al. (2015) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
- Shorten & Khoshgoftaar (2019) Shorten, C. and Khoshgoftaar, T. M. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):60, 2019.
- Simonyan & Zisserman (2014) Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Snell et al. (2017) Snell, J., Swersky, K., and Zemel, R. Prototypical networks for few-shot learning. In NIPS, 2017.
- Stallkamp et al. (2012) Stallkamp, J., Schlipsing, M., Salmen, J., and Igel, C. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural networks, 32:323–332, 2012.
- Su et al. (2017) Su, H., Zhu, X., and Gong, S. Deep learning logo detection with data expansion by synthesising context. In 2017 IEEE winter conference on applications of computer vision (WACV), pp. 530–539. IEEE, 2017.
- Suter et al. (2019) Suter, R., Miladinovic, D., Schölkopf, B., and Bauer, S. Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness. In International Conference on Machine Learning, pp. 6056–6065. PMLR, 2019.
- Tabelini et al. (2020) Tabelini, L., Berriel, R., Paixão, T. M., De Souza, A. F., Badue, C., Sebe, N., and Oliveira-Santos, T. Deep traffic sign detection and recognition without target domain real images. arXiv preprint arXiv:2008.00962, 2020.
- Vinyals et al. (2016) Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al. Matching networks for one shot learning. In NIPS, 2016.
- Wang et al. (2020) Wang, Y., Huang, G., Song, S., Pan, X., Xia, Y., and Wu, C. Regularizing deep networks with semantic data augmentation. arXiv preprint arXiv:2007.10538, 2020.
- Xiao et al. (2018) Xiao, T., Hong, J., and Ma, J. Elegant: Exchanging latent encodings with gan for transferring multiple face attributes. In Proceedings of the European conference on computer vision (ECCV), pp. 168–184, 2018.
- Zhang & Yao (2020) Zhang, H. and Yao, Q. Decoupling representation and classifier for noisy label learning. arXiv preprint arXiv:2011.08145, 2020.
- Zhang et al. (2017) Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
- Zheng et al. (2019) Zheng, X., Chalasani, T., Ghosal, K., Lutz, S., and Smolic, A. Stada: Style transfer as data augmentation. arXiv preprint arXiv:1909.01056, 2019.
- Zhu et al. (2016) Zhu, Z., Liang, D., Zhang, S., Huang, X., Li, B., and Hu, S. Traffic-sign detection and classification in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2110–2118, 2016.