VISER: Visual Self-Regularization

02/07/2018 ∙ by Hamid Izadinia, et al. ∙ University of Washington

In this work, we propose using a large set of unlabeled images as a source of regularization data for learning robust visual representations. Given a visual model trained on a labeled dataset in a supervised fashion, we augment our training samples by incorporating a large number of unlabeled images and train a semi-supervised model. We demonstrate that our proposed learning approach leverages an abundance of unlabeled images and boosts visual recognition performance, alleviating the need to rely on large labeled datasets for learning robust representations. To increase the number of image instances used to learn robust visual models, each labeled image propagates its label to its nearest unlabeled image instances. These retrieved unlabeled images serve as local perturbations of each labeled image to perform Visual Self-Regularization (VISER). To retrieve such visual self-regularizers, we compute the cosine similarity in a semantic space defined by the penultimate layer of a fully convolutional neural network. We use the publicly available Yahoo Flickr Creative Commons 100M dataset as our source of unlabeled images and propose a distributed approximate nearest neighbor algorithm to make retrieval practical at that scale. Using the labeled instances and their regularizer samples, we show significant improvements in object categorization and localization performance on the MS COCO and Visual Genome datasets, where objects appear in context.




1 Introduction

Image recognition has rapidly progressed in the last five years. It was shown in the ground-breaking work of [22]

that deep convolutional neural networks (CNNs) are extremely effective at recognizing objects and images. The development of deeper neural networks with over a hundred layers has kept improving performance on the ImageNet dataset 

[7], and we have arguably achieved human performance on this task [35]. These developments have become mainstream and may lead to the perception of image recognition as a solved problem. However, image recognition remains an area of active research. ImageNet is indeed biased towards single objects appearing in the middle of the image, which is in contrast with the photos we take with our mobile phones that typically contain a range of objects that appear in context. Also, the list of object categories in ImageNet is a subset of the lexical database WordNet [29]. This makes ImageNet biased towards certain categories such as breeds of dogs, and does not match the scope of more general image recognition tasks such as object detection and localization in context.

Figure 1: The t-SNE [27] map of the whole set of images (including MS COCO and YFCC images) labeled with the ‘Bus’ category after applying our proposed ViSeR approach. Can you guess whether the green or blue background corresponds to the human-annotated images of the MS COCO dataset?

Answer key: blue: MS COCO, green: YFCC

Datasets such as MS COCO [25] or Visual Genome [21] have been constructed such that photos are typically composed of multiple objects appearing at a variety of positions and scales. They provide a more realistic benchmark for image recognition systems that are intended for consumer photography products such as Flickr or Google Photos. MS COCO currently contains ∼300K images and 80 object categories, whereas Visual Genome contains ∼100K images and thousands of object categories. CNNs are also showing the best performance on these datasets [34, 21]. As training deep neural networks requires a large amount of data and the size of MS COCO and Visual Genome is an order of magnitude smaller than ImageNet, the CNN weights are initialized using the weights of a model that was originally trained on ImageNet. In this paper we focus on improving image recognition performance on MS COCO and Visual Genome.

The labels in MS COCO and Visual Genome are obtained via crowdsourcing platforms such as Amazon Mechanical Turk. Hence it is time-consuming and expensive to obtain additional labels. However, we have access to huge quantities of unlabeled or weakly labeled images. For example, the Yahoo Flickr Creative Commons 100M dataset (YFCC) [40] is comprised of a hundred million Flickr photos with user-provided annotations such as photo tags, titles, or descriptions.

In this paper, we present a simple yet effective semi-supervised learning algorithm that leverages labeled and unlabeled data to improve classification accuracy on the MS COCO and Visual Genome datasets. We first train a fully convolutional network using the multi-labeled data (e.g. MS COCO or Visual Genome). Then, for each training sample, we retrieve the nearest samples in YFCC using the cosine similarity in the semantic space of the penultimate layer of the trained fully convolutional network. We call these Regularizer samples; they can be considered real perturbed samples, in contrast to the Gaussian noise perturbations considered in virtual adversarial training [31]. Having access to a large set of unlabeled data is critical for finding representative regularizer samples for each training instance. To make this approach practical at scale, we propose an approximate distributed algorithm to find the images with semantically similar attention activations. We then fine-tune the network using the labeled instances and Regularizer samples. Our experimental results show that we significantly improve performance over previous methods where models are trained using only the labeled data. We also demonstrate how our approach is applicable to object-in-context retrieval.

Figure 2: We use a Fully Convolutional Network to simultaneously categorize images and localize the objects of interest in a single forward pass. The last layer of the network produces a tensor of N heatmaps for localizing objects, where each heatmap corresponds to one of the N object categories. The green areas correspond to regions with high probability for the object produced by our network.

2 Related work

The recognition and detection of objects that appear “in context” is an active area of research. The most common benchmarks for this task are the PASCAL VOC  [10] and MS COCO datasets. Deep convolutional neural networks have been shown to provide optimal performance in this setting with state-of-the-art performance results for object detection in [34]. It has recently been shown in [33, 38, 9, 3, 42]

that it is possible to accurately classify and localize objects using training data that does not contain any object bounding box information. We refer to training data that does not contain the location information of the object as weakly labeled data.


The size of labeled "objects in context" datasets is typically small. For example, MS COCO has around 300,000 images and Visual Genome has over 100,000 images. However, we have access to large amounts of unlabeled web images. The Yahoo Flickr Creative Commons 100M dataset has one hundred million images with user annotations such as tags, titles, and descriptions. There have been some recent efforts to leverage these user annotations to build object classifiers. For instance, [18] proposes a noise model that better captures the uncertainty in the user annotations and improves classification performance. It is shown in [19] that it is possible to learn state-of-the-art image features when training a convolutional neural network from a random initialization using user annotations as target labels. In [12], the authors also train deep neural networks from scratch and use the output layers as classifiers directly. However, classifier performance is lower when training on noisy data. Contrary to these approaches, we propose a form of curriculum learning [4] where we first train a model on a small set of clean data, and then augment the training set by mining instances from a large set of unlabeled images.

While it has been shown that small perturbations to the input can produce adversarial examples that fool machine learning models [39, 23, 24], adversarial examples can also be used as a means of data augmentation to improve the regularization capability of deep models. Our method is related to adversarial training techniques [13, 31, 30] in the sense that additional training instances with small perturbations are created and added to the training data. However, in contrast to those methods, we retrieve real adversarial examples from a large set of unlabeled images. Our examples are thus real image instances that have high correlation with the labeled data in the semantic space determined by the penultimate layer of the neural network after the first phase of training. Such instances usually correspond to large perturbations in the input space but follow the natural distribution of the data, which is analogous to adversarial perturbations. We call our retrieved image instances Regularizers and show that the Regularizer instances can be used to re-train the model and further improve performance.

Semi-supervised learning is the class of algorithms where classifiers are trained using labeled and unlabeled data. A number of approaches have been proposed in this setting such as Naive Bayes and EM algorithm 

[32], ensemble methods [5] and propagating labels based on similarity as in [43]. In our case the size of the unlabeled set is three orders of magnitude larger than the size of the labeled set. Existing methods are therefore impractical, and we propose a simple method to propagate labels using a nearest neighbor search. The metric is the cosine distance in the space defined by the penultimate layer of the fully convolutional neural network after it has been trained on the clean dataset. We argue that the size of our unlabeled set is critical in order for the label propagation to work effectively, and we propose approximations using MapReduce to make the search practical [6].

Large-scale nearest neighbor search is commonly used in the computer vision community for a variety of tasks such as scene completion 

[16], image editing with the PatchMatch algorithm [2], or image annotation with the TagProp algorithm [15]. Techniques such as TagProp [15] have been proposed to transfer tags from labeled to unlabeled images. In this work we take advantage of the powerful image representation from a deep neural network to transfer labels as well as regularize training. Similarly labels can be propagated using semantic segmentation [14]. This method is applied on ImageNet which has a bias towards a single object appearing in the center of the image. We focus here on images where objects appear in context.

Nearest neighbor search has also been shown to be successful in other computer vision applications involving other modalities. For example, in [8] the performance of several nearest neighbor methods is examined on the image captioning task. Through extensive experiments, [8] shows that nearest neighbor approaches can perform as well as state-of-the-art methods for image captioning.

3 Proposed Method

3.1 Fully Convolutional Network Architecture

Most recent developments in image recognition have been driven by optimizing performance on the ImageNet dataset. However, images in this dataset have a bias for single objects appearing in the center of the image. In order to increase performance on photos where multiple objects may appear at different scales and positions, we adopt a fully convolutional neural network architecture inspired by [26]. Each fully connected layer is replaced with a convolutional layer. Hence, the output of the network is a tensor whose width and height depend on the input image size and whose depth N is the number of object classes. For each object class the corresponding heatmap provides information about the object’s location, as illustrated in Figure 2. In our experiments we use the base architecture of VGG16 [36] shown in Figure 2.
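As an illustrative sketch (not the paper's code), the conversion of a fully connected classification layer into a 1×1 convolution can be written with NumPy as follows; the weight shapes and dimensions are assumptions for illustration:

```python
import numpy as np

def fc_as_1x1_conv(feature_map, W, b):
    """Apply a former fully connected layer as a 1x1 convolution.

    feature_map: [H, W, C] spatial features from the convolutional trunk.
    W: [C, N] weights of the original fully connected layer.
    b: [N] bias.
    Returns an [H, W, N] tensor of per-class logit heatmaps, so the
    spatial size of the output follows the input image size.
    """
    H, Wd, C = feature_map.shape
    logits = feature_map.reshape(H * Wd, C) @ W + b
    return logits.reshape(H, Wd, -1)

# A larger input yields a larger heatmap; the class axis stays fixed.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(2048, 80)), np.zeros(80)
small = fc_as_1x1_conv(rng.normal(size=(7, 7, 2048)), W, b)
large = fc_as_1x1_conv(rng.normal(size=(15, 15, 2048)), W, b)
```

Because the same weights slide over every spatial location, the network accepts images of arbitrary size and emits one logit map per class.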

3.2 Multiple instance learning for multilabel classification

We are given a set of annotated images $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i$ is an image and $y_i \in \{0, 1\}^N$ is a binary vector determining which of the $N$ object category labels are present in $x_i$. Let $h^k$ be the object heatmap for the $k$-th label in the final layer of the network. The probability at location $j$ is given by applying a sigmoid unit to the logit $h^k_j$, i.e. $p^k_j = \sigma(h^k_j) = 1 / (1 + e^{-h^k_j})$.

We do not have access to the location information of the objects since we are in a weakly labeled setting. Therefore, to compute the probability score for the $k$-th object category to appear at the $j$-th location, we incorporate a multiple instance learning approach with the Noisy-OR operation [28, 41, 11]. The probability for label $k$ is given by Equation 1:

$$P(y^k = 1 \mid x) = 1 - \prod_j \left(1 - p^k_j\right) \qquad (1)$$

Also, for learning the parameters $\theta$ of the FCN, we use stochastic gradient descent to minimize the cross-entropy loss formalized in Equation 2:

$$L(\theta) = -\sum_{i=1}^{n} \sum_{k=1}^{N} \left[ y^k_i \log P(y^k = 1 \mid x_i) + (1 - y^k_i) \log\left(1 - P(y^k = 1 \mid x_i)\right) \right] \qquad (2)$$
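A minimal NumPy sketch of the Noisy-OR pooling and cross-entropy loss above (illustrative, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_or(heatmap_logits):
    """Noisy-OR pooling (Eq. 1): heatmap_logits is an [H, W] map of
    logits h^k_j for one class; returns P(y^k = 1 | x)."""
    p = sigmoid(heatmap_logits)
    return 1.0 - np.prod(1.0 - p)

def cross_entropy(probs, labels, eps=1e-12):
    """Multi-label cross-entropy (Eq. 2) for one image's N class probabilities."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.sum(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

# A single strong positive location is enough to fire the class,
# which is exactly the multiple-instance assumption.
hot = np.full((4, 4), -10.0); hot[2, 1] = 10.0
cold = np.full((4, 4), -10.0)
```

Note that Noisy-OR needs no location supervision: a class probability near one only requires some location with a confident activation.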


3.3 Visual Self-Regularization

It has been observed that deep neural networks are vulnerable to adversarial examples [39]. Let $x$ be an image and $r$ a small perturbation such that $\|r\| \le \epsilon$. If the perturbation is aligned with the gradient of the loss function, $\nabla_x L(x, y)$, which is the most discriminative direction in the image space, then the output of the network may change dramatically, even though the perturbed image $x + r$ is virtually indistinguishable from the original. Goodfellow et al. suggest that this is due to the linear nature of deep neural networks. They also show that augmenting the training set with adversarial examples results in regularization similar to dropout [13].

In Virtual Adversarial Training [31] the perturbation is produced by maximizing the smoothness of the local model distribution around each data point. This method does not require labels for the data perturbations and can also be used in semi-supervised learning. The virtual adversarial example is the point in an $\epsilon$-ball around the datapoint that maximally perturbs the label distribution around that point, as measured by the Kullback-Leibler divergence.

Figure 3: Top regularizer examples from unlabeled YFCC dataset (row 2-6) that are retrieved for multi-label image queries in several of the MS COCO categories (first row).

We propose to draw perturbations from a large dataset $\mathcal{U}$ of unlabeled images whose cardinality is much higher than that of $\mathcal{D}$. For each example $x_i$, we use the example $u_i \in \mathcal{U}$ that is nearby in the space defined by the penultimate layer of our fully convolutional network. This layer contains spatial and semantic information about the objects present in the image, and therefore $x_i$ and $u_i$ have similar semantics and composition while they may be far apart in pixel space. We use the cosine similarity metric to find samples that are close to each other in the feature space; for efficiency we compute the dot product of the L2-normalized feature vectors. Let $\theta^*$ denote the optimal parameters found after minimizing the cross-entropy loss on the training data in $\mathcal{D}$, and let $f(x)$ be the L2-normalized feature vector obtained from the penultimate layer of our network (Conv(1,1,2048)). The similarity between two images $x$ and $x'$ is then computed by their dot product $f(x)^\top f(x')$. For each training sample in $\mathcal{D}$, we find the most similar item in $\mathcal{U}$

function Map(i, x)      ▷ i: sample index in U, x: image data
     Compute network output f(x)
     Compute similarities with samples in D: s_j ← f(x_j)ᵀ f(x) for each x_j ∈ D
     Sort s by descending similarity values
     for j ← 1 to m do
          Emit(key: index in D of the j-th most similar sample, value: (i, s_j))
     end for
end function
function Reduce(j, V)      ▷ j: sample index in D, V: iterator over (sample index in U, similarity score) tuples
     Sort V by descending similarity values
     for l ← 1 to k do
          Emit(j, l-th tuple of V)
     end for
end function
Algorithm 1 Distributed Regularizer Sample Search

and transfer its labels to generate a new Real Adversarial (Regularizer) training sample $(u_i, y_i)$. Similar to adversarial and virtual adversarial training, our method improves classification performance. We interpret our sample perturbation as a form of adversarial training where additional examples are sampled from a similar semantic distribution, as opposed to noise. We also tried perturbing each labeled sample in the gradient direction (similar to adversarial training) before finding its nearest neighbor in the unlabeled set, and observed similar performance. Therefore, in this paper our focus is on using the labeled samples themselves to find Regularizer instances that improve performance.
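The retrieval step can be sketched with NumPy as follows; the feature dimension and the number of neighbors kept per query are illustrative assumptions:

```python
import numpy as np

def l2_normalize(F):
    """Row-wise L2 normalization so cosine similarity becomes a dot product."""
    return F / np.linalg.norm(F, axis=1, keepdims=True)

def retrieve_regularizers(feats_labeled, feats_unlabeled, m=2):
    """For each labeled feature vector, return the indices of the m most
    similar unlabeled vectors under cosine similarity."""
    S = l2_normalize(feats_labeled) @ l2_normalize(feats_unlabeled).T
    return np.argsort(-S, axis=1)[:, :m]

# Tiny example: the unlabeled vector aligned with the query ranks first.
labeled = np.array([[1.0, 0.0]])
unlabeled = np.array([[0.0, 1.0], [2.0, 0.1], [-1.0, 0.0]])
top = retrieve_regularizers(labeled, unlabeled, m=2)
```

The retrieved indices then receive the query's label vector, producing the Regularizer samples used for fine-tuning.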

3.4 Large scale approximate regularizer sample search

In our experiments we use the YFCC dataset as our set of unlabeled images. Since it contains 100 million images, an exhaustive nearest neighbor search is impractical. Hence we use the MapReduce framework [6] to find approximate nearest neighbors in a distributed fashion. Our approach is outlined in Algorithm 1. We first pre-compute the feature representations for $\mathcal{D}$. The size of $\mathcal{D}$ for datasets such as MS COCO or Visual Genome is small enough that each mapper can load a copy into memory. A mapper then iterates over samples in $\mathcal{U}$ and computes the feature representation and its inner product with the pre-computed features of $\mathcal{D}$. It emits tuples for the top $m$ matches, keyed by the index in $\mathcal{D}$, which also contain the index in $\mathcal{U}$ and the similarity score. After the shuffling phase, the reducers select, for each sample in $\mathcal{D}$, the $k$ closest samples in $\mathcal{U}$. We use fixed values for $m$ and $k$. We are able to run the search in a few hours, with the majority of the time spent in the mapper phase computing the image feature representations. Note that our method does not guarantee retrieving the nearest neighbor for each sample in $\mathcal{D}$: if, for a sample $x \in \mathcal{D}$, its nearest neighbor $u \in \mathcal{U}$ has $m$ other samples in $\mathcal{D}$ more similar to $u$ than $x$ is, then the algorithm will output either no nearest neighbor or another sample in $\mathcal{U}$ for $x$. However, we found our approximate method to work well in practice.
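Algorithm 1 can be simulated locally in Python with an explicit shuffle phase; the driver loop and the small values of m and k below are assumptions standing in for an actual MapReduce runtime:

```python
from collections import defaultdict
import numpy as np

def mapper(u_index, u_feat, labeled_feats, m):
    """Emit (labeled index, (unlabeled index, similarity)) pairs for the
    top-m labeled samples most similar to one unlabeled feature vector.
    Features are assumed L2-normalized, so dot product = cosine."""
    sims = labeled_feats @ u_feat
    for d_index in np.argsort(-sims)[:m]:
        yield int(d_index), (u_index, float(sims[d_index]))

def reducer(d_index, values, k):
    """Keep the k most similar unlabeled candidates for one labeled sample."""
    return d_index, sorted(values, key=lambda t: -t[1])[:k]

def approximate_nn(labeled_feats, unlabeled_feats, m=3, k=1):
    shuffle = defaultdict(list)                    # simulated shuffle phase
    for u_index, u_feat in enumerate(unlabeled_feats):
        for key, value in mapper(u_index, u_feat, labeled_feats, m):
            shuffle[key].append(value)
    return dict(reducer(d, vals, k) for d, vals in shuffle.items())

# Unlabeled pool = slightly perturbed copies of the labeled features,
# so each labeled sample should recover its own near-duplicate.
rng = np.random.default_rng(0)
D = np.eye(8)[:4]
U = D + 0.01 * rng.normal(size=D.shape)
U /= np.linalg.norm(U, axis=1, keepdims=True)
result = approximate_nn(D, U, m=3, k=1)
```

The approximation in Algorithm 1 comes from the mapper emitting only m candidates per unlabeled image, which bounds the shuffle volume at the cost of occasionally missing a true nearest neighbor.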

Figure 4: Object localization comparison between “FCN,N-OR”(mid row) and “FCN,N-OR,ViSeR”(last row).

4 Experiments

4.1 Semi-Supervised Multilabel Object Categorization and Localization

We use the MS COCO [25] and Visual Genome [21] datasets as our source of clean training data as well as for evaluating our algorithms. MS COCO has 80 object categories and is a common benchmark for evaluating object detectors and classifiers in images where objects appear in context. The more recent Visual Genome dataset has annotations for a larger number of categories than MS COCO. Applying our proposed method on the Visual Genome dataset is important to understand whether the algorithm scales to a larger number of categories, as it is ultimately important to recognize thousands of object classes in real world applications. All images for both datasets come from Flickr. In all experiments we only use the image labels for training our models and discard image captions and bounding box annotations.

For the MS COCO dataset we use the standard split used in [25] for training and evaluating the models. The training set contains 82,081 images and validation set has 40,504 images. For the Visual Genome dataset we only use object category annotations for images. The images are labeled as a positive instance for each object if the area ratio of the bounding box with regards to the image area is more than 0.025. We only consider the 1,432 object categories for which there are at least 80 image instances in the training set. The Visual Genome test set is the intersection of Visual Genome with the MS COCO validation set which is comprised of 17,471 images. We use the remaining 90,606 images for training our models. As for the source of unlabeled images, we use the YFCC dataset [40] and discard the images that are present in Visual Genome or MS COCO. The data is 14TB and is stored in the Hadoop Distributed File System.
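The Visual Genome labeling criterion above can be sketched as follows; the box annotation format is an assumption for illustration:

```python
def positive_labels(boxes, image_w, image_h, min_ratio=0.025):
    """boxes: iterable of (category, box_w, box_h) annotations for one image.
    A category is marked positive if any of its boxes covers more than
    min_ratio of the image area, as in our Visual Genome setup."""
    image_area = image_w * image_h
    return {cat for cat, bw, bh in boxes if bw * bh / image_area > min_ratio}

# A large dog box passes the 0.025 area-ratio threshold; a tiny frisbee does not.
labels = positive_labels(
    [("dog", 200, 150), ("frisbee", 20, 20)], image_w=640, image_h=480)
```

The threshold filters out labels whose objects are too small for the weakly supervised signal to be reliable.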

We use the TensorFlow software library

[1] to implement our neural networks and conduct our experiments. To conduct the distributed nearest-neighbor search, we use a CPU cluster. We use the VGG16 architecture pre-trained on the ImageNet classification task as our base network. We resize images to 500×500. Our initial learning rate is 0.01 and we apply a decay factor of 0.1 twice during training, after 20K and 40K mini-batches. We run stochastic gradient descent for 60K iterations with mini-batches of size 15, which corresponds to 11 epochs.
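The step schedule described above can be written as a small helper (a sketch of the schedule only, not the training code):

```python
def learning_rate(step, base=0.01, decay=0.1, boundaries=(20_000, 40_000)):
    """Piecewise-constant schedule: start at 0.01 and multiply by 0.1
    after 20K and again after 40K mini-batches."""
    lr = base
    for boundary in boundaries:
        if step >= boundary:
            lr *= decay
    return lr
```

So the learning rate is 0.01 for the first 20K steps, 0.001 until 40K, and 0.0001 for the remaining 20K iterations.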

We conduct our experiments on the object classification and point-based object localization tasks. As for the object evaluation metric, we use the mean Average Precision (AP) metric where we first compute the precision for each class and then take the average over classes. For evaluating our object localization, we use the point localization metric introduced in  

[33], where the location for a given class is given by the location with maximum response in the corresponding object heatmap. The location is considered correct if it is located inside the bounding box associated with that class. Similar to [33] we use a tolerance of 18 pixels.
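The point-based localization metric can be sketched as follows; the (x0, y0, x1, y1) box format is an assumption for illustration:

```python
import numpy as np

def point_localization_correct(heatmap, gt_boxes, tol=18):
    """Predicted location = argmax of the class heatmap; it counts as
    correct if it falls inside any ground-truth box for that class,
    with a tolerance of tol pixels, following [33]."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return any(x0 - tol <= x <= x1 + tol and y0 - tol <= y <= y1 + tol
               for x0, y0, x1, y1 in gt_boxes)

# Peak at (row 40, col 60) lands inside the tolerance-expanded box.
heat = np.zeros((100, 100)); heat[40, 60] = 1.0
```

Because only the peak location is scored, this metric evaluates localization without requiring predicted bounding boxes.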

Tables 1 and  2 summarize our results with the mean AP for the classification and localization tasks on the MS COCO and Visual Genome datasets. We compare our performance with three state-of-the-art methods for object localization and classification [33, 38, 3]. In [33]

, to handle the uncertainty in object localization, the last fully connected layers of the network are considered as convolution layers and a max-pooling layer is used to hypothesize the possible location of the object in the images. In contrast, we use Noisy-OR as our pooling layer. In 

[38], a multi-scale fully convolutional neural network called ProNet is proposed that aims to zoom into promising object-specific boxes for localization and classification. We compare against the different variants of ProNet with chain and tree cascades. Our method uses a single fully convolutional network, and is simpler and lighter than ProNet. In all tables, ‘FullyConn’ refers to the standard VGG16 architecture while ‘FullyConv’ refers to the fully convolutional version of our network (see Figure 2). The Noisy-OR loss is abbreviated as ‘N-OR’, and we denote our algorithm by ViSeR.

We can see in Table 1 that our proposed algorithm reaches 50.64% accuracy in the object localization task on the MS COCO dataset, which is more than a 4% boost over [38] and a 9% boost over [33]. Also, without any regularization, by only using Noisy-OR (N-OR) paired with a fully convolutional network, we obtain higher localization accuracy than Oquab et al. [33] and the different variants of ProNet [38].

In the object classification task, our proposed ViSeR approach outperforms the state-of-the-art baselines of [38, 33] by a margin of more than 4.5%, reaching an accuracy of 75.48% on the MS COCO dataset. In addition, the other variants of [38] are less accurate than our fully convolutional network architecture with Noisy-OR pooling (‘FullyConv, N-OR’). This result is consistent with the results we obtained in the object localization task. While the method of [3] obtains competitive performance on the MS COCO localization task, our method outperforms it in the classification task by a large margin of more than 21%. A recent method [9] reports strong classification and localization accuracy using the deeper ResNet [17] architecture, and hence is not directly comparable with ProNet [38], [3], and our method, which use VGG [36] as the base network architecture. In addition, our proposed method has a label propagation step which produces a large set of labeled images with object-level localization in “object in context” scenes and can be used in other learning methods. Also, the method of [9] is based on a new pooling mechanism, while our method proposes a better regularization for training ConvNets using a large-scale set of unlabeled images in a semi-supervised setting, and is therefore orthogonal to [9]. We also perform an ablation study and compare against other forms of regularization using our fully convolutional network architecture with Noisy-OR pooling (‘FullyConv, N-OR’). In Tables 1 and 2, we compare three forms of regularization: adversarial training (‘AT’) [13], virtual adversarial training (‘VAT’) [31], and our proposed Visual Self-Regularization (ViSeR) using the YFCC dataset as the source of unlabeled images.

To conclude, our proposed approach outperforms state-of-the-art methods as well as several baselines by a substantial margin in the object classification and localization tasks, according to the results shown in Tables 1 and 2. Hence, the regularization mechanism of our proposed method results in a performance boost compared to the other forms of adversarial example data augmentation. We show that visual self-regularizers (ViSeR) make our learning robust to noise and provide better generalization capabilities.

Method Classification Localization
Oquab et al. [33] 62.8 41.2
ProNet (proposal) [38] 67.8 43.5
ProNet (chain cascade) [38] 69.2 45.4
ProNet (tree cascade) [38] 70.9 46.4
Bency et al. [3] 54.1 49.2
FullyConn 66.68
FullyConv,N-OR 72.52 47.47
FullyConv,N-OR,AT [13] 74.38 49.75
FullyConv,N-OR,VAT [31] 74.30 49.42
FullyConv,N-OR,ViSeR 75.48 50.64
Table 1: Mean AP for classification and localization tasks on the MS COCO dataset (higher is better).

4.2 Object-in-Context Retrieval

To qualitatively evaluate ViSeR, we show several examples of the Regularizer instances retrieved using our approach in Figure 3. For each of the labeled images shown in the first row of Figure 3, we show the top 5 retrieved images. As we can see, the unlabeled images retrieved by our approach have high similarity with the queried labeled image. Furthermore, most of the objects in the labeled images also appear in the retrieved images. This observation qualitatively demonstrates the effectiveness of our label propagation approach. It is worth mentioning that Figure 3 shows that the relative location of the objects in the retrieved images is fairly consistent with that of the query images. This suggests that our simultaneous categorization and localization approach can also be used for propagating bounding box annotations.

Method Classification Localization
FullyConn 9.94
FullyConv,N-OR 12.35 7.55
FullyConv,N-OR,AT [13] 13.96 9.05
FullyConv,N-OR,VAT [31] 13.95 9.06
FullyConv,N-OR,ViSeR 14.82 9.74
Table 2: Mean AP for classification and localization tasks on the Visual Genome dataset (higher is better).

Figure 1 shows the results of our ViSeR approach on the ‘Bus’ category. We visualize the t-SNE [27] map of the whole set of images labeled as ‘Bus’ which includes instances from both the labeled images in the MS COCO and unlabeled instances from the YFCC dataset. To produce the t-SNE visualization we take the output of the penultimate layer of our network as explained in Section 3

. We L2 normalize the feature vectors to compute the pairwise cosine similarity between images using a dot product. We visualize the t-SNE map using a grid 

[20]. A different background color (blue vs. green) is assigned to images depending on whether they are from the labeled or unlabeled set. Notice that it is challenging to determine the color corresponding to each dataset as photos are from a similar domain. The images with a blue background belong to the MS COCO dataset and the images with a green background belong to the YFCC dataset. This visualization reveals that there are many images in the large unlabeled web resources that can potentially be used to populate the fully annotated dataset with more examples. This is a step forward for improving object categorization as well as decreasing human effort for supervision.

Figure 5: Generalization comparison on a synthetic dataset between the proposed ViSeR, dropout, adversarial training, and virtual adversarial training (VAT). Training samples are shown with black borders; the remaining instances are the test set. Each plot shows the contour of the estimated class probability, from 0 (blue) to 1 (red).
Method cross entropy dropout [37] AT [13] VAT [31] ViSeR
Error (%) 9.244±0.651 9.262±0.706 8.960±0.849 8.940±0.393 8.508±0.493
Table 3: Classification error on test synthetic dataset (lower is better).

Figure 6 demonstrates the qualitative performance of “FCN,N-OR,ViSeR” for multi-label localization. We visualize the object localization score maps where the localization regions with high probability are shown in green. We also display the localized objects using red dots. The score maps show that our approach can accurately localize small and big objects even in extreme cases where a big portion of the object is occluded. In the first row of Figure 6 ‘dog’ and ‘laptop’ are localized quite accurately while they are largely occluded and truncated. Similarly, the third row shows the accurate localization of a ‘chair’ although it appears in a small region of the image and is largely occluded. When there are multiple instances of an object category, such as ‘person’ in the second row, ‘potted plant’ in the third row, and ‘car’ in the sixth row, all regions corresponding to these instances get a high score.

The failure cases of our approach are distinguished via red boxes in Figure 6. For instance, the ‘skateboard’ in row 6 is localized around the region close to the person’s leg. In row 8, although the ‘backpack’ region gets a high score map, it fails to contain the highest peak and thus the localization metric considers it as a mistake.

We show several examples of the localization score maps produced by “FCN,N-OR” and “FCN,N-OR,ViSeR” in Figure 4. By comparing the localized regions in green, we see that “FCN,N-OR,ViSeR” can locate both small and big objects more accurately. For example, in localizing small objects such as ‘tie’, ‘bottle’ and ‘remote’, the peak of the localization region produced by “FCN,N-OR” is far from the correct location of the object, while “FCN,N-OR,ViSeR” localizes these objects precisely. Also, “FCN,N-OR” fails to be as accurate as “FCN,N-OR,ViSeR” in localizing big objects such as ‘umbrella’ and ‘refrigerator’.

4.3 Classification on Synthetic Data

In order to evaluate the ability of our algorithm to leverage unlabeled data to regularize learning, we generate a synthetic two-class dataset with a multimodal distribution. The dataset contains 16 training instances (each class has 8 modes with random mean and covariance for each mode and 1 random sample per mode is selected), 1000 unlabeled and 1000 test samples. We linearly embed the data in 100 dimensions. Since the data has different modes, we can mimic the object categorization task where each object category appears in a variety of shapes and poses, each of which can be considered as a mode in the distribution.
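The construction of the synthetic dataset can be sketched as follows; the random-generator details (mode placement, covariance scale) are assumptions, since the paper does not specify them:

```python
import numpy as np

def make_synthetic(modes_per_class=8, dim=100, n_pool=1000, seed=0):
    """Two classes, each a mixture of 8 Gaussian modes with random means
    and covariances; one training sample per mode (16 total), plus a
    pool of samples per mode, all linearly embedded from 2-D into `dim`
    dimensions."""
    rng = np.random.default_rng(seed)
    embed = rng.normal(size=(2, dim))          # linear embedding matrix
    X_train, y_train, X_pool, y_pool = [], [], [], []
    for cls in (0, 1):
        for _ in range(modes_per_class):
            mean = rng.uniform(-5, 5, size=2)
            A = rng.normal(scale=0.5, size=(2, 2))
            cov = A @ A.T + 0.1 * np.eye(2)    # random PSD covariance
            X_train.append(rng.multivariate_normal(mean, cov))
            y_train.append(cls)
            pts = rng.multivariate_normal(mean, cov, size=n_pool // 16)
            X_pool.append(pts)
            y_pool += [cls] * len(pts)
    X_train = np.vstack(X_train) @ embed
    X_pool = np.vstack(X_pool) @ embed
    return X_train, np.array(y_train), X_pool, np.array(y_pool)

X_train, y_train, X_pool, y_pool = make_synthetic()
```

Drawing the unlabeled pool from the same mixture lets ViSeR retrieve regularizers that follow the true data distribution, which is the behavior the figure compares against adversarial perturbations.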

Figure 6: Localization results of our proposed model on the MS COCO validation set, trained on the MS COCO training set with YFCC100M as the source of unlabeled images. The score map and localization of positive categories are overlaid on each image. Some failure cases are highlighted with red boxes for the skateboard, handbag, and backpack object categories.

We use a multi-layer neural network with two fully connected layers of size 100, each followed by a ReLU activation, optimized via the cross-entropy loss. We compare the generalization behavior of ViSeR with the following regularization methods: dropout [37], adversarial training [13], and virtual adversarial training (VAT) [31]. The contour visualization of the estimated model distribution is shown in Figure 5. We can see that both adversarial training and virtual adversarial training are sensitive to the location of the training sample within each mode. These regularization techniques learn a good class boundary when the training instance is at the center of the mode, but they over-smooth the boundary whenever the training instance is off-center. In contrast, our proposed ViSeR, by sampling from unlabeled data, learns a better local class distribution, as its adversarial samples follow the true distribution of the data and are less biased toward the training instances. The dropout technique also learns a good regularization, but it is less smooth at the boundaries of the local modes. Table 3 summarizes the misclassification error on test data over 50 independent runs on the synthetic dataset.

5 Conclusion and Future Work

In this paper we have presented a simple yet effective method that leverages a large unlabeled dataset, in addition to a small labeled dataset, to train more accurate image classifiers. Our semi-supervised learning approach retrieves regularizer examples from a large unlabeled dataset. We achieve significant improvements on the MS COCO and Visual Genome datasets for both the classification and localization tasks. The performance of our approach could be further improved in future work by incorporating user-provided data such as 'tags'. Moreover, access to a large set of unlabeled data is fairly common in other domains, so we believe our approach could be applicable beyond visual recognition.


  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous systems. 2015.
  • [2] C. Barnes, E. Shechtman, A. Finkelstein, and D. Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. ACM TOG, 2009.
  • [3] A. J. Bency, H. Kwon, H. Lee, S. Karthikeyan, and B. Manjunath. Weakly supervised localization using deep feature maps. In ECCV, 2016.
  • [4] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
  • [5] K. P. Bennett, A. Demiriz, and R. Maclin. Exploiting unlabeled data in ensemble methods. In ACM SIGKDD, 2002.
  • [6] J. Dean and S. Ghemawat. Mapreduce: simplified data processing on large clusters. Communications of the ACM, 2008.
  • [7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
  • [8] J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick. Exploring nearest neighbor approaches for image captioning. arXiv preprint arXiv:1505.04467, 2015.
  • [9] T. Durand, T. Mordan, N. Thome, and M. Cord. Wildcat: Weakly supervised learning of deep convnets for image classification, pointwise localization and segmentation. In CVPR, 2017.
  • [10] M. Everingham, A. Zisserman, C. K. Williams, L. Van Gool, M. Allan, C. M. Bishop, O. Chapelle, N. Dalal, T. Deselaers, G. Dorkó, et al. The pascal visual object classes challenge 2007 (voc2007) results. 2007.
  • [11] H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig. From captions to visual concepts and back. In CVPR, 2015.
  • [12] P. Garrigues, S. Farfade, H. Izadinia, K. Boakye, and Y. Kalantidis. Tag prediction at flickr: a view from the darkroom. arXiv preprint arXiv:1612.01922, 2016.
  • [13] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • [14] M. Guillaumin, D. Küttel, and V. Ferrari. Imagenet auto-annotation with segmentation propagation. IJCV, 2014.
  • [15] M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid. Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation. In ICCV, 2009.
  • [16] J. Hays and A. A. Efros. Scene completion using millions of photographs. In ACM TOG, 2007.
  • [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [18] H. Izadinia, B. C. Russell, A. Farhadi, M. D. Hoffman, and A. Hertzmann. Deep classifiers from image tags in the wild. In Multimedia COMMONS, 2015.
  • [19] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache. Learning visual features from large weakly supervised data. In ECCV, 2016.
  • [20] A. Karpathy. t-SNE visualization of CNN codes.
  • [21] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, M. Bernstein, and L. Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017.
  • [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [23] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. ICLR Workshop, 2017.
  • [24] A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial machine learning at scale. In ICLR, 2017.
  • [25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
  • [26] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  • [27] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 2008.
  • [28] O. Maron and T. Lozano-Pérez. A framework for multiple-instance learning. NIPS, 1998.
  • [29] G. A. Miller. Wordnet: a lexical database for english. Communications of the ACM, 1995.
  • [30] T. Miyato, A. M. Dai, and I. Goodfellow. Adversarial training methods for semi-supervised text classification. In ICLR, 2017.
  • [31] T. Miyato, S.-i. Maeda, M. Koyama, K. Nakae, and S. Ishii. Distributional smoothing with virtual adversarial training. In ICLR, 2016.
  • [32] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using em. Machine learning, 2000.
  • [33] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Is object localization for free?-weakly-supervised learning with convolutional neural networks. In CVPR, 2015.
  • [34] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  • [35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. IJCV, 2015.
  • [36] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [37] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014.
  • [38] C. Sun, M. Paluri, R. Collobert, R. Nevatia, and L. Bourdev. Pronet: Learning to propose object-specific boxes for cascaded neural networks. In CVPR, 2016.
  • [39] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • [40] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 2016.
  • [41] C. Zhang, J. C. Platt, and P. A. Viola. Multiple instance boosting for object detection. In NIPS, 2005.
  • [42] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In CVPR, 2016.
  • [43] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. 2002.