Learning Representations by Predicting Bags of Visual Words

02/27/2020 · by Spyros Gidaris, et al.

Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data. Inspired by the success of NLP methods in this area, in this work we propose a self-supervised approach based on spatially dense image descriptions that encode discrete visual concepts, here called visual words. To build such discrete representations, we quantize the feature maps of a first, pre-trained self-supervised convnet over a k-means based vocabulary. Then, as a self-supervised task, we train another convnet to predict the histogram of visual words of an image (i.e., its Bag-of-Words representation) given as input a perturbed version of that image. The proposed task forces the convnet to learn perturbation-invariant and context-aware image features that are useful for downstream image understanding tasks. We extensively evaluate our method and demonstrate very strong empirical results: e.g., compared to supervised pre-training, our pre-trained self-supervised representations transfer better on the detection task and similarly on classification over classes "unseen" during pre-training. This also shows that the process of image discretization into visual words can provide the basis for very powerful self-supervised approaches in the image domain, thus allowing further connections to be made to related methods from the NLP domain that have been extremely successful so far.




1 Introduction

Figure 1: Learning representations through prediction of Bags of Visual Words. We first train a feature extractor Φ̂ on a self-supervised task, e.g., rotation prediction. Then we compute a visual vocabulary by clustering feature vectors sampled from Φ̂'s feature maps, and compute the corresponding image-level BoW vectors. These BoW vectors serve as ground truth for the next stage. In the second stage we perturb images with an operator g(·) and feed them as input to a second network Φ. The BoW prediction module processes Φ's feature maps to predict the BoW vectors corresponding to the original, non-perturbed images. Both Φ and the prediction module are trained jointly with a cross-entropy loss. The feature extractor Φ is further used for downstream tasks.

The goal of our work is to learn convolutional neural network [38] (convnet) based representations without human supervision. One promising approach towards this goal is so-called self-supervised representation learning [14, 75, 36, 47, 20, 51], which advocates training the convnet with an annotation-free pretext task defined using only the information available within an image, e.g., predicting the relative location of two image patches [14]. Pre-training on such a pretext task enables the convnet to learn representations that are useful for other vision tasks of actual interest, such as image classification or object detection. Moreover, recent work has shown that self-supervision can be beneficial to many other learning problems [61, 18, 74, 28, 10, 29], such as few-shot [61, 18] and semi-supervised [74, 28] learning, or training generative adversarial networks [10].

A question that still remains open is what type of self-supervision we should use. Among the variety of proposed learning tasks, many follow the general paradigm of first perturbing an image or removing some part/aspect of it, and then training the convnet to reconstruct the original image or the dropped part (e.g., a color channel or an image region). Popular examples are Denoising AutoEncoders [65], Image Colorization [75, 36], Split-Brain architectures [76], and Image In-painting [51]. However, predicting such low-level image information can be a difficult task to solve, and does not necessarily force the convnet to acquire the image understanding "skills" that we ultimately want to achieve. As a result, such reconstruction-based methods have not been very successful so far. In contrast, in Natural Language Processing (NLP), similar self-supervised methods, such as predicting the missing words of a sentence (e.g., BERT [12] and RoBERTa [40]), have proven much more successful at learning strong language representations. The difference between those NLP methods and their computer vision counterparts is that (1) words undoubtedly represent more high-level semantic concepts than raw image pixels, and (2) words are defined in a discrete space while images live in a continuous one, where small pixel perturbations can significantly alter the target of a reconstruction task without changing the depicted content.

Spatially dense image quantization into visual words.

Inspired by the above NLP methods, in this work we propose, for self-supervised learning in the image domain, to use tasks that aim at predicting/reconstructing targets that encode discrete visual concepts, as opposed to, e.g., (low-level) pixel information. To build such discrete targets, we first take an existing self-supervised method (e.g., rotation prediction [20]) and use it to train an initial convnet, which can learn feature representations that capture mid-to-higher-level image features. Then, for each image, we densely quantize its convnet-based feature map using a k-means-based vocabulary (by dense quantization, we refer to the fact that each spatial location of the feature map is quantized separately). This results in a spatially dense image description based on discrete codes (i.e., k-means cluster assignments), called visual words hereafter. Such a discrete image representation opens the door to easily adapting self-supervised methods from the NLP community to the image domain. For instance, one could very well train a BERT-like architecture that, given as input a subset of the patches in an image, predicts the visual words of the missing patches. Although self-supervised methods of this type are definitely something that we plan to explore as future work, in this paper we aim to go one step further and develop, based on the above discrete visual representations, self-supervised tasks that furthermore allow using standard convolutional architectures that are commonly used (and optimized) for the image domain we are interested in. But how should we go about defining such a self-supervised task?

Learning by “reconstructing” bags of visual words.

To this end, we take inspiration from the so-called Bag-of-Words [70] (BoW) model in computer vision and propose as self-supervised task one where we train a convnet to predict the histogram of visual words of an image (also known as its BoW representation) when given as input a perturbed version of that image. Such BoW representations have been very powerful image models and, as such, have been extensively used in the past in several computer vision problems (including, e.g., image retrieval, object recognition, and object detection). Interestingly, there is recent empirical evidence that even modern state-of-the-art convnets for image classification exhibit similar behavior to BoW models [7]. One important benefit of using the above BoW prediction task for self-supervised learning is that it is no longer required to enhance a typical convnet architecture for images (e.g., ResNet-50) with extra network components, such as multiple stacks of attention modules as in [64] or PixelCNN-like autoregressors as in [48], that can make the overall architecture computationally intensive. Furthermore, due to its simplicity, it can be easily incorporated into other types of learning problems (e.g., few-shot learning, semi-supervised learning, or unsupervised domain adaptation), thus helping to further improve performance for those problems, which is an additional advantage.

Concerning the perturbed image (used as input to the BoW prediction task), it is generated by applying a set of commonly used augmentation techniques, such as random cropping, color jittering, or geometric transformations. Therefore, to solve the task of "reconstructing" the BoW histogram of the original image, the convnet must learn to detect visual cues that remain constant (i.e., invariant) under the applied perturbations. Moreover, since the perturbed image can often be only a small part of the original one (due to the cropping transformation), the convnet is also forced to infer the context of the missing input, i.e., the visual words of the missing image regions. This encourages learning perturbation-invariant and context-aware image features, which are thus more likely to encode higher-level semantic visual concepts. Overall, as we show in the experimental results, the proposed self-supervised method learns representations that transfer significantly better to downstream vision tasks than the representations of the initial convnet. As a last point, we note that the above process of defining a convnet-based BoW model and then training another convnet to predict it can be applied iteratively, which can lead to even better representations.


To summarize, the contributions of our work are: (1) We propose the use of discrete visual word representations for self-supervised learning in the image domain. (2) In this context, we propose a novel method for self-supervised representation learning (Fig. 1). Rather than predicting/reconstructing image-pixel-level information, it uses a first, self-supervised pre-trained convnet to densely discretize an image into a set of visual words, and then trains a second convnet to predict a reduced Bag-of-Words representation of the image given as input perturbed versions of it. (3) We extensively evaluate our method and demonstrate that it learns high-quality convnet-based image representations, which are significantly superior to those of the first convnet. Furthermore, our ImageNet-trained self-supervised ResNet-50 representations, when compared to the ImageNet-trained supervised ones, achieve better VOC07+12 detection performance and comparable Places205 classification accuracy, i.e., better generalization on the detection task and similar generalization on the Places205 classes, which are "unseen" during self-supervised training. (4) The simple design of our method makes it easy to use on many other learning problems where self-supervision has been shown to be beneficial.

2 Approach

Our goal is to learn, in an unsupervised way, a feature extractor (convnet model) Φ(·; θ), parameterized by θ, that, given an image x, produces a "good" image representation Φ(x; θ). By "good" we mean a representation that would be useful for other vision tasks of interest, e.g., image classification or object detection. To this end, we assume that we have available a large set X of unlabeled images on which we will train our model. We also assume that we have available an initial self-supervised pre-trained convnet Φ̂(·). We can easily learn such a model by employing one of the available self-supervised tasks. Here, unless otherwise stated, we use RotNet [20] (which is based on the self-supervised task of image rotation prediction), as it is easy to implement and, at the same time, has been shown to achieve strong results in self-supervised representation learning [35].

To achieve our goal, we leverage the initial model Φ̂ to create spatially dense descriptions based on visual words. Then, we aggregate those descriptions into BoW representations and train the model Φ to "reconstruct" the BoW of an image given as input a perturbed version of it. Note that the model Φ̂ remains frozen during the training of the new model Φ. Also, after training Φ, we can set Φ̂ ← Φ and repeat the training process.
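At a high level, the iterated procedure can be sketched as follows (an illustrative sketch, not the authors' code; `train_round` is a hypothetical callable standing in for one full round of vocabulary building plus BoW-prediction training):

```python
def bownet_training(base_model, train_round, num_rounds=2):
    """Iterated BoWNet sketch: each round freezes the previous model,
    builds BoW targets from it, and trains a fresh model from scratch."""
    model = base_model  # initial self-supervised convnet (e.g., RotNet)
    for _ in range(num_rounds):
        # the frozen model only provides BoW targets; the model trained
        # inside train_round is randomly initialized
        model = train_round(frozen_model=model)
    return model
```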

2.1 Building spatially dense discrete descriptions

Given a training image x, the first step of our method is to create a spatially dense description q(x) based on visual words, using the pre-trained convnet Φ̂. Specifically, let Φ̂(x) be a feature map (with c̃ channels and h×w spatial size) produced by Φ̂ for input x, and Φ̂_u(x) the c̃-dimensional feature vector at location u of this feature map, where u ∈ {1, …, h·w}. To generate the description q(x) = [q_1(x), …, q_{hw}(x)], we densely quantize Φ̂(x) using a predefined vocabulary V = [v_1, …, v_K] of c̃-dimensional visual-word embeddings, where K is the vocabulary size. In detail, for each position u, we assign the corresponding feature vector Φ̂_u(x) to its closest (in terms of squared Euclidean distance) visual-word embedding:

q_u(x) = argmin_k ‖Φ̂_u(x) − v_k‖² .    (1)

The vocabulary V is learned by applying the k-means algorithm with K clusters to the feature maps extracted from the dataset X, i.e., by optimizing the following objective:

min_V Σ_{x∈X} Σ_u min_k ‖Φ̂_u(x) − v_k‖² ,    (2)

where the visual-word embedding v_k is the centroid of the k-th cluster.
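As an illustration of the two steps above, here is a minimal NumPy sketch (ours, not the authors' code; a toy k-means rather than an optimized implementation, and the function names are our own) of building the vocabulary and densely quantizing a feature map:

```python
import numpy as np

def build_vocabulary(feats, K, iters=10, seed=0):
    """Toy k-means over feature vectors sampled from many feature maps.

    feats: (M, C) array of feature vectors.
    Returns a (K, C) array of visual-word embeddings (cluster centroids).
    """
    rng = np.random.default_rng(seed)
    centroids = feats[rng.choice(len(feats), size=K, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest centroid (squared Euclidean)
        d = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(K):
            members = feats[assign == k]
            if len(members):  # keep old centroid if a cluster empties
                centroids[k] = members.mean(axis=0)
    return centroids

def quantize_feature_map(fmap, vocab):
    """Densely quantize an (H, W, C) feature map: one word per location."""
    flat = fmap.reshape(-1, fmap.shape[-1])
    d = ((flat[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1).reshape(fmap.shape[:2])  # (H, W) word indices
```

Each spatial location is quantized independently, which is exactly what "dense quantization" refers to in the text.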

2.2 Generating Bag-of-Words representations

Having generated the discrete description q(x) of image x, the next step is to create its BoW representation, denoted by ỹ(x). This is a K-dimensional vector whose k-th element either encodes the number of times the k-th visual word appears in image x,

ỹ_k(x) = Σ_u 1[q_u(x) = k] ,    (3)

or indicates whether the k-th visual word appears in image x,

ỹ_k(x) = max_u 1[q_u(x) = k] ,    (4)

where 1[·] is the indicator operator (in our experiments we use the binary version (4) [59, 33] for ImageNet and the histogram version (3) for CIFAR-100 and MiniImageNet). Furthermore, to convert ỹ(x) into a probability distribution over visual words, we L1-normalize it, i.e., we set y(x) = ỹ(x)/‖ỹ(x)‖₁. The resulting y(x) can thus be perceived as a soft categorical label of x over the K visual words. Note that, although K might be very large, the BoW representation is actually quite sparse, as it has at most h·w non-zero elements.
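A small NumPy sketch (ours; the name `bow_target` is an assumption) of reducing a map of visual-word indices to the normalized BoW target, in both the histogram and the binary variant:

```python
import numpy as np

def bow_target(word_map, K, binary=False):
    """L1-normalized BoW vector from an (H, W) map of visual-word indices."""
    counts = np.bincount(word_map.ravel(), minlength=K).astype(float)
    if binary:
        counts = (counts > 0).astype(float)  # presence/absence variant
    return counts / counts.sum()  # soft categorical label over K words
```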

2.3 Learning to “reconstruct” BoW

Based on the above BoW representation, we propose the following self-supervised task: given an image x, we first apply to it a perturbation operator g(·) to get the perturbed image x̃ = g(x), and then train the model Φ(·; θ) to predict/"reconstruct" the BoW representation y(x) of the original unperturbed image from x̃. This, in turn, means that we want to predict y(x) from the feature vector Φ(x̃; θ), where hereafter we assume that the feature representation produced by model Φ is c-dimensional (e.g., in the case of ResNet-50, Φ(x̃; θ) corresponds to the 2048-dimensional feature vector produced by the global average pooling layer that follows the last block of residual layers). To this end, we define a prediction layer Ω(·) that gets as input Φ(x̃; θ) and outputs a K-dimensional softmax distribution over the visual words of the BoW representation. More precisely, the prediction layer is implemented as a linear-plus-softmax layer:

Ω_k(a) = exp(γ · w̄_kᵀ a) / Σ_{k'} exp(γ · w̄_{k'}ᵀ a) ,    (5)

where Ω_k(a) is the softmax probability for the k-th visual word, and w_1, …, w_K are the c-dimensional weight vectors (one per visual word) of the linear layer. Notice that, instead of directly applying the weight vectors w_k to the feature vector a = Φ(x̃; θ), we use their L2-normalized versions w̄_k = w_k/‖w_k‖₂ and apply a single learnable magnitude γ shared by all weight vectors (γ is a scalar value). The reason for this reparametrization of the linear layer is that the distribution of visual words in the dataset (i.e., how often, or in how many dataset images, a visual word appears) tends to be unbalanced; without it, the network would attempt to make the magnitude of each weight vector proportional to the frequency of its corresponding visual word (thus basically always favoring the most frequently occurring words). In our experiments, the above reparametrization has led to significant improvements in the quality of the learned representations.
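The reparametrized prediction head can be sketched in a few lines of NumPy (ours, not the authors' code; a forward pass only, with `bow_prediction` as an assumed name):

```python
import numpy as np

def bow_prediction(a, W, gamma):
    """Linear-plus-softmax BoW prediction head with normalized weights.

    a:     (c,) feature vector produced by the trained convnet.
    W:     (K, c) weight vectors, one per visual word.
    gamma: single learnable magnitude shared by all weight vectors.
    """
    W_bar = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-norm rows
    logits = gamma * (W_bar @ a)
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

Because every row of the weight matrix is rescaled to unit norm, no visual word can dominate the logits simply by growing its weight magnitude; only the shared scalar gamma controls the sharpness of the softmax.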

Self-supervised training objective.

The training loss that we minimize for learning the convnet model Φ(·; θ) is the expected cross-entropy loss between the predicted softmax distribution Ω(Φ(g(x); θ)) and the BoW distribution y(x):

min_{θ, W} E_{x∼X} [ CE( Ω(Φ(g(x); θ)), y(x) ) ] ,    (6)

where CE(p, q) = −Σ_k q_k log p_k is the cross-entropy loss for the discrete distributions p and q, θ are the learnable parameters of Φ, and W = {γ, w_1, …, w_K} are the learnable parameters of the prediction layer Ω.
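The per-image loss is an ordinary cross-entropy with a soft (non-one-hot) target, which a short NumPy sketch (ours) makes concrete:

```python
import numpy as np

def bow_cross_entropy(pred, target, eps=1e-12):
    """Cross-entropy -sum_k target_k * log(pred_k) between the predicted
    softmax distribution and the soft BoW target distribution."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(-(target * np.log(pred + eps)).sum())
```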

Image perturbations.

The perturbation operator g(·) that we use consists of (a) color jittering (i.e., random changes of the brightness, contrast, saturation, and hue of an image), (b) converting the image to grayscale with some probability, (c) random image cropping, (d) scale or aspect-ratio distortions, and (e) horizontal flips. The role served by such an operator is two-fold: to solve the BoW "reconstruction" task after such aggressive perturbations, the convnet must learn image features that (1) are robust w.r.t. the applied perturbations and, at the same time, (2) allow predicting the visual words of the original image, even for image regions that are not visible to the convnet due to cropping. To further push in this direction, we also incorporate the CutMix [72] augmentation technique into our self-supervised method. According to CutMix, given two images x and x′, we generate a new synthetic image by replacing a patch of the first image with one from the second image. The position and size of the patch are randomly sampled from a uniform distribution. The BoW representation used as a reconstruction target for this synthetic image is the convex combination of the BoW targets of the two images, (1 − λ)·y(x) + λ·y(x′), where λ is the patch-over-image area ratio. Hence, with CutMix we force the convnet to infer both (a) the visual words that belong to the patch that was removed from the first image x, and (b) the visual words that belong to the image area surrounding the patch copied from the second image x′.
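A NumPy sketch (ours, simplified: single-channel images and an axis-aligned, uniformly sampled patch) of mixing two images and their BoW targets in the CutMix style described above:

```python
import numpy as np

def cutmix_bow(img_a, img_b, bow_a, bow_b, rng):
    """Paste a random patch of img_b into img_a and mix the BoW targets.

    The mixed target is (1 - lam) * bow_a + lam * bow_b, where lam is
    the patch-over-image area ratio.
    """
    H, W = img_a.shape[:2]
    ph = int(rng.integers(1, H + 1))   # patch height
    pw = int(rng.integers(1, W + 1))   # patch width
    top = int(rng.integers(0, H - ph + 1))
    left = int(rng.integers(0, W - pw + 1))
    mixed = img_a.copy()
    mixed[top:top + ph, left:left + pw] = img_b[top:top + ph, left:left + pw]
    lam = (ph * pw) / (H * W)
    return mixed, (1.0 - lam) * bow_a + lam * bow_b
```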

Model initialization and iterated training.

We note that the model Φ̂ is used only for building BoW representations and not for initializing the parameters of the model Φ, i.e., Φ is randomly initialized before training. Also, as already mentioned, we can apply our self-supervised method iteratively, each time using the previously trained model for creating the BoW representations. We note, however, that this is not necessary for learning "good" representations; the model learned from the first iteration already achieves very strong results. As a result, only a few more iterations (e.g., one or two) might be applied after that.

3 Related Work


BoW is a popular method for text document representation, which has been adopted and heavily used in computer vision [59, 11]. For visual content, BoW conveniently encapsulates image statistics from hundreds of local features [41] into vector representations. BoW has been studied extensively and leveraged in numerous tasks, while multiple extensions [52, 31] and theoretical interpretations [63] have been proposed. Due to its versatility, BoW has also been applied to pre-trained convnets to compute image representations from intermediate feature maps [71, 22, 46]; however, few works have dealt with the integration of BoW into the training pipeline of a convnet. Among them, NetVLAD [2] mimics the BoW-derived VLAD descriptor by learning a visual vocabulary along with the other layers and soft-quantizing activations over this vocabulary. Our method differs from previous approaches in training with self-supervision and in predicting the BoW vector directly, bypassing quantization and aggregation.


Self-supervised learning is a recent paradigm aiming to learn representations from data by leveraging supervision from various intrinsic data signals, without any explicit manual annotations or human supervision. The representations learned with self-supervision are then further fine-tuned on a downstream task with limited human annotations available. Numerous creative mechanisms for squeezing information out of data in this manner have been proposed in the past few years: predicting the colors of an image [36, 75], the relative position of shuffled image patches [14, 47], the correct order of a set of shuffled video frames [45], the correct association between an image and a sound [3], and many other methods [67, 78, 39, 68].

Learning to reconstruct.

Multiple self-supervised methods are formulated as reconstruction problems [65, 43, 34, 78, 21, 76, 51, 75, 1, 54]. The information to be reconstructed can be provided by a different view [21, 78, 54] or sensor [16]. When no such complementary information is available, the current data can be perturbed, and the task of the model is then to reconstruct the original input. Denoising an input image back to its original state [65], inpainting an image patch that has been removed from a scene [51], and reconstructing images that have been overlayed [1] are some of the many methods of reconstruction from perturbation. While such approaches display impressive results for the task at hand, it remains unclear how much structure they can encapsulate beyond the reconstruction of visual patterns [13]. Similar ideas were initially proposed in NLP, where missing words [43] or sentences [34] must be reconstructed. Another line of research employs perturbations for guiding the model towards assigning the same label to both perturbed and original content [15, 5]. Our method also deals with perturbed inputs; however, instead of reconstructing the input, we train the model to reconstruct the BoW vector of the clean input. This enables learning non-trivial correlations between local visual patterns in the image.


Learning discrete representations.

The works of [49, 56] explore the learning of spatially dense discrete representations with unsupervised generative models, with the goal of image generation. Instead, we focus on exploiting discrete image representations in the context of self-supervised image representation learning.

3.1 Discussion

Relation to clustering-based representation learning methods [9, 8, 4]. Our work presents similarities to the Deep(er) Clustering approach [9, 8]. The latter alternates between k-means clustering of the images based on their convnet features and using the cluster assignments as image labels for training the convnet. In our case, however, we use k-means clustering for creating BoW representations instead of global image labels. The former leads to richer (more complete) image descriptions than the latter, as it encodes multiple local visual concepts extracted in a spatially dense way. For example, a cluster id is not sufficient to describe an image with multiple objects, like the one in Figure 1, while a BoW is better suited for that. This fundamental difference leads to a profoundly different self-supervised task. Specifically, in our case the convnet is forced to: (1) focus on more localized visual patterns, and (2) learn better contextual reasoning (since it must predict the visual words of missing image regions).

Relation to contrastive self-supervised learning methods [15, 5, 25, 44]. Our method bears similarities to recent works exploiting contrastive losses for learning representations that are invariant under strong data augmentation or perturbations [15, 5, 25, 44]. These methods deal with image recognition, and the same arguments mentioned above w.r.t. [9, 8] hold. Our contribution departs from this line of approaches, allowing our method to be applied to a wider set of visual tasks. For instance, in autonomous driving, most urban images are similar and differ only by a few details, e.g., a pedestrian or a car, making image recognition under strong perturbation less feasible. In such cases, leveraging local statistics as done in our method appears to be a more appropriate self-supervised task for learning representations.

4 Experiments and results

We evaluate our method (BoWNet) on CIFAR-100, MiniImageNet [66], ImageNet [58], Places205 [77], and VOC07 [17] classification, as well as on VOC07+12 detection.

4.1 Analysis on CIFAR-100 and MiniImageNet

4.1.1 Implementation details

CIFAR-100. CIFAR-100 consists of 50,000 training images of 32×32 resolution. We train self-supervised WRN-28-10 [73] convnets using those training images. Specifically, we first train a WRN-28-10 based RotNet [20] and build BoW on top of it, using the feature maps of its last (3rd) residual block and a k-means vocabulary of K visual words. Then, we train the BoWNet using those BoW representations. The prediction head of RotNet consists of an extra residual block (instead of just a linear layer); in our experiments this led the feature extractor to learn better representations (we followed this design choice of RotNet for all the experiments in our paper; more implementation details are provided in §C.1).

We train the convnets using mini-batch stochastic gradient descent (SGD) with a step learning-rate schedule (the learning rate is multiplied by a constant factor at fixed epochs) and weight decay.

MiniImageNet. Since MiniImageNet is used for evaluating few-shot methods, it has three different splits of classes: train, validation, and test, with 64, 16, and 20 classes respectively. Each class has 600 images of 84×84 resolution. We train WRN-28-4 convnets on the images that correspond to the training classes, following the same training protocol as for CIFAR-100.

4.1.2 Evaluation protocols.

CIFAR-100. To evaluate the learned representations we use two protocols. (1) The first is to freeze the learned representations (which in the case of WRN-28-10 is a 640-dimensional vector) and train on top of them a 100-way linear classifier for the CIFAR-100 classification task. We use the linear-classifier accuracy as an evaluation metric. The linear classifier is trained with SGD using a step learning-rate schedule and weight decay. (2) For the second protocol we use a few-shot episodic setting [66] similar to what is proposed in [18]. Specifically, we choose a subset of the CIFAR-100 classes and run with them multiple episodes of few-shot classification tasks. Essentially, at each episode we randomly sample several classes from that subset, and then n training examples and a number of test examples per class (both randomly sampled from the test images of CIFAR-100). For n we use 1, 5, 10, and 50 examples (1-shot, 5-shot, 10-shot, and 50-shot settings respectively). To classify the test examples we use a cosine-distance Prototypical-Networks [60] classifier that is applied on top of the frozen representations. We report the mean accuracy over the episodes. The purpose of this metric is to analyze the ability of the representations to be used for learning with few training examples. More details about this protocol are provided in §C.3.

MiniImageNet. We use the same two protocols as in CIFAR-100. (1) The first is to train 64-way linear classifiers on the task of recognizing the training classes of MiniImageNet. Here, we use the same hyper-parameters as for CIFAR-100. (2) The second protocol is to use the frozen representations for episodic few-shot classification [66]. The main difference from CIFAR-100 is that here we evaluate using the test classes of MiniImageNet, which were not part of the training set of the self-supervised models. Therefore, with this evaluation we analyze the ability of the representations to be used for learning with few training examples and for classes "unseen" during training. For comparison under this protocol we provide results of the supervised Cosine Classifier (CC) few-shot model [19, 55].

4.1.3 Results

Method 1-shot 5-shot 10-shot 50-shot Linear
 Supervised [73] - - - - 79.5
 RotNet 58.3 74.8 78.3 81.9 60.3
 Deeper Clustering 65.9 84.6 87.9 90.8 65.4
 AMDIM [5] - - - - 70.2
 BoWNet (1 round) 69.1 86.3 89.2 92.4 71.5
 BoWNet (2 rounds) 68.5 87.1 90.4 93.8 74.1
 BoWNet (3 rounds) 68.4 87.2 90.4 93.9 74.5
 BoWNet w/o cutmix 68.5 85.8 88.8 92.2 69.7
 Sp-BoWNet 67.7 85.8 89.2 92.3 71.3
Table 1: CIFAR-100 linear-classifier and few-shot results with WRN-28-10. For few-shot we use n = 1, 5, 10, or 50 examples per class. AMDIM uses a higher-capacity, custom-made architecture.
Method | Novel classes: 1-shot 5-shot 10-shot 50-shot | Base classes: Linear
 Supervised CC [19] 56.8 74.1 78.1 82.7 73.7
 RotNet 40.8 56.9 61.8 68.1 52.3
 RelLoc [14] 40.2 57.1 62.6 68.8 50.4
 Deeper Clustering 47.8 66.6 72.1 78.4 60.3
 BoWNet (1 round) 48.7 67.9 74.0 79.9 65.0
 BoWNet (2 rounds) 49.1 67.6 73.6 79.9 65.6
 BoWNet (3 rounds) 48.6 68.9 75.3 82.5 66.0
Table 2: MiniImageNet linear-classifier and few-shot results with WRN-28-4.
Method | Novel classes: 1-shot 5-shot 10-shot 50-shot | Base classes: Linear
 RotNet 40.8 56.9 61.8 68.1 52.3
 RelLoc 40.2 57.1 62.6 68.8 50.4
 RotNet → BoWNet 48.7 67.9 74.0 79.9 65.0
 RelLoc → BoWNet 51.8 70.7 75.9 81.3 65.2
 Random → BoWNet 42.4 62.0 68.9 78.1 61.5
Table 3: MiniImageNet linear-classifier and few-shot results with WRN-28-4: impact of the base convnet.

In Tables 1 and 2 we report results of our self-supervised method on the CIFAR-100 and MiniImageNet datasets respectively. Comparing BoWNet with RotNet (which we used for building the BoW), we observe that BoWNet improves all the evaluation metrics by roughly 8 to 13 percentage points, which is a very large performance improvement. Applying BoWNet iteratively (the entries with 2 and 3 training rounds) further improves the results (except the 1-shot accuracy). Also, BoWNet outperforms by a large margin the CIFAR-100 linear-classification accuracy of the recently proposed AMDIM [5] method (see Table 1), which has been shown to achieve very strong results. Finally, the performance of the BoWNet representations on the MiniImageNet novel classes, for the 10-shot and especially the 50-shot setting, is very close to that of the supervised CC model (see Table 2).

Impact of CutMix augmentation. In Table 1 we report CIFAR-100 results without CutMix, which confirms that employing CutMix does indeed provide some further improvement on the quality of the learned representations.

Spatial-Pyramid BoW [37]. By reducing the visual-word descriptions to BoW histograms, we remove spatial information from the visual-word representations. To avoid this, one could divide the image into several spatial grids of different resolutions and then extract a BoW representation from each resulting image patch. In Table 1 we provide results for such a case (entry Sp-BoWNet): specifically, we used 2 levels for the spatial pyramid, one with 1×1 and one with 2×2 resolution, giving 5 BoW in total. Although one would expect otherwise, we observe that adding more spatial information to the BoW via Spatial-Pyramid BoW does not improve the quality of the learned representations.
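For completeness, a NumPy sketch (ours, not the authors' code) of extracting a spatial-pyramid BoW from a map of visual-word indices; with levels (1, 2) it concatenates one global BoW with the BoW of four 2×2-grid cells:

```python
import numpy as np

def spatial_pyramid_bow(word_map, K, levels=(1, 2)):
    """Concatenate L1-normalized BoW histograms over grids of
    increasing resolution (1x1, 2x2, ...)."""
    H, W = word_map.shape
    out = []
    for g in levels:
        hs, ws = H // g, W // g
        for i in range(g):
            for j in range(g):
                cell = word_map[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                counts = np.bincount(cell.ravel(), minlength=K).astype(float)
                out.append(counts / max(counts.sum(), 1.0))
    return np.concatenate(out)
```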

Comparison with Deeper Clustering (DC) [9]. To compare our method against DC, we implemented it using the same convnet backbone and the same pre-trained RotNet as for BoWNet. To make sure that we do not disadvantage DC in any way, we tuned the number of clusters and the training routine (the clusters are periodically re-computed during training and we use a constant learning rate of 0.1, as in [9]; for simplicity we used one clustering level instead of the two hierarchical levels of [9]), and applied the same image augmentations (including CutMix) as in BoWNet. We also boosted DC by combining it with rotation prediction and applied our reparametrization of the linear prediction layer (see equation (5)), which however did not make any difference in the DC case. We observe in Tables 1 and 2 that, although improving upon RotNet, DC ultimately has significantly lower performance than BoWNet, e.g., several absolute percentage points lower linear-classification accuracy, which illustrates the advantage of using BoW as targets for self-supervision instead of the single cluster id of an image.

Impact of base convnet. In Table 3 we provide MiniImageNet results using RelLoc [14] as the initial convnet with which we build the BoW (base convnet). The RelLoc-based BoWNet achieves equally strong or better results than the RotNet-based one. We also conducted (preliminary) experiments with a randomly initialized base convnet (entry Random BoWNet). In this case, to learn good representations, (a) we used 4 training rounds in total, (b) for the 1st round we built BoW from the 1st residual block of the randomly initialized WRN-28-4 and applied PCA before k-means, (c) for the 2nd round we built BoW from the 2nd residual block, and (d) for the remaining 2 rounds we built BoW from the 3rd/last residual block. We observe that with a random base convnet the performance of BoWNet drops. However, BoWNet is still significantly better than RotNet and RelLoc.

We provide additional experimental results in §B.1.

4.2 Self-supervised training on ImageNet

Here we evaluate BoWNet by training it on the large-scale ImageNet dataset. We use the ResNet-50 (v1) [27] architecture for implementing the RotNet and BoWNet models. The BoWNet models are trained with 2 training rounds; for each round we use SGD with a step-wise learning rate schedule in which the learning rate is dropped several times during training. To build BoW we use a vocabulary of visual words created from the 3rd or the 4th residual block (aka the conv4 and conv5 layers respectively) of RotNet (for an experimental analysis of those choices see §B.2). We name these two models BoWNet conv4 and BoWNet conv5 respectively.

We evaluate the quality of the learned representations on ImageNet classification, Places205 classification, VOC07 classification, and VOC07+12 detection tasks.

Method                      conv4    conv5
ImageNet Supervised [24]    80.4     88.0
RotNet                      64.6     62.8
Jigsaw [24]                 64.5     57.2
Colorization [24]           55.6     52.3
BoWNet conv4                73.6     79.3
BoWNet conv5                74.3     78.4
Table 4: VOC07 image classification results for ResNet-50 Linear SVMs. : our implementation.
                               ImageNet            Places205
Method                         conv*    pool5      conv*    pool5
Random [24]                    13.7     -          16.6     -
Supervised methods
ImageNet [24]                  75.5     -          51.5     -
ImageNet                       76.0     76.2       52.8     52.0
Places205 [24]                 58.2     -          62.3     -
Prior self-supervised methods
RotNet                         52.5     40.6       45.0     39.4
Jigsaw [24]                    45.7     -          41.2     -
Colorization [24]              39.6     -          31.3     -
LA [79]                        60.2     -          50.1     -
Concurrent work
MoCo [25]                      -        60.6       -        -
PIRL [44]                      63.6     -          49.8     -
CMC [62]                       -        64.1       -        -
BoWNet conv4                   62.5     62.1       50.9     51.1
BoWNet conv5                   60.5     60.2       50.1     49.5
Table 5: ResNet-50 top-1 center-crop linear classification accuracy on ImageNet and Places205. pool5 indicates the accuracy for the features produced by the global average pooling layer after conv5. conv* indicates the accuracy of the best (w.r.t. accuracy) conv. layer of ResNet-50 (for the full results see §B.3). Before applying classifiers on those conv. layers, we resize their feature maps (in the same way as in [24]). : LA [79] uses 10-crop evaluation. : CMC [62] uses two ResNet-50 feature extractor networks. : our implementation.
Method              detection AP scores
Supervised          80.8    58.5    53.2
Jigsaw [24]         75.1    52.9    48.9
PIRL [44]           80.7    59.7    54.0
MoCo [25]           81.4    61.2    55.2
BoWNet conv4        80.3    60.4    55.0
BoWNet conv5        81.3    61.1    55.8
Table 6: Object detection with Faster R-CNN fine-tuned on VOC trainval07+12. The detection AP scores are computed on test07. All models use a ResNet-50 backbone (R50-C4) pre-trained with self-supervision on ImageNet. BoWNet scores are averaged over 3 trials. : our implementation fine-tuned in the same conditions as BoWNet. : BatchNorm layers are frozen and used as affine transformation layers.
VOC07 classification results.

For this evaluation we use the publicly available code for benchmarking self-supervised methods provided by Goyal et al. [24], which implements the guidelines of [50] and trains linear SVMs [6] on top of the frozen learned representations, using the VOC07 train+val splits for training and the VOC07 test split for testing. We consider the features of the 3rd (conv4) and 4th (conv5) residual blocks and provide results in Table 4. Again, BoWNet improves the performance of the already strong RotNet by several points. Furthermore, BoWNet outperforms all prior methods. Interestingly, conv4-based BoW leads to better classification results for the conv5 layer of BoWNet, and conv5-based BoW leads to better classification results for the conv4 layer of BoWNet.

ImageNet and Places205 classification results.

Here we evaluate on the ImageNet and Places205 classification tasks, using linear classifiers trained on top of frozen feature representations. To that end, we follow the guidelines of [24]: for each dataset we train linear classifiers with Nesterov SGD, using a step-wise learning rate schedule with dataset-specific epoch budgets and milestones. We report results in Table 5. We observe that BoWNet outperforms all prior self-supervised methods by a significant margin. Furthermore, the accuracy gap on Places205 between our ImageNet-trained BoWNet representations and the ImageNet-trained supervised representations is less than one point on pool5 (52.0 vs. 51.1 in Table 5). This demonstrates that our self-supervised representations have almost the same generalization ability to the "unseen" (during training) Places205 classes as the supervised ones. We also compare against the MoCo [25] and PIRL [44] methods, which were recently uploaded to arXiv and are essentially concurrent work. BoWNet outperforms MoCo on ImageNet. Compared to PIRL, BoWNet has around one point higher Places205 accuracy but around one point lower ImageNet accuracy.
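The step-wise schedule used throughout these protocols can be sketched with a small helper. The milestone epochs and decay factor below are placeholder values for illustration only, not the ones used in the protocol of [24]:

```python
def milestone_lr(base_lr, epoch, milestones, factor):
    """Step-wise schedule: multiply the learning rate by `factor`
    once for every milestone epoch that has already passed.
    Milestone and factor values here are hypothetical placeholders."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

# Hypothetical schedule: base lr 0.01, dropped 10x at epochs 10 and 20.
lrs = [milestone_lr(0.01, e, milestones=[10, 20], factor=0.1)
       for e in (0, 10, 20)]
```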

VOC detection results.

Here we evaluate the utility of our self-supervised method on a more complex downstream task: object detection. We follow the setup considered in prior works [24, 25, 44]: Faster R-CNN [57] with a ResNet-50 backbone [26] (R50-C4 in Detectron2 [69]). We fine-tune the pre-trained BoWNet on trainval07+12 and evaluate on test07. We use the same training schedule as [24, 44], adapted for 8 GPUs, and freeze the first two convolutional blocks. In detail, we fine-tune with SGD using a step-wise learning rate schedule with a linear warmup [23]. We fine-tune the BatchNorm layers [30] (synchronizing across GPUs) and use BatchNorm on newly added layers specific to this task. (He et al. [25] point out that features produced by self-supervised training can display different distributions compared to supervised ones and suggest using feature normalization to alleviate this problem.)

We compare BoWNet conv4 and BoWNet conv5 against both classic and recent self-supervised methods and report results in Table 6. Both BoWNet variants exhibit strong performance. Differently from the previous benchmarks, the conv5 variant is clearly better than the conv4 one on all metrics. This might be due to the fact that here we fine-tune multiple layers, so depth plays a more significant role. Interestingly, BoWNet outperforms the supervised ImageNet pre-trained model fine-tuned in the same conditions. Thus, our self-supervised representations generalize better to the VOC detection task than the supervised ones. This result is in line with concurrent works [25, 44] and underpins the utility of such methods in efficiently extracting information from data without using labels.

5 Conclusion

In this work we propose BoWNet, a novel method for representation learning that employs spatially dense descriptions based on visual words as targets for self-supervised training. The labels for training BoWNet are provided by a standard self-supervised model. The reconstruction of BoW vectors from perturbed images, together with the discretization of the output space into visual words, enables a more discriminative learning of the local visual patterns in an image. Interestingly, although BoWNet is trained over features learned without label supervision, not only does it achieve strong performance, but it also manages to outperform the initial model. This finding, along with the discretization of the feature space into visual words, opens additional perspectives and bridges to self-supervised NLP methods, which have greatly benefited from this type of approach in the past few years.


We would like to thank Gabriel de Marmiesse for his invaluable support during the experimental implementation and analysis of this work.


  • [1] J. Alayrac, J. Carreira, and A. Zisserman (2019) The visual centrifuge: model-free layered video representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2457–2466. Cited by: §3.
  • [2] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic (2016) NetVLAD: cnn architecture for weakly supervised place recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5297–5307. Cited by: §3.
  • [3] R. Arandjelovic and A. Zisserman (2017) Look, listen and learn. In Proceedings of the IEEE International Conference on Computer Vision, pp. 609–617. Cited by: §3.
  • [4] Y. M. Asano, C. Rupprecht, and A. Vedaldi (2019) Self-labelling via simultaneous clustering and representation learning. arXiv preprint arXiv:1911.05371. Cited by: §3.1.
  • [5] P. Bachman, R. D. Hjelm, and W. Buchwalter (2019) Learning representations by maximizing mutual information across views. arXiv preprint arXiv:1906.00910. Cited by: §3, §3.1, §4.1.3, Table 1.
  • [6] B. E. Boser, I. M. Guyon, and V. N. Vapnik (1992) A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, pp. 144–152. Cited by: §4.2.
  • [7] W. Brendel and M. Bethge (2019) Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. In ICLR, Cited by: §1.
  • [8] M. Caron, P. Bojanowski, A. Joulin, and M. Douze (2018) Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 132–149. Cited by: §3.1.
  • [9] M. Caron, P. Bojanowski, J. Mairal, and A. Joulin (2019) Unsupervised pre-training of image features on non-curated data. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2959–2968. Cited by: §3.1, §3.1, §4.1.3, footnote 4.
  • [10] T. Chen, X. Zhai, M. Ritter, M. Lucic, and N. Houlsby (2018) Self-supervised generative adversarial networks. arXiv preprint arXiv:1811.11212. Cited by: §1.
  • [11] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray (2004) Visual categorization with bags of keypoints. In Workshop on statistical learning in computer vision, ECCV, Vol. 1, pp. 1–2. Cited by: §3.
  • [12] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1.
  • [13] T. v. Dijk and G. d. Croon (2019) How do neural networks see depth in single images?. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2183–2191. Cited by: §3.
  • [14] C. Doersch, A. Gupta, and A. A. Efros (2015) Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422–1430. Cited by: §1, §3, §4.1.3, Table 2.
  • [15] A. Dosovitskiy, P. Fischer, J. T. Springenberg, M. Riedmiller, and T. Brox (2015) Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence 38 (9), pp. 1734–1747. Cited by: §3, §3.1.
  • [16] D. Eigen, C. Puhrsch, and R. Fergus (2014) Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, pp. 2366–2374. Cited by: §3.
  • [17] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. IJCV 88 (2), pp. 303–338. Cited by: §4.
  • [18] S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez, and M. Cord (2019) Boosting few-shot visual learning with self-supervision. arXiv preprint arXiv:1906.05186. Cited by: §1, §4.1.2.
  • [19] S. Gidaris and N. Komodakis (2018) Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367–4375. Cited by: §4.1.2, Table 2.
  • [20] S. Gidaris, P. Singh, and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations, Cited by: §C.1, §1, §1, §2, §4.1.1.
  • [21] C. Godard, O. Mac Aodha, and G. J. Brostow (2017) Unsupervised monocular depth estimation with left-right consistency. In CVPR. Cited by: §3.
  • [22] Y. Gong, L. Wang, R. Guo, and S. Lazebnik (2014) Multi-scale orderless pooling of deep convolutional activation features. In European conference on computer vision, pp. 392–407. Cited by: §3.
  • [23] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He (2017) Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677. Cited by: §4.2.
  • [24] P. Goyal, D. Mahajan, A. Gupta, and I. Misra (2019) Scaling and benchmarking self-supervised visual representation learning. arXiv preprint arXiv:1905.01235. Cited by: Table 10, §4.2, §4.2, §4.2, Table 4, Table 5, Table 6.
  • [25] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2019) Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722. Cited by: Table 10, §3.1, §4.2, §4.2, §4.2, Table 5, Table 6, footnote 5.
  • [26] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969. Cited by: §4.2.
  • [27] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §4.2.
  • [28] O. J. Hénaff, A. Razavi, C. Doersch, S. Eslami, and A. v. d. Oord (2019) Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272. Cited by: §1.
  • [29] D. Hendrycks, M. Mazeika, S. Kadavath, and D. Song (2019) Using self-supervised learning can improve model robustness and uncertainty. arXiv preprint arXiv:1906.12340. Cited by: §1.
  • [30] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §4.2.
  • [31] H. Jégou, M. Douze, C. Schmid, and P. Pérez (2010) Aggregating local descriptors into a compact image representation. In CVPR 2010-23rd IEEE Conference on Computer Vision & Pattern Recognition, pp. 3304–3311. Cited by: §3.
  • [32] H. Jégou, M. Douze, and C. Schmid (2009) On the burstiness of visual elements. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1169–1176. Cited by: §B.1.
  • [33] H. Jégou, M. Douze, and C. Schmid (2009) Packing bag-of-features. In 2009 IEEE 12th International Conference on Computer Vision, pp. 2357–2364. Cited by: §B.1, footnote 2.
  • [34] R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler (2015) Skip-thought vectors. In Advances in neural information processing systems, pp. 3294–3302. Cited by: §3.
  • [35] A. Kolesnikov, X. Zhai, and L. Beyer (2019) Revisiting self-supervised visual representation learning. arXiv preprint arXiv:1901.09005. Cited by: §2.
  • [36] G. Larsson, M. Maire, and G. Shakhnarovich (2016) Learning representations for automatic colorization. In ECCV, Cited by: §1, §1, §3.
  • [37] S. Lazebnik, C. Schmid, and J. Ponce (2006) Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), Vol. 2, pp. 2169–2178. Cited by: §4.1.3.
  • [38] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §1.
  • [39] H. Lee, J. Huang, M. Singh, and M. Yang (2017) Unsupervised representation learning by sorting sequences. In ICCV, Cited by: §3.
  • [40] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §1.
  • [41] D. G. Lowe (2004) Distinctive image features from scale-invariant keypoints. International journal of computer vision 60 (2), pp. 91–110. Cited by: §3.
  • [42] L. v. d. Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov), pp. 2579–2605. Cited by: §A.2, Figure 2.
  • [43] T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Cited by: §3.
  • [44] I. Misra and L. van der Maaten (2019) Self-supervised learning of pretext-invariant representations. arXiv preprint arXiv:1912.01991. Cited by: Table 10, §3.1, §4.2, §4.2, §4.2, Table 5, Table 6.
  • [45] I. Misra, C. L. Zitnick, and M. Hebert (2016) Shuffle and learn: unsupervised learning using temporal order verification. In ECCV, Cited by: §3.
  • [46] E. Mohedano, K. McGuinness, N. E. O’Connor, A. Salvador, F. Marques, and X. Giro-i-Nieto (2016) Bags of local convolutional features for scalable instance search. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, pp. 327–331. Cited by: §3.
  • [47] M. Noroozi and P. Favaro (2016) Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, Cited by: §1, §3.
  • [48] A. v. d. Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Cited by: §1.
  • [49] A. v. d. Oord, O. Vinyals, and K. Kavukcuoglu (2017) Neural discrete representation learning. arXiv preprint arXiv:1711.00937. Cited by: §3.
  • [50] A. Owens, J. Wu, J. H. McDermott, W. T. Freeman, and A. Torralba (2016) Ambient sound provides supervision for visual learning. In European conference on computer vision, pp. 801–816. Cited by: §4.2.
  • [51] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros (2016) Context encoders: feature learning by inpainting. In CVPR, pp. 2536–2544. Cited by: §1, §1, §3.
  • [52] F. Perronnin and C. Dance (2007) Fisher kernels on visual vocabularies for image categorization. In 2007 IEEE conference on computer vision and pattern recognition, pp. 1–8. Cited by: §3.
  • [53] F. Perronnin, J. Sánchez, and T. Mensink (2010) Improving the fisher kernel for large-scale image classification. In European conference on computer vision, pp. 143–156. Cited by: §B.1.
  • [54] S. Pillai, R. Ambruş, and A. Gaidon (2019) Superdepth: self-supervised, super-resolved monocular depth estimation. In 2019 International Conference on Robotics and Automation (ICRA), pp. 9250–9256. Cited by: §3.
  • [55] H. Qi, M. Brown, and D. G. Lowe (2017) Learning with imprinted weights. arXiv preprint arXiv:1712.07136. Cited by: §4.1.2.
  • [56] A. Razavi, A. van den Oord, and O. Vinyals (2019) Generating diverse high-fidelity images with vq-vae-2. In Advances in Neural Information Processing Systems, pp. 14837–14847. Cited by: §3.
  • [57] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §4.2.
  • [58] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252. Cited by: §4.
  • [59] J. Sivic and A. Zisserman (2006) Video google: efficient visual search of videos. In Toward category-level object recognition, pp. 127–144. Cited by: §B.1, §3, footnote 2.
  • [60] J. Snell, K. Swersky, and R. S. Zemel (2017) Prototypical networks for few-shot learning. arXiv preprint arXiv:1703.05175. Cited by: §4.1.2.
  • [61] J. Su, S. Maji, and B. Hariharan (2019) When does self-supervision improve few-shot learning?. arXiv preprint arXiv:1910.03560. Cited by: §1.
  • [62] Y. Tian, D. Krishnan, and P. Isola (2019) Contrastive multiview coding. arXiv preprint arXiv:1906.05849. Cited by: Table 10, Table 5.
  • [63] G. Tolias, Y. Avrithis, and H. Jégou (2013) To aggregate or not to aggregate: selective match kernels for image search. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1401–1408. Cited by: §3.
  • [64] T. H. Trinh, M. Luong, and Q. V. Le (2019) Selfie: self-supervised pretraining for image embedding. arXiv preprint arXiv:1906.02940. Cited by: §1.
  • [65] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol (2008) Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pp. 1096–1103. Cited by: §1, §3.
  • [66] O. Vinyals, C. Blundell, T. Lillicrap, and D. Wierstra (2016) Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630–3638. Cited by: §4.1.2, §4.1.2, §4.
  • [67] C. Vondrick, A. Shrivastava, A. Fathi, S. Guadarrama, and K. Murphy (2018) Tracking emerges by colorizing videos. In ECCV, Cited by: §3.
  • [68] D. Wei, J. J. Lim, A. Zisserman, and W. T. Freeman (2018) Learning and using the arrow of time. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8052–8060. Cited by: §3.
  • [69] Y. Wu, A. Kirillov, F. Massa, W. Lo, and R. Girshick (2019) Detectron2. Cited by: §4.2.
  • [70] J. Yang, Y. Jiang, A. G. Hauptmann, and C. Ngo (2007) Evaluating bag-of-visual-words representations in scene classification. In Proceedings of the International Workshop on Multimedia Information Retrieval, pp. 197–206. Cited by: §1.
  • [71] J. Yue-Hei Ng, F. Yang, and L. S. Davis (2015) Exploiting local features from deep networks for image retrieval. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 53–61. Cited by: §3.
  • [72] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo (2019) Cutmix: regularization strategy to train strong classifiers with localizable features. arXiv preprint arXiv:1905.04899. Cited by: §2.3.
  • [73] S. Zagoruyko and N. Komodakis (2016) Wide residual networks. In Proc. British Machine Vision Conference, Cited by: §B.1, §4.1.1, Table 1.
  • [74] X. Zhai, A. Oliver, A. Kolesnikov, and L. Beyer (2019) S4L: self-supervised semi-supervised learning. arXiv preprint arXiv:1905.03670. Cited by: §1.
  • [75] R. Zhang, P. Isola, and A. A. Efros (2016) Colorful image colorization. In ECCV, Cited by: §1, §1, §3, §3.
  • [76] R. Zhang, P. Isola, and A. A. Efros (2017) Split-brain autoencoders: unsupervised learning by cross-channel prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1058–1067. Cited by: §1, §3.
  • [77] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva (2014) Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems 27, pp. 487–495. Cited by: §4.
  • [78] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In CVPR, Cited by: §3, §3.
  • [79] C. Zhuang, A. L. Zhai, and D. Yamins (2019) Local aggregation for unsupervised learning of visual embeddings. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6002–6012. Cited by: Table 10, Table 5.


Figure 2: t-SNE [42] scatter plot of the learnt self-supervised features on CIFAR-100.

Each data point in the t-SNE scatter plot corresponds to the self-supervised feature representation of an image from CIFAR-100 and is colored according to the class that this image belongs to. To reduce clutter, we visualize the features extracted from the images of 20 randomly selected classes of CIFAR-100.

Figure 3: Examples of visual word cluster members. The clusters are created by applying k-means on the feature maps produced from the conv4 layer of ResNet-50. For each visual word cluster we depict the 16 patch members with the smallest Euclidean distance to the visual word cluster centroid.

Appendix A Visualizations

a.1 Visualizing the word clusters

In Figure 3 we illustrate visual words used for training our self-supervised method on ImageNet. Since we discover visual words using k-means, to visualize a visual word we depict the 16 patches with the smallest Euclidean distance to the visual word cluster centroid. As can be noticed, visual words encode mid-to-higher-level visual concepts.
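The visualization step can be sketched in a few lines of pure Python. The toy 2-D feature vectors and the function name are illustrative assumptions; in practice the patch features and centroids are high-dimensional k-means outputs:

```python
import math

def nearest_patches(centroid, patch_features, k=16):
    """Return indices of the k patch feature vectors closest (in
    Euclidean distance) to a visual-word centroid, i.e. the patches
    one would depict to visualize that word."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, centroid)))
    order = sorted(range(len(patch_features)), key=lambda i: dist(patch_features[i]))
    return order[:k]

# Toy example: 2-D features, visualize the 2 closest patches.
centroid = [1.0, 0.0]
patches = [[0.9, 0.1], [5.0, 5.0], [1.1, 0.0], [-3.0, 2.0]]
closest = nearest_patches(centroid, patches, k=2)
```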

a.2 t-SNE scatter plots of the learnt self-supervised features

In Figure 2 we visualize the t-SNE [42] scatter plot of the self-supervised features obtained when applying our method to the CIFAR-100 dataset. For visualization purposes, we only plot the features corresponding to images that belong to 20 (randomly selected) classes out of the 100 classes of CIFAR-100. As can be clearly observed, the learnt features form class-specific clusters, which indicates that they capture semantic image information.

Models Linear
 BoWNet 69.76
 BoWNet 70.43
 BoWNet 71.01
 BoWNet 70.99
Table 7: CIFAR-100 linear classifier results with WRN-28-10. Impact of vocabulary size. Here we used an initial version of our method implemented with less aggressive augmentation techniques.
Method 1 5 10 50 Linear
 RotNet 58.3 74.8 78.3 81.9 60.3
 BoWNet 69.1 86.3 89.2 92.4 71.5
 Additional ablations
 BoWNet - predict 61.7 80.0 83.4 87.4 61.3
 BoWNet - linear 70.4 85.6 88.3 90.7 63.3
 BoWNet - binary 70.1 86.8 89.5 92.7 71.4
Table 8: CIFAR-100 linear classifier and few-shot results. For these results we use the WRN-28-10.

Appendix B Additional experimental analysis

b.1 CIFAR-100 results

Here we provide an additional ablation analysis of our method on the CIFAR-100 dataset. As in §4.1, we use the WRN-28-10 [73] architecture.

Impact of vocabulary size. In Table 7 we report linear classification results for different vocabulary sizes. We see that, on CIFAR-100, increasing the vocabulary size initially offers clear performance improvements, after which there is no additional improvement.

Reparametrized linear layer.

In §2.3, we described the linear-plus-softmax prediction layer implemented with a reparametrized version of the standard linear layer. In this reparametrized version, instead of directly applying the weight vectors to a feature vector, we first L2-normalize the weight vectors and then apply a single learnable magnitude shared by all the weight vectors (see equation (5) of the main paper). The goal is to avoid always favoring the most frequently occurring words in the dataset, which is what happens when the linear layer is allowed to learn a different magnitude for the weight vector of each word (i.e., the standard linear layer case). In terms of its effect on the weight vectors, this reparametrization is similar to the power-law normalization [53] used for mitigating the burstiness effect in BoW-like representations [32]. Here, we examine the impact of the chosen reparametrization by providing in Table 8 results for the case of implementing the prediction layer with a standard linear layer (entry BoWNet - linear). We see that with the standard linear layer the performance of the BoWNet model deteriorates, especially on the linear classification metric, which validates our design of the layer.
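The effect of the reparametrization can be illustrated with a minimal pure-Python sketch (the function names and toy vectors are ours, not from the paper's implementation): with normalized weight vectors and a shared magnitude gamma, two words whose weight vectors point in the same direction but differ 10x in magnitude produce identical logits, whereas the standard layer lets the larger-magnitude word dominate.

```python
import math

def standard_logits(weights, x):
    """Standard linear layer: each word's logit scales with the
    magnitude of its own weight vector, so frequent words can
    dominate by growing their weights."""
    return [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]

def reparam_logits(weights, x, gamma):
    """Reparametrized layer: weight vectors are L2-normalized and
    share a single magnitude gamma, removing per-word magnitudes."""
    logits = []
    for w in weights:
        norm = math.sqrt(sum(wi * wi for wi in w))
        logits.append(gamma * sum(wi * xi for wi, xi in zip(w, x)) / norm)
    return logits

x = [1.0, 2.0]
w_small = [1.0, 0.0]
w_big = [10.0, 0.0]  # same direction as w_small, 10x the magnitude
std = standard_logits([w_small, w_big], x)
rep = reparam_logits([w_small, w_big], x, gamma=3.0)
```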

Predicting the BoW of the original image instead of the perturbed one.

In our work, given a perturbed image, we train a convnet to predict the BoW representation of the original image. The purpose of predicting the BoW of the original image instead of that of the perturbed one is to force the convnet to learn perturbation-invariant and context-aware features. We examine the impact of this choice by providing in Table 8 results for when the convnet is trained to predict the BoW of the perturbed image instead (entry BoWNet - predict). As expected, in this case there is a significant drop in BoWNet performance.
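The loss driving this prediction task can be sketched as a cross-entropy between the softmax of the predicted logits (computed from the perturbed image) and the target BoW distribution of the original image. The toy logits and target below are illustrative assumptions:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def bow_prediction_loss(logits, target_bow):
    """Cross-entropy between the predicted soft word distribution
    (from the perturbed image) and the target BoW distribution of
    the ORIGINAL, non-perturbed image."""
    probs = softmax(logits)
    return -sum(t * math.log(p) for t, p in zip(target_bow, probs) if t > 0)

target = [0.5, 0.5, 0.0]        # target BoW of the original image
good_logits = [2.0, 2.0, -4.0]  # prediction roughly matching the target
bad_logits = [-4.0, -4.0, 2.0]  # mass on a word absent from the target
assert bow_prediction_loss(good_logits, target) < bow_prediction_loss(bad_logits, target)
```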

Histogram BoW vs. binary BoW.

In §2.2 we describe two ways of reducing the visual-word description of an image to a BoW representation: (1) counting the number of times each word appears in the image (see equation (3)), called Histogram BoW, and (2) merely indicating, for each word, whether it appears in the image at all (see equation (4)) [59, 33], called Binary BoW. In Table 8 we provide evaluation results with both the histogram version (entry BoWNet) and the binary version (entry BoWNet - binary). We see that they achieve very similar linear classification and few-shot performance.
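The two variants can be sketched side by side in pure Python (the function names are illustrative, not from our implementation):

```python
def histogram_bow(word_ids, vocab_size):
    """Histogram BoW: count how many times each visual word occurs."""
    bow = [0] * vocab_size
    for w in word_ids:
        bow[w] += 1
    return bow

def binary_bow(word_ids, vocab_size):
    """Binary BoW: only record whether each word occurs at all."""
    present = set(word_ids)
    return [1 if w in present else 0 for w in range(vocab_size)]

# Toy list of visual-word assignments over a vocabulary of 5 words.
words = [0, 2, 2, 2, 3]
hist = histogram_bow(words, vocab_size=5)  # [1, 0, 3, 1, 0]
binary = binary_bow(words, vocab_size=5)   # [1, 0, 1, 1, 0]
```

Word 2 appears three times; only the histogram version retains that multiplicity.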

b.2 Small-scale experiments on ImageNet

BoW from    linear cls        Vocabulary size    linear cls
conv3       42.08                                45.38
conv4       45.38                                46.03
conv5       40.37                                46.45
Table 9: ResNet18 small-scale experiments on ImageNet. Linear classifier results. The accuracy of the RotNet model used for building the BoW representations is 37.61. The left section explores the impact of the layer (of RotNet) that we use for building the BoW representations. The right section explores the impact of the vocabulary size (with BoW built from conv4).

Here we provide an additional experimental analysis of our method on the ImageNet dataset. Specifically, we study the impact of the feature block of the base convnet and of the vocabulary size used for building the BoW representation. Due to the computationally intensive nature of ImageNet, we analyze those aspects of our method by performing "small-scale" ImageNet experiments. By "small-scale" we mean that we use the light-weight ResNet18 architecture, train on only a subset of the ImageNet training images, and train for only a few epochs.

Implementation details. We train the self-supervised models with SGD, using a step-wise learning rate schedule in which the learning rate is dropped several times during training.

Evaluation protocols. We evaluate the learned self-supervised representations by freezing them and then training linear classifiers on top of them for the ImageNet classification task. The linear classifier is applied on top of the feature map of the last residual block of ResNet18, resized with adaptive average pooling. It is trained with SGD using a step-wise learning rate schedule.

Results. We report results in Table 9. First, we study how the RotNet feature block used for building the BoW representation impacts the quality of the learned representations. In the left section of Table 9 we report results for (a) conv3 (the 2nd residual block), (b) conv4 (the 3rd residual block), and (c) conv5 (the 4th residual block). We see that the best performance is obtained with conv4-based BoW. Furthermore, in the right section of Table 9 we examine the impact of the vocabulary size on the quality of the learned representations. We see that increasing the vocabulary size leads to significant improvements for the linear classifier. In contrast, in Table 7 with results on CIFAR-100, we saw that increasing the vocabulary size beyond a certain point does not improve the quality of the learned representations. Therefore, it seems that the optimal vocabulary size depends on the complexity of the dataset to which we apply the BoW prediction task.

b.3 Full ImageNet and Places205 classification results

                     ImageNet                                Places205
Method               conv2  conv3  conv4  conv5  pool5       conv2  conv3  conv4  conv5  pool5
 Random [24] 13.7 12.0 8.0 5.6 - 16.6 15.5 11.6 9.0 -
 Supervised methods
 ImageNet [24] 33.3 48.7 67.9 75.5 - 32.6 42.1 50.8 51.5 -
 ImageNet 32.8 47.0 67.2 76.0 76.2 35.2 42.6 50.9 52.8 52.0
 Places205 [24] 31.7 46.0 58.2 51.7 - 32.3 43.2 54.7 62.3 -
 Prior self-supervised methods
 RotNet 30.1 42.0 52.5 46.2 40.6 32.9 40.1 45.0 42.0 39.4
 Jigsaw [24] 28.0 39.9 45.7 34.2 - 28.8 36.8 41.2 34.4 -
 Colorization [24] 24.1 31.4 39.6 35.2 - 28.4 30.2 31.3 30.4 -
LA [79] 23.3 39.3 49.0 60.2 - 26.4 39.9 47.2 50.1 -
 Concurrent work
 MoCo [25] - - - - 60.6 - - - - -
 PIRL [44] 30.0 40.1 56.6 63.6 - 29.0 35.8 45.3 49.8 -
CMC [62] - - - - 64.1 - - - - -
 BoWNet conv4 34.4 48.7 60.0 62.5 62.1 36.7 44.7 50.5 50.9 51.1
 BoWNet conv5 34.2 49.1 60.5 60.4 60.2 36.9 44.7 50.1 49.6 49.5
Table 10: ResNet-50 top-1 center-crop linear classification accuracy on ImageNet and Places205. For the conv2-conv5 layers of ResNet-50, to evaluate linear classifiers, we resize their feature maps (in the same way as in [24]). pool5 indicates the accuracy of the linear classifier trained on the feature vectors produced by the global average pooling layer after conv5. : LA [79] uses 10 crops for evaluation. : CMC [62] uses two ResNet-50 feature extractor networks instead of just one. : our implementation.

In Table 10 we provide the full experimental results of our method on the ImageNet and Places205 classification datasets.

Appendix C Implementation details

C.1 Implementing RotNet

For the implementation of the rotation prediction network, RotNet, we follow the description and settings of Gidaris et al. [20]. RotNet is composed of a feature extractor and a rotation prediction module. The rotation prediction module takes as input the output feature maps of the feature extractor and is implemented as a convnet: a block of residual layers followed by global average pooling and a fully connected classification layer. In the CIFAR-100 experiments, where the feature extractor uses a WRN-28-10 architecture, the residual block of the rotation prediction module has 4 residual layers (similar to the last residual block of WRN-28-10), with 640 feature channels as input and output. In the MiniImageNet experiments, where the feature extractor uses a WRN-28-4 architecture, the residual block of the rotation prediction module again has 4 residual layers, but with 256 feature channels as input and output. Finally, in the ImageNet experiments with ResNet-50, the residual block of the rotation prediction module has 1 residual layer with 2048 feature channels as input and output.

During training, for each image in a mini-batch, we generate its four rotated copies (0°, 90°, 180°, and 270° rotations) and predict the rotation class of each copy. For supervision we use the cross-entropy loss over the four rotation classes.
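Generating the four rotated copies and their rotation-class labels can be sketched in a few lines of NumPy (`rotated_copies` is a hypothetical helper name, not from the paper's code):

```python
import numpy as np

def rotated_copies(image):
    """Return the four rotated copies of an HxWxC image (0°, 90°, 180°, 270°),
    each paired with its rotation-class label k in {0, 1, 2, 3}.
    np.rot90 rotates in the plane of the first two axes, i.e. the spatial dims."""
    return [(np.rot90(image, k), k) for k in range(4)]
```

During training each copy is fed to the network and the cross-entropy loss is computed against its label `k`.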

After training we discard the rotation prediction module and keep only the feature extractor for the next stages, i.e., computing the spatially dense descriptions and visual words.

C.2 Building BoW for self-supervised training

For the ImageNet experiments, given an image, we build its target BoW representation using visual words extracted from both the original and the horizontally flipped version of the image. Also, for faster training we pre-cache the BoW representations. Finally, in all experiments, when computing the target BoW representation we ignore the visual words that correspond to the feature vectors on the edge of the feature maps.
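The target construction above can be sketched as follows, starting from the two visual-word maps (one per image version). This is an illustrative NumPy sketch; the function name and the border width of one feature-map position are our own assumptions, not details from the paper.

```python
import numpy as np

def bow_target_with_flip(word_map, word_map_flipped, vocab_size):
    """Build the target BoW from the visual-word maps (H, W integer arrays) of
    an image and of its horizontally flipped version, ignoring the visual
    words on the edge of each feature map (assumed border width: 1)."""
    words = []
    for wm in (word_map, word_map_flipped):
        interior = wm[1:-1, 1:-1]        # drop edge feature vectors
        words.append(interior.ravel())
    hist = np.bincount(np.concatenate(words), minlength=vocab_size).astype(float)
    return hist / hist.sum()
```

Since these targets depend only on the original (non-perturbed) images, they can be computed once and cached, as done for the ImageNet experiments.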

C.3 Few-shot protocol

The typical pipeline in few-shot learning is to first train a model on a set of base classes and then evaluate it on a different set of novel classes (each set of classes is split into train and validation subsets). For the MiniImageNet experiments we use this protocol, as this dataset has three disjoint splits of classes: 64 training classes, 16 validation classes, and 20 test classes. For the few-shot experiments on CIFAR-100 no such class splits exist, so we adjust the protocol by selecting a subset of classes and sampling from their test images for evaluation. In this case, the feature extractor is trained in a self-supervised manner on the train images of all CIFAR-100 classes.

Few-shot models are evaluated over a large number of few-shot tasks. Each task is formed by first sampling N categories from the set of novel/evaluation classes and then randomly selecting K training samples and M test samples per category. The classification performance is measured on the test images and averaged over all sampled few-shot tasks.
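Sampling one such N-way K-shot task can be sketched as below; `sample_fewshot_task` and its parameter names are hypothetical, and the values of N, K, and M used in the test are illustrative only, not the paper's settings.

```python
import numpy as np

def sample_fewshot_task(labels, n_way, k_shot, m_query, rng):
    """Sample one N-way K-shot evaluation task from a labeled evaluation set:
    pick n_way classes, then k_shot support and m_query query examples per
    class. Returns dataset indices for support/query plus the sampled classes."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.extend(idx[:k_shot])
        query.extend(idx[k_shot:k_shot + m_query])
    return np.array(support), np.array(query), classes
```

Accuracy on the query indices, averaged over many such sampled tasks, gives the reported few-shot performance.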