1 Introduction
One of the main current bottlenecks in deep network training is the dependence on large annotated training datasets, and this motivates the recent surge of interest in unsupervised methods. Specifically, in self-supervised representation learning, a network is (pre)trained without any form of manual annotation, thus providing a means to extract information from unlabeled data sources (e.g., text corpora, videos, images from the Internet, etc.). In self-supervision, label information is replaced by asking the network to make predictions using some form of context or using a pretext
task. Pioneering work in this direction was done in Natural Language Processing (NLP), in which the co-occurrence of words in a sentence is used to learn a language model
[28, 29, 8]. In Computer Vision, typical contexts or pretext tasks are based on: (1) the temporal consistency in videos
[42, 31, 14], (2) the spatial order of patches in still images [32, 30, 21] or (3) simple image transformation techniques [26, 19, 43]. The intuitive idea behind most of these methods is to collect pairs of positive and negative samples: two positive samples should share the same semantics, while negatives should be perceptually different. A triplet loss [38, 36, 22, 42, 31] can then be used to learn a metric space which should represent human perceptual similarity. However, most of the recent studies use a contrastive loss [17] or one of its variants [16, 40, 23], while in [39] the authors show the relation between the triplet loss and the contrastive loss. It is worth noticing that the success of both kinds of losses is strongly affected by the number and the quality of the negative samples. For instance, in the case of the triplet loss, a common practice is to select hard/semi-hard negatives [36, 22]. On the other hand, recent works have shown that the contrastive loss needs a large number of negatives to be competitive [23]. However, this implies using batches with a large size, which is computationally demanding, especially with high-resolution images. In order to alleviate this problem, Wu et al. [43] use a memory bank
of negatives, which is composed of feature-vector representations of all the training samples. He et al.
[19] conjecture that the use of large and fixed vocabularies of representations is one of the keys to the success of self-supervision in NLP [8, 29]. The solution proposed in [19] extends [43] using a memory-efficient queue of the last visited negatives, together with a momentum encoder which preserves the intra-queue representation consistency. However, the raw images used in vision are much more variable than the fixed symbols (e.g., the vocabulary words) used in NLP. As a consequence, when an image is represented by the encoder part of the trained network, this representation drastically changes during training, thus making it hard to compare with image samples observed far back in time. In this paper we propose a different direction and a new self-supervised loss function which first scatters all the sample representations in a spherical distribution¹
(¹ Here and in the following, with “spherical distribution” we mean a distribution with zero mean and an identity-matrix covariance.)
and then penalizes the positive pairs which are far from each other. In more detail, given a set of samples $B_z$, corresponding to the current mini-batch of images $B$, we first project the elements of $B_z$ onto a spherical distribution using a whitening transform [37]. The whitened representations $B_v$, corresponding to $B_z$, are used to compute a standard Mean Squared Error (MSE) loss which accumulates the error taking into account only positive pairs $(v_i, v_j)$. We do not need to contrast positives against negatives as in the contrastive loss or in the triplet loss, because the optimization process shrinks the distance between positive pairs and, indirectly, scatters the other samples in order to satisfy the overall spherical-distribution constraint. We empirically show that our Whitening MSE (WMSE) loss outperforms the commonly adopted contrastive loss when measured using different standard classification protocols. Additionally, we show that WMSE brings complementary information with respect to a standard contrastive loss and can be combined with the latter using multiple projection heads. Specifically, given an encoder $E$, which extracts a representation vector from an image
, we use two simple and separate Multi-Layer Perceptrons (MLPs), $g_1$ and $g_2$, which project the common representation onto two different latent-space representations. The projection heads $g_1$ and $g_2$ work collaboratively and are trained with two different losses, using our WMSE for the first and a (suitably normalized) contrastive loss for the second. We test the full method, which we call Collaborative Projections with Whitening, on common self-supervised benchmarks, significantly outperforming other unsupervised methods which use similar-capacity backbone networks. Our full method, tested with a fine-tuning-based protocol on STL10 [4], establishes a new state-of-the-art result on this dataset. In summary, our contributions are the following:

We propose a new loss function (WMSE) for self-supervised training. WMSE constrains the batch samples to lie in a spherical distribution and is an alternative to positive-negative instance-contrasting methods.

Differently from most previous work, in which only one loss function is used, we can combine our WMSE with other common losses. Specifically, we propose two different non-linear projection heads and the use of WMSE and the contrastive loss in the two corresponding latent spaces.

We show that WMSE outperforms other loss functions and that it is competitive with respect to state-of-the-art self-supervised methods.
2 Background and Related Work
A typical self-supervised method is composed of two main components: a pretext task, which exploits some a priori knowledge about the domain in order to automatically extract supervision from data, and a loss function. In this section we briefly review both aspects, and we additionally analyse the recent literature concerning feature whitening.
Pretext Tasks. The temporal consistency in a video provides an intuitive form of self-supervision: temporally close frames usually contain similar semantic content [42, 40]. In [31] this idea is extended using the relative temporal order of 3 frames, while in [14] self-supervision is given by a temporal cycle consistency, which is based on comparing two videos sharing the same semantics and computing inter-video frame-to-frame nearest-neighbour assignments.
When dealing with still images, the most common pretext task is instance discrimination [43]: an input image $x$ is transformed into $\hat{x}$ using a (composition of) data-augmentation technique(s), such as image cropping, rotation, color jittering, Sobel filtering, etc., and then the learner is required to discriminate $\hat{x}$ from other samples [26, 19, 43].
Denoising autoencoders [41] add random noise to the input image and try to recover the original image. More sophisticated pretext tasks consist in predicting the spatial order of image patches [32, 30] or in reconstructing large masked regions of the image [34]. In [23, 1] the holistic representation of an input image is compared with a patch of the same image. A similar idea is used in [21], where the comparison depends on the patch order: the appearance of a given patch should be predicted given the appearance of the patches which lie above it in the image.
In this paper we use standard data-augmentation techniques on still images to obtain positive pairs, which is a simple method to get self-supervision [26, 19, 43] and does not require a pretext-task-specific network architecture [23, 1, 21].
Loss functions. Denoising autoencoders [41] use a reconstruction loss which compares the generated image with the input image before adding noise. Other generative methods use an adversarial loss in which a discriminator provides supervisory information to the generator [10, 11].
Early self-supervised (deep) discriminative methods used a triplet loss [42, 31]: given two positive images $x_i, x_j$ and a negative $x_k$ (Sec. 1), together with their corresponding latent-space representations $z_i, z_j, z_k$, this loss penalizes those cases in which $z_i$ and $z_k$ are closer to each other than $z_i$ and $z_j$ plus a margin $m$:
$$\mathcal{L}_{triplet} = \max(0, \|z_i - z_j\|_2^2 - \|z_i - z_k\|_2^2 + m). \qquad (1)$$
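As a concrete illustration, the hinge behaviour of Eq. (1) can be sketched in a few lines of NumPy; the squared Euclidean distance and the default margin value are assumptions of this sketch, since the formulation above does not fix them:

```python
import numpy as np

def triplet_loss(z_i, z_j, z_k, margin=1.0):
    """Triplet loss for one (anchor, positive, negative) triple: penalizes
    the case where the anchor z_i is closer to the negative z_k than to
    the positive z_j, up to a margin m (default value is illustrative)."""
    d_pos = np.sum((z_i - z_j) ** 2)  # squared distance to the positive
    d_neg = np.sum((z_i - z_k) ** 2)  # squared distance to the negative
    return max(0.0, d_pos - d_neg + margin)

# A positive pair that is much closer than the negative incurs no loss.
z_i, z_j, z_k = np.zeros(4), np.zeros(4), np.full(4, 2.0)
print(triplet_loss(z_i, z_j, z_k))  # 0.0: d_pos = 0, d_neg = 16 > margin
```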
Most of the recent self-supervised discriminative methods are based on some variant of the contrastive loss [17], in which $z_i$ and $z_j$ are contrasted against a set of negative pairs. Following the common formulation proposed in [40]:
$$\mathcal{L}_{NCE}(z_i, z_j) = -\log \frac{\exp(z_i^\top z_j / \tau)}{\sum_{k=1, k \neq i}^{K} \exp(z_i^\top z_k / \tau)}, \qquad (2)$$
where $\tau$ is a temperature hyperparameter which should be manually set and the sum in the denominator is over a set of negative samples. Usually $K$ is the size of the current batch, i.e., $K = 2N$, $N$ being the number of the collected positive pairs. However, as shown in [23], the InfoNCE loss requires a large number of negative samples to be competitive. In [43, 19] a set of negatives much larger than the current batch is used, by exploiting precomputed latent-space representations of old samples.
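A minimal NumPy sketch of Eq. (2) may help fix the roles of $\tau$ and $K$; the dot-product similarity and the exclusion of the self-term are assumptions of this sketch, not necessarily the exact choices of the works cited above:

```python
import numpy as np

def info_nce(Z, pairs, tau=0.5):
    """Contrastive (InfoNCE-style) loss over a batch Z of K embeddings.
    For each positive pair (i, j), the similarity z_i . z_j is contrasted
    against the similarities of z_i to every other element of the batch."""
    loss = 0.0
    for i, j in pairs:
        sims = Z @ Z[i] / tau      # similarities of z_i to the whole batch
        sims = np.delete(sims, i)  # drop the trivial self-similarity
        pos = Z[i] @ Z[j] / tau
        loss += -pos + np.log(np.sum(np.exp(sims)))  # -log softmax of the positive
    return loss / len(pairs)

# Orthogonal embeddings: the positive is no closer than the negatives,
# so the loss here is log(K - 1) = log(3).
print(info_nce(np.eye(4), [(0, 1)]))
```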
In this paper we propose a different loss which is highly competitive with respect to the alternatives and does not require a large number of samples. Moreover, our loss formulation is also simpler, since it does not require a proper setting of the temperature $\tau$ in Eq. 2 or of the margin $m$ in Eq. 1. Finally, while many recent works [40, 21, 23, 1, 35]
draw a relation between the contrastive loss and an estimate of the mutual information between latent-space image representations, Tschannen et al.
[39] argue that the success of this loss is likely related to learning a metric space, similarly to what happens with a triplet loss.
Feature Whitening. We adopt the efficient and stable whitening transform proposed in [37], based on the Cholesky decomposition [7], to project our latent-space vectors into a spherical distribution (more details in Sec. 3). Note that in [24, 37]
whitening transforms are used in the intermediate layers of the network for a completely different task: extending Batch Normalization
[25] to a multivariate batch normalization.
3 The Whitening MSE Loss
Given an image $x$, we extract an embedding $z = f_\theta(x)$ using a network $f$ parametrized with $\theta$ (more details below). We require that: (1) the image embeddings are drawn from a non-degenerate distribution (a degenerate distribution being one where, e.g., all the representations collapse to a single point), and (2) positive image pairs $(x_i, x_j)$, which share similar semantics, should be clustered close to each other. We formulate this problem as follows:
$$\min_\theta \; \mathbb{E} \left[ \| z_i - z_j \|_2^2 \right], \qquad (3)$$
$$\text{subject to} \quad \mathrm{cov}(z_i, z_i) = \mathrm{cov}(z_j, z_j) = I, \qquad (4)$$
where $I$ is the identity matrix and $(z_i, z_j)$ correspond to a positive pair of images $(x_i, x_j)$. With Eq. (4), we constrain the distribution of the $z$ values to be non-degenerate, hence avoiding that all the probability mass is concentrated in a single point. Moreover, Eq. (4) makes all the components of $z$ linearly independent from each other, which encourages the different dimensions of $z$ to represent different semantic content. We provide below the details on how positive image samples are collected, how they are encoded and how Eqs. (3)-(4) are implemented.
First, similarly to [26, 19, 43], we obtain a pair of positive images sharing the same semantics starting from a single image $x$ and using standard image transformation techniques. Specifically, we use a composition of image cropping and color jittering transformations $T_1, T_2$, whose parameters are selected uniformly at random and independently of each other in order to obtain a pair of positive samples from the same image: $x_i = T_1(x)$ and $x_j = T_2(x)$ (see Fig. 3 for some examples). We concisely indicate with $i \leftrightarrow j$ the fact that $x_i$ and $x_j$ ($x_i, x_j \in B$, the current batch) are matched to each other because they share the same semantics.
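The pair-generation step can be sketched as two independent draws of transformation parameters applied to the same image; all the parameter ranges and probabilities below are common defaults chosen purely for illustration (the values actually used are described in Sec. 5):

```python
import random

def sample_transform_params(rng):
    """One set of augmentation parameters for a transform T.
    All ranges and probabilities here are illustrative placeholders."""
    return {
        "crop_area": rng.uniform(0.2, 1.0),   # fraction of original area (assumed)
        "aspect": rng.uniform(3 / 4, 4 / 3),  # aspect-ratio range (assumed)
        "hflip": rng.random() < 0.5,          # horizontal mirroring (assumed prob.)
        "jitter": rng.random() < 0.8,         # color jittering (assumed prob.)
    }

def positive_pair_params(seed=0):
    """T1 and T2 are drawn independently; applied to the same image x,
    they yield the positive pair x_i = T1(x), x_j = T2(x)."""
    rng = random.Random(seed)
    return sample_transform_params(rng), sample_transform_params(rng)
```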
For representation learning, we use a backbone encoder network $E$. $E$, trained without human supervision, will be used in Sec. 5 for evaluation using standard protocols. Similarly to [19], we use a standard ResNet [20] as the encoder, and $h = E(x)$ is the output of the average-pooling layer. This choice has the advantage of being simple and easy to reproduce, in contrast to other methods which use encoder architectures specific to a given pretext task [23, 1, 21]. Since $h$ is a high-dimensional vector, following [23, 1] we use a non-linear projection head $g_1$ to project $h$ into a lower-dimensional space: $z = g_1(h)$, where $g_1$ is implemented with a simple MLP with one hidden layer. The whole network is given by the composition of $E$ with $g_1$: $f_\theta = g_1 \circ E$ (see Fig. 2 (a)).
Given $N$ positive pairs and a batch of images $B = \{x_1, \ldots, x_n\}$, where $n = 2N$, let $B_z = \{z_1, \ldots, z_n\}$ be the corresponding batch of features, obtained as described above. The proposed WMSE loss is obtained using the Mean Squared Error computed over the positive pairs, where constraint (4) is satisfied using the reparameterization of the variables $z_i$ with whitened variables $v_i$:
$$\mathcal{L}_{WMSE}(B_v) = \frac{1}{N} \sum_{i \leftrightarrow j} \| v_i - v_j \|_2^2, \qquad (5)$$
where $B_v = \{v_1, \ldots, v_n\}$, and:
$$v_i = W_B (z_i - \mu_B). \qquad (6)$$
In Eq. (6), $\mu_B$ is the mean of the elements in $B_z$:
$$\mu_B = \frac{1}{n} \sum_{k=1}^{n} z_k, \qquad (7)$$
while the matrix $W_B$ is such that $W_B^\top W_B = \Sigma_B^{-1}$, $\Sigma_B$ being the covariance matrix of $B_z$:
$$\Sigma_B = \frac{1}{n-1} \sum_{k=1}^{n} (z_k - \mu_B)(z_k - \mu_B)^\top. \qquad (8)$$
For more details on the whitening transform, we refer to [37]. This transformation performs the full whitening of each $z_i$ [37], and the resulting set of vectors $B_v$ lies in a zero-centered distribution with a covariance matrix equal to the identity matrix (Fig. 1).
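Under the definitions of Eqs. (6)-(8), the whitening step can be sketched with NumPy; the small diagonal term `eps` is a numerical-stability assumption of this sketch, not part of the equations above:

```python
import numpy as np

def whiten(Z, eps=1e-12):
    """Batch whitening v_i = W_B (z_i - mu_B), with W_B obtained from the
    Cholesky factor of the batch covariance so that W_B^T W_B = Sigma_B^{-1}."""
    mu = Z.mean(axis=0)                   # Eq. (7)
    Zc = Z - mu
    Sigma = Zc.T @ Zc / (len(Z) - 1)      # Eq. (8)
    L = np.linalg.cholesky(Sigma + eps * np.eye(Z.shape[1]))  # Sigma = L L^T
    W = np.linalg.inv(L)                  # W^T W = (L L^T)^{-1} = Sigma^{-1}
    return Zc @ W.T                       # rows are the v_i, Eq. (6)

# The whitened batch has zero mean and (approximately) identity covariance.
rng = np.random.default_rng(0)
Z = rng.normal(size=(256, 8)) * 3.0 + 5.0
V = whiten(Z)
print(np.allclose(V.mean(axis=0), 0, atol=1e-9),
      np.allclose(V.T @ V / 255, np.eye(8), atol=1e-3))  # True True
```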
The intuition behind the proposed loss is that Eq. (5) penalizes positives which are far apart from each other, thus shrinking the intra-positive distances. On the other hand, since $B_v$ must lie in a spherical distribution, the other samples are “moved” and rearranged in order to satisfy constraint (4) (see Fig. 1).
Batch Slicing. The estimation of the Mean Squared Error in Eq. (5) depends on the whitening matrix $W_B$, which may have a high variance over consecutive iteration batches. For this reason, inspired by resampling methods [15], at each iteration, given a batch $B$, we randomly slice $B$ into several non-overlapping sub-batches and we compute the whitening matrix independently for each sub-batch. We repeat this random slicing four times in order to get a more robust estimate of Eq. (5).
3.1 Discussion
In a common instance-discrimination task (Sec. 2), e.g., solved using Eq. (2), the similarity of a positive pair $(z_i, z_j)$ is contrasted with the similarity computed with respect to all the other samples $z_k$ in the batch ($k \neq i$, $k \neq j$). However, $z_i$ and $z_k$, extracted from different image instances, can occasionally share the same semantics (e.g., $x_i$ and $x_k$ are two different image instances of the “cat” class). Conversely, the proposed WMSE loss does not force all the instance samples to lie far from each other, but only imposes a soft constraint (Eq. (4)), which avoids degenerate distributions.
Note that previous work [19, 21] highlighted that Batch Normalization (BN) [25] may be harmful for learning semantically meaningful representations, because the network can “cheat” and exploit the batch statistics in order to find a trivial solution to Eq. (2). However, our whitening transform (Eq. (6)) is applied only to the very last layer of the network (see Fig. 2) and is not used in the intermediate layers, as is instead the case with BN. Hence, our network cannot learn to exploit subtle inter-sample dependencies introduced by batch statistics, because of the lack of other learnable layers on top of the whitened features.
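The WMSE loss of Eq. (5), combined with the batch-slicing scheme of Sec. 3, can be sketched as follows; the pair layout, the default slice and repetition counts, and the `whiten` helper are assumptions of this sketch:

```python
import numpy as np

def whiten(Z, eps=1e-12):
    """Whitening of Eq. (6): v = W_B (z - mu_B), with W_B^T W_B = Sigma_B^{-1}."""
    Zc = Z - Z.mean(axis=0)
    Sigma = Zc.T @ Zc / (len(Z) - 1)
    W = np.linalg.inv(np.linalg.cholesky(Sigma + eps * np.eye(Z.shape[1])))
    return Zc @ W.T

def wmse_loss(Z, n_slices=2, n_repeats=4, rng=None):
    """WMSE with batch slicing: rows (2p, 2p+1) of Z are assumed to form
    the p-th positive pair. Pairs are randomly partitioned into
    non-overlapping sub-batches, each sub-batch is whitened independently,
    and the slicing is repeated (four times in the text) to stabilize the
    estimate of Eq. (5)."""
    if rng is None:
        rng = np.random.default_rng()
    n_pairs = len(Z) // 2
    total, n_terms = 0.0, 0
    for _ in range(n_repeats):
        for chunk in np.array_split(rng.permutation(n_pairs), n_slices):
            idx = np.concatenate([2 * chunk, 2 * chunk + 1])
            V = whiten(Z[idx])  # whiten this sub-batch only
            k = len(chunk)      # first k rows: first elements of each pair
            total += np.sum((V[:k] - V[k:]) ** 2) / k
            n_terms += 1
    return total / n_terms
```

Note that each sub-batch must contain more samples than the embedding dimension, otherwise the covariance of Eq. (8) is singular.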
4 Multiple-Head Projections
Our WMSE loss can be used in conjunction with different losses in order to increase the self-supervision signal provided to the encoder $E$. In our experiments we used the InfoNCE loss (Eq. 2), but other losses (e.g., the triplet loss, Eq. 1) may be used as well. Specifically, the output of the encoder, $h$, is fed to a second projection head $g_2$, an MLP with the same number of layers and neurons as $g_1$. Using $g_2$, we obtain: $u = g_2(h)$. Note that the encoder $E$ is shared over the two heads (see Fig. 2 (b)). Before applying the InfoNCE loss, we normalize all the elements in $B_u$: for each $u_i$, $\bar{u}_i = u_i / \|u_i\|_2$. This normalization is used also in [19], and it is important to make the average magnitudes of the gradients backpropagated from $g_1$ and $g_2$ to $E$ roughly similar to each other. In Sec. 5.1.1 we show that the normalized InfoNCE loss, when used in isolation, is much better than its unnormalized version. It is worth noticing that the normalization is per-element: differently from whitening (Eq. 6), $\bar{u}_i$ depends only on $u_i$. Finally, the elements in $B_{\bar{u}}$ are used to compute Eq. 2, and the gradients of the two losses are merged in $E$ with equal relative weight. The dimensions of the $z$ and the $u$ embeddings are the same, and the intermediate layer in both the MLP heads has the same dimension as the input $h$.
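The per-element normalization used for the second head, in contrast to the batch-level whitening of Eq. (6), can be sketched as:

```python
import numpy as np

def l2_normalize(U):
    """u_bar_i = u_i / ||u_i||_2: each embedding is rescaled independently
    of the rest of the batch (unlike whitening, which uses batch statistics)."""
    return U / np.linalg.norm(U, axis=1, keepdims=True)

U = np.array([[3.0, 4.0], [0.0, 2.0]])
print(l2_normalize(U))  # [[0.6, 0.8], [0., 1.]]
```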
5 Experiments
We test our method and its variants on the following datasets.

CIFAR10 and CIFAR100 [27], two small-scale datasets composed of 32×32 images, with 10 and 100 classes, respectively.

Tiny ImageNet [27], a reduced version of ImageNet, composed of 200 classes with images scaled down to 64×64. The total number of images is: 100K (training) and 10K (testing).

STL10 [4], also derived from ImageNet, with 96×96-resolution images. While CIFAR10, CIFAR100, Tiny ImageNet and ImageNet are fully labeled, STL10 is composed of 5K labeled training samples (500 per class) and 100K unlabeled training examples of irrelevant or distractor classes. There are an additional 8K labeled testing images.
Setting. For a fair comparison, we split all the experiments according to the capacity of the encoder networks we compare with. Specifically, for our encoder $E$, we use ResNet18, ResNet34, ResNet50 or AlexNet, with about 11M, 21M, 24M and 58M parameters, respectively. Unless otherwise specified, the results of the other state-of-the-art methods we report are based on the same backbone networks or on networks with roughly the same capacity. InfoNCE and Normalized InfoNCE refer to our reproduction of contrastive loss variants which are based on the encoder $E$, followed by a single projection head, with or without feature normalization (more details below).
Encoder  Epochs  Learning rate  LR drop 

ResNet18  200    
ResNet34  500  
ResNet50  1000  
AlexNet  500 
We use the Adam optimizer for CIFAR10, CIFAR100, Tiny ImageNet and STL10. The number of epochs, the learning rate and the learning-rate drop are presented in Tab. 1. We use a mini-batch size of $N$ pairs ($2N$ samples). Finally, we use an embedding size of 32 for CIFAR10 and CIFAR100, and an embedding size of 64 for STL10, Tiny ImageNet and ImageNet (recall that the dimension of the $u$ embedding is the same as that of the $z$ embedding).
As is common practice when using ResNet-like architectures for small image resolutions, in all the experiments the first convolutional layer has kernel size 3, stride 1 and padding 1. Additionally, in the case of CIFAR10 and CIFAR100, we remove the first max-pooling layer.
For the ImageNet experiments we use ResNet50 with the SGD optimizer, momentum equal to 0.9 and cosine learning rate decay, and we train for 200 epochs. Additionally, we apply Gaussian blur as a data augmentation.
Image Transformation Details. In Fig. 3 we show some examples of positive pairs extracted from the same image instance (Sec. 3). We extract crops with a random size (expressed as a fraction of the original area) and a random aspect ratio, which is a commonly used data-augmentation technique. We also apply horizontal mirroring and, finally, color jittering and grayscaling, each with a fixed probability.
5.1 Linear Classification Protocol
The most common evaluation protocol for unsupervised feature learning is based on freezing the network encoder ($E$, in our case) after unsupervised pre-training, and then training a supervised linear classifier on top of it. Specifically, the linear classifier is a fully-connected layer followed by a softmax, which is placed on top of $E$ after removing both the projection heads $g_1$ and $g_2$. In all the experiments we train the linear classifier for 500 epochs using the Adam optimizer and the labeled training set of each specific dataset, without data augmentation. The learning rate is exponentially decayed, and weight decay is applied. In the ImageNet experiments we use the evaluation protocol of [19].
5.1.1 Ablation Study.
In Tab. 2 we compare different loss functions on CIFAR10 and CIFAR100. In all the experiments we use a ResNet18 as the encoder $E$, always trained with 200 unsupervised epochs. The batch size is $2N$ ($N$ positive pairs, see Sec. 3). InfoNCE refers to the contrastive loss version shown in Eq. (2), largely used in many recent self-supervised works. Normalized InfoNCE is the normalized version of InfoNCE used in [19] and in our full method as well (Sec. 4). WMSE is our whitening-based loss, introduced in Sec. 3. Finally, Ours (full) is our full method, which combines Normalized InfoNCE with WMSE using two dedicated projection heads ($g_1$ and $g_2$, see Sec. 4).
In the case of InfoNCE and Normalized InfoNCE, a temperature parameter ($\tau$) must be set (Sec. 2). For these two losses, we performed a separate grid search on $\tau$, using common value ranges reported in the literature, and choosing, independently for each loss, the value which achieves the best result across all the datasets (i.e., $\tau$ is kept fixed for both datasets). In Tab. 2 we report the results corresponding to the best temperature value we found for each loss. For the second head ($g_2$) in our full method, we use the same setting of $\tau$ as in Normalized InfoNCE. These temperature values are then used in all the other experiments (Tab. 3-7). Note that WMSE does not need loss-specific hyperparameter tuning.
Method  CIFAR10  CIFAR100 

InfoNCE  79.98  54.27 
Normalized InfoNCE  85.53  56.96 
WMSE  86.08  57.47 
Ours (full)  86.91  57.79 
Tab. 2 shows that the proposed loss, WMSE, is significantly better than InfoNCE. The results we obtained with other datasets, encoders and evaluation protocols confirm this finding (see Tab. 3-7). By a lower margin, WMSE also outperforms Normalized InfoNCE. The combination of WMSE and Normalized InfoNCE, obtained using our two-head solution, further improves the classification accuracy on both datasets, showing that these two losses bring a partially complementary supervision signal to the encoder.
In Fig. 4 we plot the linear classification accuracy on CIFAR10 and CIFAR100 as a function of the number of epochs used to pre-train the corresponding encoder. These plots show that both WMSE and the full method can accelerate the unsupervised training, as their accuracy curves are higher than those of Normalized InfoNCE and InfoNCE almost consistently over all the epochs, and especially in the initial ones.
5.1.2 Comparison with the State of the art.
We use CIFAR10, CIFAR100, STL10, Tiny ImageNet and ImageNet to compare our methods, WMSE and the full two-head method, against various unsupervised approaches. The results are reported in Tab. 3, 4, 5 and 6, split according to the datasets and the capacity of the backbone networks. In all the experiments, both our methods significantly outperform all the others. Specifically, in Tab. 3 we also outperform the very recent results reported in [1] (AMDIM), where a specific network with a higher capacity (about 32M parameters) is used. For reference, in Tab. 3 and 5 we also report fully-supervised results. It is worth mentioning that, in the large-scale experiments on ImageNet (Tab. 6), our full method also shows significant improvements over the recent methods [19, 21].
Method  Accuracy 

Fully-supervised  93.62 
DIM [23]  80.95 
CPC [40]  77.45 
AMDIM [1]  89.5 
InfoNCE  86.27 
WMSE  90.11 
Ours (full)  91.90 
Method  Accuracy 

Kmeans Network [5] ^{a}  60.1 
HMP [2] ^{a}  64.5 
Stacked AE [45] ^{a}  74.33 
Exemplar [12] ^{a}  75.4 
Ye et al. [44]  77.9 
InfoNCE  78.18 
WMSE  83.05 
Ours (full)  83.06 
Method  STL10  Tiny ImageNet 

Fully-supervised  68.7  36.60 
DIM [23]  70.00  38.09 
InfoNCE  83.14  35.34 
WMSE  85.04  40.76 
Ours (full)  85.69  42.00 
Method  Top 1  Top 5 

MoCo [19]  60.6   
CPC [21]  63.8  85.3 
WMSE  64.48  85.97 
Ours (full)  66.29  86.94 
Method  Accuracy 

Dundar et al. [13] ^{a}  74.1 
Cutout [9] ^{†}^{a}  87.3 
Oyallon et al. [33] ^{†}^{a}  87.6 
DeepCluster [3] ^{a}  73.4 
ADC [18] ^{a}  56.7 
DIM [23] ^{a}  77.0 
IIC [26] ^{a}  88.8 
InfoNCE  87.09 
WMSE  88.28 
Ours (full)  91.23 
5.2 Semi-Supervised Fine-tuning
In this section we use the fine-tuning protocol presented in IIC [26] in order to show the potential of our learned representations when trained in a semi-supervised fashion. Specifically, $E$ (ResNet34) is pre-trained as usual, using all the 105K training samples of STL10 (see Sec. 5). Note that in this phase we also use the 5K labeled training images, but we do not use their corresponding labels, pretending these images are unlabeled. In the second, supervised stage, following the protocol described in [26], we train an MLP on top of $E$, simultaneously fine-tuning $E$. For this stage, only the 5K labeled training images are used, together with the specific data-augmentation procedure described in [26]. The evaluation is reported in Tab. 7. Our full method establishes a new state-of-the-art semi-supervised result on this dataset, significantly outperforming IIC [26] (which is also based on a ResNet34 encoder) and previous work as well. In this case, InfoNCE achieves inferior results with respect to the previous state of the art [26].
6 Conclusions
In this paper we proposed a new self-supervised loss, WMSE, which is an alternative to the common loss functions used in the field. Differently from the triplet loss and the contrastive loss, both of which are based on comparing an instance-level similarity against other samples, WMSE computes only the intra-positive distances, while using a whitening transform to avoid degenerate solutions. We empirically showed that WMSE achieves results consistently better than those of InfoNCE, the most common version of the contrastive loss, and that it can be jointly used with the latter using dedicated latent spaces. Our full method outperforms other state-of-the-art unsupervised approaches on different datasets and using different evaluation protocols.
References
 [1] (2019) Learning representations by maximizing mutual information across views. In NeurIPS, Cited by: §2, §2, §2, §3, §5.1.2, Table 3.
 [2] (2013) Unsupervised feature learning for rgbd based object recognition. In Experimental robotics, pp. 387–402. Cited by: Table 4.
 [3] (2018) Deep clustering for unsupervised learning of visual features. Lecture Notes in Computer Science, pp. 139–156. Cited by: Table 7.
 [4] (2011) An analysis of singlelayer networks in unsupervised feature learning. In AISTATS, Cited by: §1, 4th item.
 [5] (2011) Selecting receptive fields in deep networks. In Advances in Neural Information Processing Systems 24, pp. 2528–2536. Cited by: Table 4.
 [6] (2009) ImageNet: A LargeScale Hierarchical Image Database. In CVPR09, Cited by: 2nd item.
 [7] (2004) Cholesky factorization of matrices in parallel and ranking of graphs. In 5th Int. Conference on Parallel Processing and Applied Mathematics, Cited by: §2.
 [8] (2019) BERT: pretraining of deep bidirectional transformers for language understanding. In NAACL. Cited by: §1, §1.

 [9] (2017) Improved regularization of convolutional neural networks with cutout. arXiv:1708.04552. Cited by: Table 7.
 [10] (2017) Adversarial feature learning. In ICLR. Cited by: §2.
 [11] (2019) Large scale adversarial representation learning. In NeurIPS, Cited by: §2.
 [12] (2014) Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems 27, pp. 766–774. Cited by: Table 4.
 [13] (2015) Convolutional clustering for unsupervised learning. arXiv:1511.06241. Cited by: Table 7.
 [14] (2019) Temporal cycleconsistency learning. In CVPR, Cited by: §1, §2.
 [15] (1982) The jackknife, the bootstrap, and other resampling plans. Vol. 38, Siam. Cited by: §3.

 [16] (2010) Noise-contrastive estimation: a new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Cited by: §1.
 [17] (2006) Dimensionality reduction by learning an invariant mapping. In CVPR. Cited by: §1, §2.
 [18] (2018) Associative deep clustering: training a classification network with no labels. In Pattern Recognition  40th German Conference, GCPR 2018, Stuttgart, Germany, October 912, 2018, Proceedings, Lecture Notes in Computer Science, Vol. 11269, pp. 18–32. Cited by: Table 7.
 [19] (2019) Momentum contrast for unsupervised visual representation learning. arXiv:1911.05722. Cited by: §1, §1, §2, §2, §2, §3.1, §3, §3, §4, §5.1.1, §5.1.2, §5.1, Table 6.
 [20] (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §3.
 [21] (2019) Dataefficient image recognition with contrastive predictive coding. arXiv:1905.09272. Cited by: §1, §2, §2, §2, §3.1, §3, §5.1.2, Table 6.
 [22] (2017) In defense of the triplet loss for person reidentification. arXiv:1703.07737. Cited by: §1, §1.
 [23] (2019) Learning deep representations by mutual information estimation and maximization. In ICLR, Cited by: §1, §1, §2, §2, §2, §2, §3, Table 3, Table 5, Table 7.
 [24] (2018) Decorrelated batch normalization. In CVPR, Cited by: §2.
 [25] (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §2, §3.1.
 [26] (2019) Invariant information clustering for unsupervised image classification and segmentation. In ICCV, Cited by: §1, §2, §2, §3, §5.2, Table 7.
 [27] (2009) Learning multiple layers of features from tiny images. Technical Report. Cited by: 1st item, 3rd item.
 [28] (2013) Efficient estimation of word representations in vector space. arXiv:1301.3781. Cited by: §1.
 [29] (2013) Distributed representations of words and phrases and their compositionality. In NIPS, Cited by: §1, §1.
 [30] (2019) Selfsupervised learning of pretextinvariant representations. arXiv:1912.01991. Cited by: §1, §2.
 [31] (2016) Shuffle and learn: unsupervised learning using temporal order verification. In ECCV, Cited by: §1, §2, §2.
 [32] (2016) Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, Cited by: §1, §2.
 [33] (2017) Scaling the scattering transform: deep hybrid networks. ICCV. Cited by: Table 7.
 [34] (2016) Context encoders: feature learning by inpainting. CVPR. Cited by: §2.
 [35] (2018) Learning speaker representations with mutual information. arXiv:1812.00271. Cited by: §2.

 [36] (2015) FaceNet: a unified embedding for face recognition and clustering. In CVPR. Cited by: §1, §1.
 [37] (2019) Whitening and Coloring Batch Transform for GANs. In ICLR. Cited by: §1, §2, §3.
 [38] (2016) Improved deep metric learning with multiclass npair loss objective. In NIPS, Cited by: §1.
 [39] (2019) On mutual information maximization for representation learning. arXiv:1907.13625. Cited by: §1, §2.
 [40] (2018) Representation learning with contrastive predictive coding. arXiv:1807.03748. Cited by: §1, §2, §2, §2, Table 3.

 [41] (2008) Extracting and composing robust features with denoising autoencoders. In ICML. Cited by: §2, §2.
 [42] (2015) Unsupervised learning of visual representations using videos. In ICCV. Cited by: §1, §2, §2.
 [43] (2018) Unsupervised feature learning via nonparametric instancelevel discrimination. arXiv:1805.01978. Cited by: §1, §1, §2, §2, §2, §3.
 [44] (2019) Unsupervised embedding learning via invariant and spreading instance feature. CVPR. Cited by: Table 4.
 [45] (2015) Stacked what-where autoencoders. arXiv:1506.02351. Cited by: Table 4.