Self-Distilled Self-Supervised Representation Learning

11/25/2021
by   Jiho Jang, et al.
NAVER Corp.
Seoul National University

State-of-the-art frameworks in self-supervised learning have recently shown that fully utilizing transformer-based models can lead to a performance boost compared to conventional CNN models. Striving to maximize the mutual information of two views of an image, existing works apply a contrastive loss to the final representations. In our work, we further exploit this by allowing the intermediate representations to learn from the final layers via the contrastive loss, which maximizes the upper bound of the original objective as well as the mutual information between two layers. Our method, Self-Distilled Self-Supervised Learning (SDSSL), outperforms competitive baselines (SimCLR, BYOL and MoCo v3) using ViT on various tasks and datasets. In the linear evaluation and k-NN protocols, SDSSL leads to superior performance not only in the final layers, but also in most of the lower layers. Furthermore, positive and negative alignments are used to explain how representations are formed more effectively. Code will be available.


1 Introduction

GPT [radford2018improving] and BERT [devlin2018bert] are two representative works in self-supervised learning (SSL) that use transformers [vaswani2017attention] for natural language processing (NLP). Motivated by these successes, various efforts on self-supervised representation learning [oord2018representation, hjelm2018learning, bachman2019learning] have been made in the vision domain as well, many of which follow the recent paradigm of instance discrimination that matches representations of different views of the same image produced by different augmentations [chen2020simple, he2020momentum, grill2020bootstrap, caron2020unsupervised]. Recent self-supervised frameworks have focused on transformer-based models such as ViT [dosovitskiy2020image], which demonstrated superior performance to the conventional ResNet [he2016deep] architectures. MoCo v3 [chen2021empirical] and DINO [caron2021emerging] achieved state-of-the-art performance using ViT in self-supervised learning. MoCo v3 investigated the learning instability of ViT and tackled it to enhance performance, while DINO exploited the characteristics of ViT and proposed a unique MLP head to improve representation learning.

Framework      ViT-S/32 k-NN    ViT-S/32 Linear    ViT-B/16 k-NN    ViT-B/16 Linear
SimCLR         51.5             52.8               62.1             70.5
SD-SimCLR      53.4 (+1.9)      55.3 (+2.5)        64.4 (+2.4)      72.1 (+1.6)
BYOL           56.4             59.8               68.1             73.7
SD-BYOL        57.9 (+1.5)      61.8 (+2.0)        70.3 (+2.2)      74.5 (+0.8)
MoCo v3        57.1             60.7               69.7             75.1
SD-MoCo v3     59.0 (+1.9)      63.7 (+3.0)        72.0 (+2.3)      76.0 (+0.9)

Table 1: ImageNet Evaluation.

Comparison of three competitive baselines with SDSSL. ViT-S/32 and ViT-B/16 are trained on ImageNet for 300 epochs. For each framework, ViT-S/32 and ViT-B/16 share the same set of hyper-parameters except batch size. Performance improves in both k-NN and linear evaluation when SDSSL is applied to each baseline framework, for both network capacities.

In this work, we propose Self-Distilled Self-Supervised representation Learning (SDSSL), a simple method utilizing knowledge distillation [hinton2015distilling] in SSL to learn more useful representations for downstream tasks. Our work is motivated by the recent interpretation of SSL from the perspective of mutual information (MI) maximization. Prior works [he2020momentum, chen2020simple, sordoni2021decomposed] have focused on maximizing the MI between output representations $f(x_1)$ and $f(x_2)$ from differently augmented input images $x_1$ and $x_2$, like (a) and (b) in Fig. 1. (In this paper, any form of $f^l(x)$ denotes the output feature of a network $f$ at its $l$-th layer for an input $x$.) This is realized by using a contrastive loss (e.g. InfoNCE), in which $f(x_1)$ and $f(x_2)$ are trained to be closer while other samples are pushed further away, as the loss sets a lower bound for the MI [oord2018representation]. In our work, rather than focusing on the lower bound, we focus on the upper bound of the MI. Suppose the encoder $f$ has $L$ layers. By the data processing inequality, the MI between the output representations is lower than the MI between the output of an intermediate layer, $f^l(x_1)$, from $x_1$ and the last layer from $x_2$, i.e., $I(f^L(x_1); f^L(x_2)) \le I(f^l(x_1); f^L(x_2))$. Since existing frameworks only attempt to maximize $I(f^L(x_1); f^L(x_2))$, the upper bound $I(f^l(x_1); f^L(x_2))$ is not controlled explicitly and may limit $I(f^L(x_1); f^L(x_2))$ from being maximized (Fig. 1(c)). On the other hand, if we can make the upper bound $I(f^l(x_1); f^L(x_2))$ larger, this may in turn render the optimization of $I(f^L(x_1); f^L(x_2))$ easier (Fig. 1(d)). We discuss this further in Sec. 3.2.

Because our method operates in a manner orthogonal to the baseline SSL method used, we can simply apply it to other existing works. In this work, we apply our method to three representative SSL frameworks, namely SimCLR [chen2020simple], BYOL [grill2020bootstrap], and MoCo v3 [chen2021empirical], using ViT [dosovitskiy2020image] as the backbone, and show that our method improves upon the already competitive baselines. We demonstrate the effectiveness of SDSSL on ImageNet via k-nearest neighbor (k-NN) and linear evaluation. The superiority of SDSSL is also shown on various practical tasks such as copy detection, video segmentation and image retrieval. We also investigate the representations learned by SDSSL using recently proposed metrics [wang2020understanding] and discover that SDSSL uses a broader range of the representation space than the baselines, which may help explain why SDSSL performs particularly well on fine-grained datasets. Finally, similar to [phuong2019distillation, zhang2019your], by allowing the intermediate layers to explicitly learn the pretext task, we show that even the intermediate features outperform their baseline counterparts.

Figure 1: Information diagram depicting $x_1$, $x_2$ and their outputs. The red circle is the information of $f^L(x_1)$, the orange circle is that of $f^L(x_2)$, and the green circle is that of $f^l(x_1)$ for some intermediate layer $l$. (a) and (b) represent the information diagram before and after maximizing the MI in prior works. (c) depicts that implicitly updating $f^l$ may hinder the red circle from moving towards the orange circle. In (d), however, an explicit loss for $I(f^l(x_1); f^L(x_2))$ makes the green circle closer to the orange one, easing the maximization of the information overlap between the red and orange circles.

Overall, we propose a self-distillation method that lets the intermediate layers explicitly learn to discriminate instances. We show that our method, when applied on top of the conventional SSL frameworks such as SimCLR, BYOL, and MoCo v3, improves upon the corresponding baseline on various datasets and tasks. Through an ablation study, we also demonstrate that naively applying our self-distillation method leads to performance degradation and show how our approach overcomes these potential pitfalls.


  • We propose a self-distillation method that is broadly applicable to existing self-supervised frameworks and orthogonal to other techniques.

  • We empirically demonstrate SDSSL outperforms SimCLR, BYOL, and MoCo v3 on various tasks and datasets. In particular, SDSSL performs better at lower layers than its counterparts.

2 Related Work

Self-supervised learning

Owing to the capability of deep neural networks (DNNs), various fields including computer vision, natural language processing and speech processing have grown rapidly. In general, DNNs need large-scale datasets to avoid the overfitting caused by their massive number of parameters. However, collecting and annotating large-scale datasets is exceedingly time-consuming and expensive. To alleviate these issues, self-supervised methods have been widely studied. DIM [hjelm2018learning] maximizes the mutual information between input and output. AMDIM [bachman2019learning] creates multiple views and maximizes the mutual information between input and output across these views. CPC [oord2018representation] trains representations of sequential data using a contrastive method and proves that the InfoNCE loss maximizes a lower bound on the mutual information between inputs and representations. Although SimCLR and MoCo [chen2020simple, he2020momentum] improve performance via contrastive learning between different views, they need a large batch size or an additional memory bank. BYOL [grill2020bootstrap] uses only positive samples, mimicking the output of a moving-average network, and shows a significant improvement in performance.

Meanwhile, with the advent of the transformer, ViT [dosovitskiy2020image] has been proposed in the vision domain, and self-supervised ViTs such as MoCo v3 and DINO [chen2021empirical, caron2021emerging] have been studied, which outperform and have many advantages compared to CNN-based SSL. While SSL shows promising results in many tasks, sufficient analyses of how it works have not been made. ReLIC [mitrovic2020representation] introduces a causal mechanism to explain SSL, and [wang2020understanding] analyzes SSL using alignment and uniformity. In this work, we also provide an analysis in Sec. 4.4 using both alignment and uniformity, to show the effect of SDSSL.

Knowledge distillation Knowledge distillation (KD) is a regularization method widely used to improve performance [hinton2015distilling, NEURIPS2018_6d9cb7de, he2019knowledge]. The conventional offline KD framework utilizes a pre-trained teacher network and a student network with supervised labels to improve the performance of the student network. In contrast, online KD methods do not require any pre-trained teacher network [zhang2018deep, chung2020feature]: during training, a student network and a teacher network are trained concurrently, distilling information from each other. Recently, many self-distillation works that only require the student network have been studied [radosavovic2018data, rebuffi2017icarl], where a model is trained with knowledge from a previously trained version of itself. Multi-exit [phuong2019distillation] proposed a learning framework based on distillation for multi-exit architectures, encouraging the lower layers to mimic the higher layers by matching their output probabilities.

In that it trains the network only with knowledge generated from multiple views of itself, SDSSL can be categorized as an online self-distillation method. However, unlike the aforementioned methods, SDSSL does not require supervised labels; instead, it utilizes unlabeled data through self-supervised learning.

3 Method

3.1 Baseline

SimCLR makes two views, $x_1$ and $x_2$, of an input image $x$ (positive samples) by performing random augmentations. After obtaining representations of $x_1$ and $x_2$ using a backbone network, they are projected, and the network is trained with a contrastive loss that increases the cosine similarity between positive samples while decreasing the cosine similarity with negative samples (other images in the batch) using the softmax function [bridle1990training].

MoCo v3 learns via a contrastive loss like SimCLR, but instead of using the identical network to generate features for $x_1$ and $x_2$, a teacher network with exponential moving average (EMA) parameters is used. Randomly augmented $x_1$ and $x_2$ are forwarded to the student network and the teacher network, respectively, and then projected. The projected output of the student network is further processed through an additional MLP head (predictor) to perform contrastive learning.

BYOL also has an EMA teacher and a predictor like MoCo v3, but learns by simply increasing the cosine similarity of positive samples without using the contrastive loss. Therefore, unlike the aforementioned SSL frameworks that utilize negative samples, the performance is robust to the batch size.

3.2 Motivation

In this section, we first explain some existing analyses of how SSL learns meaningful features from prior works. Then, we introduce the motivation of our method. Mitrovic et al. [mitrovic2020representation] introduced a new perspective to analyze SSL using a causal mechanism. They assume an input image $x$ is generated from two independent variables, the content variable $c$ and the style variable $s$. They also proposed that the content variable is relevant for the unknown downstream tasks, which implies that $c$ is a good representation of the image. Because random augmentation intervenes only on the style variable but preserves the content variable, the conditional distribution of the pretext task target given the content $c$ is invariant under the augmentations. The goal of [mitrovic2020representation] is to model this distribution such that it remains invariant to stochastic augmentations of $x$, which leads to training a neural network to extract the content from a set of augmented inputs.

Meanwhile, earlier works in SSL have tried to explain the mechanism of SSL from the perspective of maximizing the mutual information between two randomly augmented images. In other words, they train an encoder to extract the same representation from positive samples. Given a set of learnable neural networks $\mathcal{F}$, this objective can be written as

$\max_{f \in \mathcal{F}} \; I\big(f(x_1); f(x_2)\big)$   (1)
Figure 2: Graphical Model of SSL depicting the information flow. S1 and S2 are style variables that are dependent on augmentations.

As shown in Fig. 2, adopting the definition of the content variable $c$ from [mitrovic2020representation] (the common invariant information of randomly augmented images), the objective function is bounded by the amount of information in $c$, which is the entropy of $c$, $H(c)$. Hence, maximizing the lower bound in Eq. 1 causes $f(x_1)$ and $f(x_2)$ to move towards $c$, as shown in Fig. 1(a) and (b). In other words, the encoder is trained to extract the content, which is useful for various downstream tasks, from $x_1$ and $x_2$.

Now let us use the shorthand notation $f^l(x)$ to denote the output of the $l$-th layer for the input $x$. Because the mapping from an intermediate layer, $f^l$, to the final layer, $f^L$, is deterministic and assuming discrete inputs $x$, we have $H\big(f^L(x) \mid f^l(x)\big) = 0$, and the information in $f^L(x)$ is completely enclosed by that in $f^l(x)$ as shown in Fig. 1(c). This in turn induces an upper bound on the mutual information between $f^L(x_1)$ and $f^L(x_2)$ as

$I\big(f^L(x_1); f^L(x_2)\big) \le I\big(f^l(x_1); f^L(x_2)\big) \le I(x_1; x_2) = H(c)$   (2)

where the second inequality follows from the same argument and the last equality follows from the definition of $c$, which states $I(x_1; x_2) = H(c)$.

This shows that the upper bound on the mutual information between the two outputs from different augmentations depends on the output of the previous layer $f^l$. While existing self-supervised frameworks train the final representation $f^L$ explicitly, $f^l$ is updated only implicitly via the loss signal at the final layer. Due to this, adding an explicit signal for $f^l$ may be favorable to the information extraction process. Based on this hypothesis, we provide an additional loss signal by training $f^l(x_1)$ to move towards $f^L(x_2)$ explicitly, as shown in Fig. 1(d). If this leads to an $f^l$ that can extract more of $c$ from $x_1$, such representations will perform better on unknown downstream tasks than those obtained from only optimizing Eq. 1.

Figure 3: Illustration of SDSSL in MoCo v3 and BYOL. For SimCLR, predictors do not exist and the teacher network is identical to the student network. Solid lines are updated by $\mathcal{L}_{ssl}$, while dotted lines are for $\mathcal{L}_{isd}$ and $\mathcal{L}_{pred}$.

3.3 SDSSL

We propose Self-Distilled Self-Supervised Representation Learning (SDSSL), which provides an explicit signal to the intermediate representations by inducing them to mimic the output representation, as illustrated in Fig. 3. Our method can be applied to any existing SSL framework that matches representations from multiple views.

Self-Supervised Learning (SSL) SSL enables training a model using only manipulations of the input. The SSL frameworks used as baselines in this paper are SimCLR, MoCo v3 and BYOL; their objective functions differ. Following common practice, let $q$ denote the output of the student's last MLP head (projector or predictor) and $z$ denote the output of the teacher's (student's in SimCLR) projector. Then the objective for BYOL is

$\mathcal{L}_{ssl} = -\dfrac{\langle q, z^{+} \rangle}{\lVert q \rVert_2 \, \lVert z^{+} \rVert_2}$   (3)

while for the two contrastive methods,

$\mathcal{L}_{ssl} = -\log \dfrac{\exp\big(\langle q, z^{+}\rangle / \tau\big)}{\exp\big(\langle q, z^{+}\rangle / \tau\big) + \sum_{z^{-}} \exp\big(\langle q, z^{-}\rangle / \tau\big)}$   (4)

where $\langle \cdot, \cdot \rangle$ denotes the inner product, $\tau$ is a temperature parameter, and $z^{+}$/$z^{-}$ denote the positive/negative samples.
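
To make Eqs. 3 and 4 concrete, the following sketch implements both objectives in PyTorch. It assumes L2-normalized features and an in-batch positive at index i for query i; the function names and batching are illustrative rather than the authors' implementation.

import torch
import torch.nn.functional as F

def byol_loss(q, z):
    # Eq. 3 (sketch): negative cosine similarity between prediction q and target z
    q, z = F.normalize(q, dim=1), F.normalize(z, dim=1)
    return -(q * z).sum(dim=1).mean()

def info_nce_loss(q, z, tau=0.2):
    # Eq. 4 (sketch): the i-th row of z is the positive for q[i]; all other rows are negatives
    q, z = F.normalize(q, dim=1), F.normalize(z, dim=1)
    logits = q @ z.t() / tau                      # [N, N] similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)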

Self-Distilled SSL We define our intermediate self-distillation loss $\mathcal{L}_{isd}^{l}$, which tries to maximize the mutual information between the output of an intermediate layer $l$ ($l < L$) and $z$, as follows:

$\mathcal{L}_{isd}^{l} = -\log \dfrac{\exp\big(\langle q^{l}, \mathrm{sg}(z^{+})\rangle / \tau\big)}{\exp\big(\langle q^{l}, \mathrm{sg}(z^{+})\rangle / \tau\big) + \sum_{z^{-}} \exp\big(\langle q^{l}, \mathrm{sg}(z^{-})\rangle / \tau\big)}$   (5)

where $q^{l}$ is the representation of the $l$-th layer of the student encoder passed through the MLP heads corresponding to that layer, and $z$ is the output of the teacher MLP head. The stop-gradient operator $\mathrm{sg}(\cdot)$ implies that the gradient is not propagated through $z$, so that only $q^{l}$ learns to predict $z$ without affecting $z$ itself. The objective of SDSSL consists of $\mathcal{L}_{ssl}$ and $\mathcal{L}_{isd}$, resulting in $\mathcal{L}_{SDSSL}$:

$\mathcal{L}_{SDSSL} = \mathcal{L}_{ssl} + \alpha \sum_{l=1}^{L-1} \mathcal{L}_{isd}^{l}$   (6)

where the choice of $\alpha$, which controls the weight of the self-distillation loss, is detailed in Sec. 4.1.

# f_s: student: ViT + projectors
# f_t: momentum teacher: ViT + projector
# p: predictors
# alpha: intermediate self-distillation ratio
# tau: temperature
# L: number of layers in ViT
for x in loader:  # load a minibatch x with N samples
    x1, x2 = aug(x), aug(x)  # random augmentation
    q1, q2 = f_s(x1), f_s(x2) # student projector outputs of all L layers, shape: [L*N, dim]
    z1, z2 = f_t(x1), f_t(x2) # teacher projector output (last layer only), shape: [N, dim]
    loss_pred = ctr(p(q1.detach()), z2, L)
    loss_pred += ctr(p(q2.detach()), z1, L)
    q1, q2 = p(q1), p(q2)
    q1_isd, q1 = split(q1, [(L-1)*N, N])
    q2_isd, q2 = split(q2, [(L-1)*N, N])
    loss_isd = ctr(q1_isd, z2, L-1)
    loss_isd += ctr(q2_isd, z1, L-1)
    loss = ctr(q1, z2) + ctr(q2, z1)
    loss += alpha * loss_isd + L * loss_pred
    loss.backward()
    optimizer.update(f_s, p)
    momentum_update()
# contrastive loss between num_layers*N queries q and N teacher targets z
def ctr(q, z, num_layers=1):
    logits = mm(q, z.t())  # [num_layers*N, N] similarity pairs
    labels = repeat(arange(N), num_layers)  # within each block of N rows, row j matches target j
    loss = CrossEntropyLoss(logits/tau, labels)
    return 2 * tau * loss  # 2*tau scaling follows MoCo v3
Algorithm 1: SD-MoCo v3 PyTorch-like pseudocode.

We observe that for frameworks with predictors, simply using Eq. 6 leads to some performance improvement, but it can be further enhanced. This is because the predictors of the intermediate layers are updated only using gradients from $\mathcal{L}_{isd}$, as opposed to the encoder, which is able to utilize both $\mathcal{L}_{ssl}$ and $\mathcal{L}_{isd}$. Consequently, the optimality of the predictors at intermediate layers is not guaranteed, which is a key component of SSL training as discussed by [grill2020bootstrap]. Simply enlarging $\alpha$ causes the last predictor to become sub-optimal, because this updates the intermediate backbone layers as well. To alleviate this issue, we employ another loss $\mathcal{L}_{pred}$:

$\mathcal{L}_{pred}^{l} = -\log \dfrac{\exp\big(\langle p^{l}(\mathrm{sg}(q_{proj}^{l})), z^{+}\rangle / \tau\big)}{\exp\big(\langle p^{l}(\mathrm{sg}(q_{proj}^{l})), z^{+}\rangle / \tau\big) + \sum_{z^{-}} \exp\big(\langle p^{l}(\mathrm{sg}(q_{proj}^{l})), z^{-}\rangle / \tau\big)}$   (7)

where $q_{proj}^{l}$ is the representation of the $l$-th layer of the student after passing through the projector, and $p^{l}$ is the corresponding predictor. To update only the predictors, the $\mathrm{sg}(\cdot)$ operator is applied to $q_{proj}^{l}$. By doing so, we attain better predictors, and the final loss for SSL frameworks with predictors is

$\mathcal{L}_{SDSSL} = \mathcal{L}_{ssl} + \alpha \sum_{l=1}^{L-1} \mathcal{L}_{isd}^{l} + \beta \sum_{l=1}^{L} \mathcal{L}_{pred}^{l}$   (8)

In Eq. 8, we weight the predictor loss with $\beta = L$, matching Algorithm 1. Algorithm 1 provides the pseudocode for SD-MoCo v3, which applies SDSSL to MoCo v3.

4 Experiments

In this section we describe the details of our implementation. We follow the implementation of MoCo v3 [chen2021empirical] unless otherwise noted. We show that SDSSL outperforms the baselines on various downstream tasks including ImageNet. Furthermore, an ablation study demonstrates the efficacy of each factor of SDSSL.

4.1 Implementation Details

ViT Architecture We adopt the 2-D sine-cosine variant [vaswani2017attention] for the positional embedding and freeze the randomly initialized patch projector. We concatenate the patch embeddings with a learnable [CLS] token and add its positional embedding. The representations are the outputs of the [CLS] token after passing through each transformer block and the layer normalization layer [ba2016layer].
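
For reference, a minimal sketch of the fixed 2-D sine-cosine positional embedding, following the construction used in the MoCo v3 code base, is given below; the function name and the zero embedding for the [CLS] token are our assumptions.

import torch

def build_2d_sincos_posemb(h, w, dim, temperature=10000.):
    # dim must be divisible by 4 (sin/cos for each of the two axes)
    grid_w = torch.arange(w, dtype=torch.float32)
    grid_h = torch.arange(h, dtype=torch.float32)
    grid_w, grid_h = torch.meshgrid(grid_w, grid_h, indexing='ij')
    pos_dim = dim // 4
    omega = 1. / (temperature ** (torch.arange(pos_dim, dtype=torch.float32) / pos_dim))
    out_w = grid_w.flatten()[:, None] * omega[None, :]
    out_h = grid_h.flatten()[:, None] * omega[None, :]
    pos_emb = torch.cat([torch.sin(out_w), torch.cos(out_w),
                         torch.sin(out_h), torch.cos(out_h)], dim=1)[None, :, :]
    cls_pe = torch.zeros(1, 1, dim)  # positional embedding for the [CLS] token
    return torch.cat([cls_pe, pos_emb], dim=1)  # [1, 1 + h*w, dim], kept frozen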

MLP Heads Following [chen2020simple, grill2020bootstrap], projectors are 3-layer MLPs and predictors are 2-layer MLPs. Batch normalization [ioffe2015batch] is applied to all output layers except in BYOL, and to the hidden layers for all methods. The dimension of the hidden layer is 4096 for the last projector and all predictors, but 2048 for the intermediate projectors. All outputs have 256 dimensions. For frameworks using an exponential moving average (EMA) teacher, the teacher's projector is updated from the student's projector via EMA; in SDSSL this is done using only the last projector.
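
A sketch of the MLP heads described above is shown below. The non-affine BN on the output layer and the exact layer ordering are assumptions borrowed from common SimCLR/MoCo v3 practice; the instantiated dimensions match the text.

import torch.nn as nn

def mlp_head(in_dim, hidden_dim, out_dim, num_layers, last_bn=True):
    # num_layers linear layers; BN + ReLU on hidden layers, optional BN on the output layer
    dims = [in_dim] + [hidden_dim] * (num_layers - 1) + [out_dim]
    layers = []
    for i in range(num_layers):
        layers.append(nn.Linear(dims[i], dims[i + 1], bias=False))
        if i < num_layers - 1:
            layers += [nn.BatchNorm1d(dims[i + 1]), nn.ReLU(inplace=True)]
        elif last_bn:  # skipped for BYOL, which has no BN on the output layer
            layers.append(nn.BatchNorm1d(dims[i + 1], affine=False))
    return nn.Sequential(*layers)

# hypothetical instantiation for ViT-S (embedding dim 384)
last_projector = mlp_head(384, 4096, 256, num_layers=3)
intermediate_projector = mlp_head(384, 2048, 256, num_layers=3)
predictor = mlp_head(256, 4096, 256, num_layers=2)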

Hyper-parameters We use AdamW [loshchilov2017decoupled] as the optimizer with a batch size of 1024 for ViT-B/16 and 4096 for ViT-S/32. The learning rate is 1.5e-4 for MoCo v3 and BYOL, and 1.3e-4 for SimCLR. We adopt a learning rate warmup for 40 epochs and cosine decay after warmup [goyal2017accurate]. Weight decay is 0.1. For $\alpha$, cosine scheduling [loshchilov2016sgdr] is performed from 0 to 0.8 for ViT-B/16 and from 0 to 0.6 for ViT-S/32.
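
The cosine ramp-up of $\alpha$ can be realized, for instance, with a half-cosine schedule as sketched below; the exact functional form is an assumption on our part.

import math

def alpha_schedule(step, total_steps, alpha_final):
    # cosine ramp-up from 0 at step 0 to alpha_final at total_steps
    # (alpha_final = 0.8 for ViT-B/16, 0.6 for ViT-S/32 per the text above)
    return alpha_final * (1 - math.cos(math.pi * step / total_steps)) / 2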

4.2 Main Results

ImageNet Pretraining We experiment with ViT-B/16 and ViT-S/32 on three self-supervised learning frameworks. In Table 1 we validate the representations of the ImageNet [deng2009imagenet] pretrained encoders using k-NN [wu2018unsupervised] and linear evaluation. We follow the protocol of MoCo v3 for linear evaluation and of DINO for k-NN. Across all frameworks, models, and evaluations, applying SDSSL increases performance. The baseline accuracies are lower than those reported in the MoCo v3 paper [chen2021empirical] because we use a batch size of 1024 instead of 4096 due to computation constraints; contrastive frameworks are particularly affected by the batch size. Nevertheless, our method significantly improves upon our reproduced baselines. For ViT-S/32, linear evaluation improves more than k-NN, whereas for ViT-B/16 the opposite holds. We used 8 NVIDIA A100 GPUs for five days to train our ViT-B/16 models and 4 NVIDIA A6000 GPUs for three days to train the ViT-S/32 models.
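
For reference, a minimal sketch of the weighted k-NN protocol in the style of DINO is given below; features are assumed to be L2-normalized and the hyper-parameters (k, temperature) are illustrative.

import torch

@torch.no_grad()
def knn_classify(train_feats, train_labels, test_feats, k=20, T=0.07, num_classes=1000):
    # weighted k-NN: each of the k nearest training features votes with weight exp(sim / T)
    sims = test_feats @ train_feats.t()               # cosine similarities [num_test, num_train]
    topk_sims, topk_idx = sims.topk(k, dim=1)
    topk_labels = train_labels[topk_idx]              # [num_test, k]
    weights = (topk_sims / T).exp()
    votes = torch.zeros(test_feats.size(0), num_classes, device=test_feats.device)
    votes.scatter_add_(1, topk_labels, weights)
    return votes.argmax(dim=1)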

Framework      ViT-T/32    ViT-S/32    ViT-B/32    ViT-B/16
MoCo v3        44.6        57.1        62.9        69.7
SD-MoCo v3     45.3        59.0        65.0        72.0
Diff.          +0.7        +1.9        +2.1        +2.3

Table 2: k-NN performances in ImageNet pretrained MoCo v3 with SDSSL for various ViT models with different capacities. As the model capacity increases, the performance gain of SDSSL also increases.


Figure 4: Multi-exit. Linear evaluation on ImageNet for MoCo v3 and SD-MoCo v3 at each layer, training ViT-S/32 on ImageNet for 300 epochs. SD-MoCo v3 outperforms the baseline at all layers and shows less degradation for earlier layers.

ViT Capacity Here we analyze the effect of model capacity on performance. Comparing SDSSL against the MoCo v3 baseline on ViT-T/32, ViT-S/32 and ViT-B/32, the performance gain increases as the model capacity increases, as shown in Table 2. In addition, the gain due to SDSSL is slightly higher when the model has the same number of parameters but a smaller patch size is used.

Multi-exit Since self-distillation enables lower layers to learn from the higher layers, we expect the lower layers of SDSSL to learn more meaningful representations than those of the baselines. This is verified in Figure 4, which shows that the lower-layer representations of SDSSL are much more suitable as features than their vanilla MoCo v3 counterparts. We performed linear evaluation on ImageNet using the frozen representations of each layer. At the last layer, the accuracy increases by 3%p, and the 7th layer shows the largest gap of 25.5%p.
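
The multi-exit evaluation only requires collecting the [CLS] token after every block; a sketch assuming a timm-style ViT (the attribute names patch_embed, cls_token, pos_embed, blocks, and norm are assumptions) is shown below.

import torch

@torch.no_grad()
def per_layer_cls_features(vit, x):
    # returns one frozen [CLS] representation per transformer block, each passed
    # through the shared final LayerNorm, as used for the per-layer linear evaluation
    tokens = vit.patch_embed(x)
    cls = vit.cls_token.expand(tokens.shape[0], -1, -1)
    tokens = torch.cat([cls, tokens], dim=1) + vit.pos_embed
    feats = []
    for block in vit.blocks:
        tokens = block(tokens)
        feats.append(vit.norm(tokens)[:, 0])
    return feats  # list of L tensors of shape [N, dim]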

4.3 Transferability

In this subsection, we evaluate the transferability of our method on various downstream tasks. Following DINO [caron2021emerging], we evaluate on the image retrieval task. In addition, we also evaluate on the copy detection and video segmentation tasks, which use patch features rather than the [CLS] token. These three evaluation protocols do not require additional training of the encoder. We then evaluate on other image classification datasets such as CIFAR-10, CIFAR-100 [krizhevsky2009learning], Oxford Flowers-102 [nilsback2008automated], and Oxford-IIIT Pets [parkhi2012cats] by k-NN evaluation and end-to-end fine-tuning [dosovitskiy2020image]. Experiments are performed using all three frameworks.

Figure 5: Copy detection and video segmentation. Results of the copy detection and video segmentation tasks for MoCo v3 and SD-MoCo v3 at each layer. With the exception of some layers, SD-MoCo v3 outperforms MoCo v3. The best-performing layers are the 9th and 6th for SD-MoCo v3 and the 11th and 10th for MoCo v3 in copy detection and video segmentation, respectively.

Framework      Copy D. (mAP)    Video S. (JF-mean / J-Mean / F-mean)
SimCLR         67.4             39.8 / 42.6 / 36.9
SD-SimCLR      68.4 (+1.0)      40.2 (+0.4) / 43.4 (+0.8) / 37.1 (+0.2)
BYOL           76.2             37.5 / 40.5 / 34.6
SD-BYOL        77.9 (+1.7)      37.2 (-0.3) / 40.1 (-0.4) / 34.3 (-0.3)
MoCo v3        68.2             36.7 / 39.6 / 33.8
SD-MoCo v3     69.1 (+0.9)      39.0 (+2.3) / 41.8 (+2.2) / 36.2 (+2.4)

Table 3: Copy detection and video segmentation. For all scores, higher is better. The reported scores are the performance of the best layer for each method. ImageNet pretrained ViT-S/32 models are used for evaluation.

Copy detection We report the mean average precision (mAP) for copy detection on the strong subset of the INRIA Copydays dataset [douze2009evaluation]. The goal of copy detection is to recognize the original image when given a distorted (e.g. blurred, inserted, printed, scanned) version of it. Following [berman2019multigrain], we use 10K samples of the YFCC100M dataset [thomee2016yfcc100m] as distractors, while 20K samples are used for whitening [berman2019multigrain] the features. The [CLS] token and patch token features are pooled using GeM [radenovic2018fine] and concatenated. We use the features of all layers to verify whether a similar trend occurs as in the multi-exit experiment. We observe in Figure 5 that most SD-MoCo v3 intermediate features surpass those of MoCo v3, and its best-performing layer also outperforms that of the baseline; for SD-MoCo v3 and MoCo v3, this is the 9th and 11th layer, respectively. We believe that for tasks that utilize patch features rather than only the [CLS] token, the best-performing features are not formed in the final layer. Moreover, for SDSSL the best-performing layer is formed lower than in the baseline: for SD-SimCLR and SimCLR, the best-performing layers are the 10th and 11th, respectively; for SD-BYOL and BYOL, the 8th and 12th. This may be explained by our motivation, as our method intends to extract more information about the content rather than the style of an image in the lower layers. By providing an explicit loss, our method forms features suitable for copy detection earlier in the network than the baseline.
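
A sketch of the GeM pooling step over patch tokens is given below; the power p and the way the [CLS] feature is concatenated are assumptions following [radenovic2018fine, berman2019multigrain].

import torch

def gem_pool(patch_tokens, p=3.0, eps=1e-6):
    # generalized mean pooling over patch tokens of shape [N, num_patches, dim];
    # clamping keeps the power well-defined for non-positive activations
    return patch_tokens.clamp(min=eps).pow(p).mean(dim=1).pow(1.0 / p)

# hypothetical image descriptor: pooled patch features concatenated with the [CLS] feature,
# to be whitened afterwards as described above
# descriptor = torch.cat([cls_feats, gem_pool(patch_tokens)], dim=1)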

Video segmentation We perform video instance segmentation on the DAVIS-2017 dataset [pont20172017]. We follow the experimental protocol of Jabri et al. [jabri2020space] and segment scenes with nearest neighbors between consecutive frames, as in DINO. When the representations of all layers are tested as in copy detection, a similar trend is observed for video segmentation as well. The best-performing layer is the 6th and 10th for SD-MoCo v3 and MoCo v3, respectively, and SD-MoCo v3 outperforms MoCo v3 as shown in Table 3. SD-SimCLR performs slightly better than SimCLR, both with the best performance at the 7th layer. However, in the case of SD-BYOL, the performance is slightly reduced compared to BYOL. In BYOL, the 7th layer is the best, whereas in SD-BYOL, the 2nd layer performs best.

Framework      ROx M         ROx H        RPar M        RPar H
SimCLR         17.7          3.4          34.9          10.4
SD-SimCLR      19.0 (+1.3)   3.9 (+0.5)   36.4 (+1.5)   11.7 (+1.3)
BYOL           23.0          5.7          44.5          16.5
SD-BYOL        23.4 (+0.4)   5.7 (+0.0)   45.0 (+0.5)   17.0 (+0.5)
MoCo v3        20.3          4.2          42.5          16.3
SD-MoCo v3     23.2 (+2.9)   5.9 (+1.7)   44.6 (+2.1)   17.4 (+1.1)

Table 4: Image Retrieval. Comparison of performance between the baselines and SDSSL on the image retrieval task (M: Medium split, H: Hard split). ViT-S/32 models are pre-trained using each framework on ImageNet for 300 epochs. We evaluate the image retrieval task using k-NN.

Image Retrieval The revisited [radenovic2018revisiting] Oxford and Paris image retrieval datasets [philbin2008lost] contain three splits of varying difficulty with query and database pairs. We evaluate all baselines and SDSSL on the Medium and Hard splits, applying k-NN directly for retrieval. As shown in Table 4, SDSSL outperforms the baselines.

Framework      CIFAR-10       CIFAR-100      Flowers        Pets
SimCLR         87.2 / 97.9    64.5 / 87.5    70.4 / 91.7    69.1 / 84.7
SD-SimCLR      86.8 / 98.1    65.0 / 87.3    71.3 / 92.2    71.1 / 85.5
BYOL           90.6 / 97.8    70.1 / 86.7    76.0 / 90.2    77.0 / 84.1
SD-BYOL        90.5 / 98.0    70.6 / 86.9    77.2 / 90.7    79.9 / 86.0
MoCo v3        89.8 / 98.3    69.2 / 88.0    72.6 / 92.4    77.5 / 86.6
SD-MoCo v3     89.4 / 98.4    69.2 / 88.1    76.5 / 93.3    79.8 / 88.0

Table 5: Classification. We report k-NN and fine-tuning performance (k-NN / fine-tuned accuracy) for four classification datasets. The performances on CIFAR-10 and CIFAR-100 are comparable, but SD-MoCo v3 outperforms by a large margin on Flowers and Pets.

Classification In this section, we present results for image classification on CIFAR-10, CIFAR-100, Oxford Flowers-102, and Oxford-IIIT Pets. Since end-to-end fine-tuning may lead to over-fitting on a particular dataset, it can obscure whether the representations of the pre-trained encoder are actually good [radford2021learning]. For this reason, we also report numbers for k-NN evaluation. Table 5 shows that for Flowers and Pets, both k-NN evaluation and fine-tuning yield a large performance gap over the baseline, while the gap is relatively small, or slightly falls behind the baseline, for CIFAR-10 and CIFAR-100. The two groups of datasets have distinct characteristics: the former (Flowers and Pets) are composed of homogeneous classes, while the latter have distinct classes such as automobile, airplane, deer, etc. In the next subsection, we provide further analysis of why SDSSL performs exceptionally well on such datasets that require fine-grained features.

4.4 Analysis

Wang et al. [wang2020understanding] demonstrated that contrastive learning optimizes two distinct metrics: (1) alignment, which quantifies the compactness of the representations of positive samples,

$\mathcal{L}_{align}(f; \alpha) = \mathbb{E}_{(x, y) \sim p_{pos}}\big[\lVert f(x) - f(y)\rVert_2^{\alpha}\big]$   (9)

for some $\alpha > 0$; and (2) uniformity, which measures how dispersed the entire set of representations is on the hypersphere using the Gaussian potential kernel (also known as the RBF kernel) [cohn2007universally, borodachov2019discrete],

$\mathcal{L}_{uniform}(f; t) = \log \, \mathbb{E}_{x, y \,\overset{iid}{\sim}\, p_{data}}\big[e^{-t \lVert f(x) - f(y)\rVert_2^2}\big], \quad t > 0.$   (10)

Here, $p_{pos}$ is the distribution of positive pairs generated by random augmentation of the input data, and $p_{data}$ is the input data distribution. They asserted that low alignment signifies that positive samples are close to each other, while low uniformity signifies that negative samples are far apart. Thus, low alignment and low uniformity lead to a better representation with high linear separability, although the two metrics are inherently in a trade-off relationship.

Empirically, we observed that SD-MoCo v3 has higher alignment but lower uniformity than vanilla MoCo v3. However, considering their conflicting characteristics, it is difficult to ascertain which representation is better. To answer this question, we propose another metric that modifies the alignment metric to quantify the difference in alignment between negative samples and positive samples. The alignment between negative samples is defined as follows:

$\mathcal{L}_{align}^{neg}(f; \alpha) = \mathbb{E}_{x, y \,\overset{iid}{\sim}\, p_{data}}\big[\lVert f(x) - f(y)\rVert_2^{\alpha}\big]$   (11)

A higher $\mathcal{L}_{align}^{neg}$ means that the negative samples are further apart from each other, similar to uniformity.
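
The three metrics of Eqs. 9-11 can be estimated on a batch of L2-normalized features as in the sketch below, following the reference formulation of [wang2020understanding]; the default exponents are illustrative.

import torch

def alignment(x, y, alpha=2):
    # Eq. 9: mean distance between positive pairs (x[i], y[i])
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    # Eq. 10: log of the mean Gaussian potential over all pairs in the batch
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

def negative_alignment(x, alpha=2):
    # Eq. 11: mean distance between independently drawn (negative) samples
    return torch.pdist(x, p=2).pow(alpha).mean()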

Figure 6: Alignment and uniformity measured at each layer of MoCo v3 and SD-MoCo v3 on the ImageNet validation set. Because uniformity and alignment have different signs due to the logarithm in uniformity, we report the negated uniformity for consistency.


Figure 7: Alignment Difference. Comparison of $\mathcal{L}_{align}^{neg} - \mathcal{L}_{align}$ at each layer. A higher value indicates comparative compactness of positive representations with respect to negative samples. Ours has a higher alignment difference in all layers except for a few lower layers. The values are measured on the ImageNet validation set.

The difference between the negative alignment and the positive alignment then quantifies the gap between the mean distance among negative samples and that among positive samples. As shown in Figure 7, SD-MoCo v3 has a higher alignment difference than MoCo v3 in almost all layers. In particular, when the alignment difference is adequately high even with high positive alignment, a representation may be sufficiently far from negative representations while the positive samples are also relatively dispersed, which makes it easier to distinguish between positive samples that potentially belong to different classes in fine-grained datasets. This may explain why SDSSL performs exceptionally well on Flowers and Pets in Table 5, which require more fine-grained representations due to their homogeneous classes compared to the CIFAR datasets.

                       k-NN
MoCo v3                48.5
  + pred. loss         48.6 (+0.1)
SD-MoCo v3             50.0
  - ratio anneal       48.2 (-1.2)
  - pred. loss         49.3 (-0.7)

Table 6: Ablation. We train ViT-S/32 for 100 epochs on ImageNet using MoCo v3. This shows that ratio scheduling is indispensable for SDSSL and that the predictor loss also helps greatly in increasing performance.

4.5 Ablation Study

In this subsection, we show the efficacy of ratio scheduling and the predictor loss through an ablation study and verify that they are necessary for optimal performance.

Table 6 shows that ablating the predictor loss results in performance degradation. As discussed, this is consistent with the results in [grill2020bootstrap] showing that the optimality of the predictor is crucial.

Additionally, when only the predictor loss is used without the intermediate distillation loss $\mathcal{L}_{isd}$, the performance change is minimal. This verifies that the intermediate distillation loss is the key component.

During training, we use ratio annealing for $\alpha$ in Eq. 6 and Eq. 8, i.e., $\alpha$ is set very low in the initial iterations and gradually increased afterwards, rather than using a fixed $\alpha$ for the entire training. Without ratio annealing the performance decreases significantly, which shows that self-distilling only after some training has been done is important.

5 Discussion

We have explained the mechanism of SDSSL from an information-theoretic perspective using mutual information. Nonetheless, many aspects of SDSSL can be interpreted using well-known studies of knowledge distillation. Yosinski et al. [yosinski2014transferable] proposed that representations of higher layers contain more task-specific information than those of the lower layers. Likewise, the output representations of self-supervised networks will be more focused on the instance discrimination pretext task. This explains the observation of the multi-exit experiment, in which the lower layers of SDSSL have better representations than the baselines, since the lower layers are allowed to explicitly learn the pretext task as well. Additionally, the scheduling of $\alpha$ can also be explained by the performance of the teacher: because the teacher does not have a sufficient representation of the pretext task early in training, $\alpha$ should be low at first and increased later on to distill better representations.

Figure 8: Representations on the hypersphere. An illustration of the representations of the student's lower layer and the teacher's output on a hypersphere. The intermediate self-distillation loss explicitly shifts the representations of the lower layer towards the output representations.

From a more intuitive perspective, the [CLS] token starts from a single representation for all images, and as the layers progress it slowly aligns with the representations of the positive samples and moves away from those of the negative samples. Through $\mathcal{L}_{isd}$, SDSSL induces the unaligned intermediate representations to mimic the output representations, which pushes the features in Fig. 8(a) to become more like Fig. 8(b). The next layer then receives features that are more aligned with the corresponding class and more separated from the other classes than the original representations, which makes the instance discrimination task easier for the subsequent layers, leading to a better representation. In other words, $\mathcal{L}_{isd}$ makes the representations from the earlier layers more dispersed among negative samples and aligns positive samples more effectively, which means that the representation space is used efficiently. Figure 7 shows this phenomenon quantitatively. Visualizations of the representations in the lower layers using t-SNE (shown in the appendix) also support this phenomenon.

6 Conclusion

In this work, we proposed a self-distillation method generally applicable to existing self-supervised learning frameworks. From the mutual information maximization perspective, our method is motivated by the hypothesis that maximizing the upper bound of the mutual information between two views may be favorable for representation learning, and we empirically validated its effectiveness through experiments. We showed that SDSSL leads to superior performance not only in the final layers but also in various lower layers through the multi-exit experiment. In future work, our method could be combined with other techniques that lead to further performance gains, such as larger model capacity, smaller patches, and multi-crop images. Additionally, more rigorous theoretical analyses should give insight into the empirically superior performance.

References

Appendix A Representation Visualization

We visualize the representations of each layer of MoCo v3 and SD-MoCo v3 for five random classes from the ImageNet validation set with t-SNE [van2008visualizing] in Fig. 9. We observe that the lower-layer representations of SD-MoCo v3 are more cohesive than the representations of the same layers of MoCo v3.

Appendix B Copy detection and Video Segmentation

We additionally report the performances of each layer of the baseline and SDSSL models for the copy detection and video segmentation tasks in Tab. 7, 8, 9, and 10.

Layer   12   11   10   9   8   7   6   5   4   3   2   1
SimCLR 65.0 67.4 66.0 66.0 61.4 61.0 54.2 47.3 38.9 25.6 24.1 12.6
SD-SimCLR 64.7 64.9 68.4 67.4 65.4 64.1 61.0 59.2 52.9 46.2 31.8 14.3
BYOL 76.2 61.5 61.0 60.1 59.7 49.8 49.4 46.0 28.7 23.4 16.9 21.5
SD-BYOL 43.5 76.2 74.2 76.5 77.9 72.4 72.4 68.2 63.7 52.9 44.0 38.7
MoCo v3 67.9 68.2 67.6 67.1 67.5 64.3 62.3 56.1 51.7 42.7 32.3 21.3
SD-MoCo v3 66.4 67.1 68.0 69.1 68.0 69.0 65.8 62.3 59.4 51.4 42.7 18.1
Table 7: Copy detection. The mAP performances of performing copy detection using the features of each layer of the baseline and SDSSL models. The best mAPs are in bold face while the second best are underlined.
Framework / Layer   JF-mean   J-Mean   J-Recall   J-Decay   F-mean   F-Recall   F-Decay
SimCLR / 12 32.5 34.1 29.0 18.3 30.9 16.9 14.6
11 37.5 40.1 37.6 15.6 34.9 20.4 12.6
10 39.3 41.8 40.9 17.0 36.8 26.6 14.4
9 39.2 41.8 41.2 15.4 36.6 26.6 14.0
8 39.3 42.3 41.1 15.4 36.3 26.5 14.6
7 39.8 42.6 41.6 16.1 36.9 26.6 14.4
6 39.5 42.4 40.5 15.6 36.6 26.1 13.8
5 38.9 42.1 40.6 15.4 35.8 23.6 14.7
4 36.6 39.7 37.0 16.7 33.6 21.8 16.2
3 35.0 37.9 35.4 16.7 32.1 19.6 16.3
2 31.5 34.0 30.3 17.9 28.9 16.3 16.2
1 26.7 29.0 26.3 15.7 24.3 11.0 13.9
SD-SimCLR / 12 31.8 33.6 26.7 17.7 30.1 15.9 15.7
11 38.7 41.4 40.0 14.2 35.9 22.7 12.0
10 39.3 41.7 42.8 14.1 36.8 26.4 13.1
9 39.8 42.8 42.0 15.3 36.9 27.3 14.3
8 39.3 42.4 40.6 16.0 36.3 25.8 15.9
7 40.2 43.4 42.6 15.3 37.1 27.3 14.9
6 40.2 43.1 41.0 15.3 37.2 27.2 14.1
5 38.3 41.5 39.9 16.4 35.1 24.3 16.4
4 37.1 40.2 36.7 17.1 34.1 23.1 16.6
3 34.8 37.3 32.7 18.7 32.3 21.3 16.2
2 32.0 34.2 30.4 17.0 29.8 16.1 15.3
1 24.7 26.8 22.1 15.9 22.5 8.8 13.6
Table 8: Video Segmentation in SimCLR. Results of video segmentation on each layer of SimCLR and SD-SimCLR.
Framework / Layer   JF-mean   J-Mean   J-Recall   J-Decay   F-mean   F-Recall   F-Decay
BYOL / 12 30.6 32.0 26.4 20.1 29.1 14.6 15.1
11 34.7 37.9 33.5 17.9 31.5 17.9 16.1
10 32.7 35.4 30.7 18.9 29.9 16.8 17.3
9 28.1 30.4 27.0 17.7 25.8 12.6 16.9
8 37.4 40.4 38.6 16.9 34.4 21.7 15.3
7 37.5 40.5 38.6 17.3 34.6 22.3 15.8
6 37.5 40.3 38.2 17.3 34.6 22.0 15.4
5 37.5 40.3 37.4 17.7 34.7 21.7 15.7
4 37.3 40.2 37.4 17.8 34.4 21.7 15.6
3 37.2 40.2 37.3 17.5 34.2 21.4 15.2
2 36.6 39.6 35.6 18.5 33.6 20.1 16.0
1 35.9 39.1 34.9 18.4 32.6 18.6 16.3
SD-BYOL / 12 30.7 32.2 26.9 15.5 29.3 14.3 12.5
11 33.1 35.9 32.2 19.3 30.3 16.3 17.3
10 31.0 33.3 29.5 19.2 28.7 15.0 17.2
9 26.6 28.7 26.1 20.6 24.5 10.5 16.4
8 36.5 38.9 35.8 16.0 34.1 20.6 13.3
7 36.7 39.2 36.8 16.4 34.3 20.7 13.0
6 36.5 39.0 36.9 16.9 34.0 20.2 13.5
5 36.4 38.8 36.8 17.1 33.9 20.6 13.4
4 36.3 38.7 37.8 17.7 34.0 22.6 14.9
3 36.4 38.8 37.7 17.2 33.9 22.5 14.7
2 37.2 40.1 38.9 15.5 34.3 22.3 13.5
1 35.8 38.5 36.3 17.5 33.1 20.9 14.7
Table 9: Video Segmentation in BYOL. Results of video segmentation on each layer of BYOL and SD-BYOL.
Framework / Layer   JF-mean   J-Mean   J-Recall   J-Decay   F-mean   F-Recall   F-Decay
MoCo v3 /12 35.7 38.0 34.4 15.2 33.3 19.9 13.4
11 33.0 35.5 30.1 17.3 30.6 12.9 16.6
10 36.7 39.6 38.4 13.7 33.8 19.8 12.4
9 36.0 38.8 35.8 14.8 33.2 19.0 14.5
8 36.7 39.6 37.2 14.6 33.7 20.2 14.4
7 36.3 39.3 36.4 15.7 33.4 19.3 15.3
6 36.1 38.9 36.0 17.2 33.3 19.1 15.6
5 36.3 38.9 34.3 16.7 33.8 20.9 15.2
4 34.3 36.8 33.3 18.1 31.8 18.2 15.7
3 30.9 32.8 30.4 17.5 28.9 15.6 15.0
2 28.1 30.2 27.2 16.8 25.9 13.3 15.3
1 23.7 25.3 21.1 15.5 22.2 6.2 14.9
SD-MoCo v3 / 12 34.4 36.6 31.5 14.8 32.2 17.3 13.1
11 37.2 39.6 38.1 14.0 34.8 21.6 13.0
10 37.0 39.4 37.7 15.3 34.6 21.5 13.5
9 38.0 40.6 39.7 15.0 35.4 22.6 13.1
8 38.6 41.3 39.7 15.2 35.9 23.7 13.5
7 39.0 41.7 40.7 15.8 36.2 24.3 14.2
6 39.0 41.8 40.2 15.5 36.2 24.9 13.4
5 38.6 41.6 38.3 16.4 35.6 23.4 15.5
4 36.6 39.7 36.2 18.5 33.5 20.9 17.5
3 34.4 37.2 32.8 17.6 31.7 18.8 16.3
2 33.1 35.8 31.5 17.9 30.4 18.4 17.6
1 25.1 27.0 23.7 17.8 23.3 8.9 15.4
Table 10: Video Segmentation in MoCo v3. Results of video segmentation on each layer of MoCo v3 and SD-MoCo v3.

Appendix C Distillation in same view

In SDSSL, the low-layer representations of the student mimic the output representation of the teacher (a different view). However, as in prior self-distillation works [phuong2019distillation, zhang2019your], distillation can also be performed within the same view (the student's own output representation). We use the contrastive loss between the low-layer representations and the output representation of the same view instead of $\mathcal{L}_{isd}$ in SimCLR. Although there is an increase in performance compared to the baseline, the performance is not comparable to that of SD-SimCLR, as shown in Tab. 11.
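
In the notation of Algorithm 1, the two variants differ only in which output the intermediate representations are contrasted with; a sketch is shown below.

# cross-view intermediate self-distillation (SDSSL): intermediate layers of one view
# predict the output of the other view
loss_isd = ctr(q1_isd, z2, L - 1) + ctr(q2_isd, z1, L - 1)

# same-view variant (this appendix): intermediate layers predict the output of the same view
loss_isd_same = ctr(q1_isd, z1, L - 1) + ctr(q2_isd, z2, L - 1)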

Framework                                      k-NN
SimCLR                                         47.2
SD-SimCLR                                      49.0
SD-SimCLR ($\mathcal{L}_{isd}$ in same view)   48.4 (-0.6)
Table 11: Distillation in the same view. We train ViT-S/32 for 200 epochs on ImageNet using SimCLR. Distilling the output of the same view degrades the k-NN performance compared to SDSSL.