1 Introduction
The recent advances and breakthroughs in 1-bit convolutional neural networks (1-bit CNNs), also known as binary neural networks [courbariaux2016binarized, rastegari2016xnor], in which both weights and activations are binary, mainly lie in supervised learning [liu2020reactnet, martinez2020training]. Owing to their binary nature, such networks have been recognized as one of the most efficient and promising deep compression techniques for deploying models on resource-limited devices. Generally, as introduced in [rastegari2016xnor], BNNs can produce up to 32× memory compression and 58× practical computational reduction on a CPU or mobile device. Considering their immense potential for direct deployment on intelligent devices or low-power hardware, it is well worth further studying the behaviors of self-supervised BNNs, i.e., BNNs trained without human-annotated labels, both to better understand the properties of BNNs in academia and to extend the scope of their usage in industry and real-world applications.

Table 1: Top-1/Top-5 accuracy (%) on ImageNet across binary architectures.

|  | XNOR-Net (Reimpl.) [rastegari2016xnor] | Bi-Real Net [liu2018bi] | ReActNet [liu2020reactnet] |
|  | Top-1 / Top-5 | Top-1 / Top-5 | Top-1 / Top-5 |
| Supervised BNN | 51.200 / 73.200 | 56.400 / 79.500 | 69.400 / 88.600 |
| Contrastive Learning (MoCo V2), Real-valued | – / – | 50.296 / 75.206 | 60.776 / 82.830 |
| Contrastive Learning (MoCo V2), BNN (Baseline) | 23.880 / 44.690 | 42.816 / 67.712 | 46.922 / 70.712 |
| Contrastive Learning w/ Adam + lite aug. + progressive binarizing etc. (Ours), BNN | – / – | – / – | 52.452 / 76.080 |
| + Guided Learning (Ours), BNN | – / – | – / – | 56.022 / 79.168 |
| Guided Learning Only (Ours), BNN | 36.996 / 61.416 | 51.242 / 75.890 | 61.506 / 83.512 |
The goal of this paper is to study the mechanisms and properties of BNNs under the self-supervised learning scenario, and then to deliver practical details and guidelines on how to establish a strong self-supervised framework for them. To achieve this, we start by exploring the contrastive learning widely used in real-valued networks. Hence, our first question in this paper is: Is the well-performing contrastive learning in real-valued networks still suitable for self-supervised BNNs? Intuitively, binary networks differ from real-valued networks in both learning optimization and gradient back-propagation, since the weights and activations in BNNs are discrete, causing dissimilar predictions between the two types of networks, as illustrated in Fig. 1. We answer this question by exploring the optimizer (SGD or the adaptive Adam optimizer), learning rate scheduler, data augmentation strategies, etc., and give optimal designs for self-supervised BNNs. These non-trivial studies enable us to build a base solution that brings about 5.5% improvement over the naïve contrastive learning baseline.
Subsequently, we empirically observe that real-valued networks always achieve much better performance than BNNs in self-supervised learning (the comparison is given later). Many recent studies [liu2020reactnet, martinez2020training] have shown that BNNs have sufficient capacity to achieve accuracy as high as their real-valued counterparts in supervised learning, but an appropriate learning strategy is required to unleash the potential of binary networks. Our second question is thus: What are the essential causes of the performance gap between real-valued and binary neural networks in self-supervised learning? It is natural to believe that if we can expose the causes behind the inferior results and also find a proper method for training self-supervised BNNs that mitigates the poor accuracy, we can obtain far more competitive performance for self-supervised BNNs. Our discovery here is interesting: the distributions of predictions from BNNs and real-valued networks are significantly different, but after using a frustratingly simple teacher-student method to calibrate the latent representations of BNNs, their performance can be boosted substantially, with an extra 4% improvement.
Concretely, to maximize the representation ability of self-supervised BNNs, we propose to add an additional self-supervised real-valued network branch to guide the target binary network's learning. This resembles knowledge distillation, with the slight difference that our teacher is a self-supervised network and the final output is class-agnostic. We force the BNNs to mimic the final predictions of the real-valued model after the projection MLP head and the softmax operation. In our framework, we introduce a strategy that enables the BNNs to mimic the distribution of a real-valued reference network smoothly; we call this procedure guided distillation. Combining contrastive and guided learning is a natural idea for tackling this problem, yet intriguingly, we further observe that solely employing guided learning without the contrastive loss boosts the performance of the target model by an additional 5.5%. This is surprising since, intuitively, combining both seems the better choice. To shed further light on this observation, i.e., that contrastive learning is not necessary for directly training self-supervised BNNs, we study the learning mechanisms behind the contrastive and guided/distillation techniques and derive the insight that they focus on different aspects of feature representation: distillation forces BNNs to mimic the reference network's predictive probability, while contrastive learning tries to discover and learn latent patterns from the data itself. This paper does not argue that learning isolated patterns via contrastive learning is bad; rather, our experiments show that recovering knowledge from a well-learned real-valued network with extremely high accuracy is more effective and practical for self-supervised BNNs. An overview of our improvements over various architectures is shown in Table
1.

To summarize, our contributions in this paper are:

We are the first to study the problem of self-supervised binary neural networks. We provide many practical designs, including optimizer choice, learning rate scheduler, and data augmentation, which are useful for establishing a base framework of self-supervised BNNs.

We further propose a guided learning paradigm to boost the performance of self-supervised BNNs. We discuss the roles of contrastive and guided learning in our framework and study how best to use them.

Our proposed framework improves naïve contrastive learning by 5.5∼15% on ImageNet, and we further verify the effectiveness of our learned models on downstream datasets through transfer learning.
2 Related Work
Binary Neural Networks. Binary neural networks [courbariaux2016binarized, rastegari2016xnor, lin2017towards, liu2018bi, phan2020binarizing, martinez2020training, liu2020reactnet] have been widely studied in recent years. The first works can be traced back to EBP [soudry2014expectation] and BNNs [courbariaux2016binarized]. After that, many interesting works emerged. XNOR-Net [rastegari2016xnor] is a representative study that proposed real-valued scaling factors to multiply with each binary weight kernel; this has become a commonly used binarization strategy in the community and boosted the accuracy of BNNs significantly. Real-to-Binary [martinez2020training] adopted a better training scheme and an attention mechanism for the binarized activations and obtained better accuracy. ReActNet [liu2020reactnet] further studied non-linear activations for BNNs and built a strong baseline upon MobileNet [howard2017mobilenets], achieving very competitive performance on the large-scale ImageNet dataset.
Self-supervised Learning. Self-supervised learning (SSL) aims to learn internal distributions and representations automatically from data, without involving any human-annotated labels. Early works mainly stemmed from reconstructing input images from a latent representation, such as auto-encoders [vincent2008extracting] and sparse coding [olshausen1996emergence]. Following that, more and more studies focused on exploring and designing hand-crafted pretext tasks, such as image colorization [zhang2016colorful], jigsaw puzzles [noroozi2016unsupervised], rotation prediction [gidaris2018unsupervised], pretext-invariant representations [misra2020self], etc. Recently, contrastive-based visual representation learning [hadsell2006dimensionality] has attracted much attention in the community and achieved breakthroughs and promising results. Among these methods, MoCo [he2020momentum] and SimCLR [chen2020simple] are two representative recent approaches; many other interesting works [oord2018representation, hjelm2018learning, bachman2019learning, tian2019contrastive, shen2020rethinking, grill2020bootstrap, caron2020unsupervised] have also been proposed. In this paper, we show that distilling from a strong self-supervised teacher to an efficient binary student is more effective than training the binary student directly with contrastive learning. A concurrent study, SEED [fang2021seed], also employed a self-supervised distillation loss and can be considered contemporaneous work.

Self-supervised Learning on BNNs. To the best of our knowledge, no existing works focus on exploring BNNs under a self-supervised scheme. The approach proposed in this paper has very appealing advantages in this direction; we elaborate and validate it in the following sections. In the network quantization area, Vogel et al. [8714901] presented a non-retraining method for quantizing networks, which may be the closest work to our study. However, they used intermediate features of the network on valid input samples to supervise the quantization procedure, which is entirely different from our contrastive-based and guided learning paradigms.
3 Optimizer Effects of SSL on BNNs
Saturation on Activations and Gradients. We first introduce a simple yet interesting activation saturation phenomenon in BNNs: when the absolute value of an activation exceeds one, the corresponding gradient is suppressed to zero by the approximation used for the derivative of the sign function [ding2019regularizing]. We study this phenomenon to explain why the optimizer used in self-supervised methods, e.g., SGD in MoCo [he2020momentum], works well for real-valued networks but is not optimal for binary networks. This exploration helps us determine which optimizer is superior for our proposed method. From our observations, activation saturation emerges in most layers of a binary network and critically affects the magnitude of gradients across different channels. As shown in Fig. 2, we visualize the activation distributions of the first binary convolution layer of our networks. We can observe that, for a particular batch of input images, a large number of activations exceed the bounds of −1 and +1, which causes the gradients passing through those neurons to become zero-valued.
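As a minimal illustration (a numpy sketch of the common clipped straight-through-estimator formulation, not the paper's actual code), the approximate derivative of the sign function zeroes the gradient wherever an activation saturates:

```python
import numpy as np

def sign_ste_backward(activations, upstream_grad):
    """Approximate gradient of sign() via the clipped straight-through
    estimator: gradients pass only where |a| <= 1, so saturated
    activations (|a| > 1) receive exactly zero gradient."""
    mask = (np.abs(activations) <= 1.0).astype(activations.dtype)
    return upstream_grad * mask

a = np.array([-1.7, -0.3, 0.5, 2.4])          # two entries are saturated
g = sign_ste_backward(a, np.ones_like(a))
print(g)  # [0. 1. 1. 0.] -- the saturated entries are suppressed
```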
Different Optimizer Effects. The power of the Adam optimizer stems from the regularization effect of its second-order momentum, which we find is crucial to revitalize the "dead" weights, i.e., those receiving zero gradients due to activation saturation in BNNs, as introduced above. Interestingly, Adam can empower most of the weights to become active again and find a better optimum with higher generalization ability. The weight distributions of the first layer under SGD and Adam are visualized in Fig. 3. The red dotted lines are references at the value of 0.025. The green polylines are the norms of the weights in each output channel, shown for easier numerical comparison between SGD and Adam. It is obvious that Adam produces overwhelmingly larger weights than SGD, which suggests the weights optimized by SGD are not as good as those optimized with Adam.
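The contrast between the two update rules can be sketched as follows (a simplified numpy illustration with assumed default hyper-parameters, not necessarily the paper's settings). For a weight that only ever receives tiny gradients, e.g. because its downstream activations saturate, Adam's second-momentum normalization produces a far larger relative update than SGD with momentum:

```python
import numpy as np

def sgd_momentum_step(w, v, g, lr=0.03, gamma=0.9):
    """SGD accumulates only the first momentum of the gradient."""
    v = gamma * v + g
    return w - lr * v, v

def adam_step(w, m, v, g, t, lr=3e-4, b1=0.9, b2=0.999, eps=1e-8):
    """Adam also tracks the second momentum (squared gradients);
    dividing by its square root amplifies the effective step for
    weights whose historical gradients are small -- the 'dead' weights
    behind saturated activations."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

g = np.array([1e-6])                   # a tiny, nearly-suppressed gradient
w_sgd, _ = sgd_momentum_step(np.zeros(1), np.zeros(1), g)
w_adam, _, _ = adam_step(np.zeros(1), np.zeros(1), np.zeros(1), g, t=1)
print(abs(w_adam[0]) > abs(w_sgd[0]))  # True: Adam's step is far larger
```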
In contrast to the SGD optimizer, which only accumulates the first momentum, the adaptive Adam optimizer uses the accumulated second momentum to amplify the effective learning rate for gradients with small historical values. SGD with momentum updating helps accelerate convergence and dampen oscillations in the gradients; it can be formulated as:

$v_t = \gamma v_{t-1} + g_t, \quad w_t = w_{t-1} - \eta v_t$

where $g_t$ is the gradient, $\gamma$ is the exponential momentum rate, and $\eta$ is the learning rate. The updating rule in Adam is defined as:

$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \quad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2,$
$w_t = w_{t-1} - \eta\,\hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon), \quad \hat{m}_t = m_t/(1-\beta_1^t), \ \hat{v}_t = v_t/(1-\beta_2^t)$

where $m_t$ and $v_t$ denote exponential moving averages of the gradient and the squared gradient, respectively. By dividing by $\sqrt{\hat{v}_t}$, the square root of the uncentered gradient variance, the update value is normalized to alleviate the discrepancy in gradient magnitudes. Fig. 4 shows the accuracy comparisons in the linear evaluation stage with SGD- and Adam-trained backbones in self-supervised learning. It can be observed that, with SGD training, the accuracy decreases as learning rates become smaller; this tendency is consistent with the real-valued model. With Adam, in contrast, the accuracy increases dramatically at smaller learning rates, and the best final accuracy is much higher than the best result from SGD.

4 Data Augmentation Adjustments
Our data augmentation strategies mainly inherit from the baseline method MoCo V2 [chen2020improved]. In real-valued networks, heavier augmentations have proven useful in most cases of contrastive-based self-supervised learning. However, considering the limited capability of BNNs to recognize the same class across differently transformed images, instead of involving more data augmentations, we decrease the probabilities of the ColorJitter and GaussianBlur transformations to reduce the difficulty for BNNs of matching two views of the same image. Intriguingly, this lite data augmentation strategy brings an additional 1.0% improvement on ImageNet. This reflects that the properties of BNNs fundamentally differ from those of real-valued networks, so the configurations need to be reconsidered; it also demonstrates the value of studying self-supervised BNNs. More details are provided in Sec. 6.1.

5 Our Approach
Our roadmap in this paper has three main stages. Firstly, we follow the real-valued self-supervised method with contrastive loss while replacing particular configurations to fit the properties of BNNs, such as the optimizer, data augmentation, and learning rate. These strategies produce a 5.5% improvement over the vanilla MoCo V2 baseline. Then, we propose an additional guided learning method to enforce the representations of BNNs to be similar to those of a real-valued reference network. This simple strategy brings a further improvement of about 4%. Lastly, we remove the contrastive loss and solely optimize the BNNs with the guided learning paradigm, and the performance is further increased by 5.5%. The motivations and insights of our proposed method are discussed in the following sections.
5.1 Preliminaries
BNNs aim to learn networks in which both weights and activations take discrete values in {−1, +1}. In the forward propagation of training, the real-valued activations are binarized by the sign function:

$\mathcal{A}_b = \mathrm{Sign}(\mathcal{A}_r)$

where $\mathcal{A}_r$ is the real-valued activation of the previous layer, calculated from the binary or real-valued convolutional operations, and $\mathcal{A}_b$ is the binarized activation. The real-valued weights in the model are binarized through:

$\mathcal{W}_b = \frac{\|\mathcal{W}_r\|_{\ell 1}}{n}\,\mathrm{Sign}(\mathcal{W}_r)$

where $\mathcal{W}_r$ denotes the real-valued weights that are maintained as latent parameters to accumulate the tiny gradients, $n$ is the number of weights in each channel, and $\mathcal{W}_b$ denotes the weights after binarization. The binary weights are thus obtained by multiplying the sign of the latent real-valued weights with the channel-wise $\ell 1$-norm scaling factor ($\|\mathcal{W}_r\|_{\ell 1}/n$). The gradient is calculated with respect to the binary weights $\mathcal{W}_b$:

$\mathcal{W}_r^{t+1} = \mathcal{W}_r^{t} - \eta \frac{\partial \mathcal{L}}{\partial \mathcal{W}_b^{t}}$

where $t$ is the iteration index and $\eta$ is the learning rate.
Training BNNs is challenging since the gradient for optimizing the network parameters is approximated, and the capacity of the models for memorizing the data distribution is also limited. It is thus worth noting that, since the sign function has a bounded range, its approximated derivative suffers from a zero or vanishing gradient when the activations exceed the effective gradient range, i.e., $[-1, +1]$.
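A minimal numpy sketch of the channel-wise weight binarization described above (illustrative only; function and variable names are ours):

```python
import numpy as np

def binarize_weights(w_real):
    """XNOR-Net-style channel-wise binarization: each output channel's
    binary weights are Sign(W_r) scaled by that channel's mean absolute
    value (its l1-norm divided by n, the number of weights per channel).
    The real-valued latent weights w_real are kept elsewhere to
    accumulate the tiny gradient updates between iterations."""
    n = w_real[0].size                                  # weights per output channel
    alpha = np.abs(w_real).reshape(w_real.shape[0], -1).sum(axis=1) / n
    return alpha.reshape(-1, 1, 1, 1) * np.sign(w_real)

w = np.array([[[[0.2, -0.4], [0.1, -0.3]]]])            # 1 output channel, 2x2 kernel
print(binarize_weights(w))
# channel scale = (0.2 + 0.4 + 0.1 + 0.3) / 4 = 0.25, so entries are +/-0.25
```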
5.2 Real-Valued Guided Distillation
Self-supervised Contrastive Loss. Conventional contrastive learning uses a standard log-softmax function to identify one positive sample among $K$ negative samples, predicting the probability of the data distribution as:

$\mathcal{L}_{\mathrm{contrastive}} = -\log \frac{\exp\left(\mathrm{sim}(q, k_{+})/\tau\right)}{\sum_{i=0}^{K} \exp\left(\mathrm{sim}(q, k_{i})/\tau\right)}$  (1)

where $q$ and $k_{+}$ are representations of two random "views" of the same image under random data augmentation, $\mathrm{sim}(\cdot,\cdot)$ is the cosine similarity or another matching function for measuring the similarity of two representations, and $\tau$ is a temperature hyper-parameter.

Guided Learning with KL-divergence Loss.
The KL-divergence loss measures how much one probability distribution differs from a reference distribution. We train the BNNs by minimizing the KL-divergence between their output and the representation generated by a self-supervised real-valued reference model. The loss function can be formulated as:

$\mathcal{L}_{\mathrm{KL}} = \frac{1}{N} \sum_{i=1}^{N} \sum_{c} p^{\mathcal{R}}_{c}(\mathbf{x}_i) \log \frac{p^{\mathcal{R}}_{c}(\mathbf{x}_i)}{p^{\mathcal{B}}_{c}(\mathbf{x}_i)}$  (2)

where $N$ is the number of samples, and $p^{\mathcal{R}}$ and $p^{\mathcal{B}}$ are the temperature-softened output distributions of the real-valued reference model and the binary model, with the temperature as a hyper-parameter. Note that the data augmentation strategy should be the same for both the binary and real-valued models. In practice, since the reference model is fixed, we only optimize the cross-entropy term:

$\mathcal{L}_{\mathrm{CE}} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c} p^{\mathcal{R}}_{c}(\mathbf{x}_i) \log p^{\mathcal{B}}_{c}(\mathbf{x}_i)$  (3)

which differs from Eq. 2 only by the constant entropy of the reference distribution, following MEAL V2 [shen2020meal].
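The two losses can be sketched in numpy as follows (a simplified illustration of Eqs. 1–3, assuming ℓ2-normalized embeddings for the contrastive part; all names are ours, not the paper's code):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def contrastive_loss(q, k_pos, k_negs, tau=0.2):
    """InfoNCE-style loss (cf. Eq. 1): score the positive key against
    K negatives via a temperature-scaled log-softmax over similarities."""
    logits = np.concatenate(([q @ k_pos], k_negs @ q)) / tau
    return -np.log(softmax(logits)[0])

def guided_loss(student_logits, teacher_logits, tau=0.2):
    """Soft cross-entropy against the frozen teacher (cf. Eq. 3); equals
    the KL term of Eq. 2 plus the teacher's constant entropy."""
    p_t = softmax(teacher_logits / tau)
    p_s = softmax(student_logits / tau)
    return -(p_t * np.log(p_s)).sum(axis=-1).mean()

q = np.array([1.0, 0.0]); k_pos = np.array([1.0, 0.0])
k_negs = np.array([[0.0, 1.0], [-1.0, 0.0]])
l_con = contrastive_loss(q, k_pos, k_negs)                           # small: positive dominates
l_match = guided_loss(np.array([[2.0, -1.0]]), np.array([[2.0, -1.0]]))  # student matches teacher
l_off = guided_loss(np.array([[0.0, 0.0]]), np.array([[2.0, -1.0]]))     # student is uniform
```

As expected, the guided loss is minimized when the student's distribution matches the teacher's.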
5.3 Progressive Binarization
As illustrated in Fig. 5, there are many differences in the activation distributions between binary and real-valued networks across middle- and high-level layers, and real-valued activations always contain more fine-grained details and semantic information about the instance and background. As our purpose is to transfer the distributions of real-valued networks to binary networks, we propose a multi-step binarization procedure. The motivation behind this design is straightforward: as shown in Fig. 7, directly recovering the distribution of a real-valued network in a binary network is challenging, so to ease the optimization, we first keep part of the parameters in the target model real-valued and then binarize them progressively. This strategy is somewhat similar to [martinez2020training, liu2020reactnet], but we emphasize that those studies lie in supervised learning, whereas our objective is a self-supervised contrastive or distillation loss; hence the learning procedure and hyper-parameter design are entirely different from prior works.
In our method, the initial status is a completely real-valued network, and the intermediate status is a partially binarized network with real-valued weights and binary activations, as shown in Fig. 6. We first train such a network to obtain the real-valued parameters, then reuse these pre-trained parameters to initialize the final, completely binary model. Since binarization is modeled by the sign function during training, the binary model can inherit the real-valued parameters as its initialization. Our study shows that such multi-step binarization eases optimization for self-supervised training and yields significant improvement.
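The two-step schedule can be expressed as a small configuration sketch (flag and field names are hypothetical, not the paper's code; the per-step weight-decay choice follows the strategy discussed next):

```python
from dataclasses import dataclass

@dataclass
class BinarizeConfig:
    """One step of the progressive binarization schedule."""
    binary_activations: bool
    binary_weights: bool
    weight_decay: float

schedule = [
    # Step 1: binary activations, real-valued weights, no weight decay.
    BinarizeConfig(binary_activations=True, binary_weights=False, weight_decay=0.0),
    # Step 2: fully binary, initialized from step 1, weight decay enabled.
    BinarizeConfig(binary_activations=True, binary_weights=True, weight_decay=1e-4),
]

for step, cfg in enumerate(schedule, 1):
    print(f"step {step}: {cfg}")
    # model = build_model(cfg)                       # hypothetical helpers
    # if step == 2: model.load_state_dict(step1_weights)
    # train(model, weight_decay=cfg.weight_decay)
```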
Weight Decay Strategy. Weight decay is a widely used technique for preventing networks from over-fitting. We observe that it is necessary to adopt an appropriate weight decay strategy in the different steps of our multi-step training. Since the intermediate status is only a transitional phase, its remaining real-valued weights make the model's capacity larger than that of the final completely-binary network; thus weight decay is not employed in the first step (or a smaller value is chosen in this phase). In the second step, weight decay is adopted to avoid over-fitting.
5.4 What Happens If Removing Contrastive Loss
Since adding the guided learning term brings a substantial improvement, we are curious whether guided learning alone is capable of learning good representations. Our observation is surprising: removing the contrastive loss gives an additional 5.5% improvement. We conjecture this is because contrastive and guided learning optimize in basically different directions. Guided learning mimics the real-valued high-quality representation and recovers the knowledge stored in it, so if the reference model is strong enough, the target BNN can perform extremely well; contrastive learning instead learns from the data itself and explores different patterns (e.g., instance discrimination) than guided learning does. Therefore, in this work we study the following three schemes, as shown in Fig. 8:
①: Enhanced baseline of contrastive learning.
②: Contrastive + guided learning (distillation).
③: Guided learning (distillation) only.
Where does the self-supervised real-valued network come from? There are two ways to obtain the real-valued reference network: (1) online training together with the target BNN; (2) offline pre-training. As shown in Fig. 9, if we train "stage 1" and "stage 2" together, this is the online scheme, and the real-valued network is optimized together with the binary network. However, the learning cost increases significantly, since each individual run must include an additional real-valued branch. A simpler and more efficient way is to train "stage 1" offline in advance, then reuse it for all experiments. We utilize the offline strategy in all our experiments. Based on the observation from MEAL V2 [shen2020meal] that better teachers usually distill better students, we choose a MoCo V2 pre-trained real-valued ResNet-50 as our strong teacher model.
6 Experiments
In this section, we first introduce the datasets we used and implementation details for selfsupervised pretraining, linear evaluation and transfer learning. Then, we provide extensive ablation studies for each component of our method. Following that, we show our main and transfer results. Lastly, we illustrate some visualizations to further demonstrate the effectiveness of our method.
6.1 Datasets and Implementation Details
Datasets. Our experiments are conducted on the widely-used large-scale ImageNet 2012 dataset [deng2009imagenet], which contains 1,000 classes with 1.2 million training images and 50,000 validation images. For transfer learning, we use the PASCAL VOC2007 [everingham2010pascal], CUB-200-2011 [wah2011caltech], Birdsnap [berg2014birdsnap] and CIFAR-10/100 [krizhevsky2009learning] benchmarks.

Data Augmentation. As mentioned above, our basic data augmentation follows MoCo V2 [chen2020improved] with no additional operations, but we reduce the probability of ColorJitter from 0.8 to 0.6 and of GaussianBlur from 0.5 to 0.2. We apply this lite data augmentation strategy to all of our experiments.
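The probability reduction can be sketched generically as follows (a stand-in for torchvision-style RandomApply wrappers; function and key names are ours):

```python
import random

def lite_augmentation_fires(p_colorjitter=0.6, p_gaussianblur=0.2):
    """Decide which stochastic transforms fire for one augmented view.
    The lite strategy keeps MoCo V2's recipe but applies ColorJitter
    with probability 0.6 (down from 0.8) and GaussianBlur with 0.2
    (down from 0.5)."""
    return {
        "color_jitter": random.random() < p_colorjitter,
        "gaussian_blur": random.random() < p_gaussianblur,
    }

random.seed(0)
counts = {"color_jitter": 0, "gaussian_blur": 0}
for _ in range(10000):
    for name, fired in lite_augmentation_fires().items():
        counts[name] += fired
print(counts)  # roughly 6000 ColorJitter and 2000 GaussianBlur activations
```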
Self-supervised Pre-training. We adopt MoCo V2 [chen2020improved] as the baseline self-supervised method. For our distillation solution, we use none of the momentum update, shuffling BN, memory bank (negative pairs), or contrastive loss. The initial learning rates are 0.03 for SGD, following [chen2020improved], and 3×10⁻⁴ for Adam; they are reduced with a linear decay: lr = initial_lr × (1 − epoch / total_epochs).
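The linear decay can be written directly (a trivial sketch; the initial rate in the example is arbitrary, chosen only for illustration):

```python
def linear_decay_lr(initial_lr, epoch, total_epochs):
    """Linear learning-rate decay: lr = initial_lr * (1 - epoch / total_epochs)."""
    return initial_lr * (1.0 - epoch / total_epochs)

# Halfway through a 200-epoch run, the rate is half the initial value.
print(linear_decay_lr(0.03, 100, 200))  # 0.015
```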
The temperature τ is set to 0.2 for both the contrastive and distillation losses. Unless otherwise specified, all networks are trained for 200 epochs.

Linear Evaluation. We freeze all the parameters in the backbone and train a supervised linear classifier using the conventional self-supervised evaluation protocol [chen2020improved, chen2020simple, shen2020rethinking]. We train for 100 epochs; all other hyper-parameters follow the baseline method [chen2020improved].
Transfer Learning. We fine-tune the entire network using the weights of our learned models as initialization. We train for 180 epochs with a batch size of 128 and an initial learning rate of 0.01. For PASCAL VOC multi-object classification, we adopt the sigmoid cross-entropy loss instead of the softmax one. We use SGD with a momentum of 0.9 and a weight decay of 0.0001. We perform standard random crops with resizing and flips as data augmentation during fine-tuning. The training image size is 224×224. At test time, we resize images to 256 pixels and take a 224×224 center crop. When freezing the backbone, we train only the last linear layer, following the standard linear evaluation protocol.

6.2 Ablation Studies
Optimizers. We study the standard SGD and the adaptive Adam optimizer in the pre-training stage. Our results in Fig. 4 show that Adam brings about a 2.8% improvement.
Learning Rate Scheduler. Here the learning rate refers to the linear evaluation stage; in the pre-training stage, we use the uniform learning rates presented above. The results are shown in Table 2, where we provide results for our three schemes over an lr range from 30 down to 0.05. It can be observed that 0.1 is the optimal choice for the Adam-optimized models.
lr  ①  ②  ③ 

30  44.248  47.918  54.518 
20  44.454  48.570  54.986 
10  45.662  50.054  56.050 
5  47.942  51.808  57.868 
1  49.702  53.324  59.838 
0.5  49.914  53.468  59.968 
0.1  49.870  53.484  60.418 
0.05  49.228  52.926  60.304 
Data Augmentation Effects. Using our proposed lite data augmentation strategy, the Top-1/5 results improve from 49.402/73.152 to 50.410/73.968 on ImageNet.
①  ②  ③  

Complete in one step  49.914  53.484  60.418 
Multistep binarization  52.452  56.022  61.506 
Binary Methods  #Epoch  BOPs  FLOPs  OPs  Acc (%)  
(×10⁹)  (×10⁸)  (×10⁸)  Top-1  
Supervised Learning:  
BNNs [courbariaux2016binarized]  –  1.70  1.20  1.47  42.2  
XNORNet [rastegari2016xnor]  –  1.70  1.41  1.67  51.2  
MobiNet [phan2020mobinet]  –  –  –  0.52  54.4  
BiRealNet18 [liu2018bi]  –  1.68  1.39  1.63  56.4  
PCNN [gu2019projection]  –  –  –  1.63  57.3  
CIBCNN [wang2019learning]  –  –  –  1.63  59.9  
Binary MobileNet [phan2020binarizing]  –  –  –  1.54  60.9  
RealtoBinary [martinez2020training]  –  1.68  1.56  1.83  65.4  
MeliusNet29 [bethge2020meliusnet]  –  5.47  1.29  2.14  65.8  
ReActNet [liu2020reactnet]  –  4.82  0.12  0.87  69.4  
SelfSupervised Learning:  
MoCo V2 [chen2020improved] (baseline)  200  4.82  0.12  0.87  46.9  
Ours  ①  200  4.82  0.12  0.87  52.5 
②  200  4.82  0.12  0.87  56.0  
③  200  4.82  0.12  0.87  61.5 
Multi-step Binarization. Our results are shown in Table 3; the proposed multi-step strategy generally obtains better accuracy. However, the improvement diminishes as the base performance becomes higher, i.e., from ① to ③.
Different Architectures and Strategies. The results with different backbones are shown in Table 1; we choose XNOR-Net [rastegari2016xnor], Bi-Real Net [liu2018bi] and ReActNet [liu2020reactnet] as backbones for this ablation study. We obtain substantial improvements over all of these architectures.
VOC2007  CUB2002011  Birdsnap  CIFAR10  CIFAR100  
From Scratch (Realvalued)  72.7  29.8  46.2  93.1  70.9 
From Scratch (Binary)  50.0  –  –  65.9  37.2 
Finetune:  
MoCo V2 Realvalued (baseline 1)  89.6  67.3  63.6  95.3  79.3 
MoCo V2 Binary (baseline 2)  81.0  34.4  34.0  89.9  69.5 
Ours (①)  82.3  38.2  38.0  91.5  71.9 
Ours (②)  83.5  40.5  39.2  91.3  72.3 
Ours (③)  86.9  50.1  45.7  92.7  74.3 
Freeze backbone:  
MoCo V2 Realvalued (baseline 1)  86.5  51.5  22.8  86.9  60.7 
MoCo V2 Binary (baseline 2)  79.8  23.3  20.3  79.3  56.7 
Ours (①)  81.7  33.1  21.9  80.4  58.7 
Ours (②)  83.1  38.4  25.6  80.7  58.8 
Ours (③)  86.4  47.5  34.1  82.7  61.9 
6.3 Main Results
A summary of our main results is shown in Table 4; we adopt ReActNet as our backbone network. Compared to the self-supervised baseline MoCo V2, our method outperforms it by 14.6% with the same number of training epochs. Promisingly, our results are even comparable to some recently proposed supervised methods, such as Bi-Real Net-18 [liu2018bi] and CI-BCNN [wang2019learning], while requiring only about half of their OPs. These results demonstrate the great potential of our self-supervised BNN method for real-world applications where annotations and memory are both scarce.
Visualization. To better understand where the improvement of our distillation method comes from, we further visualize the activation maps of contrastive- and guided-learned models at the same layers. As shown in Fig. 10, in each group we visualize the first 64 channels of those layers. Visually, the quality of the activation maps clearly improves from contrastive learning to guided learning, with more details, and the guided results are closer to the real-valued ones.
More Training Epochs. In the real-valued self-supervised scenario, a larger training budget usually yields a significant improvement; for example, SwAV [caron2020unsupervised] achieves a 0.7% gain when going from 200 to 400 training epochs. However, when we train our model for 400 epochs, the improvement is marginal (from 61.5% to 61.8%). We conjecture this is because our distillation-based framework uses neither positive nor negative pairs; the binary student basically recovers the teacher's capability, so it is bounded by the teacher's ability rather than the training budget.
Training Cost Analysis. Compared to the selfsupervised baseline method, our main extra training cost is the learning procedure of generating the selfsupervised realvalued model. As we adopt the offline strategy in our framework, we only need to train it once, hence if not considering this pretraining process, our total computational cost is nearly the same as the baseline MoCo V2.
Why Solely Using the Distillation Loss Is Better Than Combining It with the Contrastive Loss for Self-supervised BNNs. Intuitively, the distillation loss forces BNNs to mimic the reference network's predictive probability, while contrastive learning tends to discover latent patterns from the data itself. Fig. 10 shows that, in the binary scenario, the contrastive loss is weaker than the distillation loss at learning fine-grained representations, and the learned semantics are also vaguer. Combining both may therefore not be an optimal solution, due to the discrepancy between their optimization spaces.
6.4 Transfer Learning
It is critical to further verify the transferability of the parameters learned by our different schemes. We follow the conventional self-supervised fine-tuning evaluation protocol for this study. A summary of our transfer results is provided in Table 5; all network structures in this table are the MobileNet-like ReActNet. Our results for ① can be regarded as the stronger contrastive baseline. "From scratch" denotes training networks from randomly initialized parameters, shown for reference. Generally, our transfer results are consistent with the linear evaluation performance on ImageNet. In particular, the improvement from ② to ③ is dramatically higher than that from ① to ② across different datasets. Moreover, our best result is even close to the self-supervised real-valued baseline.
7 Conclusion
It is worthwhile to consider how to train a robust and accurate self-supervised binary network. In this work, we have summarized and explained several behaviors observed while training such networks without labels. We focused on how the optimizer, learning rate scheduler, and data augmentation shape representations and affect performance when building a base BNN framework. We further proposed a guided learning paradigm in which a real-valued reference network distills the target binary network, and showed that this learning strategy obtains better results than both contrastive learning and even some supervised BNN schemes. We attribute the superiority of the proposed training scheme to its ability to mimic the high-quality representation of the reference network. Finally, we performed extensive ablation experiments on each component of our method; the details of design and implementation have a large impact on final performance. Moreover, our trained parameters can serve many downstream tasks that depend on a good representation, such as fine-grained recognition and multi-object classification.