It is common sense that how you look at an object does not change its identity. Nonetheless, Jorge Luis Borges imagined the alternative. In his short story "Funes the Memorious", the titular character becomes bothered that a "dog at three fourteen (seen from the side) should have the same name as the dog at three fifteen (seen from the front)". The curse of Funes is that he has a perfect memory: every new way he looks at the world reveals a percept minutely distinct from anything he has seen before, and he cannot collate the disparate experiences.
Most of us, fortunately, do not suffer from this curse. We build mental representations of identity that discard nuisances like time of day and viewing angle. The ability to build up view-invariant representations is central to a rich body of research on multiview learning. These methods seek representations of the world that are invariant to a family of viewing conditions. Currently, a popular paradigm is contrastive multiview learning, where two views of the same scene are brought together in representation space, and two views of different scenes are pushed apart.
This is a natural and powerful idea, but it leaves open an important question: "which viewing conditions should we be invariant to?" It's possible to go too far: if our task is to classify the time of day, then we certainly should not use a representation that is invariant to time. Or, like Funes, we could not go far enough: representing each specific viewing angle independently would cripple our ability to track a dog as it moves about a scene.
We therefore seek representations with enough invariance to be robust to inconsequential variations but not so much as to discard information required by downstream tasks. In contrastive learning, the choice of "views" controls the information the representation captures, as the framework results in representations that focus on the shared information between views. Views are commonly different sensory signals, like photos and sounds, different image channels, or different slices in time, but may also be different "augmented" versions of the same data tensor. If the shared information is small, the learned representation can discard more information about the input and achieve a greater degree of invariance to nuisance variables. How can we find the right balance of views that share just the information we need, no more and no less?
We investigate this question in two ways. First, we demonstrate that the optimal choice of views depends critically on the downstream task. If you know the task, it is often possible to design effective views. Second, we empirically demonstrate that for many common ways of generating views, there is a sweet spot in terms of downstream performance where the mutual information (MI) between views is neither too high nor too low.
Our analysis suggests an "InfoMin principle": a good set of views are those that share the minimal information necessary to perform well at the downstream task. This idea is related to minimal sufficient statistics and the Information Bottleneck theory [67, 2], which have been previously articulated in the representation learning literature. This principle also complements the already popular "InfoMax principle", which states that a goal in representation learning is to capture as much information as possible about the stimulus. We argue that maximizing information is only useful insofar as that information is task-relevant. Beyond that point, learning representations that throw out information about nuisance variables is preferable, as it can improve generalization and decrease sample complexity on downstream tasks.
Based on our findings, we also introduce a semi-supervised method to learn views that are effective for learning good representations when the downstream task is known. We additionally demonstrate that the InfoMin principle can be practically applied by simply seeking stronger data augmentation to further reduce mutual information toward the sweet spot. This effort results in state-of-the-art accuracy on a standard benchmark.
Our contributions include:
Demonstrating that optimal views for contrastive representation learning are task-dependent.
Empirically finding a U-shaped relationship between an estimate of mutual information and representation quality in a variety of settings.
A new semi-supervised method to learn effective views for a given task.
Applying our understanding to achieve state-of-the-art accuracy on the ImageNet linear readout benchmark with a ResNet-50.
2 Related Work
Learning high-level representations of data that can be used to predict labels of interest is a well-studied problem in machine learning. In recent years, the most competitive methods for learning representations without labels have been self-supervised contrastive representation learning [50, 30, 73, 65, 61, 8]. These methods use neural networks to learn a low-dimensional embedding of data with a "contrastive" loss that pushes apart dissimilar data pairs while pulling together similar pairs, an idea similar to exemplar learning. Models based on contrastive losses have significantly outperformed other approaches based on generative models, smoothness regularization, dense prediction [78, 37, 52, 65], and adversarial losses.
The core idea of contrastive representation learning is to learn a function (modeled by a deep network) that maps semantically nearby points (positive pairs) closer together in the embedding space, while pushing apart points that are dissimilar (negative pairs). One of the major design choices in contrastive learning is how to select the positive and negative pairs. For example, given a dataset of i.i.d. images, how can we synthesize positive and negative pairs?
The standard approach for generating positive pairs without additional annotations is to create multiple views of each datapoint. For example: splitting an image into luminance and chrominance , applying different random crops and data augmentations [73, 8, 4, 26, 75, 62], pasting an object into different backgrounds , using different timesteps within a video sequence [50, 80, 57, 25, 24], or using different patches within a single image [32, 50, 30]. Negative pairs can be generated by using views that come from randomly chosen images/patches/videos. In this work, we provide experimental evidence and analysis that can be used to guide the selection and learning of views.
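As a concrete illustration, the standard recipe of generating positive pairs by augmenting the same image twice can be sketched in a few lines of numpy. The crop size, jitter range, and function names below are illustrative stand-ins, not the augmentations of any particular method:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_augment(img, rng):
    """A toy augmentation: random 24x24 crop plus brightness jitter."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - 24 + 1)
    left = rng.integers(0, w - 24 + 1)
    crop = img[top:top + 24, left:left + 24]
    return np.clip(crop + rng.uniform(-0.1, 0.1), 0.0, 1.0)

def make_pairs(batch, rng):
    """Positive pairs: two views of the same image. Negatives: views of different images."""
    v1 = np.stack([random_augment(x, rng) for x in batch])
    v2 = np.stack([random_augment(x, rng) for x in batch])
    return v1, v2  # (v1[i], v2[i]) is positive; (v1[i], v2[j]), j != i, are negatives

batch = rng.uniform(0.0, 1.0, size=(4, 32, 32, 3))
v1, v2 = make_pairs(batch, rng)
```

Negative pairs then come for free by cross-pairing views of different images within the batch.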
Theoretically, we can think of the positive pairs as coming from a joint distribution over views, $p(v_1, v_2)$, and the negative pairs as coming from the product of marginals $p(v_1)p(v_2)$. For the most popular contrastive loss, InfoNCE, the objective is then a lower bound on the mutual information between the two views, $I(v_1; v_2)$. This connection between contrastive learning and mutual information maximization was first made in CPC and has been discussed further since. However, recent work has called into question the interpretation of the success of the InfoNCE contrastive loss as information maximization, instead arguing that success is due to geometric requirements on the embedding space. Furthermore, theoretical and experimental work has highlighted that estimating mutual information in high dimensions is challenging, and empirical work has shown that InfoNCE and other bounds used in practice can be quite loose [43, 55, 51].
Leveraging labeled data in contrastive representation learning has been shown to guide representations towards task-relevant features that improve performance [76, 29, 4, 34, 72]. Here we leverage labeled data only to learn better views, and still perform contrastive representation learning using unlabeled data only. Future work could combine these approaches to leverage labels for both view learning and representation learning.
Recent work has begun to study and address the question we study here: what views lead to improved downstream accuracy? In prior work, compositions of data augmentations were investigated for their effectiveness. Most similar to our work, a recent unpublished tech report presents several desiderata for views in contrastive representation learning similar to our discussion of sufficiency and minimality, and presents new bounds on MI for alternative negative sampling schemes.
3 Preliminary: Contrastive Representation Learning
Let us consider the case where we are given two random variables $v_1$ and $v_2$, and we wish to learn a parametric function to discriminate between samples from the empirical joint distribution $p(v_1, v_2)$ and samples from the product of marginals $p(v_1)p(v_2)$. The resulting function is an estimator of the mutual information $I(v_1; v_2)$, and the InfoNCE loss has been shown to maximize a lower bound on it. In practice, given an anchor point $v_1^i$, the InfoNCE loss is optimized to score the correct positive $v_2^i$ higher compared to a set of distractors $\{v_2^j\}$:

$$\mathcal{L}_{NCE} = -\mathbb{E}\left[\log \frac{e^{h(v_1^i, v_2^i)}}{\sum_{j} e^{h(v_1^i, v_2^j)}}\right]$$
The score function $h(\cdot, \cdot)$ typically consists of two encoders ($f_1$ and $f_2$) and a critic head. The two encoders $f_1$ and $f_2$ may or may not share parameters depending on whether $v_1$ and $v_2$ are from the same domain. Minimizing the above InfoNCE loss is equivalent to maximizing a lower bound (known as $I_{NCE}$) on the mutual information between $v_1$ and $v_2$ [50, 30], i.e., with $K$ negatives,

$$I(v_1; v_2) \ge \log(K) - \mathcal{L}_{NCE} \triangleq I_{NCE}(v_1; v_2)$$
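The loss and its behavior can be checked numerically. Below is a minimal numpy sketch of InfoNCE, assuming a simple dot-product critic $h(u, v) = u^\top v$ instead of learned encoders and a critic head:

```python
import numpy as np

def info_nce(z1, z2):
    """InfoNCE with a dot-product critic h(u, v) = u.v.
    Row i of z1 pairs positively with row i of z2; other rows serve as negatives."""
    scores = z1 @ z2.T                                   # [N, N] critic scores
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))                # -E[log p(positive)]

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_matched = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # aligned views
loss_random = info_nce(z, rng.normal(size=z.shape))              # unrelated views
# Since I(v1; v2) >= log N - L_NCE, a lower loss implies a higher MI estimate:
assert loss_matched < loss_random
```

Strongly dependent views yield a much smaller loss (higher $I_{NCE}$) than independent ones, matching the bound above.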
In practice, $v_1$ and $v_2$ are two views of the data. Engineering tricks aside, recent contrastive representation learning methods can be considered as different ways to construct $v_1$ and $v_2$: (a) InstDis, AMDIM, MoCo and SimCLR apply data augmentation to the same image to obtain two crops as views; (b) CMC employs natural image channels; (c) CPC and CPCv2 leverage spatial or temporal co-occurrence; (d) in the video domain, [64, 44, 39] use a video as $v_1$ and aligned text as $v_2$, while [47, 10, 53] leverage the correspondence between video and audio; (e) contrastive knowledge distillation even considers representations from teacher and student networks as $v_1$ and $v_2$.
4 What Are the Optimal Views for Contrastive Learning?
Given two views $v_1$ and $v_2$ of the data, the encoders $f_1$ and $f_2$ in the contrastive learning framework extract representations $z_1 = f_1(v_1)$ and $z_2 = f_2(v_2)$, respectively.
(Sufficient Encoder) The encoder $f_1$ of $v_1$ is sufficient in the contrastive learning framework if and only if $I(v_1; v_2) = I(f_1(v_1); v_2)$.
Intuitively, the encoder $f_1$ is sufficient if the information in $v_1$ regarding the contrastive objective is preserved losslessly during the encoding procedure. In other words, the representation $z_1$ has kept all the information about $v_2$ that was in $v_1$, and is therefore as useful as $v_1$ itself. Symmetrically, we say $f_2$ is sufficient if $I(v_1; v_2) = I(v_1; f_2(v_2))$.
(Minimal Sufficient Encoder) A sufficient encoder $f_1$ of $v_1$ is minimal if and only if $I(f_1(v_1); v_1) \le I(f(v_1); v_1)$ for all encoders $f$ that are sufficient.
Among those encoders which are sufficient, the minimal ones only extract relevant information of the contrastive task and will throw away other information. This is appealing in cases where the views are constructed in such a way that all the information we care about is shared between them.
The representations learned in the contrastive framework are typically used in a different downstream task. To characterize what representations are good for a downstream task, we define the optimality of representations. To simplify notation, we use $v$ to mean either $v_1$ or $v_2$. (Optimal Representation of a Task) For a task $\mathcal{T}$ whose goal is to predict a semantic label $y$ from the input data $x$, the optimal representation $z^*$ encoded from $v$ is the minimal sufficient statistic with respect to $y$.
This says that a model built on top of $z^*$ has all the information necessary to predict $y$ as accurately as if it had access to $v$. Furthermore, $z^*$ maintains the smallest complexity, i.e., it contains no information besides that about $y$, which makes it more generalizable. We refer the reader to prior work for a more in-depth discussion of optimal visual representations and minimal sufficient statistics.
4.1 InfoMin Principle: Views that Only Share Label Information
While views $v_1$ and $v_2$ can be arbitrarily constructed or selected from the data as we like, the effectiveness of views may vary across tasks. For instance, views that share object position as mutual information should lead to better object localization performance in a downstream task than views that only share background nuisances. A general intuition is that good views should only share information that is relevant to the target downstream task. Given such views, minimal sufficient encoders will extract latent representations that only store the information shared between them, which is task-relevant. The following InfoMin proposition articulates which views are optimal, supposing that we know the specific downstream task in advance.
Suppose $f_1$ and $f_2$ in the contrastive learning framework are minimal sufficient encoders. Given a downstream task $\mathcal{T}$ associated with label $y$, the optimal views created from the complete data tensor $x$ are $(v_1^*, v_2^*) = \arg\min_{v_1, v_2} I(v_1; v_2)$, subject to $I(v_1; y) = I(v_2; y) = I(x; y)$. Given $v_1^*$ and $v_2^*$, the learned representation $z_1^*$ (or $z_2^*$) is optimal for $\mathcal{T}$ (Definition 4), thanks to the minimality and sufficiency of $f_1$ and $f_2$.
This InfoMin principle has two implications. First, we should reduce the mutual information between views. By doing so, the minimal sufficient encoders will throw away more information that is irrelevant (nuisance factors) to the downstream task. In practice, "shortcuts" are one type of task-irrelevant nuisance and should be removed from the views. Furthermore, bits of information that are useful for one downstream task may turn into nuisances for another. Second, the constraint $I(v_1; y) = I(v_2; y) = I(x; y)$ suggests that we should retain the predictability of $y$ from both $v_1$ and $v_2$, such that the representations capture the semantics of the downstream task $\mathcal{T}$. This constraint dispels the potential impression that we should make the contrastive task as hard as possible to obtain better representations. For a proof of this proposition, please refer to the Appendix.
An example of creating optimal views following this principle, in an image classification task, is to treat images from the same class as congruent pairs and images from different classes as incongruent pairs. In this way, congruent pairs of views only share label information. Recently, such optimal views have been leveraged for supervised contrastive learning, outperforming supervised models trained with cross-entropy loss on ImageNet.
4.2 A Toy Example: Colorful Moving-MNIST
Directly analyzing natural images can be challenging, as it is hard to create interesting views whose factors of variation are controllable. Therefore, we use a toy dataset as a starting point to understand the behavior of contrastive representation learning with different views. Moving-MNIST consists of videos where digits move inside a canvas at constant speed and bounce off image boundaries. To simulate a more complex dataset with nuisance factors of variation, we construct Colorful-Moving-MNIST by adding a background image to the Moving-MNIST videos. Concretely, given a video, a random image from the STL-10 dataset is selected, and for each frame of the video we randomly crop a patch from this image as background. Each frame of the dataset thus contains three factors of variation: the class of the digit, the position of the digit, and the class of the background image.
Setup. While there are many ways to construct views, our goal is to analyze a set of easily reproducible experiments. To this end, we fix the first view $v_1$ as a sequence of past frames and construct different second views $v_2$; for simplicity, we consider $v_2$ to be a single image. One example of constructing such views is shown in Fig. 2, where we use a future frame of the video as $v_2$. During the contrastive learning phase, we employ a 4-layer ConvNet to encode images and a single-layer LSTM on top of the ConvNet to aggregate features of consecutive frames. After the contrastive pre-training phase, to read off what information has been encoded in the representation, we consider three downstream tasks for an image: (1) predict the digit class; (2) localize the digit inside the canvas; (3) classify the background into one of the 10 classes of STL-10. In this transfer phase, we freeze the backbone network and learn a task-specific head. To facilitate comparison, we also provide a "supervised" baseline that is trained end-to-end using the same data.
[Table 1: downstream performance for different factors shared between views; e.g., sharing (bkgd, digit, pos) yields 88.8, 56.3, and 16.2 on the three downstream tasks.]
Single Factor Shared. To begin with, we create views $v_1$ and $v_2$ that only share one of the three factors: digit, position, or background. We set $v_1$ to a sequence of past frames, and create another image as $v_2$ by setting one of the three factors the same as in the future frame while randomly picking the other two. In such cases, $v_1$ can deterministically predict only one of the three factors in $v_2$, and never reduces the uncertainty of the other two factors. In other words, the information shared between $v_1$ and $v_2$ is exactly one of digit, position, or background
. We separately train encoders with contrastive learning for each of the three scenarios, and then perform transfer learning on the three downstream tasks, as shown in Table 1. These results clearly show that downstream performance is significantly affected by how we construct $v_2$, which determines $I(v_1; v_2)$. Specifically, if the downstream task is relevant to one factor, we should let $v_2$ share this factor rather than the others. For example, when $v_2$ only shares the background image with $v_1$, contrastive learning can hardly learn representations that capture digit class and location. This is expected, since information about digit class and location is of no use to the contrastive pre-training objective and thus will not be captured.
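The factor-controlled view construction above can be mimicked in a few lines. The factor names and value ranges below are illustrative stand-ins for the actual rendered frames:

```python
import random

random.seed(0)
DIGITS, POSITIONS, BACKGROUNDS = range(10), range(16), range(10)

def sample_frame(rng):
    """A frame is fully described by its three factors of variation."""
    return {"digit": rng.choice(DIGITS),
            "pos": rng.choice(POSITIONS),
            "bkgd": rng.choice(BACKGROUNDS)}

def make_v2(frame, shared, rng):
    """Copy only the factors in `shared` from v1's frame and resample the rest,
    so the information shared between views is exactly the chosen factors."""
    v2 = sample_frame(rng)
    for factor in shared:
        v2[factor] = frame[factor]
    return v2

v1 = sample_frame(random)
v2 = make_v2(v1, shared={"digit"}, rng=random)  # views share only the digit class
```

Passing `shared={"digit", "pos"}` instead reproduces the multi-factor settings discussed next.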
Multiple Factors Shared. A more interesting question is how representation quality changes if $v_1$ and $v_2$ share multiple factors. We follow a similar procedure as above to control the factors shared by $v_1$ and $v_2$, and present the results in the second half of Table 1. We find that one factor can overwhelm another; for instance, whenever the background is shared by the two views, the latent representation leaves out the information needed for discriminating or localizing digits. This might be because the information bits of the background easily dominate those of the digit class and position, and the network chooses the background as a "shortcut" to solve the contrastive pre-training task. When $v_1$ and $v_2$ share digit and position, we make two interesting observations: (1) digit dominates position, as the digit localization task still performs poorly; (2) sharing position information benefits digit classification, reducing the error rate relative to sharing only the digit. The former may not be a surprise, as ConvNets are designed to be insensitive to position shifts. For the latter, we conjecture that in practice the encoder is not sufficient, so it loses some bits of information about the digits, and knowing the position of the digits helps it capture more bits about the digit class.
5 A Sweet Spot in Mutual Information: Reverse-U Shape
As suggested in Proposition 4.1, to obtain good performance on the downstream task, we should reduce the mutual information between views while retaining the task-relevant semantics; in other words, we should remove task-irrelevant information shared between views. In this section, we first discuss a hypothesis for the effect of $I(v_1; v_2)$ on downstream transfer performance, and then empirically analyze three practical cases of reducing $I(v_1; v_2)$.
5.1 Three Regimes of Information Captured
As both views are generated from the input $x$, we can constrain the information between views by constraining how much information about $x$ is present in each view: $I(x; v_1)$ and $I(x; v_2)$. By the data processing inequality, the information shared between views is bounded by the information each view contains about the input: $I(v_1; v_2) \le \min(I(x; v_1), I(x; v_2))$. As our representations are built from the views and learned by the contrastive objective with minimal sufficient encoders, the amount and type of information shared between $v_1$ and $v_2$ (i.e., $I(v_1; v_2)$) determines how well we perform on downstream tasks.
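The data processing inequality is easy to verify on a toy discrete example. The variables below are illustrative: the input is two independent bits, and each view is a deterministic function of the input:

```python
import numpy as np

def mutual_info(joint):
    """I(A; B) in bits, computed from a joint count/probability table."""
    joint = joint / joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

def joint_table(a, b):
    t = np.zeros((int(a.max()) + 1, int(b.max()) + 1))
    for ai, bi in zip(a, b):
        t[ai, bi] += 1
    return t

# x uniform over {0,1,2,3}; views are deterministic functions of x
xs = np.arange(4)
view1 = xs % 2    # low bit of x
view2 = xs // 2   # high bit of x

i_v1_v2 = mutual_info(joint_table(view1, view2))  # the two bits are independent
i_x_v1 = mutual_info(joint_table(xs, view1))      # each view keeps 1 bit of x
assert i_v1_v2 <= i_x_v1  # data processing inequality: I(v1; v2) <= I(x; v1)
```

Here the views share zero bits while each retains one bit about the input, so the bound holds with room to spare.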
As in the information bottleneck, we can trace out a tradeoff between how much information our views share about the input and how well our learned representation predicts a task-relevant variable $y$. Depending on how the views are constructed, we may keep too many irrelevant variables while discarding relevant ones (as in Figure 1c), leading to suboptimal performance on the information plane. Alternatively, we can find views that maximize $I(v_1; y)$ and $I(v_2; y)$ (how much information is contained about the task-relevant variable) while minimizing $I(v_1; v_2)$ (how much information is shared about the input, including both task-relevant and irrelevant information). Even in the case of these optimal views, there are three regimes of performance to consider, depicted in Figure 3 and discussed previously in the information bottleneck literature [67, 2, 21]:
missing information: When $I(v_1; v_2) < I(x; y)$, there is information about the task-relevant variable that is discarded by the views, degrading performance.
sweet spot: When $I(v_1; v_2) = I(v_1; y) = I(v_2; y) = I(x; y)$, the only information shared between $v_1$ and $v_2$ is task-relevant, and there is no irrelevant noise.
excess noise: When $I(v_1; v_2) > I(x; y)$, the views share additional information that is irrelevant to the task; this excess noise can hurt generalization on the downstream task.
We hypothesize that the best-performing views will be close to the sweet spot: containing as much task-relevant information as possible while discarding as much irrelevant information about the input as possible. Unlike in the information bottleneck, in contrastive representation learning we often do not have access to a fully labeled training set, so evaluating how much information about the task-relevant variable is contained in the representation at training time is challenging. Instead, the construction of views has typically been guided by domain knowledge that alters the input while preserving the task-relevant variable.
The above analysis suggests that transfer performance will be upper-bounded by a reverse-U shaped curve (Figure 3, right), with the sweet spot at the top of the curve. We next present a series of experiments that find such a curve in practical settings.
5.2 Practical Cases of Reducing $I(v_1; v_2)$
Often, the downstream task is unknown beforehand, and we may not know in which ways we should reduce $I(v_1; v_2)$ to create views suitable for various downstream tasks. It is possible that both task-relevant signals and nuisance factors are reduced simultaneously, so we have no guarantee on performance. Recent work, however, found a "reverse-U" shape phenomenon: reducing $I(v_1; v_2)$ first leads to improved performance on the downstream task; then, after a peak, further decreasing it causes performance degradation. This finding is in line with the hypothesis above: at the beginning of reducing $I(v_1; v_2)$, nuisance factors (the noise) are removed while most of the useful semantics (the signal) are preserved. Indeed, this may be common in practice: data augmentation of the proper magnitude reduces $I(v_1; v_2)$ but improves accuracy. Beyond data augmentation, we show three examples where reduced $I(v_1; v_2)$ leads to improved performance. We use $I_{NCE}$ as a neural proxy for $I(v_1; v_2)$. Though it may not estimate $I(v_1; v_2)$ accurately, it still provides an interesting analysis. We note that, within each plot in this paper, we only vary the input views $v_1$ and $v_2$ and keep all other settings fixed, in order to make the plotted points directly comparable. For more implementation details, please refer to the appendix.
5.2.1 Reducing $I(v_1; v_2)$ with Spatial Distance
We create views by randomly cropping two patches of size 64x64 from the same image with a fixed relative position: one patch starts at position $(x, y)$ while the other starts at $(x + d, y + d)$, with $(x, y)$ randomly generated. We increase the offset $d$, sampling patches from inside high-resolution images (e.g., 2k pixels in each dimension) from the DIV2K dataset. After contrastive training, we evaluate on STL-10 and CIFAR-10 by freezing the encoder and training a linear classification layer. The plots in Figure 4 show mutual information vs. accuracy. From natural image statistics, we expect $I(v_1; v_2)$ to decrease as $d$ increases. The plots demonstrate that this also holds empirically for the proxy $I_{NCE}$, which decreases with increasing $d$. Moreover, the results replicate the "reverse-U" curve found in prior work, and here we further show that this phenomenon is consistent across both STL-10 and CIFAR-10, with the sweet spot at an intermediate value of $d$.
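The patch-pair construction can be sketched as follows; the image size and function name are illustrative:

```python
import numpy as np

def offset_patch_pair(image, d, size=64, rng=None):
    """Crop two `size`x`size` patches whose top-left corners differ by (d, d).
    Larger d means more distant patches and, for natural images, lower I(v1; v2)."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    y = rng.integers(0, h - size - d + 1)
    x = rng.integers(0, w - size - d + 1)
    v1 = image[y:y + size, x:x + size]
    v2 = image[y + d:y + d + size, x + d:x + d + size]
    return v1, v2

rng = np.random.default_rng(0)
image = rng.uniform(size=(512, 512, 3))   # stand-in for a high-resolution DIV2K image
v1, v2 = offset_patch_pair(image, d=128, rng=rng)
```

Sweeping `d` while keeping everything else fixed traces out the mutual-information axis of the plots in Figure 4.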
5.2.2 Reducing $I(v_1; v_2)$ with Different Color Spaces
The correlation between channels varies significantly across color spaces. Here we follow [65, 78] and split a color space into two views, such as {Y, DbDr} or {L, ab}. We perform contrastive learning on STL-10, and measure representation quality by training a linear classifier on the STL-10 dataset for image classification, or a decoder head on NYU-V2 images for semantic segmentation. As shown in Figure 5, downstream performance keeps increasing as $I_{NCE}$ decreases, for both classification and segmentation. With these natural color spaces, we do not observe any performance drop when reducing $I_{NCE}$. In Section 6, we present a method for learning new color spaces, which can further reduce $I(v_1; v_2)$ to find the sweet spot, and go beyond it, where transfer performance begins to drop again.
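A sketch of the luminance/chrominance view split, using the standard RGB-to-YDbDr conversion coefficients (the helper name is ours):

```python
import numpy as np

# Standard RGB -> YDbDr transform: luminance Y, chrominance Db/Dr
RGB_TO_YDBDR = np.array([[ 0.299,  0.587,  0.114],
                         [-0.450, -0.883,  1.333],
                         [-1.333,  1.116,  0.217]])

def ydbdr_views(rgb):
    """Split an HxWx3 RGB image into a luminance view and a chrominance view."""
    ydbdr = rgb @ RGB_TO_YDBDR.T
    v1 = ydbdr[..., :1]   # Y channel
    v2 = ydbdr[..., 1:]   # Db, Dr channels
    return v1, v2

rgb = np.random.default_rng(0).uniform(size=(32, 32, 3))
v1, v2 = ydbdr_views(rgb)
```

Note that for a gray pixel (R = G = B) both chrominance channels vanish, which is exactly the decorrelation property that makes this split attractive for contrastive views.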
5.2.3 Reducing $I(v_1; v_2)$ with Frequency Separation
Another example we consider is separating images into low- and high-frequency components. For simplicity, we extract $v_1$ and $v_2$ by Gaussian blur, i.e.,

$$v_1 = \text{Blur}(x; \sigma), \qquad v_2 = x - \text{Blur}(x; \sigma)$$
where Blur is the Gaussian blur function and $\sigma$ is the parameter controlling the kernel width. An extremely small or large $\sigma$ makes the high- or low-frequency image contain little information, so in theory the maximal $I(v_1; v_2)$ is obtained at some intermediate $\sigma$. As shown in Figure 6, we found $\sigma = 0.7$ leads to the maximal $I_{NCE}$ on the STL-10 dataset. Blurring either more or less reduces $I_{NCE}$, but interestingly, blurring more leads to a different trajectory in the plot than blurring less. When increasing $\sigma$ from 0.7, the accuracy first improves and then drops, forming a reverse-U shape with a sweet spot; this situation corresponds to (b) in Figure 1. When decreasing $\sigma$ from 0.7, the accuracy keeps diminishing, corresponding to (c) in Figure 1. This recalls the two aspects of Proposition 4.1: mutual information is not the whole story; what information is shared between the two views also matters.
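The frequency separation can be sketched directly. The blur below is a simple separable Gaussian convolution in numpy, not the exact implementation used in the experiments:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur of a 2D array, reflect-padded at the borders."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = np.pad(img, r, mode="reflect")
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, out)
    return out

x = np.random.default_rng(0).uniform(size=(32, 32))
sigma = 0.7
v1 = blur(x, sigma)        # low-frequency view
v2 = x - blur(x, sigma)    # high-frequency view
```

By construction the two views sum back to the input, so no information is destroyed; only how it is divided between the views changes with $\sigma$.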
6 Synthesizing Effective Views
Can we learn novel views that reach the sweet spot by following the InfoMin principle? To explore this possibility, we design unsupervised and semi-supervised frameworks that learn novel views inspired by the InfoMin principle. Concretely, we extend the color space experiments in Section 5.2.2 by learning flow-based models that transform natural color spaces into novel neural color spaces, from which we split channels to get views. After the views have been learned, we perform standard contrastive learning followed by linear classifier evaluation. In this section, we consider three methods: (1) random view generation, which varies $I(v_1; v_2)$ by constructing views with randomly initialized networks; (2) unsupervised view learning, which reduces $I(v_1; v_2)$; and (3) semi-supervised view learning, which reduces $I(v_1; v_2)$ while preserving task-relevant information. The idea of view learning is diagrammed in Figure 7.
6.1 Random Views
Flow-based generative models [16, 15, 36] are carefully designed bijective functions between input images and a latent space. Here we leverage this property to create random color spaces that are bijective and preserve total information. To do so, we restrict the flow $g$ to be pixel-wise (i.e., using 1x1 convolutions). With a randomly initialized $g$, an input image $X$ is transformed into $g(X)$, which has the same size as $X$. We then split $g(X)$ over the channel dimension to form two views for contrastive learning. Comparing against the original color space, we find that $I_{NCE}$ is typically increased after the transformation and the downstream accuracy drops, as shown in (a) of Figure 8.
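A toy version of such a random pixel-wise invertible transform, using a single shared 3x3 channel-mixing matrix in place of a trained flow model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A pixel-wise (1x1) linear "flow": one invertible 3x3 matrix shared by all pixels.
W = rng.normal(size=(3, 3))
while abs(np.linalg.det(W)) < 1e-3:      # re-sample until comfortably invertible
    W = rng.normal(size=(3, 3))

def flow(x):
    return x @ W.T                       # mixes the 3 channels of each pixel

def inverse_flow(z):
    return z @ np.linalg.inv(W).T

x = rng.uniform(size=(16, 16, 3))
z = flow(x)
v1, v2 = z[..., :1], z[..., 1:]          # channel split into two views
assert np.allclose(inverse_flow(z), x)   # bijective: no total information is lost
```

Because the map is invertible and applied per pixel, it preserves total information; only how that information is split across the two channel groups changes.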
Is it the MI between views or the inductive bias of $g$ that drives the above spectrum plot? To check, we use $g$ to transform only half of the input channels, keeping the other view unchanged. Theoretically, the MI between these views is identical to that between the original views. In accord with theory, we find this also holds for the NCE estimate $I_{NCE}$, as shown in (b) of Figure 8. Interestingly, the downstream classification accuracy with the half-transformed views is almost the same as with the original views.
6.2 Unsupervised: Minimize $I(v_1; v_2)$
The idea is to leverage an adversarial training strategy. Given an input image $X$, we transform it into $g(X)$ and split its channels into two views. We train two encoders $f_1$ and $f_2$ on top of the views to maximize $I_{NCE}$, similar to the discriminator in a GAN. Meanwhile, $g$ is adversarially trained to minimize $I_{NCE}$. Formally, the objective is (shown as the bottom-left yellow box in Figure 7):

$$\min_{g} \max_{f_1, f_2} I_{NCE}^{f_1, f_2}\big(g(X)_1; g(X)_2\big)$$
where $f_1$ and $f_2$ correspond to the encoders in Eqn. 1. Alternatively, one may use other MI bounds. In practice, we find $I_{NCE}$ works well and keep using it for simplicity. We note that the invertible and pixel-wise properties of $g$ prevent it from learning degenerate or trivial solutions.
Implementation. This experiment is performed on STL-10. We try both volume-preserving (VP) and non-volume-preserving (NVP) flow models, where $g$ consists of 1x1 convolutional layers. For $f_1$ and $f_2$, we use an AlexNet-style network. We experiment with two input color spaces: RGB and YDbDr. The former is the most widely used, while the latter is the best for contrastive learning, as shown in Figure 5.
Results. We plot the $I_{NCE}$ between the learned views against the corresponding linear evaluation performance. As shown in Figure 9(a), a reverse-U shape between $I_{NCE}$ and downstream accuracy emerges. Interestingly, the YDbDr color space is already around the sweet spot, and further reducing $I_{NCE}$ results in a performance drop. This is in line with the human prior that the "luminance-chrominance" decomposition is a good way to decorrelate colors while maintaining good interpretability (in the sense that we can still read out high-level semantics to perform tasks). We also note that the Lab color space, another luminance-chrominance decomposition that performs similarly well to YDbDr (Figure 5), was designed to mimic the way humans perceive color. Our analysis therefore suggests yet another rational explanation for why humans perceive color the way we do: human color perception may be near optimal for self-supervised representation learning.
Occasionally, a color space learned from RGB happens to touch the sweet spot, but in general the MI between views is overly decreased. The reverse-U trend holds for both NVP and VP models. In addition, we found this GAN-style training unstable, as different runs with the same hyper-parameters can vary considerably. We conjecture that, while reducing MI between views in this unsupervised manner, the view generator has no knowledge of the task-relevant semantics and thus constructs views that do not share sufficient information about the label $y$, i.e., the constraint in Proposition 4.1 is not satisfied. To overcome this, we further develop a semi-supervised view learning method.
6.3 Semi-supervised View Learning: Find Views that Share the Label Information
We assume a handful of labels for the downstream task is available. We can therefore teach the view generator to retain the label information in both learned views as much as possible. In practice, we introduce a classifier on each of the learned views to perform classification during the view learning process. Formally, we optimize (shown as the three yellow boxes in Figure 7):

$$\min_{g, c_1, c_2} \max_{f_1, f_2} I_{NCE}^{f_1, f_2}\big(g(X)_1; g(X)_2\big) + \mathcal{L}_{ce}\big(c_1(g(X)_1), y\big) + \mathcal{L}_{ce}\big(c_2(g(X)_2), y\big)$$
where $c_1$ and $c_2$ represent the classifiers. The $I_{NCE}$ term applies to all data, while the two cross-entropy terms apply only to labeled data. After this process is done, we use $g$ to generate views for contrastive representation learning.
| view generator  | RGB        | YDbDr      |
|-----------------|------------|------------|
| unsupervised    | 82.4 ± 3.2 | 84.3 ± 0.5 |
| supervised      | 79.9 ± 1.5 | 78.5 ± 2.3 |
| semi-supervised | 86.0 ± 0.6 | 87.0 ± 0.3 |
| raw input       | 81.5 ± 0.2 | 86.6 ± 0.2 |
Results. The plots are shown in Figure 9(b). The learned views now cluster around the sweet spot, regardless of the input color space and of whether the generator is VP or NVP, which highlights the importance of keeping information about $y$. Meanwhile, to assess the importance of the unsupervised term, which reduces $I(v_1; v_2)$, we train another view generator with just the supervised loss. We compare the "supervised", "unsupervised", and "semi-supervised" (supervised + unsupervised losses) generators in Table 3, where we also include contrastive learning over the original color space ("raw input") as a baseline. The semi-supervised view generator significantly outperforms the supervised one, verifying the importance of reducing $I(v_1; v_2)$. We further compare the "semi-supervised" views $g(X)$ with the original $X$ ($X$ is RGB or YDbDr) on larger backbone networks, as shown in Table 3. The learned views consistently outperform their raw inputs by a large margin.
7 Data Augmentation as InfoMin
7.1 A Unified View of Recent State-of-the-Art Methods
Recently, several contrastive learning methods have dominated the ImageNet self-supervised learning benchmark, e.g., InstDis, CPC, CMC, MoCo, PIRL, SimCLR, etc. Despite the various engineering tricks in each paper, each creates its views in a way that implicitly follows the InfoMin principle. We examine a non-comprehensive selection of recent methods below:
InstDis and MoCo . Despite the difference between a memory bank and a momentum encoder, these two methods create views by ancestral sampling: (1) sample an image from the empirical distribution; (2) sample two independent transformations from a distribution of data augmentation functions; (3) apply one transformation to the image to obtain each of the two views.
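The three-step ancestral sampling procedure can be sketched as follows; the toy "image" and augmentation functions below are our own illustrative stand-ins for the real augmentation family.

```python
import random

def make_views(image, augmentations):
    """Create two views by ancestral sampling: draw two independent
    transformations from the augmentation family and apply each to
    the same image. `augmentations` is a list of callables standing
    in for the distribution of data augmentation functions."""
    t1 = random.choice(augmentations)
    t2 = random.choice(augmentations)  # sampled independently of t1
    return t1(image), t2(image)

# Toy "image" (a list of pixel values) and toy augmentations.
flip = lambda img: img[::-1]
crop = lambda img: img[1:-1]
identity = lambda img: list(img)

v1, v2 = make_views([1, 2, 3, 4], [flip, crop, identity])
```

Because the two transformations are drawn independently, the two views differ in exactly the nuisances the augmentation family covers.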
CMC . On top of the two views in InstDis, CMC further splits images across color channels. This leads to a new pair of views, where one view is the first color channel and the other is the last two channels. By this design, the mutual information between views can only decrease, and we observe that CMC performs better than InstDis.
PIRL . Comparing PIRL with InstDis is a bit tricky, but we can also explain it from the InfoMin perspective. Given the two views obtained in InstDis, PIRL keeps the first but transforms the other with random JigSaw shuffling. The mutual information between the two views is thereby reduced, as the shuffling introduces extra randomness.
SimCLR . Despite other engineering techniques and tricks, the way SimCLR creates views is most similar to InstDis and MoCo, but it uses a stronger class of augmentations, which leads to less mutual information between the two views.
CPC . Different from the above methods that create views at the image level, CPC extracts views from local patches with strong data augmentation (e.g., RandAugment ), which reduces the mutual information between views. In addition, cropping the two views from disjoint patches reduces it further, which relates to the discussion in Section 5.2.1.
While the above methods reduce mutual information between views, they keep information about object identity in both views. The hope is that object identity captures most of the high-level semantic information that various downstream tasks care about.
7.2 Analysis of Data Augmentation as it relates to MI and Transfer Performance
We gradually strengthen the family of data augmentation functions, and plot the trend between accuracy on downstream linear evaluation benchmarks and the estimated mutual information between views. The overall results are shown in Figure 10(a), where the plot is generated by varying only the data augmentation while keeping all other settings fixed. We consider Color Jittering with various strengths, Gaussian Blur, RandAugment , and their combinations, as illustrated in Figure 10(b). The results suggest that as we reduce the estimated mutual information via stronger augmentation (in theory, the true mutual information also decreases), the downstream accuracy keeps improving.
In PyTorch, the RandomResizedCrop(scale=(c, 1.0)) data augmentation function sets a low-area cropping bound c; smaller c means more aggressive data augmentation. We vary c for both a linear critic head (with temperature 0.07) and a nonlinear critic head (with temperature 0.15), as shown in Figure 11. In both cases, decreasing c traces a reverse-U shape between mutual information and linear classification accuracy, with a sweet spot at an intermediate value of c. This differs from the value widely used in the supervised learning setting, which can lead to a sizable drop in accuracy compared to the optimal c when a nonlinear projection head is applied.
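To make the role of c concrete, the sketch below mimics how an area-based random crop with scale=(c, 1.0) samples a crop size. It is a simplified stand-in for torchvision's RandomResizedCrop, omitting its retry/fallback logic; the function name and defaults are ours.

```python
import math
import random

def sample_crop(height, width, c, ratio=(3 / 4, 4 / 3)):
    """Sample a crop size (h, w) with target area uniform in
    [c, 1.0] * full area and a log-uniform aspect ratio, loosely
    mimicking RandomResizedCrop(scale=(c, 1.0)). Smaller c permits
    smaller, more aggressive crops."""
    area = height * width
    target_area = random.uniform(c, 1.0) * area
    log_lo, log_hi = math.log(ratio[0]), math.log(ratio[1])
    aspect = math.exp(random.uniform(log_lo, log_hi))
    w = int(round(math.sqrt(target_area * aspect)))
    h = int(round(math.sqrt(target_area / aspect)))
    # clamp to the image, as the real transform ultimately does
    return min(h, height), min(w, width)
```

Lowering c widens the range of permissible crop areas, which is exactly the knob swept in Figure 11.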
Color Jittering. As shown in Figure 10(b), we adopt a single parameter to control the strength of the color jittering function. As shown in Figure 12, increasing this strength also traces a reverse-U shape, no matter whether a linear or nonlinear projection head is used. The sweet spot lies around the same strength as used in SimCLR . In practice, the accuracy is more sensitive around the sweet spot for the nonlinear projection head, as also happens for cropping. This implies that it is important to find the sweet spot when designing future augmentation functions.
Details. These plots are based on the MoCo  framework. We pre-train for 100 epochs on 8 GPUs with a batch size of 256. The learning rate decays following a cosine annealing schedule. For the downstream task of linear evaluation, we train the linear classifier for 60 epochs with an initial learning rate of 30, following .
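The cosine annealing schedule mentioned above can be written in one line; the sketch below (function name is ours) decays the learning rate from its base value at epoch 0 to zero at the final epoch.

```python
import math

def cosine_lr(base_lr, epoch, total_epochs):
    """Cosine annealing: lr(t) = 0.5 * base_lr * (1 + cos(pi * t / T)),
    decaying smoothly from base_lr at t = 0 to 0 at t = T."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / total_epochs))
```

At the halfway point the learning rate is exactly half the base value, which is what makes the schedule gentle early and aggressive late.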
7.3 Results on ImageNet Benchmark
| Method | Backbone | Param. (M) | Head | Epochs | Top-1 | Top-5 |
|---|---|---|---|---|---|---|
| Local Agg. | ResNet-50 | 24 | Linear | 200 | 58.8 | - |
| CPC v2 | ResNet-50 | 24 | - | - | 63.8 | 85.3 |
| InfoMin Aug. (Ours) | ResNet-50 | 24 | MLP | 100 | 67.4 | 87.9 |
| InfoMin Aug. (Ours) | ResNet-50 | 24 | MLP | 200 | 70.1 | 89.4 |
| InfoMin Aug. (Ours) | ResNet-50 | 24 | MLP | 800 | 73.0 | 91.1 |
On top of the “RA-CJ-Blur” augmentations shown in Figure 10, we further reduce the mutual information between views (or enhance their invariance) by using PIRL , i.e., adding JigSaw . This improves the accuracy of the linear classifier. Replacing the widely used linear projection head [73, 65, 26] with a 2-layer MLP  increases the accuracy further. When using this nonlinear projection head, we found a larger temperature to be beneficial for downstream linear readout (as also reported in ). All these numbers are obtained with 100 epochs of pre-training. For simplicity, we refer to such unsupervised pre-training as InfoMin pre-training (i.e., pre-training with our InfoMin-inspired augmentation). As shown in Table 4, our InfoMin model trained for 200 epochs achieves 70.1% top-1 accuracy, outperforming SimCLR trained for 1000 epochs. Finally, a new state-of-the-art of 73.0% is obtained by training for 800 epochs. Compared to SimCLR, which requires 128 TPUs for large-batch training, our model can be trained with as few as 4 GPUs on a single machine.
For future improvement, there is still room to manually design better data augmentation: as shown in Figure 10(a), “RA-CJ-Blur” has not yet touched the sweet spot. Another avenue is to learn to synthesize better views (augmentations) by following (and expanding) the semi-supervised view learning method presented in Section 6.3.
7.4 Transferring Representations
One goal of unsupervised pre-training is to learn transferable representations that are beneficial for downstream tasks. The rapid progress of many vision tasks in recent years can be ascribed to the paradigm of fine-tuning models initialized from supervised pre-training on ImageNet. When transferring to PASCAL VOC  and COCO , we found our InfoMin pre-training consistently outperforms supervised pre-training as well as other unsupervised pre-training methods.
Feature normalization has been shown to be important during fine-tuning . Therefore, we fine-tune the backbone with Synchronized BN (SyncBN ) and add SyncBN to newly initialized layers (e.g., FPN ). Table 5 reports the bounding box AP and mask AP on COCO val2017, using the Mask R-CNN  R50-FPN pipeline. All results are reported with Detectron2 . We note that, among unsupervised approaches, only ours consistently outperforms supervised pre-training.
We have tried different popular detection frameworks with various backbones, extended the fine-tuning schedule (e.g., a 6× schedule), and compared InfoMin ResNeXt-152  trained on ImageNet-1k with supervised ResNeXt-152 trained on ImageNet-5k (6 times larger than ImageNet-1k). In all cases, InfoMin consistently outperforms supervised pre-training. For further details on these results, as well as experiments on PASCAL VOC, please refer to the appendix.
8 Conclusion
We have proposed an InfoMin principle and a view synthesis framework for constructing effective views for contrastive representation learning. Viewing data augmentation as information minimization, we achieved a new state-of-the-art result on the ImageNet linear readout benchmark with a ResNet-50.
This work was done when Yonglong Tian was a student researcher at Google. We thank Kevin Murphy for fruitful and insightful discussion; Lucas Beyer for feedback on the manuscript; and Google Cloud team for supporting computation resources. Yonglong is grateful to Zhoutong Zhang for encouragement and feedback on experimental design.
-  Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study, pages 126–135, 2017.
-  Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016.
-  Relja Arandjelovic and Andrew Zisserman. Objects that sound. In Proceedings of the European Conference on Computer Vision (ECCV), pages 435–451, 2018.
-  Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. arXiv preprint arXiv:1906.00910, 2019.
-  Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
-  Jorge Luis Borges. Funes, the memorious. na, 1962.
-  Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2018.
-  Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
-  Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
-  Soo-Whan Chung, Joon Son Chung, and Hong-Goo Kang. Perfect match: Improved cross-modal embeddings for audio-visual synchronisation. In ICASSP, 2019.
-  Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223, 2011.
-  Thomas M Cover and Joy A Thomas. Entropy, relative entropy and mutual information. Elements of information theory, 2:1–55, 1991.
-  Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719, 2019.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 2009.
-  Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
-  Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
-  Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1422–1430, 2015.
-  Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. In Advances in Neural Information Processing Systems, pages 10541–10551, 2019.
-  Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In NIPS, 2014.
-  Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 2010.
-  Ian Fischer. The conditional entropy bottleneck. arXiv preprint arXiv:2002.05379, 2020.
-  Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
-  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
-  Daniel Gordon, Kiana Ehsani, Dieter Fox, and Ali Farhadi. Watching the world go by: Representation learning from unlabeled videos. arXiv preprint arXiv:2003.07990, 2020.
-  Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. In ICCV Workshop, 2019.
-  Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722, 2019.
-  Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, 2017.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.
-  Olivier J Hénaff, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.
-  R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019.
-  Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 1997.
-  Phillip Isola, Daniel Zoran, Dilip Krishnan, and Edward H. Adelson. Learning visual groups from co-occurrences in space and time. International Conference on Learning Representations (ICLR), Workshop track, 2016.
-  Anil K Jain. Fundamentals of digital image processing. Englewood Cliffs, NJ: Prentice Hall,, 1989.
-  Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.
-  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, 2018.
-  Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
-  Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2019.
-  Tianhao Li and Limin Wang. Learning spatiotemporal features via video and text pair discrimination. arXiv preprint arXiv:2001.05691, 2020.
-  Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.
-  Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, 2014.
-  Ralph Linsker. Self-organization in a perceptual network. Computer, 21(3):105–117, 1988.
-  David McAllester and Karl Stratos. Formal limitations on the measurement of mutual information. arXiv preprint arXiv:1811.04251, 2018.
-  Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. End-to-end learning of visual representations from uncurated instructional videos. arXiv preprint arXiv:1912.06430, 2019.
-  Matthias Minderer, Olivier Bachem, Neil Houlsby, and Michael Tschannen. Automatic shortcut removal for self-supervised representation learning. arXiv preprint arXiv:2002.08822, 2020.
-  Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. arXiv preprint arXiv:1912.01991, 2019.
-  Pedro Morgado, Nuno Vasconcelos, and Ishan Misra. Audio-visual instance discrimination with cross-modal agreement. arXiv preprint arXiv:2004.12943, 2020.
-  Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012.
-  Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
-  Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
-  Liam Paninski. Estimation of entropy and mutual information. Neural computation, 15(6):1191–1253, 2003.
-  Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2536–2544, 2016.
-  Mandela Patrick, Yuki M Asano, Ruth Fong, João F Henriques, Geoffrey Zweig, and Andrea Vedaldi. Multi-modal self-supervision from generalized data transformations. arXiv preprint arXiv:2003.04298, 2020.
-  Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. Megdet: A large mini-batch object detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
-  Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A Alemi, and George Tucker. On variational bounds of mutual information. arXiv preprint arXiv:1905.06922, 2019.
-  Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, 2015.
-  Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In ICRA, 2018.
-  Ohad Shamir, Sivan Sabato, and Naftali Tishby. Learning and generalization with the information bottleneck. Theoretical Computer Science, 411(29-30):2696–2711, 2010.
-  Eero P Simoncelli. 4.7 statistical modeling of photographic images. Handbook of Video and Image Processing, 2005.
-  Stefano Soatto and Alessandro Chiuso. Visual representations: Defining properties and deep approximations. In ICLR, 2016.
-  Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In NIPS, 2016.
-  Aravind Srinivas, Michael Laskin, and Pieter Abbeel. Curl: Contrastive unsupervised representations for reinforcement learning. arXiv preprint arXiv:2004.04136, 2020.
-  Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using lstms. In International conference on machine learning, 2015.
-  Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743, 2019.
-  Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
-  Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive representation distillation. In ICLR, 2020.
-  Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
-  Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Neil Houlsby, Sylvain Gelly, and Mario Lucic. Self-supervised learning of video-induced visual invariances. arXiv preprint arXiv:1912.02783, 2019.
-  Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. arXiv preprint arXiv:1907.13625, 2019.
-  Mike Wu, Chengxu Zhuang, Daniel Yamins, and Noah Goodman. On the importance of views in unsupervised representation learning. preprint, 2020.
-  Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
-  Zhirong Wu, Alexei A Efros, and Stella X Yu. Improving generalization via scalable neighborhood component analysis. In ECCV, 2018.
-  Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733–3742, 2018.
-  Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.
-  Mang Ye, Xu Zhang, Pong C Yuen, and Shih-Fu Chang. Unsupervised embedding learning via invariant and spreading instance feature. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
-  Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4l: Self-supervised semi-supervised learning. In Proceedings of the IEEE international conference on computer vision, pages 1476–1485, 2019.
-  Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pages 649–666. Springer, 2016.
-  Richard Zhang, Phillip Isola, and Alexei A Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1058–1067, 2017.
-  Nanxuan Zhao, Zhirong Wu, Rynson WH Lau, and Stephen Lin. Distilling localization for self-supervised representation learning. arXiv preprint arXiv:2004.06638, 2020.
-  Chengxu Zhuang, Alex Andonian, and Daniel Yamins. Unsupervised learning from video with deep neural embeddings. arXiv preprint arXiv:1905.11954, 2019.
-  Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. arXiv preprint arXiv:1903.12355, 2019.
Appendix A Proof of Proposition 1
In this section, we provide a proof of the statement regarding optimal views in Proposition 1 of the main text. As a warm-up, we first recap some properties of mutual information.
A.1 Properties of MI
(1) Chain rule: I(X; Y, Z) = I(X; Y) + I(X; Z | Y).
(2) Multivariate mutual information: I(X; Y; Z) = I(X; Y) − I(X; Y | Z).
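The chain rule can be checked numerically on a toy discrete distribution. The sketch below (helper names are ours) computes mutual information by direct summation over a small joint distribution and verifies I(X; Y, Z) = I(X; Y) + I(X; Z | Y).

```python
import math

def _marg(p, idx):
    """Marginal distribution over the index group `idx`."""
    out = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + v
    return out

def mi(p, a, b):
    """I(A;B) for a discrete joint `p` mapping tuples to probabilities;
    `a` and `b` are index groups, e.g. a=(0,), b=(1, 2)."""
    pa, pb, pab = _marg(p, a), _marg(p, b), _marg(p, a + b)
    total = 0.0
    for k, v in pab.items():
        if v > 0:
            ka, kb = k[:len(a)], k[len(a):]
            total += v * math.log(v / (pa[ka] * pb[kb]))
    return total

def cmi(p, a, b, c):
    """I(A;B|C): condition on each value of C, renormalize, average."""
    total = 0.0
    for cv, pcv in _marg(p, c).items():
        cond = {k: v / pcv for k, v in p.items()
                if tuple(k[i] for i in c) == cv and v > 0}
        total += pcv * mi(cond, a, b)
    return total

# Toy joint distribution over (X, Y, Z) in {0, 1}^3.
p = {(0, 0, 0): 0.05, (0, 0, 1): 0.10, (0, 1, 0): 0.15, (0, 1, 1): 0.05,
     (1, 0, 0): 0.20, (1, 0, 1): 0.10, (1, 1, 0): 0.25, (1, 1, 1): 0.10}
```

Since the chain rule is an identity, it holds for any joint distribution; the toy values here are arbitrary.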
According to Proposition 1, the optimal views for a task with a given label are views that minimize the mutual information between views subject to both views retaining all the label information.
Since the views are functions of the input data, the label information they carry cannot exceed that of the input.
By nonnegativity of (conditional) mutual information, the stated chain of inequalities collapses to equalities. Then we have:
Therefore the optimal views that minimize the mutual information between views subject to the constraint achieve the lower bound. Also note that optimal views are conditionally independent given the label, as the conditional mutual information between them vanishes. ∎
Given optimal views and minimal sufficient encoders, the learned representations are sufficient statistics of their respective views for the label, i.e., each representation retains all the label information contained in its view.
We prove the claim for one of the two views; the other follows by symmetry. Since the representation is a function of its view, we have:
To prove sufficiency, we need to show that the corresponding conditional mutual information is zero.
In the above derivation, one step holds because the representation is a function of its view; another holds because optimal views are conditionally independent given the label, see Proposition A.1. Now we can prove the remaining equality following a procedure similar to Proposition A.1, and by nonnegativity the claim follows.
To see this, recall that our encoders are sufficient. According to Definition 1, we have:
The representations are also minimal for predicting the label.
For all sufficient encoders, we have shown that the representations are sufficient statistics of their views for predicting the label. Now:
The minimal sufficient encoder minimizes this quantity, which is achievable and leads to equality. Therefore the representation is a minimal sufficient statistic for predicting the label, and thus optimal; similarly for the other view. ∎
Appendix B Implementation Details
B.1 Colorful Moving MNIST
Dataset. Following the original Moving MNIST dataset , we use a canvas of size 64×64 containing a digit of size 28×28. The background image is a random crop from an original STL-10 image (96×96). The starting position of the digit is uniformly sampled inside the canvas. The direction of the moving velocity is uniformly sampled, while its magnitude is kept at a fixed fraction of the canvas size. When the digit touches the boundary, the velocity is reflected.
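The bounce-at-the-boundary motion can be sketched in a few lines; the function below is a toy model (names and the clamping behavior are our assumptions) that advances the digit's top-left corner one step and reflects the velocity at the canvas border.

```python
def step(pos, vel, canvas=64, digit=28):
    """Advance the digit one step. `pos` is the top-left corner of a
    digit of size `digit` on a square canvas of size `canvas`; the
    velocity component is reflected whenever the digit would leave
    the canvas, and the position is clamped to the valid range."""
    lo, hi = 0, canvas - digit  # valid range for the top-left corner
    x, y = pos[0] + vel[0], pos[1] + vel[1]
    vx, vy = vel
    if x < lo or x > hi:
        vx = -vx
        x = max(lo, min(x, hi))
    if y < lo or y > hi:
        vy = -vy
        y = max(lo, min(y, hi))
    return (x, y), (vx, vy)
```

Iterating this step produces the bouncing trajectory used to render the video frames.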
Setup. We use the first 10 frames as one view, and we construct the other view by referring to the 20-th frame. The CNN backbone consists of 4 convolutional layers with an increasing number of filters. Average pooling is applied after the last convolutional layer, resulting in a 64-dimensional representation. The dimensions of the hidden layer and output of the LSTM are both 64.
Training. We perform intra-batch contrast: inside each batch of size 128, we contrast each sample with the other 127 samples. We train for 200 epochs, with the learning rate decayed with cosine annealing.
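Intra-batch contrast amounts to an InfoNCE loss where, for each anchor, the other view of the same sample is the positive and the remaining batch entries are negatives. The sketch below (a simplified, numerically stable per-anchor version; names are ours) computes this loss from precomputed similarity scores.

```python
import math

def info_nce(scores, pos_index, temperature=0.07):
    """InfoNCE loss for one anchor. `scores` holds similarities to
    every candidate in the batch; the entry at `pos_index` is the
    positive (the other view of the same sample), the rest act as
    negatives. Uses the log-sum-exp trick for stability."""
    logits = [s / temperature for s in scores]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[pos_index] - log_z)
```

When the positive score dominates the negatives, the loss approaches zero; with indistinguishable scores it equals log of the batch size.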
B.2 Spatial Patches with Different Distances
Setup and Training. The backbone network is a tiny AlexNet, following . We train with the learning rate decayed with cosine annealing.
Evaluation. We evaluate the learned representation on both the STL-10 and CIFAR-10 datasets. For CIFAR-10, we resize the images to 64×64 to extract features. The linear classifier is trained for 100 epochs.
B.3 Channel Splitting with Various Color Spaces
Setup and Training. The backbone network is also a tiny AlexNet, with the first layer adapted to the number of input channels. We follow the training recipe in .
Evaluation. For the evaluation on STL-10 dataset, we train a linear classifier for 100 epochs and report the single-crop classification accuracy. For NYU-Depth-v2 segmentation task, we freeze the backbone network and train a decoder on top of the learned representation. We report the mean IoU for labeled classes.
B.4 Frequency Separation
Setup and Training. The setup is almost the same as that in color channel splitting experiments, except that each view consists of three input channels. We follow the training recipe in .
Evaluation. We train a linear classifier for 100 epochs on STL-10 dataset and 40 epochs on TinyImageNet dataset.
B.5 Un-/Semi-supervised View Learning
Invertible Generator. Figure 13 shows the basic building block of the Volume-Preserving (VP) and Non-Volume-Preserving (NVP) invertible view generators. The transformation functions are pixel-wise convolutions, i.e., convolutional layers with 1×1 kernels. Each block leaves one channel of the input unchanged and transforms the other two channels as functions of it. While stacking basic building blocks, we alternately select the first, second, and third channel as the unchanged one, to enhance the expressivity of the view generator.
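The core of such a block is a coupling transform. The sketch below shows a scalar toy version (function names are ours; s and t stand in for the 1×1-convolution subnetworks): the affine form is non-volume-preserving, and setting s to zero recovers the additive, volume-preserving variant.

```python
import math

def nvp_forward(x1, x2, s, t):
    """Affine (non-volume-preserving) coupling on a pair of channels:
    x1 passes through unchanged; x2 is scaled by exp(s(x1)) and
    shifted by t(x1). With s(x) = 0 this reduces to the additive,
    volume-preserving variant."""
    return x1, x2 * math.exp(s(x1)) + t(x1)

def nvp_inverse(y1, y2, s, t):
    """Exact inverse of nvp_forward, so the block loses no information."""
    return y1, (y2 - t(y1)) * math.exp(-s(y1))
```

Exact invertibility is what lets the generator reshape the information split between views without discarding any of the input.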
Setup and Training. For unsupervised view learning that only uses the adversarial loss, we found training to be relatively unstable, as is also observed for GANs . We found the learning rate of the view generator should be larger than that of the approximator, and we use the Adam optimizer  with the view generator's learning rate set larger accordingly. For semi-supervised view learning, training is stable across different learning rate combinations, which we consider an advantage; for fairness, we still use the same learning rates for both the view generator and the approximator.
Contrastive Learning and Evaluation. After the view learning stage, we perform contrastive learning and evaluation by following the recipe in B.3.
Appendix C Pascal VOC Object Detection
|InfoMin Aug. (ours)||82.7||57.6||64.6||70.1|
Appendix D Instance Segmentation on COCO
We evaluated the transferability of various models pre-trained with InfoMin, under different detection frameworks and fine-tuning schedules. In all cases we tested, models pre-trained with InfoMin outperform those pre-trained with a supervised cross-entropy loss. Interestingly, ResNeXt-152 trained with InfoMin on ImageNet-1K beats its supervised counterpart trained on ImageNet-5K, which is 6 times larger. Bounding box AP and mask AP are reported on COCO val2017.
D.1 ResNet-50 with Mask R-CNN, C4 architecture
The results of Mask R-CNN with the R50-C4 backbone are shown in Table 7. We experimented with 1× and 2× schedules.