Iterative Reorganization with Weak Spatial Constraints: Solving Arbitrary Jigsaw Puzzles for Unsupervised Representation Learning

Learning visual features from unlabeled image data is an important yet challenging task, which is often achieved by training a model on some annotation-free information. We consider spatial contexts, for which we solve so-called jigsaw puzzles, i.e., each image is cut into grids and then disordered, and the goal is to recover the correct configuration. Existing approaches formulated this as a classification task by defining a fixed mapping from a small subset of configurations to a class set, but these approaches ignore the underlying relationship between different configurations and also limit their application to more complex scenarios. This paper presents a novel approach which applies to jigsaw puzzles with an arbitrary grid size and dimensionality. We provide a fundamental and generalized principle: weaker cues are easier to learn in an unsupervised manner and also transfer better. In the context of puzzle recognition, we use an iterative manner which, instead of solving the puzzle all at once, adjusts the order of the patches in each step until convergence. In each step, we combine both unary and binary features of each patch into a cost function judging the correctness of the current configuration. Our approach, by taking the similarity between puzzles into consideration, enjoys a more reasonable way of learning visual knowledge. We verify the effectiveness of our approach in two aspects. First, it is able to solve arbitrarily complex puzzles, including high-dimensional puzzles, which prior methods struggle to handle. Second, it serves as a reliable way of network initialization, which leads to better transfer performance in several visual recognition tasks including image classification, object detection, and semantic segmentation.




1 Introduction

Figure 1: We study the problem of solving jigsaw puzzles for visual recognition. Compared to the previous work [27], which worked on small 2D puzzles with fixed configurations (left), we can solve this task in a generalized setting, e.g., 3D puzzles (right).

Deep learning, especially with convolutional neural networks, has been boosting the performance of a wide range of applications in computer vision [22]. These statistics-based approaches build hierarchical structures which contain a large number of neurons, so that visual knowledge is learned by fitting labeled training data. However, annotating a large-scale dataset is often difficult and expensive. Therefore, weakly supervised and unsupervised learning have attracted a lot of research attention [43][21]. These approaches are often built on naturally existing constraints such as temporal consistency [42], spatial relationship [6] and sum-up equations [28]. Such information, though weak, constructs loss functions without requiring annotations, and networks pre-trained in this way can either be used directly for visual feature extraction or fine-tuned in a standalone supervised learning process towards better recognition performance.
In this work, we focus on a specific way of exploiting spatial relationship, which is to solve jigsaw puzzles on unlabeled image data [27][29]. These approaches work by cutting an image into a grid (e.g., 3×3) of patches and then disordering them as training data, with the goal set to recover the correct spatial configuration. Examples are shown in Figure 1. In order to achieve this goal, the network should capture some semantic information, e.g., learning the concepts of car and ground, though not labeled, and knowing that a car always appears above the ground. Technically, these approaches simply assigned each configuration a unique ID, so that puzzle recognition turns into a plain classification problem. We point out two major drawbacks of this strategy. First, plain classification assumes that all configurations are equally dissimilar to each other, but this is often not the case, e.g., two configurations with only two patches swapped are often semantically closer than two with no patches placed at the same position. Ignoring such information brings difficulties to representation learning. Second, the number of parameters required for plain classification increases linearly with the number of configurations, so that it is very difficult to deal with all possible configurations due to the risk of over-fitting. For example, there are 9! = 362,880 possible configurations for a 3×3 puzzle, but the original approach [27] reached its best performance with only a small sampled subset of them and observed over-fitting when this number continued growing. Both of these drawbacks limit us from generalizing this approach to more complex puzzles111It was widely believed that more powerful features can be learned from more difficult vision tasks [4], so we expect the ability of unsupervised learning to grow with the complexity of puzzles. like 3D puzzles222This is especially useful in areas such as medical imaging analysis, in which 3D networks [3][25] cannot easily obtain pre-trained weights as in 2D scenarios, yet a reasonable initialization helps a lot in training stability and testing performance.. An empirical study of this topic can be found in Section 4.2.

In this paper, we extend the ability of such approaches by allowing them to solve arbitrary jigsaw puzzles, i.e., puzzles not constrained by a pre-defined set of configurations. Our major contribution is to provide a principle for unsupervised learning: learning to recognize weak visual cues and then composing them into a complex scene is often easier and thus transfers better. Thus, we solve jigsaw puzzles (i) in an iterative manner and (ii) using weak spatial cues, instead of determining the correct configuration all at once. To this end, we formulate puzzle recognition as an optimization problem involving a set of unary and binary terms, with each unary term indicating whether a specified patch is located at a specified position, and each binary term measuring whether two patches have a specified relative position. These terms are produced by a deep network backbone so that the entire system can be trained in an end-to-end manner. In both training and testing, we allow the first trial not to find the correct configuration, in which case we iterate with the configuration adjusted according to the prediction until convergence. Both of the above techniques, a.k.a. network heads, are used to improve the quality of pre-training. They do not apply to, nor introduce additional computational costs to, the transfer learning stage.

We evaluate our approach in both puzzle recognition and transfer learning. The puzzle solver is trained on the ILSVRC2012 training set [39] and tested on the validation set, neither of which uses class labels. Our approach solves arbitrary jigsaw puzzles with reasonable accuracy, while prior approaches can only work on a limited set of puzzles. Then, we transfer the pre-trained model to extract features in small-scale datasets for image classification [13], as well as to be fine-tuned on the PascalVOC 2007 dataset [9] for image classification and object detection. Either learning from more complex puzzles or achieving a higher accuracy in puzzle recognition boosts transfer learning performance, which verifies our motivation. Finally, we apply our approach to initialize a 3D network with unlabeled medical data, and verify its effectiveness in segmenting an abdominal organ from CT scans.

The remainder of this paper is organized as follows. Section 2 briefly reviews related work, and Section 3 describes the proposed approach. After experiments are shown in Section 4, we draw our conclusions in Section 5.

2 Related Work

Deep neural networks have been playing an important role in modern computer vision systems. With the availability of large-scale datasets [5] and powerful computational devices such as GPUs, researchers have designed network structures with tens [19][40][41] or hundreds [14][15] of layers towards better recognition performance. Also, networks pre-trained on ImageNet have been transferred to other recognition tasks by either extracting visual features directly [7][12][33] or being fine-tuned with a new loss function [24][34]. Despite their effectiveness, these networks still strongly rely on labeled image data, but in some areas such as medical imaging, data collection and annotation can be expensive, time-consuming, or require expertise. Thus, there have been efforts to design unsupervised [43][21] or weakly supervised [16] approaches which learn visual knowledge from unlabeled data, or semi-supervised learning algorithms [30][31] which aim at combining a limited amount of labeled data with a large corpus of unlabeled data towards better performance. It has been verified that unsupervised pre-training helps supervised learning, especially deep learning [8].

The key factor in learning from unlabeled data is to establish some kind of prior, i.e., weak constraints that naturally exist and require no annotations. Such priors can be either (1) embedded into the network architecture or (2) encoded as weak supervision to optimize the network. For the first type, researchers designed clustering-based approaches which optimize visual representations so as to be beneficial to clustering [45][2], as well as generator-based approaches which assume that all images can be represented in a low-dimensional space and train encoders and/or decoders to recover the image and/or representation [32][48]. Network architectures of these approaches are often largely modified, e.g., with a set of clustering layers or encoder-decoder modules.

This paper mainly considers the second type which, in comparison to the first type, is much easier in algorithmic design. Typical examples include temporal consistency, which assumes that neighboring video frames contain similar visual contents [42], spatial relationship between pairs of unlabeled patches [6], learning an additive function on different regions as well as the entire image [28], etc. Among these priors, spatial contexts are widely believed to contain rich information that a vision system should be able to capture. Going one step beyond modeling patch relationships [6], researchers designed so-called jigsaw puzzles [27][29], which are more complex so that networks are better trained by learning to solve them. Consequently, such networks perform better in transfer learning.

Researchers believed that learning from these weakly-supervised cues can help visual recognition, because many problems are indeed built on understanding and integrating this type of information. Regarding spatial contexts, a wide range of recognition tasks can benefit from understanding the relative position of two (or more) patches, such as image classification [1], semantic segmentation [38] and parsing [47], etc.

3 Our Approach

3.1 Problem and Baseline Solution

The problem of puzzle recognition assumes that an image is partitioned into a grid (e.g., 3×3) of patches and then disordered, and the task is to recover the original configuration (i.e., the patches ordered in their natural form). To accomplish this task, the network needs to understand what a patch contains as well as how two or more patches are related to each other (e.g., in a car image, a wheel often appears right above the ground). Therefore, we expect this task to teach a network both intra-patch and inter-patch information, which we formulate as unary terms and binary terms, respectively.

We first define the terminology used in this paper. Let x be an image, which is partitioned into M patches. Each patch, denoted x_m (m = 1, 2, …, M), is assigned a unique ID according to its original position, e.g., the row-major policy assigns IDs row by row from left to right. After that, all patches are randomly disordered, and we use c_n to denote the ID owned by the patch that currently occupies the n-th position. All c_n values compose a configuration, denoted as c = (c_1, c_2, …, c_M). There are in total M! different configurations, composing the configuration set C with |C| = M!.
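As a minimal illustration of this bookkeeping (the helper names are ours, and the patches are stand-in labels rather than pixel data), a configuration can be stored as a permutation of patch IDs:

```python
import random

def make_configuration(m, rng=None):
    """A configuration is a permutation: c[n] is the ID of the patch
    currently shown at position n (IDs follow the row-major order)."""
    rng = rng or random.Random(0)
    c = list(range(m))
    rng.shuffle(c)
    return c

def apply_configuration(patches, c):
    """Place the patch whose ID is c[n] at position n."""
    return [patches[c[n]] for n in range(len(c))]

def recover(disordered, c):
    """Invert the configuration to restore the original patch order."""
    original = [None] * len(c)
    for n, patch_id in enumerate(c):
        original[patch_id] = disordered[n]
    return original

patches = [f"p{i}" for i in range(9)]   # a 3x3 "image" of labeled patches
c = make_configuration(9)
assert recover(apply_configuration(patches, c), c) == patches
```

Solving a puzzle amounts to inferring c from the disordered patches alone, which is what the rest of this section formalizes.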

Our goal is to predict the correct configuration c*. For this purpose, a network structure with two parts was constructed [27]. The network backbone f(·) is applied to each individual patch and outputs a feature vector for the network head h(·) to produce the final output c = h(f), where f is the ordered concatenation of all patch features. In practice, f(·) is often borrowed from existing network architectures [19][40][14], while h(·) is often more interesting to investigate.

In the prior work [27][29], the network head worked by constraining the number of possible configurations, say P out of M!, which are randomly sampled from C using a greedy algorithm that guarantees the Hamming distance between any two sampled configurations is sufficiently large. Then, h(·) was designed to be a P-way classifier, implemented as a fully-connected layer. The purpose of this design was mainly to control the number of parameters of the classifier (proportional to P) so as to prevent over-fitting333[27] observed that setting a larger P leads to a performance drop in transfer experiments, and explained it as the network getting confused by very similar jigsaw puzzles. However, as shown in experiments (see Section 4.2), our approach works well on the entire puzzle set C, i.e., P = M!, which implies that the performance drop may be due to the large number of parameters., but we argue that it largely limits the model from being applied to more complex scenarios like 3D puzzles, while it was believed that learning from a harder task can lead to a stronger ability [4]. This motivates us to propose a new approach in which the number of configurations can be arbitrarily large while the number of parameters remains unchanged. We will see later that the essence behind this motivation is to use weak cues with an iterative algorithm towards a more compact representation and a safer learning process.
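The factorial growth that makes plain classification untenable is easy to check numerically. In the sketch below the grid sizes are illustrative, and P = 1,000 is the typical sampled-subset size used in [27]:

```python
import math

# the number of possible configurations grows factorially with the cell count
for name, cells in (("2x2", 4), ("3x3", 9), ("2x2x2", 8), ("3x3x3", 27)):
    print(f"{name}: {cells}! = {math.factorial(cells):,} configurations")

# a fixed P-way classifier (P = 1,000, as in [27]) covers only a
# vanishing fraction of the 3x3 configuration set
print(f"coverage of the 3x3 set: {1000 / math.factorial(9):.2e}")
```

For 3D grids the gap widens dramatically, which is why a classifier over a fixed configuration subset cannot scale to the scenarios considered in this paper.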

3.2 Solving Jigsaw Puzzles with Weak Cues

Figure 2: The overall structure (best viewed in color). Each training image (without semantic annotations) is randomly cropped, disordered and fed into the puzzle recognition network. Two types of loss terms (unary and binary) are computed and summed into the final cost function. The process continues until the puzzle is completely correct or a maximal number of rounds is reached.

We design a network head to learn weak spatial constraints. By “weak” we are comparing this strategy with the aforementioned P-way classifier that predicts the configuration of the entire puzzle all at once. Instead, we consider an indirect cost function E(c; f) which scores how well the patch with ID c_n (or, equivalently, its feature) fits position n, and thus the most probable configuration is determined by c* = argmax_c E(c; f). E(c; f) is composed of two parts, namely, unary terms and binary terms. Each unary term provides cues for the absolute position of a patch, and each binary term provides cues for the relative position of two patches. Mathematically,

E(c; f) = Σ_n U(c_n, n; f) + Σ_{n ≠ n'} B(c_n, c_{n'}; r(n, n')),     (1)

where r(n, n') denotes the relative position type between positions n and n'. Here, U(c_n, n; f) is a unary term which measures how likely it is that the patch with ID c_n is located at position n, and B(c_n, c_{n'}; r(n, n')) is a binary term which measures how likely it is that the patches with IDs c_n and c_{n'} have the spatial relationship indicated by positions n and n'. Each unary term is computed based on f, the overall variable containing the feature vectors of all patches, because the position of each patch depends on the visual messages delivered by the other patches. The binary terms, on the other hand, do not have such a dependency.

In practice, the unary terms are formulated as a matrix with M × M elements, each of which, u_{a,b}, indicates the score obtained by putting the patch with ID a at the position with ID b. This is implemented by a fully-connected layer, parameterized by θ_U, between f and these elements. We perform the softmax function over all elements in each row, so that the scores corresponding to each patch sum to 1444Ideally, the elements in each column should also sum to 1, but this is mathematically intractable if we hope to keep the ratios between all elements. There are two arguments. First, after normalizing the scores in each row, we find that there often exists one major element in each column, and the sum of each column is close to 1. Second, we added an additional loss term between the sum of each column and 1, but only observed minor changes in either puzzle recognition accuracy or transfer learning performance.. Then, each unary term is the log-likelihood of the score at the specified position:

U(c_n, n; f) = log u_{c_n, n}.     (2)

For each binary term involving two patch features f_a and f_b, we build another mapping from these two vectors to a 9-dimensional vector, with each index indicating the probability that the spatial relationship of the two patches belongs to one of 9 possibilities, namely, the first patch is located to the top, bottom, left, right, top-left, top-right, bottom-left or bottom-right of the second patch, or none of the above happens. Similarly, this is implemented using another fully-connected layer, parameterized by θ_B, between [f_a; f_b] ([·; ·] denotes concatenation) and a 9-dimensional vector, followed by a softmax activation over these 9 numbers. Denoting by b_{a,b}(r) the score for relation type r, we can write the binary term as:

B(c_n, c_{n'}; r(n, n')) = log b_{c_n, c_{n'}}(r(n, n')).     (3)
Compared to a plain classifier assigning a class index to each puzzle, the number of parameters required by our approach is reduced. Suppose each patch feature contains D elements, so that f contains M·D elements. On the one hand, a P-way classifier requires M·D·P parameters (a typical setting [27] is P = 1,000), which grows linearly with P. On the other hand, our approach requires M·D·M² parameters for the unary terms and 2D·9 parameters for the binary terms, neither of which increases with P. Consequently, our approach is easier to apply to scenarios with a larger set of (e.g., all possible) configurations. This advantage is verified in experiments.
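A toy sketch of how the two kinds of terms combine into one score. The tables u and b stand in for the network outputs (in the real system they are produced by the fully-connected heads); all names and values here are ours for illustration:

```python
import math

GRID = 3              # assumed 3x3 puzzle
M = GRID * GRID

def softmax_rows(scores):
    """Normalize each row so the scores of each patch sum to 1."""
    out = []
    for row in scores:
        mx = max(row)
        exps = [math.exp(s - mx) for s in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

def relation(n, n2):
    """Map an ordered pair of grid positions to one of 9 relation
    types: 8 adjacent directions, or 8 meaning 'none of the above'."""
    r1, c1 = divmod(n, GRID)
    r2, c2 = divmod(n2, GRID)
    dirs = {(-1, 0): 0, (1, 0): 1, (0, -1): 2, (0, 1): 3,
            (-1, -1): 4, (-1, 1): 5, (1, -1): 6, (1, 1): 7}
    return dirs.get((r2 - r1, c2 - c1), 8)

def energy(c, u, b):
    """Sum of unary log-scores plus binary log-scores for config c."""
    e = sum(math.log(u[c[n]][n]) for n in range(M))
    for n in range(M):
        for n2 in range(M):
            if n != n2:
                e += math.log(b[c[n]][c[n2]][relation(n, n2)])
    return e

# toy scores: the unary table strongly prefers patch n at position n,
# the binary table is uninformative (uniform over the 9 relations)
u = softmax_rows([[5.0 if a == b else 0.0 for b in range(M)] for a in range(M)])
b = [[[1.0 / 9] * 9 for _ in range(M)] for _ in range(M)]
identity = list(range(M))
swapped = [1, 0] + list(range(2, M))
assert energy(identity, u, b) > energy(swapped, u, b)
```

With uninformative binary scores the comparison is driven entirely by the unary table, mirroring the unary-only baseline studied in the experiments.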

Last but not least, there are many other ways of using weak spatial constraints to formulate the cost function – we just provide one practical example.

3.3 Optimization: Iterative Reorganization

We aim at optimizing the cost function with respect to the network parameters (denoted θ_U and θ_B for the unary and binary branches, respectively) and the configuration c. However, note that c is a discrete variable which cannot be optimized by gradient descent, so we apply different strategies in training and testing.

In the training stage, we know the ground-truth configuration c*, so the optimization becomes:

(θ_U*, θ_B*) = argmax_{θ_U, θ_B} E(c*; f).     (4)

This is implemented by setting the supervision signal accordingly, i.e., the correct cells are filled with 1 while the others with 0, and using stochastic gradient descent. Note that each unary term depends on the order of the input patches555We fully-connect f to the score matrix, which is an asymmetric function and thus makes the output sensitive to the order of the input. We could also design a symmetric function to deal with this issue, e.g., concatenating each patch feature with the average-pooled vector of the other patches to form the input, but this often causes information loss and leads to lower accuracy in both puzzle recognition and transfer learning tasks.. To sample more training data as well as to adjust the data distribution (explained later), we introduce iteration into the training stage. Denote the input configuration at round t as c^(t), and the corresponding feature as f^(t). In each round, with θ_U and θ_B fixed, we maximize E with respect to c:

ĉ^(t) = argmax_c E(c; f^(t)),     (5)

and use ĉ^(t) to find the next input c^(t+1), i.e., applying ĉ^(t) to c^(t) yields c^(t+1); for example, if ĉ^(t) is perfect, then c^(t+1) is the original configuration in which every patch is placed at the correct position. This process continues until convergence or until a maximal number of iterations is reached. The losses with respect to θ_U and θ_B are accumulated, averaged, and back-propagated to update these two sets of parameters. The same strategy, iteration, is used at the testing stage to solve jigsaw puzzles, with the only difference that no gradient back-propagation is required.
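The reorganization step between rounds is pure permutation bookkeeping. A minimal sketch (function name is ours) of how a predicted placement updates the input configuration:

```python
def reorganize(c_t, pred_pos):
    """One reorganization round.

    c_t[n] is the ID of the patch currently shown at position n, and
    pred_pos[n] is the model's predicted true position of that patch
    (pred_pos must be a permutation). Moving each shown patch to its
    predicted slot yields the next configuration.
    """
    c_next = [None] * len(c_t)
    for n, p in enumerate(pred_pos):
        c_next[p] = c_t[n]
    return c_next

# if the prediction is perfect (each patch's true position equals its
# ID), a single round restores the identity configuration
c_t = [2, 0, 3, 1]
perfect = [c_t[n] for n in range(4)]
assert reorganize(c_t, perfect) == [0, 1, 2, 3]

# if the model believes the current arrangement is already correct,
# the configuration is a fixed point and iteration stops
assert reorganize(c_t, [0, 1, 2, 3]) == c_t
```

A partially correct prediction leaves a configuration closer to the identity, which is exactly the distribution shift that the iterative training scheme exploits.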

It remains a problem to solve Eqn (5). This is a combinatorial optimization problem, as c can only take discrete values which indicate the entries in the unary and binary score tables that are summed up. There is obviously no closed-form solution that maximizes E, yet enumerating all possibilities is computationally intractable, especially when the puzzle size becomes large. A possible solution lies in approximation, which first switches off all binary terms, so that the optimization becomes choosing M entries from an M × M matrix with a maximal sum, under the constraint that no two entries appear in the same row or column (this is a max-cost-max-matching problem, and the best solution can be found using the Hungarian algorithm); it then enumerates all possibilities within a limited Hamming distance from this solution and chooses the one with the best overall cost E.
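The unary-only subproblem is a textbook assignment problem. Below is a brute-force sketch for tiny puzzles (our own toy example; a practical implementation would use the Hungarian algorithm, e.g. scipy.optimize.linear_sum_assignment, since enumeration is factorial in the number of patches):

```python
from itertools import permutations

def best_assignment(u):
    """Unary-only solver: pick one entry per (patch, position) pair,
    no two entries in the same row or column, maximizing the total
    score. u[a][b] is the score of patch a at position b."""
    m = len(u)
    best, best_c = float("-inf"), None
    for perm in permutations(range(m)):
        s = sum(u[perm[n]][n] for n in range(m))  # patch perm[n] at position n
        if s > best:
            best, best_c = s, list(perm)
    return best_c

# a toy score matrix whose best matching is the anti-diagonal
u = [[0.1, 0.2, 0.9],
     [0.1, 0.8, 0.3],
     [0.7, 0.2, 0.1]]
assert best_assignment(u) == [2, 1, 0]
```

The approximation described above then perturbs this matching within a small Hamming radius and re-scores each candidate with the full cost, binary terms included.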

Finally, we discuss the strategy of introducing iteration to solve this problem. Mathematically, Eqn (5) is a fixed-point model [23], i.e., the output variable c also impacts f and thus E, so iteration is a regular way of optimizing it. However, the roles played by iteration are different in training and testing. In the training stage, after each iteration, we expect the configuration to be adjusted closer to the ground-truth. Therefore, if we take the input configuration fed into each round as an individual case, then the distribution of input data is changed by iteration, and cases that are more similar to the ground-truth are more likely to be sampled. In the testing stage, we can then expect iteration to improve puzzle recognition accuracy, because as the iteration continues, the input puzzle statistically gets closer to the ground-truth, a scenario in which our model has seen more training data and is thus stronger. We show a typical example in Figure 3, in which we can observe how iteration gradually predicts the correct configuration.

4 Experiments

4.1 Jigsaw Puzzle Recognition

Setting Pre-training Options Puzzle Recognition PascalVOC 2007
ID Size Backbone Label Unary Binary Mirror Correct Classifi. Detec.
(a) AlexNet
(b) AlexNet
(c) AlexNet
(d) AlexNet
(e) AlexNet
(f) AlexNet
(g) ResNet18
(h) ResNet18
(i) ResNet18
(j) ResNet18
(k) ResNet18
(l) ResNet50
(m) ResNet50
(n) ResNet50
(o) ResNet50
(p) ResNet50
Competitors with Different Backbones, Pre-training Cues and Settings
Ref. Year Backbone Description of unsupervised training Classifi. Detec.
[6] 2015 AlexNet Determining the relative spatial position of two patches
[42] 2015 AlexNet Unsupervised tracking in videos
[27] 2016 AlexNet 3×3 jigsaw puzzles with a 1,000-way plain classifier
[20] 2017 ResNet152 Predicting color from gray-scale intensity
[28] 2017 AlexNet Counting visual primitives in subregions
[2] 2018 AlexNet Classifying after clustering iteratively
[10] 2018 AlexNet Predicting 2D image rotations
[26] 2018 AlexNet [6] with enhancement techniques
[29] 2018 VGGNet16 [27] with knowledge distillation and noisy patches
[35] 2018 AlexNet Predicting surface normal, depth, and instance contour
Table 1: Puzzle recognition and transfer learning accuracy (%). In the pre-training options, “Label” means using the annotated ILSVRC2012 training set to pre-train the network. The rows without any ✓ imply that the PascalVOC 2007 tasks are trained from scratch. We also compare with prior approaches, some of which have different knowledge sources, network backbones and training strategies. We report the most powerful network backbone used in each paper. The works based on puzzle recognition are highlighted in green.

We follow [27] to train and evaluate puzzle recognition on the ILSVRC2012 dataset [39], a subset of the ImageNet database [5]. We train the model using all the training images and test it on the validation set of 50,000 images; class annotations are not used in either case.

In the training stage, we pre-process the images to prevent the model from exploiting pixel-level information. We first determine the size of the puzzles, e.g., 3×3, then resize each input image and partition it evenly into a grid. Within each cell, we randomly crop a subimage as the patch fed into the puzzle recognition network. To maximally reduce the possibility that low-level information is used, we further horizontally flip each input patch with a probability of 0.5 and subtract the mean value from each channel – we do not perform other data augmentation techniques because they are less likely to appear in real data. In practice, flip augmentation brings a consistent accuracy gain to transfer learning tasks, though we observe a significant accuracy drop in puzzle recognition (see Table 1).

Figure 3: Two examples of different difficulties in iterative puzzle recognition (best viewed in color). The digit at the lower-left corner of each patch is the corresponding patch ID. For each round, we also report puzzle recognition statistics over the entire testing set.

The backbone of our puzzle network is borrowed from two popular architecture families, namely, the 8-layer AlexNet [19] and two deeper ResNets [14] with 18 and 50 layers. We do not evaluate VGGNet [40] as in [20][29] because it is more difficult to initialize and produces lower accuracy than the ResNets. The outputs of the first layer with a 1×1 spatial resolution (i.e., fc6 in AlexNet and avg-pool in the ResNets) are fed into a fully-connected layer whose output is taken as the patch feature, followed by our designed layers for extracting unary and binary terms for puzzle recognition. All these networks are trained from scratch with the SGD optimizer, with more iterations (mini-batches) used for the ResNets than for AlexNet. Training is performed on four NVIDIA V100 GPUs, and the training time grows with the depth of the backbone.

In the testing stage, to reduce randomization factors, we switch off randomization in patch cropping and data augmentation, with each patch cropped at the center of its cell and not flipped. Results are summarized in Table 1. We first evaluate puzzle recognition accuracy. For each image, there are 9! = 362,880 possible 3×3 puzzles, so random guessing gives a nearly zero accuracy. With only unary terms (Eqn (5) can then be solved exactly by the Hungarian algorithm), all network backbones achieve a high accuracy without mirror augmentation, which shows that weak visual cues can be combined to infer global patch contexts.

On top of this baseline, we investigate the impact of four other options. First, adding binary terms consistently improves puzzle recognition accuracy, arguably due to the additional contextual information, which is especially useful in determining the relative position of two neighboring patches. Second, mirror augmentation reduces puzzle recognition accuracy dramatically in both training and testing, but as we will see later, this strategy improves the generalization ability of our pre-trained models to other recognition tasks. Third, larger jigsaw puzzles are naturally more difficult to solve, but they also force the model to learn more visual knowledge and thus help transfer learning, as shown in our later discussions. Fourth, the above phenomena remain the same as the network backbone becomes stronger, with which both puzzle recognition and transferred visual recognition become more accurate.

As a side comment, we point out that conventional puzzle recognition approaches with plain classification [27][29] often achieved higher puzzle recognition accuracy on a limited class set. With a model trained by our approach (Line (e) in Table 1), we enumerate the classes generated by the algorithm provided in [27] and pick the one maximizing the cost function, so as to mimic the behavior of plain classification. Our AlexNet model reports a puzzle recognition accuracy lower than that reported in [27]. However, our approach enjoys better transfer ability, as we will see in later experiments. In addition, the performance of [27] degenerates with increased puzzle size, as the fraction of explored puzzles becomes smaller, and the weakness of ignoring the underlying relationship between different configurations becomes more significant and harmful. From this perspective, the advantage of solving arbitrary puzzles becomes clearer. The same phenomenon also happens in 3D puzzles (Section 4.3).

Some statistics of our model with ResNet50 (Line (p) in Table 1), as well as two typical examples, are shown in Figure 3 (one is difficult and not solved). We can observe how the disordered patches are reorganized with weak spatial cues throughout an iterative process. As an ablation study, we experiment with smaller numbers of maximal iterations and achieve lower accuracies in both puzzle recognition and transfer learning tasks. This justifies our hypothesis that iteration, together with weak spatial cues, provides a mild way of unsupervised learning which better fits state-of-the-art deep networks.

4.2 Transfer Learning Performance

Next, we investigate how well our models pre-trained on puzzle recognition transfer to other visual recognition tasks. Following the conventions [29][2], we evaluate classification and detection tasks on the PascalVOC 2007 dataset [9]. All pre-trained networks undergo a standard fine-tuning pipeline, with a plain classifier and Fast-RCNN [11] used as network heads, respectively. We do not lock any layers in our network, because this often leads to worse transfer performance, as shown in prior approaches [27][2][10].

Results are summarized in Table 1, from which we can observe some interesting phenomena. First, transfer recognition performance goes up with the power of the network backbone, which shows the ability of our approach to tap the potential of deep networks. Second, both unary and binary terms contribute to transfer accuracy, and they are complementary. Third, mirror augmentation harms puzzle recognition but improves transfer learning, because it alleviates the risk that deep networks exploit low-level pixel continuity to solve the jigsaw puzzles, which is a form of over-fitting and helps transfer recognition very little.

Here is a side note. It was suggested in [27] that forcing the network to discriminate very similar puzzles (e.g., with only a pair of patches swapped) often leads to an accuracy drop because the model can focus too much on local patterns. In the context of using AlexNet to solve 3×3 puzzles, we study different numbers of configurations, from small subsets up to all 9! possible puzzles. We find that our approach reports the best transfer accuracy with the last option, while using smaller numbers of configurations leads to slightly worse performance. Hence, we make the following conjecture: it is the larger number of parameters in a plain classifier, rather than solving very similar puzzles, that causes the transfer performance drop.

Last, we evaluate the quality of features extracted directly from the pre-trained models (the first fully-connected layer, without fine-tuning). We apply a linear SVM to the Caltech256 dataset [13] for generic object classification. Our model based on AlexNet with unary terms, binary terms and mirror augmentation (Line (e) in Table 1) reports a clearly higher accuracy, while our direct competitors [27] and [29] are almost of the same quality as a randomly-initialized AlexNet.

4.3 Generalization to 3D Networks

Finally, we apply our model to a 3D visual recognition task in the area of medical imaging analysis, an important prerequisite for computer-assisted diagnosis (CAD). Most medical data are volumetric (i.e., appearing in a 3D form), and researchers have proposed several 3D network architectures [3][25]. Compared to 2D networks [36][46], 3D networks enjoy the benefit of seeing more contextual information, but suffer the drawback of lacking a pre-trained model. Since the amount of training data is commonly limited, these 3D networks often have a relatively unstable training process, which sometimes downgrades their testing accuracy [44].

Our approach provides a solution for initializing 3D networks with jigsaw puzzles. We investigate the NIH pancreas segmentation dataset [37], which contains cases. We partition it into folds (around cases in each fold), use three of them to train a segmentation model, and test it on the remaining one. To construct jigsaw puzzles, we either directly use the training samples in the NIH dataset, or refer to another public dataset named Medical Segmentation Decathlon (MSD), of which we use the pancreas tumour subset with training cases. For all the data used for jigsaw puzzles, we do not use any pixel-level annotations even though they are provided. We randomly crop volumes within each case, and cut each crop evenly into one of two puzzle sizes, namely, pieces with a subvolume cropped within each cell, or pieces with a subvolume cropped within each cell. A typical example is shown in Figure 1. We randomly disorder these patches using all or possible configurations, and the task is to recover the original configuration. We use VNet [25] as the baseline (only its down-sampling layers are used in this stage), and compute the unary terms in an or matrix. We switch off the binary terms because each patch has neighbors in the 3D space, which over-complicates the prediction.
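The 3D puzzle construction above can be sketched as follows. The crop, grid, and cell sizes here are illustrative placeholders rather than the paper's exact settings:

```python
import numpy as np

def make_3d_jigsaw(volume: np.ndarray, grid: int, cell: int, seed: int = 0):
    """Cut a cubic crop of `volume` into a grid x grid x grid jigsaw.

    A subvolume of side `cell` is taken from each grid cell and the
    resulting patches are randomly permuted. Returns the shuffled patch
    stack and the permutation needed to restore the original order.
    """
    rng = np.random.default_rng(seed)
    side = grid * cell
    # Random corner of a (side, side, side) crop inside the volume.
    corner = [rng.integers(0, s - side + 1) for s in volume.shape]
    crop = volume[corner[0]:corner[0] + side,
                  corner[1]:corner[1] + side,
                  corner[2]:corner[2] + side]
    # Split the crop evenly into grid**3 cells of shape (cell,)*3.
    patches = []
    for i in range(grid):
        for j in range(grid):
            for k in range(grid):
                patches.append(crop[i * cell:(i + 1) * cell,
                                    j * cell:(j + 1) * cell,
                                    k * cell:(k + 1) * cell])
    perm = rng.permutation(len(patches))  # disorder the patches
    shuffled = np.stack([patches[p] for p in perm])
    return shuffled, perm

# Toy volume standing in for a CT scan; a 2x2x2 puzzle of 16^3 patches.
vol = np.arange(64 ** 3, dtype=np.float32).reshape(64, 64, 64)
patches, perm = make_3d_jigsaw(vol, grid=2, cell=16)
```

The pre-training target is then to predict `perm` (or, in our formulation, the unary cell scores) from the shuffled patch stack.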

Table 2: Pancreas segmentation accuracy (DSC) with different amounts of training data and different initialization techniques (columns: training from scratch, pre-trained on NIH, and pre-trained on MSD). In each group, the accuracy is averaged over the testing cases.

Now we recover the complete VNet structure with randomly-initialized up-sampling layers and start training on the NIH training set ( cases) as well as its subsets. The results, shown in Table 2, reveal some useful knowledge. First, pre-training on jigsaw puzzles indeed helps segmentation, especially in scenarios with less training data. Second, visual knowledge learned in this manner can transfer across different datasets, regardless of their different intensity distributions (caused by the scanning devices). Third, constructing larger and thus more difficult puzzles improves the basic ability of the networks. This demonstrates the value of our research: note that it is unlikely for the baseline approach to sufficiently explore the space of puzzles, which has different configurations.

5 Conclusions

This work generalizes the framework of jigsaw puzzle recognition, which was previously studied only in a constrained case. To this end, we change the network head from a plain -way classifier to a combinatorial optimization problem which uses both unary and binary weak spatial cues. This strategy reduces the number of learnable parameters in the model, and thus alleviates the risk of over-fitting. The increased flexibility of pre-training allows us to apply our approach to a wide range of transfer learning tasks, including using it directly for feature extraction and generalizing it to 3D scenarios to provide an initialization for other tasks, e.g., medical image segmentation.
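As an illustration, the iterative adjustment over unary and binary cues can be sketched as a greedy swap procedure over a configuration cost. This is a simplified stand-in: the cost terms, the relation encoding in `neighbors`, and the swap schedule are illustrative assumptions, not the paper's exact solver:

```python
import numpy as np
from itertools import combinations

def config_cost(order, unary, binary, neighbors):
    """Cost of a configuration. `order[c]` is the patch placed in cell
    c; unary[p, c] scores patch p in cell c, and binary[p, q, r] scores
    patch q standing in relation r to patch p. Lower is better.
    `neighbors` lists (cell_a, cell_b, relation) triples."""
    cost = sum(unary[p, c] for c, p in enumerate(order))
    for a, b, r in neighbors:
        cost += binary[order[a], order[b], r]
    return cost

def iterative_reorganize(unary, binary, neighbors, max_steps=100):
    """Greedy iterative solver: repeatedly apply the pairwise swap that
    lowers the cost most, until no swap improves (convergence)."""
    n = unary.shape[0]
    order = list(range(n))
    for _ in range(max_steps):
        best, best_cost = None, config_cost(order, unary, binary, neighbors)
        for i, j in combinations(range(n), 2):
            order[i], order[j] = order[j], order[i]
            c = config_cost(order, unary, binary, neighbors)
            if c < best_cost:
                best, best_cost = (i, j), c
            order[i], order[j] = order[j], order[i]
        if best is None:
            break  # converged: no improving swap exists
        i, j = best
        order[i], order[j] = order[j], order[i]
    return order

# Toy example: a 4-cell puzzle with no binary terms, whose unary costs
# are minimized by the configuration [2, 0, 3, 1].
unary = np.ones((4, 4))
for cell, patch in enumerate([2, 0, 3, 1]):
    unary[patch, cell] = 0.0
recovered = iterative_reorganize(unary, np.zeros((4, 4, 1)), [])
```

Because each step only needs local swap evaluations, this formulation scales to arbitrary grid sizes, in contrast to a classifier enumerating all configurations.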

Our study reveals the ease and benefits of learning to recognize weak visual cues in unsupervised learning, where the key problem often lies in finding a compact way of representing knowledge, e.g., decomposing the entire puzzle into unary and binary terms. We point out that the exploration of unsupervised learning is still far from complete. In the future, we will apply our method to less structured data such as graphs [18] and more structured data such as videos [17], and explore its ability to learn visual knowledge in an unsupervised manner.


  • [1] J. Aghajanian, J. Warrell, S. J. Prince, P. Li, J. L. Rohn, and B. Baum. Patch-based within-object classification. In International Conference on Computer Vision, 2009.
  • [2] M. Caron, P. Bojanowski, A. Joulin, and M. Douze. Deep clustering for unsupervised learning of visual features. In European Conference on Computer Vision, 2018.
  • [3] O. Cicek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger. 3d u-net: learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2016.
  • [4] J. Deng, A. C. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us? In European Conference on Computer Vision, 2010.
  • [5] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009.
  • [6] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In International Conference on Computer Vision, 2015.
  • [7] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, 2014.
  • [8] D. Erhan, Y. Bengio, A. Courville, P. A. Manzagol, P. Vincent, and S. Bengio. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11(Feb):625–660, 2010.
  • [9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
  • [10] S. Gidaris, P. Singh, and N. Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations, 2018.
  • [11] R. Girshick. Fast r-cnn. In International Conference on Computer Vision, 2015.
  • [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition, 2014.
  • [13] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, 2016.
  • [15] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In Computer Vision and Pattern Recognition, 2017.
  • [16] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache. Learning visual features from large weakly supervised data. In European Conference on Computer Vision, 2016.
  • [17] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition, 2014.
  • [18] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
  • [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
  • [20] G. Larsson, M. Maire, and G. Shakhnarovich. Colorization as a proxy task for visual understanding. In Computer Vision and Pattern Recognition, 2017.
  • [21] Q. V. Le. Building high-level features using large scale unsupervised learning. In International Conference on Speech and Signal Processing, 2013.
  • [22] Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 521(7553):436, 2015.
  • [23] Q. Li, J. Wang, D. Wipf, and Z. Tu. Fixed-point model for structured labeling. In International Conference on Machine Learning, 2013.
  • [24] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Computer Vision and Pattern Recognition, 2015.
  • [25] F. Milletari, N. Navab, and S. A. Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In International Conference on 3D Vision, 2016.
  • [26] T. N. Mundhenk, D. Ho, and B. Y. Chen. Improvements to context based self-supervised learning. In Computer Vision and Pattern Recognition, 2018.
  • [27] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, 2016.
  • [28] M. Noroozi, H. Pirsiavash, and P. Favaro. Representation learning by learning to count. In International Conference on Computer Vision, 2017.
  • [29] M. Noroozi, A. Vinjimoor, P. Favaro, and H. Pirsiavash. Boosting self-supervised learning via knowledge transfer. In Computer Vision and Pattern Recognition, 2018.
  • [30] G. Papandreou, L. C. Chen, K. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a dcnn for semantic image segmentation. International Conference on Computer Vision, 2015.
  • [31] S. Qiao, W. Shen, Z. Zhang, B. Wang, and A. L. Yuille. Deep co-training for semi-supervised image recognition. In European Conference on Computer Vision, 2018.
  • [32] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2016.
  • [33] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition, 2014.
  • [34] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 2015.
  • [35] Z. Ren and Y. J. Lee. Cross-domain self-supervised multi-task feature learning using synthetic imagery. In Computer Vision and Pattern Recognition, 2018.
  • [36] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015.
  • [37] H. R. Roth, L. Lu, A. Farag, H. Shin, J. Liu, E. B. Turkbey, and R. M. Summers. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015.
  • [38] F. Rousseau, P. A. Habas, and C. Studholme. A supervised patch-based approach for human brain labeling. IEEE Transactions on Medical Imaging, 30(10):1852–1862, 2011.
  • [39] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  • [40] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
  • [41] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, et al. Going deeper with convolutions. In Computer Vision and Pattern Recognition, 2015.
  • [42] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In International Conference on Computer Vision, 2015.
  • [43] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70(1):77–90, 2006.
  • [44] Y. Xia, L. Xie, F. Liu, Z. Zhu, E. K. Fishman, and A. L. Yuille. Bridging the gap between 2d and 3d organ segmentation with volumetric fusion net. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2018.
  • [45] J. Yang, D. Parikh, and D. Batra. Joint unsupervised learning of deep representations and image clusters. In Computer Vision and Pattern Recognition, 2016.
  • [46] Q. Yu, L. Xie, Y. Wang, Y. Zhou, E. K. Fishman, and A. L. Yuille. Recurrent saliency transformation network: Incorporating multi-stage visual cues for small organ segmentation. In Computer Vision and Pattern Recognition, 2018.
  • [47] Z. Zhang, C. Xie, J. Wang, L. Xie, and A. L. Yuille. Deepvoting: An explainable framework for semantic part detection under partial occlusion. In Computer Vision and Pattern Recognition, 2018.
  • [48] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In International Conference on Computer Vision, 2017.