Invariant Information Distillation for Unsupervised Image Segmentation and Clustering

07/17/2018 · by Xu Ji, et al. · University of Oxford

We present a new method that learns to segment and cluster images without labels of any kind. A simple loss based on information theory is used to extract meaningful representations directly from raw images. This is achieved by maximising mutual information between images known to be related by spatial proximity or randomised transformations, which distils their shared abstract content. Unlike much of the work in unsupervised deep learning, our learned function outputs segmentation heatmaps and discrete classification labels directly, rather than embeddings that need further processing to be usable. The loss can be formulated as a convolution, making it the first end-to-end unsupervised learning method that learns densely and efficiently (i.e. without sampling) for semantic segmentation. Implemented using realistic settings on generic deep neural network architectures, our method attains superior performance on COCO-Stuff and ISPRS-Potsdam for segmentation and STL10 for clustering, beating state-of-the-art baselines.


1 Introduction

Most supervised deep learning methods require large quantities of manually labelled data, limiting their applicability in many scenarios. This is true for large-scale image classification and even more for segmentation (pixel-wise classification) where the annotation cost per image is very high [37, 20]. Unsupervised clustering, on the other hand, aims to group data points into classes entirely without labels [24]. Many authors have sought to combine mature clustering algorithms with deep learning, for example by bootstrapping network training with k-means style objectives [49, 23, 7]. However, trivially combining clustering and representation learning methods often leads to degenerate solutions [7, 49]. It is precisely to prevent such degeneracy that cumbersome pipelines — involving pre-training, feature post-processing (whitening or PCA), clustering mechanisms external to the network — have evolved [7, 16, 17, 49].

Figure 1: Models trained with IIC on entirely unlabelled data learn to cluster images (top, STL10) and patches (bottom, Potsdam-3). The raw clusters found directly correspond to semantic classes (dogs, cats, trucks, roads, vegetation etc.) with state-of-the-art accuracy. Training is end-to-end and randomly initialised, with no heuristics used at any stage.

In this paper, we introduce Invariant Information Clustering (IIC), a method that addresses this issue in a more principled manner. IIC is a generic clustering algorithm that directly trains a randomly initialised neural network into a classification function, end-to-end and without any labels. It involves a simple objective function, which is the mutual information between the function’s classifications for paired data samples. The input data can be of any modality and, since the clustering space is discrete, mutual information can be computed exactly.

Despite its simplicity, IIC is intrinsically robust to two issues that affect other methods. The first is clustering degeneracy, which is the tendency for a single cluster to dominate the predictions or for clusters to disappear (which can be observed with k-means, especially when combined with representation learning [7]). Due to the entropy maximisation component within mutual information, the loss is not minimised if all images are assigned to the same class. At the same time, it is optimal for the model to predict for each image a single class with certainty (i.e. one-hot) due to the conditional entropy minimisation (fig. 3). The second issue is noisy data with unknown or distractor classes (present in STL10 [10] for example). IIC addresses this issue by employing an auxiliary output layer that is parallel to the main output layer, trained to produce an overclustering (i.e. same loss function but greater number of clusters than the ground truth) that is ignored at test time. Auxiliary overclustering is a general technique that could be useful for other algorithms. These two features of IIC contribute to making it the only method amongst our unsupervised baselines that is robust enough to make use of the noisy unlabelled subset of STL10, a version of ImageNet [14] specifically designed as a benchmark for unsupervised clustering.

In the rest of the paper, we begin by explaining the difference between semantic clustering and intermediate representation learning (section 2), which separates our method from the majority of work in unsupervised deep learning. We then describe the theoretical foundations of IIC in statistical learning (section 3), demonstrating that maximising mutual information between pairs of samples under a bottleneck is a principled clustering objective which is equivalent to distilling their shared abstract content (co-clustering). We propose that for static images, an easy way to generate pairs with shared abstract content from unlabelled data is to take each image and its random transformation, or each patch and a neighbour. We show that maximising MI automatically avoids degenerate solutions and can be written as a convolution in the case of segmentation, allowing for efficient implementation with any deep learning library.

We perform experiments on a large number of datasets (section 4) including STL, CIFAR, MNIST, COCO-Stuff and Potsdam, setting a new state-of-the-art on unsupervised clustering and segmentation in all cases, with results of 61.0%, 61.7% and 72.3% on STL10, CIFAR10 and COCO-Stuff-3 beating the closest competitors (53.0%, 52.2%, 54.0%) with significant margins. Note that training deep neural networks to perform large scale, real-world segmentations from scratch, without labels or heuristics, is a highly challenging task with negligible precedent. We also perform an ablation study and additionally test two semi-supervised modes, setting a new global state-of-the-art of 88.8% on STL10 over all supervised, semi-supervised and unsupervised methods, and demonstrating the robustness in semi-supervised accuracy when 90% of labels are removed.

2 Related work

Figure 2: IIC for image clustering. Dashed line denotes shared parameters, g is a random transformation, and I denotes mutual information (eq. 3).
Figure 3: Training with IIC on unlabelled MNIST in successive epochs from random initialisation (left). The network directly outputs cluster assignment probabilities for input images, and each is rendered as a coordinate by convex combination of 10 cluster vertices. There is no cherry-picking as the entire dataset is shown in every snapshot. Ground truth labelling (unseen by model) is given by colour. At each cluster the average image of its assignees is shown. With neither labels nor heuristics, the clusters discovered by IIC correspond perfectly to unique digits, with one-hot certain prediction (right).

Co-clustering and mutual information.

The use of information as a criterion to learn representations is not new. One of the earliest works to do so is by Becker and Hinton [3]. More generally, learning from paired data has also been explored in co-clustering [24] and in other works [47] that build on the information bottleneck principle [19].

Several recent papers have used information as a tool to train deep networks in particular, albeit not for discrete clustering. IMSAT [27] maximises mutual information between data and its representation, and DeepINFOMAX [26] maximises information between spatially-preserved features and compact features. However, IMSAT and DeepINFOMAX combine information with other criteria, whereas in our method information is the only criterion used. Furthermore, both IMSAT and DeepINFOMAX compute mutual information over continuous random variables, which requires complex estimators [4], whereas IIC does so for discrete variables with simple and exact computations. Finally, DeepINFOMAX considers the information I(x, f(x)) between the features x and a deterministic function f(x) of them, which is in principle the same as the entropy H(x); in contrast, in IIC information does not trivially reduce to entropy.

Semantic clustering versus intermediate representation learning.

In semantic clustering, the learned function directly outputs discrete assignments for high level (i.e. semantic) clusters. Intermediate representation learners, on the other hand, produce continuous, distributed, high-dimensional representations that must be post-processed, for example by k-means, to obtain the discrete low-cardinality assignments required for unsupervised semantic clustering. The latter includes objectives such as generative autoencoder image reconstruction [46], triplets [44] and spatial-temporal order or context prediction [36, 12, 16], for example predicting patch proximity [29], solving jigsaw puzzles [40] and inpainting [41]. Note it also includes a number of clustering methods (DeepCluster [7], exemplars [17]) where the clustering is only auxiliary; a clustering-style objective is used but does not produce groups with semantic correspondence. For example, DeepCluster [7] is a state-of-the-art method for learning highly-transferable intermediate features using overclustering as a proxy task, but does not automatically find semantically meaningful clusters. As these methods use auxiliary objectives divorced from the semantic clustering objective, it is unsurprising that they perform worse than IIC (section 4), which directly optimises for it, training the network end-to-end with the final clusterer implicitly wrapped inside.

Optimising image-to-image distance.

Many approaches to deep clustering, whether semantic or auxiliary, utilise a distance function between input images that approximates a given grouping criterion. Agglomerative clustering [2] and partially ordered sets [1] of HOG features [13] have been used to group images, and exemplars [17] define a group as a set of random transformations applied to a single image. Note the latter does not scale easily, in particular to image segmentation where a single image would call for 40k classes. DAC [8], JULE [50], DeepCluster [7], ADC [23] and DEC [49] rely on the inherent visual consistency and disentangling properties [22] of CNNs to produce cluster assignments, which are processed and reinforced in each iteration. The latter three are based on k-means style mechanisms to refine feature centroids, which is prone to degenerate solutions [7] and thus needs explicit prevention mechanisms such as pre-training, cluster-reassignment or feature cleaning via PCA and whitening [49, 7].

Invariance as a training objective.

Optimising for function outputs to be persistent through spatio-temporal or non-material distortion is an idea shared by IIC with several works, including exemplars [17], IMSAT [27], proximity prediction [29], the denoising objective of Tagger [21], temporal slowness constraints [53], and optimising for features to be invariant to local image transformations [45, 28]. More broadly, the problem of modelling data transformation has received significant attention in deep learning, one example being the transforming autoencoder [25].

3 Method

First we introduce a generic objective, Invariant Information Clustering, which can be used to cluster any kind of unlabelled paired data by training a network to predict cluster identities (section 3.1). We then apply it to image clustering (section 3.2, fig. 2 and fig. 3) and segmentation (section 3.3), by generating the required paired data using random transformations and spatial proximity.

3.1 Invariant Information Clustering

Let x, x′ ∈ X be a paired data sample from a joint probability distribution P(x, x′). For example, x and x′ could be different images containing the same object. The goal of Invariant Information Clustering (IIC) is to learn a representation Φ : X → Y that preserves what is in common between x and x′ while discarding instance-specific details. The former can be achieved by maximising the mutual information between encoded variables:

$\max_{\Phi} I(\Phi(x), \Phi(x'))$    (1)

which is equivalent to maximising the predictability of Φ(x) from Φ(x′) and vice versa.

An effect of eq. 1, in general, is to make representations of paired samples the same. However, it is not the same as merely minimising representation distance, as done for example in methods based on k-means [7, 23]: the presence of entropy within mutual information allows us to avoid degeneracy, as discussed in detail below.

If Φ is a neural network with a small output capacity (often called a “bottleneck”), eq. 1 also has the effect of discarding instance-specific details from the data. Clustering imposes a natural bottleneck, since the representation space is Y = {1, …, C}, a finite set of class indices (as opposed to an infinite vector space). Without a bottleneck, i.e. assuming unbounded capacity, eq. 1 is trivially solved by setting Φ to the identity function because of the data processing inequality [11], i.e. I(x, x′) ≥ I(Φ(x), Φ(x′)).

Since our goal is to learn the representation with a deep neural network, we consider soft rather than hard clustering, meaning the neural network Φ is terminated by a (differentiable) softmax layer. Then the output Φ(x) ∈ [0, 1]^C can be interpreted as the distribution of a discrete random variable z over C classes, formally given by P(z = c | x) = Φ_c(x). Making the output probabilistic amounts to allowing for uncertainty in the cluster assigned to an input.

Consider now a pair of such cluster assignment variables z and z′ for two inputs x and x′ respectively. Their conditional joint distribution is given by P(z = c, z′ = c′ | x, x′) = Φ_c(x) · Φ_{c′}(x′). This equation states that z and z′ are independent when conditioned on specific inputs x and x′; however, in general they are not independent after marginalization over a dataset of input pairs (x_i, x′_i), i = 1, …, n. For example, for a trained classification network Φ and a dataset of image pairs where each image contains the same object of its pair but in a randomly different position, the random variable constituted by the class of the first of each pair, z, will have a strong statistical relationship with the random variable for the class of the second of each pair, z′; one is predictive of the other (in fact identical to it, in this case) so they are highly dependent. After marginalization over the dataset (or batch, in practice), the joint probability distribution is given by the C × C matrix P, where each element at row c and column c′ constitutes P_{cc′} = P(z = c, z′ = c′):

$P = \frac{1}{n} \sum_{i=1}^{n} \Phi(x_i) \cdot \Phi(x'_i)^{\top}$    (2)

The marginals P_c = P(z = c) and P_{c′} = P(z′ = c′) can be obtained by summing over the rows and columns of this matrix. As we generally consider symmetric problems, where for each (x_i, x′_i) we also have (x′_i, x_i), P is symmetrized using (P + P^⊤)/2.

Now the objective function eq. 1 can be computed by plugging the matrix P into the expression for mutual information [35], which results in the formula:

$I(z, z') = I(P) = \sum_{c=1}^{C} \sum_{c'=1}^{C} P_{cc'} \cdot \ln \frac{P_{cc'}}{P_{c} \cdot P_{c'}}$    (3)

Why degenerate solutions are avoided.

Mutual information (3) expands to I(z, z′) = H(z) − H(z | z′). Hence, maximising this quantity trades off minimising the conditional cluster assignment entropy H(z | z′) and maximising the individual cluster assignment entropy H(z). The smallest value of H(z | z′) is 0, obtained when the cluster assignments are exactly predictable from each other. The largest value of H(z) is ln C, obtained when all clusters are equally likely to be picked. This occurs when the data is assigned evenly between the clusters, equalising their mass. Therefore the loss is not minimised if all samples are assigned to a single cluster (i.e. the output class is identical for all samples). Thus, as maximising mutual information naturally balances reinforcement of predictions with mass equalisation, it avoids the tendency for degenerate solutions that algorithms which combine k-means with representation learning are susceptible to [7]. For further discussion of entropy maximisation, and optionally how to prioritise it with an entropy coefficient, see supplementary material.
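
Restating the trade-off just described as a worked chain of identities and bounds (standard properties of discrete entropy):

$I(z, z') = H(z) - H(z \mid z'), \qquad 0 \le H(z \mid z') \le H(z) \le \ln C .$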

Meaning of mutual information.

The reader may now wonder what the benefits of maximising mutual information are, as opposed to merely maximising entropy. Firstly, due to the soft clustering, entropy alone could be maximised trivially by setting all prediction vectors Φ(x) to uniform distributions, resulting in no clustering. This is corrected by the conditional entropy component, which encourages deterministic one-hot predictions. For example, even for the degenerate case of identical pairs x = x′, the IIC objective encourages a deterministic clustering function (i.e. Φ(x) is a one-hot vector) as this results in null conditional entropy H(z | z′) = 0. Secondly, the objective of IIC is to find what is common between two data points that share redundancy, such as different images of the same object, explicitly encouraging distillation of the common part while ignoring the rest, i.e. instance details specific to one of the samples. This would not be possible without pairing samples.

3.2 Image clustering

IIC requires a source of paired samples (x, x′), which are often unavailable in unsupervised image clustering applications. In this case, we propose to use generated image pairs, consisting of an image x and its randomly perturbed version x′ = gx. The objective eq. 1 can thus be written as:

$\max_{\Phi} I(\Phi(x), \Phi(gx))$    (4)

where both the image x and the transformation g are random variables. Useful g could include scaling, skewing, rotation or flipping (geometric), changing contrast and colour saturation (photometric), or any other perturbation that is likely to leave the content of the image intact. IIC can then be used to recover the factor which is invariant to which element of the pair is picked. The effect is to learn a function that partitions the data such that clusters are closed to the perturbations, without dropping clusters. The objective is simple enough to be written in six lines of PyTorch code (fig. 4).

import sys
import torch
EPS = sys.float_info.epsilon  # floor to avoid log(0)
def IIC(z, zt, C=10):
  # z, zt: (n, C) softmaxed cluster predictions for original and transformed samples.
  P = (z.unsqueeze(2) * zt.unsqueeze(1)).sum(dim=0)  # unnormalised joint matrix (eq. 2)
  P = ((P + P.t()) / 2) / P.sum()                    # symmetrise and normalise
  P = P.clamp(min=EPS)                               # numerical stability
  Pi = P.sum(dim=1).view(C, 1).expand(C, C)          # row marginals
  Pj = P.sum(dim=0).view(1, C).expand(C, C)          # column marginals
  return (P * (torch.log(Pi) + torch.log(Pj) - torch.log(P))).sum()  # = -I(P), minimised as a loss
Figure 4: IIC objective in PyTorch. Inputs z and zt are matrices, with predicted cluster probabilities for sampled pairs (i.e. CNN softmaxed predictions). For example, the prediction for each image in a dataset and its transformed version (e.g. using standard data augmentation).
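
As a minimal usage sketch of the function in fig. 4 (the transform choice and helper names below are illustrative assumptions, not the paper's exact settings):

import torch
import torchvision.transforms as T
# Illustrative geometric + photometric perturbation g; exact parameters are assumptions.
augment = T.Compose([T.RandomHorizontalFlip(), T.ColorJitter(0.4, 0.4, 0.4)])
def iic_step(net, x):
  # x: (n, 3, H, W) image batch; net is a CNN terminated by a C-way softmax.
  xt = torch.stack([augment(img) for img in x])  # randomly perturbed partner for each image
  z, zt = net(x), net(xt)                        # paired cluster probability matrices (n, C)
  return IIC(z, zt, C=z.shape[1])                # loss = negative mutual information

Minimising the returned value with any optimiser maximises eq. 3.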

Auxiliary overclustering.

For certain datasets (e.g. STL10), training data comes in two types: one known to contain only relevant classes and the other known to contain irrelevant or distractor classes. It is desirable to train a clusterer specialised for the relevant classes that still benefits from the context provided by the distractor classes, since the latter set is often much larger (for example 100k compared to 13k images for STL10). Our solution is to add an auxiliary overclustering head to the network (fig. 2) that is trained with the full dataset, whilst the main output head is trained with the subset containing only relevant classes. This allows us to make use of the noisy unlabelled subset despite IIC being an unsupervised clustering method. Other methods are generally not robust enough to do so and thus avoid the 100k-sample unlabelled subset of STL10 when training for unsupervised clustering [8, 23, 49]. Since the auxiliary overclustering head outputs predictions over a larger number of clusters than the ground truth, whilst still maintaining a predictor that is matched to the ground-truth number of clusters (the main head), it can be useful in general for increasing expressivity in the learned feature representation, even for datasets where there are no distractor classes [7].
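
A minimal sketch of this two-head arrangement (the trunk, feature dimension and cluster counts below are illustrative assumptions, not the paper's exact architecture):

import torch.nn as nn
class IICHeads(nn.Module):
  # Shared trunk feeding a main head (k = number of ground-truth clusters) and an
  # auxiliary overclustering head (k_aux > k), as in fig. 2; only the main head is
  # used at test time, while the auxiliary head is trained on the full noisy dataset.
  def __init__(self, trunk, feat_dim=512, k=10, k_aux=70):
    super().__init__()
    self.trunk = trunk
    self.main = nn.Sequential(nn.Linear(feat_dim, k), nn.Softmax(dim=1))
    self.aux = nn.Sequential(nn.Linear(feat_dim, k_aux), nn.Softmax(dim=1))
  def forward(self, x, head="main"):
    f = self.trunk(x)
    return self.main(f) if head == "main" else self.aux(f)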

3.3 Image segmentation

IIC can be applied to image segmentation identically to image clustering, except for two modifications. Firstly, since predictions are made for each pixel densely, clustering is applied to image patches (defined by the receptive field of the neural network for each output pixel) rather than whole images. Secondly, unlike with whole images, one has access to the spatial relationships between patches. Thus, we can add local spatial invariance to the list of geometric and photometric invariances in section 3.2, meaning we form pairs of patches not only via synthetic perturbations, but also by extracting pairs of adjacent patches in the image.

In detail, let the RGB image x be a 3 × H × W tensor, u ∈ Ω = {1, …, H} × {1, …, W} a pixel location, and x_u a patch centered at u. We can form a pair of patches (x_u, x_{u+t}) by looking at location u and its neighbour at some small displacement t ∈ T ⊂ Z². The cluster probability vectors for all patches u ∈ Ω can be read off as the column vectors of the C × H × W tensor Φ(x), computed by a single application of the convolutional network Φ. Then, to apply IIC, one simply substitutes the pairs (Φ_u(x), Φ_{u+t}(x)) in the calculation of the joint probability matrix (2).

The geometric and photometric perturbations used before for whole-image clustering can be applied to individual patches too. Rather than transforming patches individually, however, it is much more efficient to transform all of them in parallel by perturbing the entire image. Any number or combination of these invariances can be chained and learned simultaneously; the only detail is to ensure that the indices of the original and transformed class probability tensors line up, so that predictions from patches which are intended to be paired together actually are.

Formally, if the image transformation g is a geometric transformation, the vector of cluster probabilities Φ_u(gx) will not correspond to Φ_u(x); rather, it will correspond to Φ_{g⁻¹(u)}(x), because patch g⁻¹(u) of x is sent to patch u of gx by the transformation. All vectors can be paired at once by applying the reverse transformation g⁻¹ to the tensor Φ(gx), as [g⁻¹Φ(gx)]_u = Φ_{g(u)}(gx), which is aligned with Φ_u(x). For example, flipping the input image will require flipping the resulting probability tensor back. In general, the perturbation g can incorporate geometric and photometric transformations, and g⁻¹ only needs to undo geometric ones. The segmentation objective is thus:

$\max_{\Phi} \frac{1}{|T|} \sum_{t \in T} I(P_t), \qquad P_t = \frac{1}{n |G| |\Omega|} \sum_{i=1}^{n} \sum_{g \in G} \sum_{u \in \Omega} \Phi_u(x_i) \cdot \left[ g^{-1} \Phi(g x_i) \right]_{u+t}^{\top}$    (5)

Hence the goal is to maximise the information between the label Φ_u(x) of each patch and the label [g⁻¹Φ(gx)]_{u+t} of its transformed neighbour patch, in expectation over images x, patches u within each image, and perturbations g. Information is in turn averaged over all neighbour displacements t ∈ T (which was found to perform slightly better than averaging P_t over t before computing information; see supplementary material).

Implementation.

The joint distribution P_t of eq. 5 for all displacements t ∈ T can be computed in a simple and highly efficient way. Given two network outputs y = Φ(x) and y′ = Φ(gx) for one batch of image pairs, where y and y′ are n × C × H × W tensors, we first bring y′ back into the coordinate space of y by using a bilinear resampler (the core differentiable operator in spatial transformer networks [31]), which inverts any geometric transforms in g and produces g⁻¹y′. Then, the inner summation in eq. 5 reduces to a convolution of the two tensors. Using any standard deep learning framework, this can be achieved by swapping the first two dimensions of each of y and g⁻¹y′, computing P = y ∗ (g⁻¹y′) (a 2D convolution with padding d in both dimensions, where T = {−d, …, d}²), and normalising the result to produce P_t for every t ∈ T.
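
A hedged sketch of this convolutional formulation in PyTorch (tensor names, the displacement radius d and the per-displacement averaging reflect our reading of the description above, not code released with the paper):

import torch
import torch.nn.functional as F
def iic_seg_loss(y, yt, d=1, eps=1e-10):
  # y, yt: (n, C, H, W) per-pixel softmax predictions for an image batch and its
  # transformed copies, with yt already resampled back into y's coordinate space.
  # One conv2d call computes, for every displacement t in {-d,...,d}^2, the C x C
  # co-occurrence matrix summed over all pixels and images (batch and channel dims swapped).
  P = F.conv2d(y.permute(1, 0, 2, 3), yt.permute(1, 0, 2, 3), padding=d)
  P = P.permute(2, 3, 0, 1)                  # (2d+1, 2d+1, C, C): one joint matrix per t
  P = (P + P.transpose(2, 3)) / 2            # symmetrise each joint matrix
  P = P / P.sum(dim=(2, 3), keepdim=True)    # normalise each to a distribution
  P = P.clamp(min=eps)
  Pi = P.sum(dim=3, keepdim=True)            # marginals per displacement
  Pj = P.sum(dim=2, keepdim=True)
  # Negative mutual information per displacement, averaged over T; minimise this loss.
  return (P * (torch.log(Pi) + torch.log(Pj) - torch.log(P))).sum(dim=(2, 3)).mean()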

4 Experiments

STL10 CIFAR10 CIFAR100-20 MNIST
Random network 13.5 13.1 5.93 26.1
K-means [51] 19.2 22.9 13.0 57.2
Spectral clustering [48] 15.9 24.7 13.6 69.6
Triplets [44] 24.4 20.5 9.94 52.5
AE [5] 30.3 31.4 16.5 81.2
Sparse AE [39] 32.0 29.7 15.7 82.7
Denoising AE [46] 30.2 29.7 15.1 83.2
Variational Bayes AE [33] 28.2 29.1 15.2 83.2
SWWAE 2015 [52] 27.0 28.4 14.7 82.5
GAN 2015 [43] 29.8 31.5 15.1 82.8
JULE 2016 [50] 27.7 27.2 13.7 96.4
DEC 2016 [49] 35.9 30.1 18.5 84.3
DAC 2017 [8] 47.0 52.2 23.8 97.8
DeepCluster 2018 [7] 33.4 37.4 18.9 65.6
ADC 2018 [23] 53.0 32.5 16.0 99.2
IIC (best sub-head) 61.0 61.7 25.7 99.3
IIC (avg sub-head ± STD) 59.8 ±0.844 57.6 ±5.01 25.5 ±0.462 98.4 ±0.652
Table 1: Unsupervised image clustering. Legend: Method based on k-means. Method that does not directly learn a clustering function and requires further application of k-means to be used for image clustering. Results obtained using our experiments with authors’ original code.
STL10
No auxiliary overclustering 44.0
Single sub-head (h = 1) 57.6
No sample repeats (r = 1) 52.3
Unlabelled data segment ignored 52.0
Full setting 61.0
Table 2: Ablations of IIC (unsupervised setting). Each row shows a single change from the full setting. The full setting has auxiliary overclustering, 5 initialisation heads, 5 sample repeats, and uses the unlabelled data subset of STL10.

We apply IIC to fully unsupervised image clustering and segmentation, as well as two semi-supervised settings. Existing baselines are outperformed in all cases. We also conduct an analysis of our method via ablation studies. For minor details see supplementary material.

4.1 Image clustering

[Figure 5 image grid; columns: Cat, Dog, Bird, Deer, Monkey, Car, Plane, Truck]
Figure 5: Unsupervised image clustering (IIC) results on STL10. Predicted cluster probabilities from the best performing head are shown as bars. The predicted class corresponds to the tallest bar; the ground-truth class is green, incorrectly predicted classes are red, and all others are blue. The bottom row shows failure cases.

STL10
Dosovitskiy 2014 [17] 74.2
SWWAE 2015 [52] 74.3
Dundar 2015 [18] 74.1
Cutout* 2017 [15] 87.3
DeepCluster 2018 [7] 73.4
ADC 2018 [23] 56.7
IIC plus finetune 88.8

Figure 6: Fully and semi-supervised classification. Legend: *Best purely supervised method. Our experiments with original authors’ code. Multi-fold training where average over training folds is reported (others use the full training set).
Figure 7: Semi-supervised overclustering. Training with the IIC loss to overcluster (k greater than the number of ground-truth classes) and using labels for the evaluation mapping only. Performance is robust even with 90%-75% of labels discarded (left and center). STL10-k denotes networks with k output clusters. Overall accuracy improves with the number of output clusters (right). For further details see supplementary material.

Datasets.

We test on STL10, which is ImageNet adapted for unsupervised classification, as well as CIFAR10, CIFAR100-20 and MNIST. The main setting is pure unsupervised clustering (IIC) but we also test two semi-supervised settings: finetuning and overclustering. For unsupervised clustering, following previous work [8, 49, 50], we train on the full dataset and test on the labelled part; for the semi-supervised settings, train and test sets are separate.

As for DeepCluster [7], we found Sobel filtering to be beneficial, as it discourages clustering based on trivial cues such as colour and encourages using more meaningful cues such as shape. Additionally, for data augmentation, we repeat images within each batch r times; this means that multiple image pairs within a batch contain the same original image, each paired with a different transformation, which encourages greater distillation since there are more examples of which visual details to ignore (section 3.1). We set r = 5 for all experiments. Images are rescaled and cropped for training (prior to applying the transforms g, consisting of random additive and multiplicative colour transformations and horizontal flipping), and a single center crop is used at test time for all experiments except semi-supervised finetuning, where 10 crops are used.
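
A rough sketch of this repeat-and-pair scheme (the helper name and per-image application of g are illustrative assumptions):

import torch
def repeat_and_pair(x, augment, r=5):
  # x: (n, 3, H, W) batch of rescaled, cropped images; augment applies a random g per image.
  # Each image appears r times, each copy paired with a different random transform,
  # giving the loss more examples of which visual details to ignore.
  x_rep = x.repeat_interleave(r, dim=0)
  x_aug = torch.stack([augment(img) for img in x_rep])
  return x_rep, x_aug  # feed both through the network and into the IIC loss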

Architecture.

All networks are randomly initialised and consist of a ResNet or VGG11-like base b (see sup. mat.), followed by one or more heads (linear predictors). Let the number of ground truth clusters be k_gt and the output channels of a head be k. For IIC, there is a main output head with k = k_gt and an auxiliary overclustering head (fig. 2) with k > k_gt. For semi-supervised overclustering there is one output head with k > k_gt. For increased robustness, each head is duplicated h = 5 times with a different random initialisation, and we call these concrete instantiations sub-heads. Each sub-head takes features from b and outputs a probability distribution for each batch element over the relevant number of clusters. For semi-supervised finetuning (fig. 7), the base b is copied from a semi-supervised overclustering network and combined with a single randomly initialised linear layer with k = k_gt.
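
A minimal sketch of the sub-head duplication (the feature dimension and layer choice are illustrative assumptions):

import torch.nn as nn
class MultiSubHead(nn.Module):
  # h independently initialised linear + softmax sub-heads over shared base features b(x);
  # best and average sub-head accuracies are reported at evaluation time.
  def __init__(self, feat_dim=512, k=10, h=5):
    super().__init__()
    self.sub_heads = nn.ModuleList(
        nn.Sequential(nn.Linear(feat_dim, k), nn.Softmax(dim=1)) for _ in range(h))
  def forward(self, features):
    return [head(features) for head in self.sub_heads]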

Training.

We use the Adam optimiser [32]. For IIC, the main and auxiliary heads are trained by maximising eq. 3 in alternate epochs. For semi-supervised overclustering, the single head is trained by maximising eq. 3. Semi-supervised finetuning uses a standard logistic loss.
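
A hedged sketch of the alternating-epoch schedule (loader names, the epoch count, the learning rate value and the two-headed model interface follow the illustrative sketches above, not the paper's released code):

import itertools
import torch
def train_iic(model, loaders, epochs=200, lr=1e-4):
  # loaders = {"main": pairs from the clean subset, "aux": pairs from the full noisy set};
  # the two heads are trained in alternate epochs, both with the IIC loss (eq. 3).
  opt = torch.optim.Adam(model.parameters(), lr=lr)  # lr and epochs are assumed placeholders
  for _, head in zip(range(epochs), itertools.cycle(["main", "aux"])):
    for x, xt in loaders[head]:
      z, zt = model(x, head=head), model(xt, head=head)
      loss = IIC(z, zt, C=z.shape[1])  # negative mutual information
      opt.zero_grad()
      loss.backward()
      opt.step()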

Evaluation.

We evaluate based on accuracy (true positives divided by sample size). For IIC we follow the standard protocol of finding the best one-to-one permutation mapping between learned and ground-truth clusters (from the main output head; the auxiliary overclustering head is ignored) using linear assignment [34]. While this step uses labels, it does not constitute learning as it merely makes the metric invariant to the order of the clusters. For semi-supervised overclustering, each ground-truth cluster may correspond to the union of several predicted clusters. Evaluation thus requires a many-to-one discrete map from the k predicted clusters to the k_gt ground-truth clusters, since k > k_gt. This extracts some information from the labels and thus requires separate training and test sets. Note this mapping is found using the training set (accuracy is computed on the test set) and does not affect the network parameters, as it is used for evaluation only. For semi-supervised finetuning, the output channel order matches the ground truth, so no mapping is required. The performance of each sub-head is assessed independently, and best and average performances are reported.
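
For reference, a minimal sketch of the one-to-one mapping step (scipy's linear assignment solver is our choice of implementation; the paper only specifies linear assignment [34]):

import numpy as np
from scipy.optimize import linear_sum_assignment
def clustering_accuracy(pred, gt, k):
  # pred, gt: integer arrays of predicted cluster ids and ground-truth labels, both in 0..k-1.
  cost = np.zeros((k, k), dtype=np.int64)
  for p, g in zip(pred, gt):
    cost[p, g] += 1                          # co-occurrence counts
  rows, cols = linear_sum_assignment(-cost)  # best one-to-one cluster-to-label permutation
  mapping = dict(zip(rows, cols))
  return float(np.mean([mapping[p] == g for p, g in zip(pred, gt)]))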

Unsupervised learning analysis.

IIC is highly capable of discovering clusters in unlabelled data that accurately correspond to the underlying semantic classes, and outperforms all competing baselines at this task (table 1), with significant margins of 8.0% and 9.5% in the case of STL10 and CIFAR10. As mentioned in section 2, this underlines the advantages of end-to-end optimisation instead of using a fixed external procedure like k-means as with many baselines. The clusters found by IIC are highly discriminative (fig. 5), although note some failure cases; as IIC distils purely visual correspondences within images, it can be confused by instances that combine classes, such as a deer with the coat pattern of a cat. Our ablations (table 2) illustrate the contributions of various implementation details, and in particular the accuracy gain from using auxiliary overclustering.

Semi-supervised learning analysis.

For semi-supervised learning, we establish a new state-of-the-art on STL10 out of all reported methods by finetuning a network trained in an entirely unsupervised fashion with the IIC objective (recall labels in semi-supervised overclustering are used for evaluation and do not influence the network parameters). This explicitly validates the quality of our unsupervised learning method, as we beat even the supervised state-of-the-art (fig. 7). Given that the bulk of parameters within semi-supervised overclustering are trained unsupervised (i.e. all network parameters), it is unsurprising that fig. 7 shows that a 90% drop in the number of available labels for STL10 (decreasing the amount of labelled data available from 5000 to 500 over 10 classes) barely impacts performance, costing just a 10% drop in accuracy. This setting has lower label requirements than finetuning because whereas the latter learns all network parameters, the former only needs to learn a discrete map between the k predicted and k_gt ground-truth clusters, making it an important practical setting for applications with small amounts of labelled data.

4.2 Segmentation


Figure 8: Example segmentation results (un- and semi-supervised). Left: COCO-Stuff-3 (non-stuff pixels in black), right: Potsdam-3. Input images, IIC (fully unsupervised segmentation) and IIC* (semi-supervised overclustering) results are shown, together with the ground truth segmentation (GT).

Datasets.

Large scale segmentation on real-world data using deep neural networks is extremely difficult without labels or heuristics, and has negligible precedent. We establish new baselines on scene and satellite images to highlight performance on textural classes, where the assumption of spatially proximal invariance (section 3.3) is most valid. COCO-Stuff [6] is a challenging and diverse segmentation dataset containing “stuff” classes ranging from buildings to bodies of water. We use the 15 coarse labels and the 164k-image variant, reduced to 52k by taking only images with at least 75% stuff pixels. COCO-Stuff-3 is a subset of COCO-Stuff with only sky, ground and plants labelled. For both COCO datasets, input images are shrunk by two thirds and cropped, Sobel preprocessing is applied for data augmentation, and predictions for non-stuff pixels are ignored. Potsdam [30] is divided into 8550 RGBIR satellite images, of which 3150 are unlabelled. We test both the 6-label variant (roads and cars, vegetation and trees, buildings and clutter) and a 3-label variant (Potsdam-3) formed by merging each of the 3 pairs. All segmentation training and testing sets will be released with our code.

COCO-Stuff-3 COCO-Stuff Potsdam-3 Potsdam
Random CNN 37.3 19.4 38.2 28.3
K-means [42] 52.2 14.1 45.7 35.3
SIFT [38] 38.1 20.2 38.2 28.5
Doersch 2015 [16] 47.5 23.1 49.6 37.2
Isola 2016 [29] 54.0 24.3 63.9 44.9
DeepCluster 2018 [7] 41.6 19.9 41.7 29.2
IIC 72.3 27.7 65.1 45.4
Table 3: Unsupervised segmentation. IIC experiments use a single sub-head. Legend: Method based on k-means. Method that does not directly learn a clustering function and requires further application of k-means to be used for image clustering.

Architecture.

All networks are randomly initialised and consist of a base CNN (see sup. mat.) followed by head(s), which are convolution layers. Similar to section 4.1, overclustering uses a k 3-5 times higher than k_gt. Since segmentation is much more expensive than image clustering (e.g. a single Potsdam image contains 40,000 predictions), all segmentation experiments were run with h = 1 and r = 1 (sec. 4.1).

Training.

The convolutional implementation of IIC (eq. 5) was used. For Potsdam-3 and COCO-Stuff-3, the optional entropy coefficient (section 3.1 and sup. mat.) was used and set to 1.5. Using the coefficient gave slight performance improvements of 1.2%-3.2%. These two datasets are balanced in nature with very large sample volume per batch, resulting in stable and balanced batches, justifying prioritisation of equalisation. Other training details are the same as in section 4.1.
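
As a sketch of how such an entropy coefficient could enter the objective (the placement of λ below is our assumption; the exact formulation is in the supplementary material), one natural variant re-weights the marginal entropy terms of eq. 3:

$I_{\lambda}(P) = \sum_{c=1}^{C} \sum_{c'=1}^{C} P_{cc'} \left( \ln P_{cc'} - \lambda \ln P_{c} - \lambda \ln P_{c'} \right), \qquad \lambda \ge 1,$

so that λ > 1 prioritises mass equalisation over prediction confidence.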

Evaluation.

Evaluation uses accuracy as in section 4.1, computed per-pixel. For the baselines, the original authors’ code was adapted from image clustering where available, and the architectures are shared with IIC for fairness. For baselines that required application of k-means to produce per-pixel predictions (table 3), k-means was trained with randomly sampled pixel features from the training set (10M for Potsdam, Potsdam-3; 50M for COCO-Stuff, COCO-Stuff-3) and tested on the full test set to obtain accuracy.

Analysis.

Without labels or heuristics to learn from, and given just the cluster cardinality (3), IIC automatically partitions COCO-Stuff-3 into clusters that are recognisable as sky, vegetation and ground, and learns to classify vegetation, roads and buildings for Potsdam-3 (fig. 8). The segmentations are notably intricate, capturing fine detail, but are at the same time locally consistent and coherent across all images. Since spatial smoothness is built into the loss (section 3.3), all our results use raw network outputs without post-processing (avoiding e.g. CRF smoothing [9]). Quantitatively, we outperform all baselines (table 3), notably by 18.3% in the case of COCO-Stuff-3. The efficient convolutional formulation of the loss (eq. 5) allows us to optimise over all pixels in all batch images in parallel, converging in fewer epochs (passes of the dataset) without paying the price of reduced computational speed for dense sampling. This is in contrast to our baselines which, not being natively adapted for segmentation, required sampling a subset of pixels within each batch, resulting in increased loss volatility and training speeds that were up to 3.3× slower than IIC.

5 Conclusions

We have shown that it is possible to train neural networks into semantic clusterers without using labels or heuristics. The presented objective is grounded in statistical learning: it optimises mutual information between related pairs, a relationship that can be generated by random transforms, and naturally avoids degenerate solutions. The resulting models classify and segment images with state-of-the-art levels of semantic accuracy. Being not specific to vision, the method opens up many interesting research directions, including optimising information in data streams over time.

References