ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation

by Laurynas Karazija et al.
University of Oxford

There has been a recent surge in methods that aim to decompose and segment scenes into multiple objects in an unsupervised manner, i.e., unsupervised multi-object segmentation. Performing such a task is a long-standing goal of computer vision, offering to unlock object-level reasoning without requiring dense annotations to train segmentation models. Despite significant progress, current models are developed and trained on visually simple scenes depicting mono-colored objects on plain backgrounds. The natural world, however, is visually complex with confounding aspects such as diverse textures and complicated lighting effects. In this study, we present a new benchmark called ClevrTex, designed as the next challenge to compare, evaluate and analyze algorithms. ClevrTex features synthetic scenes with diverse shapes, textures and photo-mapped materials, created using physically based rendering techniques. It includes 50k examples depicting 3-10 objects arranged on a background, created using a catalog of 60 materials, and a further test set featuring 10k images created using 25 different materials. We benchmark a large set of recent unsupervised multi-object segmentation models on ClevrTex and find all state-of-the-art approaches fail to learn good representations in the textured setting, despite impressive performance on simpler data. We also create variants of the ClevrTex dataset, controlling for different aspects of scene complexity, and probe current approaches for individual shortcomings. Dataset and code are available at vgg/research/clevrtex.





1 Introduction

Supervised scene understanding has seen significant progress in the last decade. The introduction of deep learning to the field and large, manually annotated datasets have made it possible to address tasks such as object detection Liu et al. (2020), semantic or instance segmentation He et al. (2017), layout prediction Xu et al. (2017), and dense captioning Johnson et al. (2016) with considerable accuracy. However, in the absence of labels, and thereby supervision, such tasks are exceedingly difficult, even though it is easy to imagine that, given enough images (or videos), it should be possible to identify objects and the general composition of a scene without human annotations. This renders unsupervised multi-object segmentation, as well as object-centric learning more broadly, a challenging yet promising field with high potential.

While certain tasks in the general context of unsupervised scene understanding and decomposition have a relatively long history in computer vision, the majority of applications focus on single objects: image classification Van Gansbeke et al. (2020); Ji et al. (2019); Caron et al. (2018), saliency detection Zhang et al. (2018); Nguyen et al. (2019), foreground/background segmentation Chen et al. (2019); Bielski and Favaro (2019); Voynov et al. (2020); Melas-Kyriazi et al. (2021), and general image-level representation learning He et al. (2020); Chen et al. (2020); Caron et al. (2021); Grill et al. (2020). These methods are usually developed on datasets such as ImageNet Russakovsky et al. (2015) that contain one object of interest per image. Nevertheless, most real-world scenes comprise multiple objects in varying spatial configurations.

Only recently, methods have been developed to analyze and decompose whole scenes containing multiple objects, i.e., jointly learning to represent and segment objects from raw image input, without supervision. However, since moving from individual objects to complex scenes drastically complicates the problem, these methods currently rely on simple synthetic datasets. The complexity of these datasets ranges from simple, single-color 2D shapes arranged against a black background Burgess et al. (2019) to rendered 3D scenes composed of uniformly colored, 3D primitives (cubes, spheres, cylinders) Johnson et al. (2017) (Fig. 1). Interestingly, current methods work very well on this kind of data and saturate the existing benchmarks such that a quantitative comparison of models becomes difficult.

How to scale such methods to visually complex real-world data remains an open problem. When analyzing the current state-of-the-art methods and datasets, it becomes clear that there is a strong reliance on simple appearance (e.g., single color, simple shape). For example, Greff et al. (2019) identify a tendency of their model to segment by color, and it fails when applied to natural images. In fact, the majority of methods learn semantic objects using similar compositional principles, which exploit statistical advantages in aligning simple scene elements with internal representations. Natural images and the objects therein, however, do not possess strong, consistent colors. Instead, they feature confounding textures, often a mixture of repeating and irregular patterns, which might violate such assumptions.

This work introduces a dataset and benchmark as the next step towards eventually tackling real-world scenarios. We propose ClevrTex, a synthetic dataset that, unlike existing benchmarks, consists of textured foreground objects and backgrounds. Interestingly, we find that simply moving from uniformly colored to textured objects poses extreme challenges for current models, and no existing method achieves satisfactory performance. For this reason, we also introduce several variants of our dataset that gradually scale the visual complexity of the scenes, and we investigate where current algorithms struggle. To probe the generalization capability of models to out-of-distribution scenes, we create additional test sets that contain unseen shapes and materials and camouflaged objects. Together with ClevrTex and its variants, we are releasing the code to generate the dataset from scratch. Finally, we find that existing work does not rely on a consistent set of metrics and benchmarks. In an extensive set of experiments, we benchmark the majority of current work (wherever code was available or could be obtained from the authors) on both CLEVR and our newly introduced ClevrTex.

GQN Eslami et al. (2018) | ObjectRoom Burgess et al. (2019) | ShapeStacks Groth et al. (2018) | CLEVR Johnson et al. (2017)
Figure 1: Qualitative comparison of our new ClevrTex dataset with previous unsupervised multi-object learning datasets featuring 3D objects. See Table 1 for quantitative comparison.
| Dataset | #Images | #Objects | #Shapes | #Obj. Colors | #Obj. Materials | #Backgrounds | Annotations |
|---|---|---|---|---|---|---|---|
| GQN (Eslami et al., 2018) | 12M | 1–3 | 7 | – | 1 | 15 | Camera parameters |
| ObjectRoom (Burgess et al., 2019) | 1M | 1–6 | 4 | 10 | 1 | 100 | Semantic, factor of variation |
| ShapeStacks (Groth et al., 2018) | 310k | 2–6 | 4 | 5 | 1 | 25 | Semantic, stability, stability type |
| CLEVR (Johnson et al., 2017) | 100k | 3–10 | 3 | 8 | 2 | 1 | Semantic, factors of variation |
| ClevrTex (Ours) | 50k+10k | 3–10 | 4+4 | – | 60+25 | 60+25 | Semantic, depth, normal, shadow, factors of variation |

Table 1: Comparison of the proposed ClevrTex dataset with previous unsupervised multi-object learning datasets featuring 3D objects.

2 Related Work

Object recognition benchmarks such as PascalVOC Everingham et al. (2010) and COCO Lin et al. (2014) have been fundamental to object detection research. However, current unsupervised multi-object segmentation models are as yet unable to handle the diverse real-world images featured in such datasets and have relied on visually trivial 2D and 3D data. Here, we review the datasets and benchmarks used in unsupervised multi-object segmentation methods and point out the differences to ClevrTex.

2D Datasets

Earlier unsupervised multi-object learning approaches were applied to transformed versions of existing 2D datasets, often originally crafted for disentanglement research, such as Shapes Reichert et al. (2011), variants of MNIST (LeCun et al., 1998): TexturedMNIST Greff et al. (2016) and MultiMNIST Sabour et al. (2017), as well as the multi-object version of dSprites Matthey et al. (2017), i.e., Multi-dSprites Burgess et al. (2019). Others borrow data from the reinforcement learning community, such as the ATARI game environment (Bellemare et al., 2013) or Tetrominoes Bozkurt et al. (2019). However, 2D datasets, whilst valuable for development, do not contain the visual cues and details (e.g., shadows and perspective) needed to learn object segmentation that generalizes to real images.

3D Datasets

Simple 3D Phong-shaded datasets (Fig. 1) have been crafted for use in the unsupervised multi-object setting. The object-room dataset Burgess et al. (2019), a multi-object extension of 3D Shapes Burgess and Kim (2018), features colored shapes arranged in a room with colored walls. ShapeStacks (Groth et al., 2018) features stacked, solid-colored primitives on a simple patterned background. CLEVR Johnson et al. (2017), which is most closely related to our work, was introduced as a visual question-answering dataset but has also become a popular benchmark in unsupervised scene decomposition. It features 3-10 primitive shapes arranged on a gray photo backdrop; objects can have either a rubbery or metallic appearance and one of 8 color tints. CLEVR6 Greff et al. (2019) is a filtered version of the CLEVR dataset that includes only up to 6 objects per image. It is often used for training in multi-object representation learning, with the remainder of CLEVR used to test generalization to more crowded scenes Locatello et al. (2020); Emami et al. (2021).

Additional variants of CLEVR have also been generated for other tasks, such as ARROW (Jiang and Ahn, 2020) for exploring scene composition accuracy, and a recursive version in Deng et al. (2021) for learning part-whole relationships. Multi-view variations Kosiorek et al. (2021); Stelzner et al. (2021) are used for 3D representation learning and further include new object geometry, such as toys Li et al. (2020) and chairs Yu et al. (2021). However, these datasets feature simple scenes of low visual complexity, with contrasting solid colors on the objects. ClevrTex instead contains difficult objects with varied materials that include repeating patterns and small details and that often blend into, rather than stand out from, the background.

The main differences in data statistics between ClevrTex and commonly used multi-object learning datasets are also summarised in Table 1.

Unsupervised Multi-Object Segmentation in Natural Scenes

Some attempts have also been made to scale to natural scenes. Eslami et al. (2016) apply the AIR model modified with a 3D rendering engine to infer identities and positions of crockery items on a table, training on simulated data, and evaluating against real-world images. Monnier et al. (2021) test their sprite-based method on foreground/background segmentation on the Weizmann Horse dataset (Borenstein and Ullman, 2004). Engelcke et al. (2021) apply Genesis-V2 to robotic manipulation datasets, Sketchy and APC (Zeng et al., 2017). Sketchy (Cabi et al., 2019) features recordings of a robotic arm manipulating solid colored toys, towels, or other small objects on a test table, but it lacks segmentation masks. The APC (Zeng et al., 2017) dataset is used instead for evaluation but only contains a single foreground object. These attempts signal promise that unsupervised multi-object segmentation can eventually scale to diverse real-world images.

Visual Fidelity in Simulation

Simulation has always been central to progress in machine/reinforcement learning. However, the gap between a simulated setting and the ability to generalize to real-world environments remains a concern. Several new simulators aim to improve visual fidelity using photo-mapped environments or artists' compositions (Savva et al., 2017; Kolve et al., 2017; Xia et al., 2018; Manolis Savva* et al., 2019). Recently, TDW (Gan et al., 2021) introduced a rich physics engine and PBR rendering of environments with a library of objects. Similar to our work, the emphasis is partly on increasing visual fidelity while moving away from trivial settings and towards real-world applications. However, RL environments have not seen much use in the unsupervised vision domain due to the often specific nature of the data, the egocentric perspective, and temporal dependency.

3 ClevrTex

We introduce ClevrTex, a simulated dataset designed to present the next challenge in unsupervised multi-object learning. It introduces confounding visual aspects such as texture, irregular shapes, and various materials while maintaining control over scene composition. ClevrTex is available under CC-BY license.

3.1 Dataset Creation

ClevrTex is a much more visually complex extension of CLEVR (Johnson et al., 2017) targeted at multi-object learning. It is procedurally generated using the API of Blender, a powerful open-source 3D suite.

At the center of the ClevrTex generation process is a catalog of diverse photo-mapped materials, ranging from forest floor duff, rocks, brickwork, and tiles to fabrics, metallic weaves, and meshes; a full list of materials is shown in Section C.5. (We use the computer graphics term "material" to refer to the collection of resources used to create the likeness of a real-world material on simulated surfaces. Materials are typically a composition of various modalities, such as normal, diffuse, specular, and displacement maps, as well as a computation graph and shaders. We use the term "texture" to refer to 2D images mapping color information onto 3D surfaces.) To generate each image, we start with a scene containing only a photo backdrop, which will become the background. For viewpoint and lighting diversity, we apply random jitter to the positions of the camera and three lights. We then fill the scene with 3 to 10 objects (number sampled uniformly), sampling each object from a set of shapes: a cube, a sphere, a cylinder, and a non-symmetric anthropomorphized monkey head (a modified version of Suzanne, a prefab shape available in Blender) for increased complexity in object silhouettes. Objects are added to the scene one by one by sampling position (continuous), scale (discrete), and rotation (continuous). If a new object collides with already existing shapes in the scene, the object's transformation is resampled until no collision is found or a maximum number of trials is exceeded, at which point the process restarts by removing all objects.
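The placement procedure described above can be sketched as a simple rejection-sampling loop. This is only an illustrative sketch, not the actual generation code (which drives Blender's bpy API); the shape names, coordinate ranges, discrete scales, and disc-based collision test are all assumptions made for the example.

```python
import random

SHAPES = ["cube", "sphere", "cylinder", "monkey"]  # illustrative shape set

def overlaps(a, b, min_dist=0.8):
    # Simplified collision test: treat each object as a disc whose radius
    # scales with the object's size. The real check is done in 3D by Blender.
    (ax, ay), (bx, by) = a["pos"], b["pos"]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < min_dist * (a["scale"] + b["scale"])

def sample_scene(max_trials=20):
    while True:  # if placement keeps failing, restart the whole scene
        objects, ok = [], True
        for _ in range(random.randint(3, 10)):  # 3-10 objects, sampled uniformly
            for _ in range(max_trials):
                cand = {
                    "shape": random.choice(SHAPES),
                    "pos": (random.uniform(-3, 3), random.uniform(-3, 3)),  # continuous
                    "scale": random.choice([0.6, 0.8, 1.0]),                # discrete
                    "rot": random.uniform(0, 360),                          # continuous
                }
                if all(not overlaps(cand, o) for o in objects):
                    objects.append(cand)
                    break
            else:
                ok = False  # exceeded max trials: remove all objects and restart
                break
        if ok:
            return objects

scene = sample_scene()
assert 3 <= len(scene) <= 10
```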

Figure 2: ClevrTex and its variants.

We then sample a material for each object and for the background. Using adaptive subdivision, we create material-specific geometry by displacing vertices of the starting shapes. This creates reliefs for simpler materials, or distorts shapes by extruding features or introducing holes. The materials use albedo, subsurface scattering, and reflectivity maps to generate detailed visuals, and physically based rendering ensures appropriately detailed reflections, highlights, and lighting effects. In addition, we generate ground-truth segmentation maps through the rendering process and automatically check that no object is fully occluded; if one is, the scene is resampled from scratch. Further figures depicting scene lighting, objects, and their scales and deformations are available in Section C.5.

The object shapes and placement mimic those of the CLEVR dataset (Johnson et al., 2017) for backward compatibility. We do not generate the question-answering part of the original CLEVR dataset but include full metadata, meaning the dataset could also be used for other CLEVR-based tasks such as question answering, although this is not our focus here. Similarly, anticipating that our dataset might find uses beyond its intended setting, we include depth, albedo, shadow, and normal maps alongside the images, segmentation maps, and metadata. We share the code to generate ClevrTex alongside the dataset.

3.2 Statistics

ClevrTex contains 50k images, of which we use 10% for testing, 10% for validation, and the remaining 80% (40k images) for training. Each image contains between three and ten objects (uniformly sampled). There are four possible shapes, which have been modified to enable clean texture mapping. We use three distinct object scales to maintain identifiable size "names", as in CLEVR, and custom meshes to ensure that scaling the objects does not distort texture details. Object placement and rotation are sampled from continuous ranges. Note that even though two shapes, the cylinder and the sphere, are rotationally symmetric, the materials applied to them are not. We use a catalog of 60 materials with non-commercial licenses to generate the whole dataset before splitting the data into training sets. The materials are manually adjusted to ensure visually pleasing results at different scales and on the background.

3.3 Variants

We create the following modifications of ClevrTex, each with its own set of images (see Fig. 2), to enable a more detailed analysis and evaluation and to probe methods for their shortcomings.

The first variant, PlainBG, is a dataset consisting of textured objects on a plain background, i.e., the background is always set to a simple material as in CLEVR. We also create the reverse version, VarBG (varied background), where the objects are assigned simple CLEVR-like materials and colors while the background receives a textured material at random from our material catalog. PlainBG and VarBG fall in-between CLEVR and ClevrTex in terms of visual complexity. In PlainBG, intra-object appearance is more complex, but each object clearly stands out from the plain background. On the other hand, VarBG maintains uniformly colored objects but introduces background texture, effectively making the background more diverse than the foreground. PlainBG and VarBG can be used to analyze the importance of background vs. object reconstruction.

Furthermore, we create GrassBG, which contains scenes with the same mossy grass material as the background, while foreground objects receive materials at random. This variant is thus comparable to ClevrTex in terms of visual complexity. However, consistency in the background allows for testing memorization vs. reconstruction effects.

In addition, we propose the following two test sets to serve as an extra check for the limitations of ClevrTex.

CAMO contains scenes with "camouflaged" objects: every scene uses a single, randomly sampled material on all objects and the background. CAMO is created to challenge the internal-vs-external-consistency and efficiency assumptions that underpin compositional methods. The only visual cues here are lighting, shadows, and perspective, so the variant probes whether models rely on such context to identify objects. Although we release CAMO with training, validation, and test splits, in our experiments it is only used as a testbed for models trained on ClevrTex.

Finally, we also provide a separate OOD (out-of-distribution) dataset to evaluate model generalization on novel scenes. This dataset is designed exclusively as a test set and thus only contains 10k images. OOD is generated the same way as ClevrTex but exclusively uses 25 new (unseen) materials, i.e., different from the 60 used in the other variants, and four new shapes (cone, torus, icosahedron, and teapot) that are not part of ClevrTex.

4 Models

In recent years, there has been a surge of methods that aim to decompose a scene into objects in an unsupervised manner and, at the same time, learn object-centric representations. Following Lin et al. (2020), we categorize these methods as follows.

Pixel-Space Approaches

A common way to frame the problem of unsupervised scene decomposition into objects is to assign each pixel to one of a usually fixed number of scene components, inferring per-pixel membership maps Greff et al. (2016, 2017, 2019); Burgess et al. (2019); Yang et al. (2020); Emami et al. (2021). While these methods are probabilistic in nature, they do not lend themselves to generating new images. For this reason, several generative methods have been proposed, where images can be sampled from the learned distributions Engelcke et al. (2020, 2021). Finally, Locatello et al. (2020) introduce a discriminative approach using an iterative clustering-like slot attention mechanism.
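The shared formulation behind these pixel-space approaches can be illustrated with a minimal numpy sketch, assuming K slots each predicting a per-pixel appearance and a mask logit; this is not the code of any specific model, and the array shapes are hypothetical. The image is reconstructed as a softmax-weighted mixture over slots, and the segmentation is read off the argmax of the masks.

```python
import numpy as np

K, H, W = 5, 8, 8  # number of slots and a toy image size
rng = np.random.default_rng(0)
appearance = rng.random((K, H, W, 3))     # per-slot RGB predictions (stand-ins)
logits = rng.standard_normal((K, H, W))   # per-slot mask logits (stand-ins)

# Softmax over the slot axis yields per-pixel membership maps that sum to 1.
masks = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# Reconstruction: pixel-wise mixture of slot appearances.
recon = (masks[..., None] * appearance).sum(axis=0)   # (H, W, 3)

# Segmentation: each pixel is assigned to its highest-weight slot.
segmentation = masks.argmax(axis=0)                   # (H, W) slot ids

assert recon.shape == (H, W, 3)
assert np.allclose(masks.sum(axis=0), 1.0)
```

In a trained model, the appearances and logits would of course come from a learned decoder rather than a random generator; the mixture and argmax steps are what the benchmarked pixel-space methods share.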

Here, we benchmark MONet (Burgess et al., 2019) and IODINE (Greff et al., 2019) as examples of earlier approaches that handle 3D colored scenes. We also evaluate the improved efficient MORL (eMORL) (Emami et al., 2021), Genesis-v2 Engelcke et al. (2021) as a generative model, and Slot Attention (Locatello et al., 2020), which is representative of discriminative models.

Glimpse-Based Methods

An alternative to predicting components for each pixel is to extract patches of the input, named glimpses, that contain objects. A dense segmentation can be derived in this reduced space. These glimpses are arranged on top of an explicit background to reconstruct the image. Glimpse-based methods Eslami et al. (2016); Crawford and Pineau (2019); Lin et al. (2020); Jiang and Ahn (2020); Deng et al. (2021) tend to offer computational advantages, since they operate on smaller regions, but also require mechanisms for deciding where to attend and for extracting and composing patches.
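The glimpse-based compositing step can be sketched as follows. This is a hypothetical illustration, not any benchmarked model: the patch positions, sizes, and alpha masks would normally be predicted by a network (often via spatial transformers), and here they are simply hard-coded to show how dense masks fall out of glimpse alphas.

```python
import numpy as np

H, W, G = 32, 32, 8  # toy image size and glimpse (patch) size
background = np.zeros((H, W, 3))          # stand-in for a decoded background
canvas = background.copy()
seg = np.zeros((H, W), dtype=int)         # 0 = background

# Hypothetical decoded glimpses: (top-left y, top-left x, rgb patch, alpha patch).
glimpses = [
    (4, 4, np.ones((G, G, 3)) * [1, 0, 0], np.ones((G, G))),    # a red object
    (20, 12, np.ones((G, G, 3)) * [0, 1, 0], np.ones((G, G))),  # a green object
]

for obj_id, (y, x, rgb, alpha) in enumerate(glimpses, start=1):
    region = canvas[y:y + G, x:x + G]
    # Alpha-composite the glimpse over whatever is already on the canvas.
    canvas[y:y + G, x:x + G] = alpha[..., None] * rgb + (1 - alpha[..., None]) * region
    # A dense segmentation is recovered by thresholding the glimpse alphas.
    seg[y:y + G, x:x + G][alpha > 0.5] = obj_id

assert set(np.unique(seg)) == {0, 1, 2}  # background plus two objects
```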

| Model | Train. Time (GPU h) | Inf. Time (ms) | Peak GPU Mem (GB) |
|---|---|---|---|
| GNM (Jiang and Ahn, 2020) | 54 | – | 4 |
| SPACE (Lin et al., 2020) | 64 | – | 8 |
| SPAIR* (Crawford and Pineau, 2019) | 77 | – | 11 |
| DTI (Monnier et al., 2021) | 198 | – | 11 |
| MN (Smirnov et al., 2021) | – | – | 11 |
| IODINE (Greff et al., 2019) | – | – | – |
| SA (Locatello et al., 2020) | 290 | – | 17 |
| MONet (Burgess et al., 2019) | – | – | – |
| eMORL (Emami et al., 2021) | – | – | – |
| GenV2 (Engelcke et al., 2021) | 194 | – | 15 |

Table 2: Computational resources for different models, measured on NVIDIA P40 24GB GPUs with the original batch sizes and input. Superscripts indicate the number of GPUs needed. Train. time refers to the time required to train the models for the recommended number of iterations, measured in total GPU hours. Inf. time measures the mean inference time for a single batch, averaged over 7 passes.

From the glimpse-based methods, we benchmark SPAIR (Crawford and Pineau, 2019), which models glimpses auto-regressively, using a truncated geometric prior. Since it cannot handle non-black backgrounds, we modify the model to include a VAE for background prediction (SPAIR*). We also evaluate SPACE (Lin et al., 2020) due to its use of the pixel-space approach for processing the background, and GNM (Jiang and Ahn, 2020), which uses scene-level priors.

Sprite-Based Methods

Recently, several methods (Smirnov et al., 2021; Monnier et al., 2021) have proposed to decompose images into a learned dictionary of RGBA sprites instead of learning a generative model. The scene segmentation can then be recovered from the alpha masks of the sprites. We benchmark MarioNette (Smirnov et al., 2021) and DTISprites (Monnier et al., 2021) to investigate the differences between the two sprite-based approaches.

The aforementioned models have highly varying computational requirements. We offer a side-by-side comparison in Table 2, where the computational advantages of glimpse-based methods can be immediately seen, with methods such as GNM and SPACE taking a fraction of the time and memory required by even single-GPU pixel-space methods. All implementation details, hyper-parameters, and model changes are reported in Section C.3.

5 Experiments


We benchmark a wide spectrum of methods on ClevrTex and its variants. To test generalization, we evaluate models trained on ClevrTex using OOD and CAMO. In addition, we conduct experiments on CLEVR to provide a complete side-by-side comparison of methods and to highlight the new challenges in ClevrTex. All implementation details and preprocessing are reported in Section C.1.


The majority of previous work has used the adjusted Rand index computed on foreground pixels only (ARI-FG) as an evaluation metric. We share the concerns of Monnier et al. (2021); Engelcke et al. (2020) that this metric does not reflect how well objects are localized by the model and whether they are considered part of the background. Thus, we instead report mean intersection over union (mIoU), as it takes the background into account. Further discussion and a side-by-side comparison of ARI-FG and mIoU can be found in Section C.2. Furthermore, we judge the quality of the models' reconstruction output using the mean squared error (MSE). For the models trained on CLEVR and ClevrTex, we report results over three random seeds, including their standard deviation.
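To make the mIoU computation concrete, here is a toy sketch on integer mask arrays where label 0 is the background. It is not the benchmark's evaluation code: real evaluations match predicted segments to ground truth with Hungarian matching over many segments, whereas this sketch brute-forces the matching and is only feasible for a handful of labels. Unmatched ground-truth segments count as IoU 0, which is what penalizes models that dump the background into object slots.

```python
import itertools
import numpy as np

def miou(gt, pred):
    """Toy mean IoU with a background class, via brute-force segment matching."""
    gt_ids, pr_ids = np.unique(gt), np.unique(pred)
    # Pairwise IoU between every ground-truth and predicted segment.
    iou = np.zeros((len(gt_ids), len(pr_ids)))
    for i, g in enumerate(gt_ids):
        for j, p in enumerate(pr_ids):
            inter = np.logical_and(gt == g, pred == p).sum()
            union = np.logical_or(gt == g, pred == p).sum()
            iou[i, j] = inter / union
    # Try every one-to-one assignment of predicted to ground-truth segments.
    k = min(len(gt_ids), len(pr_ids))
    best = 0.0
    for perm in itertools.permutations(range(len(pr_ids)), k):
        best = max(best, sum(iou[i, perm[i]] for i in range(k)))
    return best / len(gt_ids)  # unmatched ground-truth segments score 0

gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 0, 0],
               [2, 2, 0, 0]])
assert miou(gt, gt) == 1.0          # perfect prediction
assert miou(gt, (gt + 1) % 3) == 1.0  # label permutations do not matter
```

Note that a prediction which labels everything as one segment still gets partial credit for the background overlap but is heavily penalized on the object segments, which ARI-FG would not capture.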

Figure 3: Comparison of various models’ reconstruction and segmentation outputs on CLEVR, ClevrTex and our test sets. Best viewed digitally. More results in the Appendix, Fig. 5.

5.1 Benchmark

Table 3: Benchmark results (mIoU (%) and MSE) on CLEVR and ClevrTex, and on the generalization test sets CAMO and OOD, for SPAIR* (Crawford and Pineau, 2019), SPACE (Lin et al., 2020), GNM (Jiang and Ahn, 2020), MN (Smirnov et al., 2021), DTI (Monnier et al., 2021), GenV2 (Engelcke et al., 2021), eMORL (Emami et al., 2021), MONet (Burgess et al., 2019), SA (Locatello et al., 2020), and IODINE (Greff et al., 2019). Results are calculated over 3 runs. Updated eMORL: after ClevrTex was released, the authors of Emami et al. (2021) updated their codebase to include ClevrTex training and evaluation and shared their trained models with improved performance (single seed on CLEVR).

The results for the benchmark are detailed in Table 3 and in Fig. 3. Next, we discuss our findings regarding the models' ability to separate foreground from background and to handle textured scenes, as well as their training stability and generalization to new scenes.

Background Segmentation

Pixel-space methods show impressive foreground performance on CLEVR compared with glimpse-based approaches (see Fig. 3). However, once we consider the ability to segment the background (mIoU in Table 3), their advantage disappears, with SPAIR* performing best. We attribute this to the tendency of pixel-space models to assign parts of the background to nearby objects. In glimpse-based methods, by contrast, the formation of glimpses forces objects to be spatially compact, which offers an advantage when separating objects from the background.

Textured Scenes

When training on ClevrTex, all models struggle. Foreground segmentation performance drops, indicating that models fail to assign whole objects to a single component, likely due to a tendency to overfit consistent color regions. The overall segmentation performance is worse as well. MSE is much higher than on CLEVR, with models producing blurry or flat reconstructions that fail to capture much of the rich variation in the input data. SPAIR*, which showed the best overall performance on CLEVR, fails to recognize any objects and instead simply predicts the background. We conjecture that SPAIR's autoregressive handling of objects, paired with its use of spatial transformers, might make the learning signal too noisy.

Sprite-based models also perform worse, as the greater variation in appearance is not sufficiently captured by their limited dictionaries. While the dictionary size can be increased, the lack of an internal compression mechanism to represent varied appearances will always be a limiting factor in natural-world settings. Interestingly, when unable to capture individual objects, MN learns to tile the image with colored blobs, representing low-frequency information in the image instead. In our tests, similar tiling behavior also tends to occur in glimpse-based models whenever they cannot learn to reconstruct the foreground (see the Appendix, Fig. 6, for examples in other models). Since DTI includes a set of internal transformations, it performs comparatively better on ClevrTex.

GNM, a generative glimpse-based approach, has the best overall performance on ClevrTex, which we attribute to the spatial-locality constraints imposed by the glimpse-based formulation and to its limited background reconstruction ability: its simpler background model spends less capacity on the background than other methods. Interestingly, GNM shows one of the largest reconstruction errors despite being the best at scene segmentation, suggesting that ignoring confounding aspects of the scene, rather than representing them, might aid the overall task.

Of the pixel-space methods we benchmark, IODINE performs best in terms of overall segmentation. Our qualitative investigation shows that pixel-space methods that can segment textured scenes largely capture consistent color regions, which occasionally align with objects in scenes with simpler materials. Large patterns in the background or changes in object appearance, often due to lighting, result in oversegmentation.


Training Stability

Due to inherent stochasticity in initialization and optimization, one can expect a degree of variation between training runs. Many of the benchmarked models also rely on internal randomness, primarily due to the sampling procedures involved, which influences the learning signal and the configurations the models can learn. Pixel-space approaches and SPACE (which has a pixel-space model for the background) show higher variance in the performance metrics. Similar to Locatello et al. (2020); Emami et al. (2021); Lin et al. (2020), we observe that these methods occasionally fail to use separate components, which causes high fluctuation between different seeds. Glimpse-based methods are more stable with respect to seeds but tend to exhibit higher sensitivity to hyperparameter settings.


Generalization

In addition to benchmarking existing approaches on their ability to learn and handle textured scenes, we are also interested in the degree to which different approaches rely on specific factors of ClevrTex. To this end, we evaluate the models trained on ClevrTex on two additional test sets: CAMO, to see whether models rely on differences in object appearance within a scene, and OOD, to see whether a degree of memorization (e.g., of shapes and materials) plays a role in recognition and whether the methods can generalize to unseen patterns.

Interestingly, some of the better-performing approaches on ClevrTex maintain much of their segmentation ability on out-of-distribution (OOD) data. GNM, for example, attempts to reconstruct the input using memorized training materials and shapes, which leads to reduced but still comparable object segmentation. Other sprite- and glimpse-based methods either do not perform well or show a similar reliance on appearances from the training distribution. Pixel-space models show a better ability to reconstruct the input but also tend to reconstruct based on consistent color regions rather than objects, a tendency only exacerbated by the out-of-distribution setting.

When considering the challenging CAMO setting, none of the approaches performs satisfactory segmentation. Methods that work to some extent on ClevrTex tend to use different components to represent lighter and darker parts of the scene, highlighting the tendency of all current models to overfit to scene appearance.

5.2 Variants

Figure 4: Comparison of various models’ reconstruction and segmentation outputs on PlainBG, VarBG and GrassBG variants. Best viewed digitally.
| Model | PlainBG mIoU (%) | PlainBG MSE | VarBG mIoU (%) | VarBG MSE | GrassBG mIoU (%) | GrassBG MSE |
|---|---|---|---|---|---|---|
| SPAIR* (Crawford and Pineau, 2019) | 39.32 | 134 | 0.00 | 1246 | 0.00 | 728 |
| SPACE (Lin et al., 2020) | 31.96 | 120 | 16.10 | 311 | 33.85 | 196 |
| GNM (Jiang and Ahn, 2020) | 26.49 | 96 | 49.78 | 438 | 53.15 | 254 |
| MN (Smirnov et al., 2021) | 10.16 | 167 | 11.51 | 441 | 34.80 | 266 |
| DTI (Monnier et al., 2021) | 36.03 | 210 | 38.82 | 498 | 37.65 | 215 |
| GenV2 (Engelcke et al., 2021) | 24.39 | 98 | 14.40 | 298 | 2.88 | 306 |
| eMORL (Emami et al., 2021) | 29.39 | 96 | 22.92 | 385 | 19.38 | 199 |
| MONet (Burgess et al., 2019) | 38.72 | 83 | 23.73 | 212 | 21.29 | 165 |
| SA (Locatello et al., 2020) | 39.32 | 134 | 62.57 | 257 | 12.88 | 116 |
| IODINE (Greff et al., 2019) | 23.83 | 128 | 39.86 | 364 | 25.76 | 225 |

Table 4: Model results on the PlainBG, VarBG, and GrassBG variants.
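The mIoU values in Table 4 require matching each model's predicted components to the ground-truth objects before averaging. A minimal sketch of such an evaluation using Hungarian matching (our illustration with assumed conventions, not the benchmark's released evaluation code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_miou(pred, gt):
    """Mean IoU after Hungarian matching of predicted to ground-truth masks.

    pred, gt: integer label maps of shape (H, W).
    """
    p_ids, g_ids = np.unique(pred), np.unique(gt)
    iou = np.zeros((len(g_ids), len(p_ids)))
    for i, g in enumerate(g_ids):
        gm = gt == g
        for j, p in enumerate(p_ids):
            pm = pred == p
            inter = np.logical_and(gm, pm).sum()
            union = np.logical_or(gm, pm).sum()
            iou[i, j] = inter / union if union else 0.0
    rows, cols = linear_sum_assignment(-iou)  # maximize total matched IoU
    # ground-truth objects left unmatched contribute an IoU of 0
    return iou[rows, cols].sum() / len(g_ids)
```

Because unmatched ground-truth objects count as zero IoU, both over- and under-segmentation are penalized by this style of metric.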

As discussed above, many models that perform well on CLEVR either do not work on ClevrTex or lose much of their performance. To further probe which aspects of scene composition are challenging, we use the variants of ClevrTex.

Textured Objects

When applied to PlainBG, where materials appear only on objects and the background is gray, all methods still perform worse than on CLEVR, with a significant drop in segmentation performance that is especially prevalent in pixel-space approaches. Since all methods were designed around simpler datasets with uniformly colored objects, the more realistic nature of the materials in ClevrTex poses a difficult challenge. Glimpse-based models also show reduced segmentation quality relative to CLEVR. MN (sprite-based) struggles, as the increased diversity of foreground objects overwhelms its sprite dictionary. Finally, the models' inability to capture the fine-grained details of the more complex object appearance causes an increase in reconstruction error.
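The reconstruction errors (MSE) reported alongside mIoU can be computed per image as a mean over pixels and channels; a minimal sketch, assuming images on a 0–255 scale (the exact scale behind the Table 4 numbers is our assumption):

```python
import numpy as np

def mse(img, recon):
    """Per-image mean squared error over pixels and channels.

    img, recon: arrays of shape (H, W, C), assumed on a 0-255 scale.
    Cast to float first so uint8 arithmetic does not wrap around.
    """
    return np.mean((img.astype(np.float64) - recon.astype(np.float64)) ** 2)
```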

Textured Background

VarBG contains simple mono-colored objects arranged on top of a diverse set of textured backgrounds. Certain models, like SPAIR*, SPACE, and GenV2, struggle to handle diverse backgrounds. Other methods, however, seem to benefit from simpler objects, showing improvements in segmentation performance over both PlainBG and ClevrTex scenarios, indicating that these models rely on simpler, more consistent objects.

Consistent Background

GrassBG uses the same complex forest-grass background in all scenes; this background is richer and more complex than in PlainBG. As glimpse-space methods tend to model the background explicitly, we observe that a consistent, contrasting background aids these models greatly. Pixel-space methods also perform slightly better in this setting than on ClevrTex, where the background varies. However, the effect is not as pronounced as for glimpse-based approaches, with overall performance roughly matching what was observed on ClevrTex.

6 Conclusions

Unsupervised object learning and scene segmentation is a challenging task. Interestingly, given the existing metrics and commonly used datasets (e.g., CLEVR), current approaches show impressive performance, yet we have shown that they are easily challenged when visual complexity increases. To this end, we present ClevrTex, a new benchmark that aims to increase visual scene complexity, which contains richer textures, materials, and shapes, to encourage progress towards methods applicable to real images in the wild.

In our experiments, GNM (Jiang and Ahn, 2020) and IODINE (Greff et al., 2019) perform best among glimpse-based and pixel-space models, respectively, with GNM showing the best segmentation performance overall. However, almost all methods struggle to handle textured scenes, resulting in a significant performance gap with respect to the closest current benchmark, CLEVR. Our findings suggest that pixel-space methods tend to overfit to consistent color regions and smooth gradients. Sprite- and glimpse-based approaches, on the other hand, tend to memorize small repeated patterns, which offers an advantage on ClevrTex. Further testing, however, shows that these models reconstruct smooth backgrounds and recognize sharp changes as objects. As such, even the approaches that show some ability to handle textured environments focus largely on scene appearance, failing to learn and exploit global context cues that might align with semantic objects.

We believe that textures pose a challenge to current pixel-space and glimpse-based methods because they are built to exploit the simple visual elements and uniform appearance present in previous datasets, partly due to their reconstruction objectives. We find evidence for this in our experiments with the dataset variants: consistency within objects (VarBG) and consistency of the background (PlainBG and GrassBG) lead to better models than the full ClevrTex, where there is no simple intra- or inter-object appearance consistency. Only on simpler scenes (Fig. 3) do the best-performing methods succeed at segmenting some objects.

Thus, ClevrTex offers new challenges for unsupervised multi-object segmentation, especially for evaluating generalization. Furthermore, the three variants and two additional test sets can serve as a diagnostic tool for developing new methods, and the extensive evaluation acts as a standardized benchmark for current and future methods.


The proposed dataset contains a limited number of primitive shapes and a catalog of 60 materials. Although future models might exploit the non-exhaustive nature of object appearance, e.g., by memorizing object reconstructions rather than learning generalizable scene decompositions, we have shown that current methods are, in fact, faced with a significant challenge even at a slight increase in data complexity (e.g., on PlainBG). To further address this limitation, we have created the OOD dataset, which serves as an additional test of models' ability to generalize outside the training distribution. Overall, ClevrTex is still a synthetic dataset and does not fully close the gap to real-world data. However, until methods can solve ClevrTex, generalization to real images is likely out of reach.

Broader Impact

The work presented here critically evaluates current approaches to unsupervised multi-object segmentation. The introduced datasets are fully simulated renderings of 3D primitives and do not contain any people or personal information. Our benchmark aims to establish and standardize evaluation practices, provide new challenges for current algorithms, and help future research compare with prior work. While ClevrTex is highly relevant to current research, its impact outside the research community is low, as current methods cannot yet properly handle real images.

L. K. is funded by EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems EP/S024050/1. I. L. is supported by the European Research Council (ERC) grant IDIU-638009 and EPSRC VisualAI EP/T028572/1. C. R. is supported by Innovate UK (project 71653) on behalf of UK Research and Innovation (UKRI) and by the ERC IDIU-638009. We thank Johnson et al. (2017) for their open-source implementation of CLEVR. We would also like to thank Martin Engelcke for helpful suggestions on applying Genesis-V2 to ClevrTex, Patrick Emami for assistance adapting eMORL to ClevrTex and Dmitriy Smirnov for sharing their implementation of MarioNette.



  • Bellemare et al. [2013] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
  • Bielski and Favaro [2019] Adam Bielski and Paolo Favaro. Emergence of object segmentation in perturbed generative models. In Advances in Neural Information Processing Systems, volume 32, 2019.
  • Borenstein and Ullman [2004] Eran Borenstein and Shimon Ullman. Learning to segment. In European conference on computer vision, pages 315–328. Springer, 2004.
  • Bozkurt et al. [2019] Alican Bozkurt, Babak Esmaeili, Jennifer Dy, Dana Brooks, and Jan-Willem van de Meent. Tetrominoes dataset, 2019.
  • Burgess and Kim [2018] Chris Burgess and Hyunjik Kim. 3D shapes dataset, 2018.
  • Burgess et al. [2019] Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. Monet: Unsupervised scene decomposition and representation. arXiv preprint arXiv:1901.11390, 2019.
  • Cabi et al. [2019] Serkan Cabi, Sergio Gómez Colmenarejo, Alexander Novikov, Ksenia Konyushkova, Scott Reed, Rae Jeong, Konrad Zolna, Yusuf Aytar, David Budden, Mel Vecerik, et al. Scaling data-driven robotics with reward sketching and batch reinforcement learning. In Proceedings of Robotics: Science and Systems, 2019.
  • Caron et al. [2018] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pages 132–149, 2018.
  • Caron et al. [2021] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 9650–9660, 2021.
  • Chen et al. [2019] Mickaël Chen, Thierry Artières, and Ludovic Denoyer. Unsupervised object segmentation by redrawing. In Advances in Neural Information Processing Systems, volume 32, 2019.
  • Chen et al. [2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020.
  • Crawford and Pineau [2019] Eric Crawford and Joelle Pineau. Spatially invariant unsupervised object detection with convolutional neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3412–3420, 2019.
  • Deng et al. [2021] Fei Deng, Zhuo Zhi, Donghun Lee, and Sungjin Ahn. Generative scene graph networks. In International Conference on Learning Representations, 2021.
  • Emami et al. [2021] Patrick Emami, Pan He, Sanjay Ranka, and Anand Rangarajan. Efficient iterative amortized inference for learning symmetric and disentangled multi-object representations. In Proceedings of the 38th International Conference on Machine Learning, pages 2970–2981. PMLR, 2021.
  • Engelcke et al. [2020] Martin Engelcke, Adam R Kosiorek, Oiwi Parker Jones, and Ingmar Posner. Genesis: Generative scene inference and sampling with object-centric latent representations. In International Conference on Learning Representations, 2020.
  • Engelcke et al. [2021] Martin Engelcke, Oiwi Parker Jones, and Ingmar Posner. Genesis-v2: Inferring unordered object representations without iterative refinement. In Advances in Neural Information Processing Systems, volume 34, 2021.
  • Eslami et al. [2016] S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In Proceedings of the 30th International Conference on Neural Information Processing Systems, page 3233–3241, 2016.
  • Eslami et al. [2018] SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204–1210, 2018.
  • Everingham et al. [2010] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
  • Gan et al. [2021] Chuang Gan, Jeremy Schwartz, Seth Alter, Damian Mrowca, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim, Elias Wang, Michael Lingelbach, Aidan Curtis, Kevin Tyler Feigelis, Daniel Bear, Dan Gutfreund, David Daniel Cox, Antonio Torralba, James J. DiCarlo, Joshua B. Tenenbaum, Josh Mcdermott, and Daniel LK Yamins. ThreeDWorld: A platform for interactive multi-modal physical simulation. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, 2021.
  • Gebru et al. [2018] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018.
  • Greff et al. [2016] Klaus Greff, Antti Rasmus, Mathias Berglund, Tele Hao, Harri Valpola, and Jürgen Schmidhuber. Tagger: Deep unsupervised perceptual grouping. In Advances in Neural Information Processing Systems, pages 4484–4492, 2016.
  • Greff et al. [2017] Klaus Greff, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Neural expectation maximization. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6694–6704, 2017.
  • Greff et al. [2019] Klaus Greff, Raphaël Lopez Kaufman, Rishabh Kabra, Nick Watters, Christopher Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner. Multi-object representation learning with iterative variational inference. In International Conference on Machine Learning, pages 2424–2433. PMLR, 2019.
  • Grill et al. [2020] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.
  • Groth et al. [2018] Oliver Groth, Fabian B Fuchs, Ingmar Posner, and Andrea Vedaldi. Shapestacks: Learning vision-based physical intuition for generalised object stacking. In Proceedings of the European Conference on Computer Vision (ECCV), pages 702–717, 2018.
  • He et al. [2017] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
  • He et al. [2020] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.
  • Ji et al. [2019] Xu Ji, Joao F. Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
  • Jiang and Ahn [2020] Jindong Jiang and Sungjin Ahn. Generative neurosymbolic machines. In Advances in Neural Information Processing Systems, volume 33, pages 12572–12582, 2020.
  • Johnson et al. [2016] Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4565–4574, 2016.
  • Johnson et al. [2017] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2910, 2017.
  • Kolve et al. [2017] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474, 2017.
  • Kosiorek et al. [2021] Adam R Kosiorek, Heiko Strathmann, Daniel Zoran, Pol Moreno, Rosalia Schneider, Soňa Mokrá, and Danilo J Rezende. NeRF-VAE: A geometry aware 3d scene generative model. In Proceedings of the 38th International Conference on Machine Learning, pages 5742–5752. PMLR, 2021.
  • LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Li et al. [2020] Nanbo Li, Cian Eastwood, and Robert Fisher. Learning object-centric representations of multi-object scenes from multiple views. In Advances in Neural Information Processing Systems, volume 33, pages 5656–5666, 2020.
  • Lin et al. [2014] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755, 2014.
  • Lin et al. [2020] Zhixuan Lin, Yi-Fu Wu, Skand Vishwanath Peri, Weihao Sun, Gautam Singh, Fei Deng, Jindong Jiang, and Sungjin Ahn. SPACE: Unsupervised object-oriented scene representation via spatial attention and decomposition. In International Conference on Learning Representations, 2020.
  • Liu et al. [2020] Li Liu, Wanli Ouyang, Xiaogang Wang, Paul Fieguth, Jie Chen, Xinwang Liu, and Matti Pietikäinen. Deep learning for generic object detection: A survey. International journal of computer vision, 128(2):261–318, 2020.
  • Locatello et al. [2020] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. In Advances in Neural Information Processing Systems, volume 33, pages 11525–11538, 2020.
  • Manolis Savva* et al. [2019] Manolis Savva*, Abhishek Kadian*, Oleksandr Maksymets*, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A Platform for Embodied AI Research. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
  • Matthey et al. [2017] Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dsprites: Disentanglement testing sprites dataset, 2017.
  • Melas-Kyriazi et al. [2021] Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, and Andrea Vedaldi. Finding an unsupervised image segmenter in each of your deep generative models. arXiv preprint arXiv:2105.08127, 2021.
  • Monnier et al. [2021] Tom Monnier, Elliot Vincent, Jean Ponce, and Mathieu Aubry. Unsupervised layered image decomposition into object prototypes. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 8640–8650, 2021.
  • Nguyen et al. [2019] Duc Tam Nguyen, Maximilian Dax, Chaithanya Kumar Mummadi, Thi Phuong Nhung Ngo, Thi Hoai Phuong Nguyen, Zhongyu Lou, and Thomas Brox. Deepusps: Deep robust unsupervised saliency prediction with self-supervision. In Advances in Neural Information Processing Systems, 2019.
  • Reichert et al. [2011] David P Reichert, Peggy Series, and Amos J Storkey. A hierarchical generative model of recurrent object-based attention in the visual cortex. In International Conference on Artificial Neural Networks, pages 18–25, 2011.
  • Rezende and Viola [2018] Danilo Jimenez Rezende and Fabio Viola. Taming VAEs. arXiv preprint arXiv:1810.00597, 2018.
  • Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
  • Sabour et al. [2017] Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 3859–3869, 2017.
  • Savva et al. [2017] Manolis Savva, Angel X. Chang, Alexey Dosovitskiy, Thomas Funkhouser, and Vladlen Koltun. MINOS: Multimodal indoor simulator for navigation in complex environments. arXiv:1712.03931, 2017.
  • Smirnov et al. [2021] Dmitriy Smirnov, Michael Gharbi, Matthew Fisher, Vitor Guizilini, Alexei A Efros, and Justin Solomon. Marionette: Self-supervised sprite learning. In Advances in Neural Information Processing Systems, volume 34, 2021.
  • Stelzner et al. [2021] Karl Stelzner, Kristian Kersting, and Adam R Kosiorek. Decomposing 3d scenes into objects via unsupervised volume segmentation. arXiv preprint arXiv:2104.01148, 2021.
  • Van Gansbeke et al. [2020] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In European Conference on Computer Vision, pages 268–285. Springer, 2020.
  • Voynov et al. [2020] Andrey Voynov, Stanislav Morozov, and Artem Babenko. Big gans are watching you: Towards unsupervised object segmentation with off-the-shelf generative models. arXiv preprint arXiv:2006.04988, 2020.
  • Watters et al. [2019] Nicholas Watters, Loic Matthey, Christopher P Burgess, and Alexander Lerchner. Spatial broadcast decoder: A simple architecture for learning disentangled representations in vaes. arXiv preprint arXiv:1901.07017, 2019.
  • Xia et al. [2018] Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson env: Real-world perception for embodied agents. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9068–9079, 2018.
  • Xu et al. [2017] Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5410–5419, 2017.
  • Yang et al. [2020] Yanchao Yang, Yutong Chen, and Stefano Soatto. Learning to manipulate individual objects in an image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6558–6567, 2020.
  • Yu et al. [2021] Hong-Xing Yu, Leonidas J. Guibas, and Jiajun Wu. Unsupervised discovery of object radiance fields. arXiv preprint arXiv:2107.07905, 2021.
  • Zeng et al. [2017] Andy Zeng, Kuan-Ting Yu, Shuran Song, Daniel Suo, Ed Walker, Alberto Rodriguez, and Jianxiong Xiao. Multi-view self-supervised deep learning for 6d pose estimation in the amazon picking challenge. In 2017 IEEE international conference on robotics and automation (ICRA), pages 1386–1383. IEEE, 2017.
  • Zhang et al. [2018] Jing Zhang, Tong Zhang, Yuchao Dai, Mehrtash Harandi, and Richard Hartley. Deep unsupervised saliency detection: A multiple noisy labeling perspective. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9029–9038, 2018.

Appendix A Dataset Documentation: Datasheets for Datasets

Here we answer the questions outlined in the datasheets for datasets paper by Gebru et al. [2018].

a.1 Motivation

For what purpose was the dataset created?

ClevrTex was created to serve as the next challenging benchmark for unsupervised multi-object segmentation methods. It trades simpler visuals for confounding aspects such as texture, irregular shapes, and a variety of materials.

Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organisation)?

The dataset has been constructed by the research group “Visual Geometry Group” at the Engineering Science Department, University of Oxford.

Who funded the creation of the dataset?

The dataset is created for research purposes at VGG. L. K. is funded by EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems EP/S024050/1. I. L. is supported by the EPSRC programme grant Seebibyte EP/M013774/1 and ERC starting grant IDIU-638009. C. R. is supported by Innovate UK (project 71653) on behalf of UK Research and Innovation (UKRI) and by the European Research Council (ERC) IDIU-638009.

a.2 Composition

What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?

The dataset consists of images of simulated scenes, together with segmentation, depth, normal, albedo, and shadow masks, and metadata detailing scene composition.

How many instances are there in total (of each type, if appropriate)?

The main ClevrTex dataset contains 50,000 instances, with additional instances in each of the PlainBG, VarBG, GrassBG, and CAMO variants. The testing-only OOD variant contains a further 10,000 instances.

Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?

The dataset is a sample of the near-infinite set of possible arrangements under our sampling distribution. Please see Section 3.1 for a description of the process to sample the scene.

What data does each instance consist of?

Each instance consists of the RGB scene image; depth, normal, albedo, and shadow masks (all PNG); and further metadata (JSON) detailing object positions, shapes, scales, and materials. During benchmarking, we use only the RGB image for training, and the segmentation masks and metadata for evaluation.

Is there a label or target associated with each instance?

For the task explored in this paper, unsupervised multi-object segmentation, the target labels are the segmentation masks, which are not used during training.

Is any information missing from individual instances?


Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)?

No, there are no relationships between different instances.

Are there recommended data splits (e.g., training, development/validation, testing)?

Yes, we adopt 10%/10%/80% test/val/train splits by instance index for each dataset, with the exception of the OOD variant, which is used for evaluation only. The rationale is that the data comes from the same generation process for each variant and can already be considered randomized. Simply using the image index to separate the splits makes data loading easy and removes the need to distribute canonical split indices.
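An index-based split like this can be assigned deterministically; a sketch under the assumption that the first 10% of indices form the test set and the next 10% the validation set (the exact index ranges are our assumption, not a stated convention):

```python
def split_of(index: int, total: int) -> str:
    """Assign an instance to a split by its position in the index order.

    Assumed convention: first 10% -> test, next 10% -> val,
    remaining 80% -> train, matching the 10%/10%/80% ratios above.
    """
    if index < total * 0.1:
        return "test"
    if index < total * 0.2:
        return "val"
    return "train"
```

Because the assignment depends only on the index, every user recovers the same splits without a canonical index file.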

Are there any errors, sources of noise, or redundancies in the dataset?


Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?

The dataset is self-contained.

Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?


Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?


Does the dataset relate to people? If not, you may skip the remaining questions in this section.


Does the dataset identify any subpopulations (e.g., by age, gender)?


Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?


Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?


a.3 Collection process

How was the data associated with each instance acquired?

The data was generated.

What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?

The images were rendered using Blender 2.9.3 software on generic systems.

If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?

See the similar question in the Composition section.

Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?

The authors were involved in the process of generating this dataset.

Over what timeframe was the data collected?

The datasets were rendered over a period of several weeks.

Were any ethical review processes conducted (e.g., by an institutional review board)?


Does the dataset relate to people? If not, you may skip the remainder of the questions in this section.


a.4 Preprocessing/cleaning/labeling

Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?

No, the dataset was generated together with labels.

Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?


Is the software used to preprocess/clean/label the instances available?


a.5 Uses

Has the dataset been used for any tasks already?

In the paper, we show and benchmark the intended use of this dataset: the unsupervised multi-object segmentation setting.

Is there a repository that links to any or all papers or systems that use the dataset?

We will be listing these on the website.

What (other) tasks could the dataset be used for?

We include additional information maps when generating this dataset, which could be used to explore the value of extra modalities as supervision or as targets. As mentioned before, we also generate the metadata necessary for a CLEVR-like QA task.

Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?


Are there tasks for which the dataset should not be used?

This dataset is meant for research purposes only.

a.6 Distribution

Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?


How will the dataset will be distributed (e.g., tarball on website, API, GitHub)?

The dataset and related evaluation code are available on the website, which allows users to download and read in the data.

When will the dataset be distributed?

The dataset is available now.

Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?


Have any third parties imposed IP-based or other restrictions on the data associated with the instances?

The original textures used in rendering objects are copyrighted by Poliigon Pty Ltd and cannot be redistributed to third parties. This applies only to the texture images used in creating this dataset. The materials used for the main dataset are freely available under a non-commercial license, and we include instructions to retrieve them alongside the generation code. Textures used in the evaluation-only OOD variant are not available free of charge (we obtained them under a commercial license), but their catalogue is similarly included with the code. The dataset instances themselves carry no IP-based restrictions.

Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?

Not that we are aware of. Regular UK laws apply.

a.7 Maintenance

Who is supporting/hosting/maintaining the dataset?

The dataset is supported by the authors and by the VGG research group. The main contact person is Laurynas Karazija.

How can the owner/curator/manager of the dataset be contacted (e.g., email address)?

The authors of this dataset can be reached at their e-mail addresses: {laurynas,chrisr,iro}

Is there an erratum?

If errors are found, an erratum will be added to the website.

Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?

Any potential future updates or extensions will be communicated via the website. The dataset will be versioned.

If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?


Will older versions of the dataset continue to be supported/hosted/maintained?

We plan to continue hosting older versions of the dataset.

If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?

Yes, we make the dataset generation code available.

a.8 Other questions

Is your dataset free of biases?


Can you guarantee compliance to GDPR?

No, we are unable to comment on legal issues.

a.9 Author statement of responsibility

The authors bear all responsibility in case of violation of rights and confirm the licence associated with the dataset and its images.

Appendix B Dataset

The dataset can be accessed via the project website. In ClevrTex and its variants, each instance contains:

  1. RGB scene image

  2. semantic mask image

  3. depth mask image

  4. shadow mask image

  5. albedo mask image

  6. normal mask image

  7. Metadata JSON, which further details:

    1. number of objects

    2. background material

    3. shape of each object

    4. size of each object

    5. rotation of each object

    6. scene (3D) coordinates of each object

    7. image (2D) coordinates of each object

    8. material of each object

    9. color (only relevant on VarBG) of each object

    10. scene directions (CLEVR metadata)

    11. object relationships (CLEVR metadata)

All images are provided as PNG. We also provide code for reading in the dataset, as well as evaluation utilities for general performance metrics and per-shape/material/size breakdowns. The dataset is provided under the CC-BY license.

Appendix C Supplementary Material

c.1 Data

All images are center-cropped and further downsampled as a pre-processing step before being fed to the models. This introduces partially visible objects into the datasets, removes uninteresting empty edges of the scenes, and lowers the computational load. Many of the benchmarked models were developed to work at this resolution. We include helper code to load our datasets for convenience. For CLEVR, we use a version that includes segmentation masks for evaluation, for which we adopt the standard 70k/15k/15k train/validation/test splits.
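As a rough illustration of this pre-processing, the following sketch center-crops and downsamples an image array. The exact crop and target resolutions are those used by the released loader code; the nearest-neighbour resize here is only a stand-in for the actual interpolation (nearest interpolation is, in any case, what segmentation masks require), and the function names are ours:

```python
import numpy as np

def center_crop(img, crop):
    """Center-crop an HxW(xC) array to a crop x crop patch."""
    h, w = img.shape[:2]
    top = (h - crop) // 2
    left = (w - crop) // 2
    return img[top:top + crop, left:left + crop]

def downsample(img, out):
    """Nearest-neighbour downsample to out x out by index selection."""
    h, w = img.shape[:2]
    rows = np.arange(out) * h // out
    cols = np.arange(out) * w // out
    return img[rows][:, cols]
```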

c.2 Metrics

As previously mentioned, prior work [Greff et al., 2019, Engelcke et al., 2020, Locatello et al., 2020] evaluated using the adjusted Rand index (ARI) metric calculated only on pixels that correspond to the foreground objects, filtered using ground-truth data. We share the concern of some authors [Engelcke et al., 2020, Monnier et al., 2021] that such an evaluation protocol does not account for whether objects are considered part of the background, nor for how well models segment object boundaries. Instead, we opt for the mIoU metric, familiar from the supervised segmentation setting. The predicted objects are matched with ground-truth segments using the Hungarian matching algorithm, which assigns at most one predicted component to each true mask, maximizing overall overlap. A mean is taken over all objects, including the background. We provide a side-by-side comparison of these metrics for all benchmarked models in Tables 5 and 6. We chose mIoU over the ARI metric, as it weights all objects equally irrespective of their size; ARI is based on counting pairs and thus gives larger regions, such as backgrounds, more weight.
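The matching step can be sketched as follows. This is a minimal NumPy/SciPy version of Hungarian-matched mIoU, not the released evaluation code (the function name is ours); the mean is taken over ground-truth segments, background included:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_miou(pred, gt):
    """mIoU between predicted and ground-truth label maps of the same shape.

    Each ground-truth segment (including the background) is matched with at
    most one predicted component via Hungarian matching, maximizing overlap.
    """
    pred_ids = np.unique(pred)
    gt_ids = np.unique(gt)
    # IoU matrix between every (gt, pred) segment pair.
    iou = np.zeros((len(gt_ids), len(pred_ids)))
    for i, g in enumerate(gt_ids):
        g_mask = gt == g
        for j, p in enumerate(pred_ids):
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            iou[i, j] = inter / union if union else 0.0
    # Hungarian matching on negated IoU (linear_sum_assignment minimizes).
    rows, cols = linear_sum_assignment(-iou)
    # Ground-truth segments left unmatched contribute zero to the mean.
    return iou[rows, cols].sum() / len(gt_ids)
```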

Model CLEVR ClevrTex CAMO OOD
ARI-FG (%) mIoU (%) ARI-FG (%) mIoU (%) ARI-FG (%) mIoU (%) ARI-FG (%) mIoU (%)
SPAIR* [Crawford and Pineau, 2019]
SPACE [Lin et al., 2020]
GNM [Jiang and Ahn, 2020]
MN [Smirnov et al., 2021]
DTI [Monnier et al., 2021]
GenV2 [Engelcke et al., 2021]
eMORL [Emami et al., 2021]
MONet [Burgess et al., 2019]
SA [Locatello et al., 2020]
IODINE [Greff et al., 2019]
Table 5: Benchmark results on CLEVR, ClevrTex, CAMO, and OOD comparing ARI-FG and mIoU metrics. Results are computed over 3 runs.
Model PlainBG VarBG GrassBG
ARI-FG (%) mIoU (%) ARI-FG (%) mIoU (%) ARI-FG (%) mIoU (%)
SPAIR* [Crawford and Pineau, 2019] 51.75 39.32 0.05 0.00 0.00 0.00
SPACE [Lin et al., 2020] 34.25 31.96 29.36 16.10 32.52 33.85
GNM [Jiang and Ahn, 2020] 40.73 26.49 66.79 49.78 67.31 53.15
MN [Smirnov et al., 2021] 38.34 10.16 43.64 11.51 59.79 34.80
DTI [Monnier et al., 2021] 77.74 36.03 81.56 38.82 82.37 37.65
GenV2 [Engelcke et al., 2021] 85.33 24.39 66.04 14.40 21.12 2.88
eMORL [Emami et al., 2021] 52.00 29.39 50.18 22.92 69.64 19.38
MONet [Burgess et al., 2019] 57.10 38.72 51.87 23.73 37.97 21.29
SA [Locatello et al., 2020] 51.75 39.32 89.78 62.57 43.55 12.88
IODINE [Greff et al., 2019] 54.32 23.83 75.33 39.86 66.91 25.76
Table 6: Results on PlainBG, VarBG, and GrassBG variants, comparing ARI-FG and mIoU metrics.

c.3 Hyper-parameters

Where available in PyTorch, we use the official implementation of the benchmarked methods. Otherwise, we use a re-implementation, checked against the original method, and further verify that it produces results similar to those reported in the corresponding papers. Where the original methods have been applied to CLEVR (or a variant of it), we employ the same hyper-parameter configuration for CLEVR. For other datasets, or for methods that have not been trained on CLEVR, we follow a best-effort approach to tuning hyper-parameters.

For MONet Burgess et al. [2019], we reduced the batch size from 64 to 63. IODINE Greff et al. [2019] and MONet were trained for 300k iterations instead of 1M, as we noticed that no changes to the learned configurations, the running loss, or performance were taking place after 250k iterations. For MONet and IODINE, we found the original configurations worked well enough. For SPACE Lin et al. [2020], we concentrated on finding a suitable output standard deviation for the foreground and background networks. Although higher values were crucial for both the Genesis and GNM models, we could not identify a configuration that produced better results than the original 0.15 in our exploration. The following describes any adjustments made to the original configurations of the other models.

Slot Attention Locatello et al. [2020]

We use 11 slots in all tests. We varied the number of attention iterations and found the model performs best when trained with 3. We maintained the original learning rate, batch size, and optimizer settings, and trained for the suggested 500k iterations.

Efficient MORL [Emami et al., 2021]

We increase the number of components to 11 and change the input resolution to be in line with the other methods studied. The GECO reconstruction target is further adjusted to account for the change in resolution. We use one target value for CLEVR and PlainBG, and higher values for VarBG and GrassBG and for ClevrTex, OOD, and CAMO, due to the more complex backgrounds, selecting the best-performing values from a candidate set. Following the release of ClevrTex, the eMORL codebase has been updated to include configuration settings for ClevrTex; the authors provided us with trained models that show better performance (Table 3) in our evaluation.

Gnm Jiang and Ahn [2020]

We use a slot grid with a total of 16 slots, and a latent dimension of 64 for objects and 10 for the background. We found the model extremely sensitive to the output standard deviation: values of 0.2 on CLEVR and 0.5 on ClevrTex worked well, while, notably, with values of 0.4 and 0.6, GNM could not learn to segment the scene in our testing. We trained for 300k iterations.

GenesisV2 Engelcke et al. [2021]

We focused our hyper-parameter selection on the output standard deviation and the GECO [Rezende and Viola, 2018] objective. On CLEVR, we used a GECO goal of 0.5655 and an output standard deviation of 0.7, which was crucial for the model to learn, as lower values did not produce good segmentations. On ClevrTex, we lowered the GECO goal to 0.5, which outperformed the CLEVR setting.

Conv Encoder
Layer Size/Ch. Act. Comment
Conv 32 ReLU stride 2
Conv 32 ReLU stride 2
Conv 64 ReLU stride 2
Conv 64 ReLU stride 2
Avg Pool
MLP 128 ReLU
MLP Softplus for only
Broadcast Decoder
Layer Size/Ch. Act. Comment
Broadcast add coord.
Conv 32 ReLU no pad
Conv 32 ReLU no pad
Conv 32 ReLU no pad
Conv 32 ReLU no pad
Conv 4 Sigmoid for masks only
Table 7: Architecture of component networks changed in SPAIR*.

Spair* Crawford and Pineau [2019]

As mentioned before, we incorporated a background VAE network into SPAIR, using a convolutional encoder and a spatial broadcast decoder Watters et al. [2019]. We also replaced the MLP-based glimpse decoder with a similar spatial broadcast decoder. Additionally, we added an extra convolution to the backbone network to handle the input size. In this configuration, SPAIR had 16 slots. We set the latent dimension of objects to 64, and of the background to 1 on CLEVR and 4 on ClevrTex. We trained for 250k iterations using a batch size of 128, the Adam optimizer, a learning rate of 1e-4, and gradient clipping when the norm exceeded 1.0. We used a value of 2.7. On CLEVR, we used an output standard deviation of 0.15; on ClevrTex, we annealed it from 0.5 to 0.15 over 50k iterations. On CLEVR, the object-presence prior hyper-parameter was annealed from 0.0001 to 0.99 over 10k iterations; on ClevrTex, over 50k iterations.
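The annealing schedules above can be sketched as a simple linear ramp that holds its final value once the duration elapses. This is a hypothetical helper, shown only to make the schedules concrete, not the actual training code:

```python
def linear_anneal(step, start, end, duration):
    """Linearly anneal a value from `start` to `end` over `duration` steps,
    then hold at `end` (e.g. sigma 0.5 -> 0.15 over 50k iterations)."""
    t = min(max(step / duration, 0.0), 1.0)
    return start + (end - start) * t
```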

DTISprites Monnier et al. [2021]

On CLEVR, we used the settings from CLEVR6 in the original work, except for increasing the possible number of objects. We found that using ten slots leads to better segmentation results than setting it to 11 as with the other models (one more than the maximum number of objects). On ClevrTex, we used color and projective transforms for both sprites and backgrounds.

MarioNette Smirnov et al. [2021]

We adjusted the model to learn to select and use backgrounds from a dictionary, in the same way as sprites. Additionally, we lowered the layer size to 4, using two layers, which gives 32 possible slots. On ClevrTex, we increased the sizes of both the background and sprite dictionaries to as large as would fit in GPU memory. We trained with 60 sprites and a single background on CLEVR, PlainBG, and GrassBG, increasing the number of backgrounds to 60 on VarBG and ClevrTex.

c.4 Extra Figures

Here, we include extra figures showing additional outputs of all benchmarked models on CLEVR, ClevrTex, the test sets, and variants (Fig. 5). Fig. 6 contains example outputs of sprite- and glimpse-based models when they fail to learn the correct foreground and background elements and instead learn to tile the image.

Figure 5: Comparison of various model reconstruction and segmentation outputs on CLEVR, ClevrTex and variants. Best viewed digitally.


MN Smirnov et al. [2021]

GNM Jiang and Ahn [2020]

SPACE Lin et al. [2020]
Figure 6: Tiling behaviour common to glimpse- and sprite-based models. Such tiling occurred whenever the model could not reproduce the foreground and background elements with the respective component networks to sufficient accuracy. The models are trained on ClevrTex. GNM is shown here trained with output σ.

c.5 Dataset Construction

The main method by which the dataset is constructed is described in Section 3.1. Here, we include additional figures showcasing some steps in the dataset creation, and provide a catalog of the materials used.


Fig. 8 shows the possible range of randomized light positions in the scene, from warm close-up lights casting many shadows onto other objects, to distant lights casting small, soft shadows onto the background even in crowded scenes. Fig. 8 also shows the 4 possible shapes at the 3 possible scales used in ClevrTex.

Shape Adjustments

ClevrTex features only 4 simple object shapes. This is mitigated by a range of material-specific geometry adjustments, bump mapping, and transparency mapping applied to the seed shapes. Fig. 9 shows the effect of these shape perturbations in a scene where no other material properties have been applied to the objects.


The camera position is jittered along with the lights. We use a perspective camera with a focal length of 0.035m and zero shift.

(a) Background materials
(b) Object materials
Figure 7: Distribution of 60 materials in ClevrTex dataset between train/val/test splits, shown as a percentage. (a) shows distribution for the background. (b) shows distribution for objects.

Dataset Splits

ClevrTex and its variants are split into test/val/train sets using 10%/10%/80% proportions after generation. The splits are made based on the index of the example; that is, the first 10% form the test split. This simple scheme is motivated by the uniform sampling of the scene composition. Fig. 7 shows that this results in a roughly proportional distribution of materials for both backgrounds and objects across the splits. The OOD variant is test-only.
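The index-based split scheme can be sketched as follows (a hypothetical helper mirroring the description above, not the released loader code):

```python
def split_indices(n):
    """Index-based splits used for ClevrTex: the first 10% of examples form
    the test split, the next 10% validation, and the remaining 80% train."""
    n_test = n // 10
    n_val = n // 10
    test = range(0, n_test)
    val = range(n_test, n_test + n_val)
    train = range(n_test + n_val, n)
    return test, val, train
```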

Figure 8: Effects of jittering light positions in the scenes. The images show two extremes with the mean position in the middle. The images also contain a showcase of 4 shapes present in the main ClevrTex dataset at 3 possible scales. The scenes are rendered without any materials.
Figure 9: Showcase of a diverse set of shape perturbations applied to the basic cube (top left) through a combination of displacement mapping, bump mapping, and transparency mapping. Other material properties are not applied to the objects, to show only the displacement details.


Fig. 10 contains the list of 60 materials used in generating ClevrTex and its PlainBG, VarBG, GrassBG, and CAMO variants. Please see our generation code for further information. Fig. 11 contains 25 materials used in OOD variant.

Figure 10: Materials used in ClevrTex dataset.
Figure 11: Materials used in OOD dataset variant.