Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision

08/12/2021 · by Xiaoshi Wu et al.

The abundance and richness of Internet photos of landmarks and cities have led to significant progress in 3D vision over the past two decades, including automated 3D reconstructions of the world's landmarks from tourist photos. However, a major source of information available for these 3D-augmented collections—namely language, e.g., from image captions—has been virtually untapped. In this work, we present WikiScenes, a new, large-scale dataset of landmark photo collections that contains descriptive text in the form of captions and hierarchical category names. WikiScenes forms a new testbed for multimodal reasoning involving images, text, and 3D geometry. We demonstrate the utility of WikiScenes for learning semantic concepts over images and 3D models. Our weakly-supervised framework connects images, 3D structure, and semantics—utilizing the strong constraints provided by 3D geometry—to associate semantic concepts to image pixels and 3D points.



Code repository: Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision (ICCV 2021).

1 Introduction

Internet photos capturing tourist landmarks around the world have driven research in 3D computer vision for over a decade [52, 19, 17, 2, 48, 34]. Diverse photo collections of landmarks are unified by the underlying 3D scene geometry, despite the fact that a scene can look dramatically different from one image to the next due to varying illumination, alternating seasons, or special events. This geometric anchoring can be exploited when learning a range of geometry-related vision tasks that require large amounts of diverse training data, such as novel view synthesis [35, 29], single-view depth prediction [28], and relighting [60, 59]. However, prior work on tourist photos of landmarks has focused almost exclusively on lower-level reconstruction tasks, and not on higher-level scene understanding or recognition tasks.

We seek to connect such 3D-augmented image collections to a new domain: language. Natural language is an effective way to describe the complexities of the 3D world; 3D scenes exhibit features such as compositionality and physical and functional relationships that are easily captured by language. For instance, consider the images of the Barcelona and Reims Cathedrals in Fig. 1. Cathedrals like these have common elements, such as facades, columns, arches, portals, domes, etc., that tend to be physically assembled in consistent ways across all cathedrals (and related buildings like basilicas). Using modern structure from motion methods, we can reconstruct 3D models of the world’s cathedrals, but we cannot directly infer such rich semantic connections that exist across all cathedrals. Such reasoning calls for methods that jointly consider language, images, and 3D geometry.

However, despite impressive progress connecting images to natural language descriptions across tasks such as image captioning [57, 32, 4] and visual grounding [56, 23, 20], little attention has been given to the joint analysis of 3D vision and language. In this work, we facilitate such multimodal analysis with a new framework for creating 3D-augmented datasets from Wikimedia Commons, a diverse, crowdsourced, and freely-licensed large-scale data source. We use this framework to create WikiScenes, a new dataset that contains 63K paired images and textual descriptions capturing 99 cathedrals, along with their associated 3D reconstructions, illustrated in Fig. 1. WikiScenes enables a range of new explorations at the intersection of language, vision, and 3D.

We demonstrate the utility of WikiScenes for the specific task of mining and learning semantic concepts over collections of images and 3D models. Our key insight is that while raw textual descriptions represent a weak, noisy form of supervision for semantic concepts, the underlying 3D structure of scenes yields powerful physical constraints that grant robustness to data noise and can ground models. In particular, we devise a novel 3D contrastive loss that leverages scene geometry to regularize learning of semantic representations. We also show that 3D scene geometry leads to improved vision-language models in a caption-based image retrieval task, where geometry helps in augmenting the training data with semantically related samples.

In summary, our key contributions are:

  • WikiScenes, a large-scale dataset combining language, images, and 3D models, which can facilitate research that jointly considers these modalities.

  • A contrastive learning method for learning semantic image representations leveraging 3D models.

  • Results that demonstrate that our proposed model can associate semantic concepts with images and 3D models, even for never-before-seen locations.

2 Related Work

Joint analysis of 3D and language. We have recently seen pioneering efforts to jointly analyze 3D and language. Chen et al. [11] learn a joint embedding of text and 3D shapes belonging to the ShapeNet dataset [9], and demonstrate these embeddings on text-to-shape retrieval and text-to-shape generation. Achlioptas et al. [1] learn language for differentiating between shapes. To do so, they generate a dataset consisting of triplets of ShapeNet chairs with utterances distinguishing one chair from the other two. In contrast to these object-centric works, Chen et al. [10] consider full 3D scenes. They construct a multimodal dataset for indoor scenes and localize 3D objects in the scene using natural language. We also consider 3D scenes, but in our case, the 3D scenes capture complex architectural landmarks, and their images and textual descriptions are gathered from Wikimedia Commons.

Vision and language. Many recent works connect images to natural language descriptions. Popular tasks include instruction following [5, 36, 8], visual question answering [6, 16, 24, 4], and phrase localization [33, 58, 54]. However, prior work has shown that models combining vision and language often rely on simple signals or fail to jointly consider both modalities. For instance, visual question answering techniques often ignore the image content [3], and visually-grounded syntax acquisition methods essentially learn a simple noun classifier [26]. We assemble Internet collections that are grounded to a 3D model, providing physical constraints that can better connect language and vision.

Figure 2: Images paired with hierarchical WikiCategories from the root (top) to the leaf (bottom).

Distilling information from Internet collections. Several works mine Internet collections capturing famous landmarks for objects [18, 43], events [45], or named parts [55] using image clustering techniques. Other work analyzes camera viewpoints in large-scale tourist imagery to automatically summarize a scene [51] or segment it into components [50].

Other prior work analyzes image content together with textual tags, geotags, and other metadata to organize image collections. Crandall et al. [14] use image features and user tags from geotagged Flickr images to discover and classify world landmarks. 3D Wikipedia analyzes textual descriptions of tourist landmarks, leveraging photo co-occurrences to annotate specific 3D models like the Pantheon [46]. In contrast to the above methods, which operate on each location in isolation, our work aims to discover semantic concepts spanning a whole category of locations, such as all the world's cathedrals. We further use a contrastive learning framework for detecting these concepts in unseen landmarks.

3 The WikiScenes Dataset

Our WikiScenes dataset consists of paired images and language descriptions capturing world landmarks and cultural sites, with associated 3D models and camera poses. WikiScenes is derived from the massive public catalog of freely-licensed crowdsourced data available in Wikimedia Commons, which contains a large variety of images with captions and other metadata. Within Wikimedia Commons, landmarks are organized into a hierarchy of semantic categories. In this work, we focus on cathedrals as a showcase of our framework, although our methodology is general and can be applied to other types of landmarks. We will also release companion datasets featuring mosques and synagogues.

To create WikiScenes, we first assembled a list of cathedrals using prior work on mining landmarks from geotagged photos [14]. Each cathedral corresponds to a specific category on Wikimedia Commons, at which is rooted a hierarchy of sub-categories that each contain photos and other relevant information. We refer to a Wikimedia Commons category as a WikiCategory. For example, "Cathédrale Notre-Dame de Paris" is the name of the WikiCategory corresponding to the Notre Dame Cathedral in Paris. It has a descendant WikiCategory called "Nave of Notre-Dame de Paris" that features photos of the nave (a specific region of a cathedral interior), as well as yet more detailed WikiCategories. Each landmark's root WikiCategory node contains "Exterior", "Interior" and "Views" subcategories. We download all images and associated descriptions under these subcategories. We extract two forms of textual descriptions for each image:

  • Captions associated with images, describing the image using free-form language (Figure 1).

  • The WikiCategory hierarchy associated with each image. Example hierarchies are shown in Figure 2.

Because data stored in Wikimedia Commons is not specific to any single language edition of Wikipedia, our dataset contains text in numerous languages, allowing for future multilingual tasks like learning of cross-lingual representations [53]. However, one can also train with text from a single language, such as English. Overall, WikiScenes contains 63K images of cathedrals with textual descriptions.

Candidates from captions

Candidates from the leaf categories

Distilled concepts

Figure 3: We visualize the raw text captured in WikiScenes captions (left) and leaf tags (center). Larger words are more frequent in the dataset. Our distilled concepts, obtained according to the algorithm described in Sec. 4.1, are listed on the right.

We integrate these Wikimedia Commons–sourced images with 3D reconstructions of landmarks built using COLMAP [48], a state-of-the-art SfM system that reconstructs camera poses and sparse point clouds. For each 3D point in the reconstructed scene, we track all its associated images and corresponding pixel locations. In total, K images of cathedrals were successfully registered in 3D. Example 3D reconstructions are shown in Figure 1.

Dataset statistics. WikiScenes is assembled from cathedrals spanning five continents and 23 countries. The most common caption languages are English, French, and Spanish. The Notre Dame Cathedral in Paris represents the largest subset, with 5,700 image-description pairs. The median number of words in a caption is seven; the average is significantly higher, as some captions contain detailed excerpts about their associated landmark. Many captions contain at least one spatial connector (the spatial connectors we consider are: above, over, below, under, beside, behind, from, towards, left, right, east, and west), suggesting that our captions describe rich relationships between different parts of a structure. Please see the supplemental material for detailed distributions over attributes including language and collection size.

4 Mining WikiScenes for Semantic Concepts

To demonstrate the semantic knowledge encoded in our dataset, we mine WikiScenes for semantic concepts associated with the Cathedral landmark category. While the raw textual descriptions are noisy, we show that we can distill a clean set of concepts by exploiting within-scene 3D constraints (Sec. 4.1). We then associate these concepts to images (Sec. 4.2), and show that they can be used to train neural networks to visually recognize these concepts.

4.1 Distilling semantic concepts

To determine a set of candidate concepts, we first assemble a list of all nouns found in the leaf nodes of the WikiCategories, hereafter denoted the leaf categories, as empirically we found that the leaf categories are most representative of the image content. Since we are interested in a list of abstract concepts and not in detecting specific places and objects, we filter out nouns detected as entities using an off-the-shelf Named Entity Recognition (NER) tagger [44]. Figure 3 (middle) visualizes the initial candidate list as a word cloud (more frequent words appear larger). As the figure illustrates, this list contains nouns that indeed describe semantic regions in the "Cathedral" category, but it also contains many outliers: nouns not specifically related to the "Cathedral" category, such as "view" or "photograph".

As an alternative, we can also extract nouns directly from the captions (Figure 3, left). This results in a noisier list, as the captions are generally longer with more diverse and detailed descriptions. In addition, leveraging category names leads to more images with noun descriptions—over K images have at least one noun in their leaf category, whereas only K images have an English caption with a noun.

To distill a clean set of semantic concepts from the initial list, we identify and select concepts that pass two tests: they are (1) well-supported in the collection (i.e., they occur frequently in the textual descriptions) and (2) coherent, in the sense that they consistently reference identical or visually similar elements. While well-supported concepts can be determined by simple frequency measurements, coherence is more difficult to assess from noisy Internet images and their descriptions. However, because these images are physically grounded via a 3D model, we can measure coherence in 3D.

For each candidate concept, e.g., "facade", we construct multiple visual adjacency graphs (one per landmark) over the images associated with that concept. Note that an image can be associated with multiple concepts, according to the nouns detected in its leaf category. For each graph, nodes correspond to images, and two images are connected by an edge if they share at least k common keypoints in the 3D model (where k is set empirically). We are interested in measuring the degree to which the images of the candidate concept are clustered together in 3D. Therefore, for each landmark with image graph G = (V, E), we compute the graph density

    D(G) = 2|E| / ( |V| (|V| - 1) ).

The coherence of the candidate concept is measured as the average graph density, obtained by taking the average over all corresponding landmark graphs with a minimum number of nodes.

Finally, candidate concepts that appear in a minimum number of landmarks (roughly a quarter of the "Cathedral" category) and have a sufficiently high coherence score are added to our distilled set (Figure 3, right).
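The coherence test above can be sketched in a few lines of Python (a hedged illustration: the helper names and the default edge threshold k are ours, since the paper's exact values are not all given here):

```python
from itertools import combinations

def graph_density(num_nodes, num_edges):
    """Density of an undirected graph: 2|E| / (|V| (|V| - 1))."""
    if num_nodes < 2:
        return 0.0
    return 2.0 * num_edges / (num_nodes * (num_nodes - 1))

def coherence(images, shared_keypoints, k=50):
    """Density of one landmark's visual adjacency graph for a concept.

    images           : image ids associated with a candidate concept.
    shared_keypoints : (a, b) -> number of common 3D keypoints.
    k                : edge threshold (illustrative default).
    """
    edges = sum(
        1 for a, b in combinations(images, 2) if shared_keypoints(a, b) >= k
    )
    return graph_density(len(images), edges)
```

The concept-level coherence score is then the average of this quantity over all landmark graphs with enough nodes.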

4.2 Associating images with distilled concepts

Although the distilled set of semantic concepts is constructed only from text appearing in the leaf categories, we utilize both the image captions and leaf categories when generating labels: an image is associated with a concept if the concept is present either in the caption or in its leaf categories. An image can be associated with multiple concepts.

One exception is that captions often mention, after spatial connectors such as "beside", "next", "from", and "towards", concepts that are merely spatially related to the main concept present in an image. For example, an image associated with the text "nave looking towards portal" should be associated with "nave", but not necessarily with "portal". Hence, we do not associate a concept with an image if the concept appears anywhere after a spatial connector.
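This labeling rule can be sketched as follows (an illustrative implementation; the tokenization and function names are ours, and the connector list follows the one given in Sec. 3):

```python
SPATIAL_CONNECTORS = {
    "above", "over", "below", "under", "beside", "behind",
    "from", "towards", "left", "right", "east", "west",
}

def concept_labels(text, concepts):
    """Concepts to associate with an image given its text: a concept is
    kept only if it appears before any spatial connector."""
    labels = set()
    seen_connector = False
    for word in text.lower().split():
        word = word.strip(".,;:!?\"'")
        if word in SPATIAL_CONNECTORS:
            seen_connector = True
        elif word in concepts and not seen_connector:
            labels.add(word)
    return labels
```

For example, "nave looking towards portal" yields only the label "nave".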

5 Learning Semantic Representations

Figure 4: Overview of our contrastive learning framework. Given an image pair with shared keypoints (left), we jointly train a model to classify the images into one of the concepts from the learned score maps and to output a higher similarity for pixels mapping to the same point in 3D (in blue). Negative pairs are constructed by sampling non-corresponding points from other images in the batch.

WikiScenes can be used to study a range of different problems. Here, we focus on semantic reasoning over 2D images and 3D models. In the previous section, we proposed a technique for discovering semantic concepts and associating these with images in WikiScenes. Now, we show how these image-level pseudo-labels can provide a supervision signal for learning semantic feature representations over an entire category of landmarks.

We seek to learn pixel-wise representations (in contrast to whole-image representations), because we wish to easily map knowledge from 2D to 3D and vice versa. We would also like our learned representations to be semantically meaningful. In other words, our distilled concepts should be identifiable from these pixel-wise representations. To this end, we devise a contrastive learning framework that computes a feature descriptor for every pixel in the image. We also show how our trained model can be directly utilized to estimate feature descriptors for 3D points through their associated images.

5.1 Training objectives

Our training data consists of image pairs (I1, I2) with shared keypoints, obtained from the corresponding SfM model. We use convolutional networks with shared weights to extract dense feature maps F1 and F2 whose width and height match those of the original images. For simplicity of notation, we assume both images have the same dimensions H × W. To train a feature descriptor model with such data, we propose to use two complementary loss terms: a novel 3D contrastive loss that utilizes within-scene physical constraints and a classification loss (Figure 4).

3D contrastive loss. We design a new 3D contrastive loss to encourage within-scene consistency, such that pixels from different images corresponding to the same 3D point should have similar features. This is unlike prior works on contrastive learning that use handcrafted data augmentations [13, 21] or synthetic images [41] to generate positive pairs—in our case the positive pairs are 2D pixels that are projections of the same point in 3D. This loss relates images with different characteristics, such as lighting and scale, allowing the model to better focus on semantics and providing higher robustness against such nuisance factors.

Our learning method works as follows. For each point p in I1 corresponding to a point q in I2 (i.e., they are both projections of the same 3D point P), we formulate a contrastive loss to maximize the mutual information between their descriptors f(p) and f(q). We consider a noise contrastive estimation framework [40], consisting of the positive pair (p, q) and N negative pairs (p, n_i):

    L_NCE = -log [ exp(s(p, q)) / ( exp(s(p, q)) + Σ_{i=1..N} exp(s(p, n_i)) ) ],

where the similarity s(·, ·) is computed as the dot product of feature descriptors scaled by a temperature τ:

    s(p, q) = f(p) · f(q) / τ.

This loss can be interpreted as the log loss of an (N + 1)-way softmax classifier that learns to classify p as q. The negative points n_i are sampled uniformly from other images in the same batch. To avoid collapsing the feature space, we normalize all feature descriptors to unit length.
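A minimal NumPy sketch of this noise contrastive estimation loss (illustrative only: it operates on single descriptors rather than batched network features, and assumes descriptors are already unit-normalized):

```python
import numpy as np

def nce_loss(f_p, f_q, f_negs, tau=0.07):
    """InfoNCE-style 3D contrastive loss for one correspondence.

    f_p, f_q : (D,) unit-normalized descriptors of two projections of
               the same 3D point (the positive pair).
    f_negs   : (N, D) descriptors sampled from other images in the
               batch (the negative pairs).
    tau      : temperature scaling the dot-product similarities
               (the 0.07 default is an assumption, not from the paper).
    """
    pos = np.dot(f_p, f_q) / tau        # s(p, q)
    negs = f_negs @ f_p / tau           # s(p, n_i) for each negative
    logits = np.concatenate(([pos], negs))
    # log loss of an (N + 1)-way softmax that classifies p as q
    return float(np.log(np.sum(np.exp(logits))) - pos)
```

When the positive pair is well separated from the negatives the loss approaches zero; when a negative is indistinguishable from the positive it grows toward log(N + 1).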

Semantic classification loss. For each image we also compute a semantic classification loss. Given C unique semantic concepts, we obtain unnormalized score maps from the feature descriptors using a simple 1×1 convolutional layer. That is, we map the D × H × W feature descriptor tensor to a C × H × W score map tensor, where each of the C slices corresponds to one of the semantic concepts.

Following the design proposed by Araslanov et al. [7], we add a background channel and compute a pixel-wise softmax to obtain normalized score maps, as well as image-level classification scores derived from the score maps using the method of Araslanov et al. Our semantic classification loss is defined as

    L_sem = L_cls + L_self,

where L_cls is a classification loss on image-level scores and L_self is a self-supervised semantic segmentation loss over pixel-wise predictions (where high-confidence pixel predictions serve as self-supervised labels). For both training and evaluation, we only consider images labeled with a single concept, and the one-hot class label is set according to our pseudo image label. We minimize a cross-entropy loss for both image-level and pixel-level predictions.
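The score-map computation amounts to a per-pixel linear classifier, which can be sketched as follows (a simplified stand-in: plain NumPy, with global average pooling for the image-level scores, whereas the paper follows the pooling scheme of [7]):

```python
import numpy as np

def score_maps(features, weights, bias):
    """Per-pixel linear classifier (equivalent to a 1x1 convolution).

    features : (D, H, W) dense feature descriptors for one image.
    weights  : (C + 1, D) classifier weights (C concepts + background).
    bias     : (C + 1,) classifier biases.
    Returns (C + 1, H, W) softmax-normalized score maps.
    """
    logits = np.tensordot(weights, features, axes=([1], [0]))
    logits += bias[:, None, None]
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def image_scores(maps):
    # Global average pooling as a simple stand-in for the pooling of [7].
    return maps.mean(axis=(1, 2))
```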

5.2 Inference

Figure 5: Segmenting an unseen 3D model of the interior of the Aachen Cathedral in Germany, comparing our model against the baseline trained without the 3D contrastive loss. Colors indicate the concepts facade (blue), window (orange), chapel (green), organ (red), nave (purple), tower (brown), choir (magenta), portal (pink), altar (yellow), and statue (cyan).

At inference time, we can feed an image from a never-before-seen location into our model (Figure 5). The model outputs pixel-wise feature descriptors and probability scores over the semantic concepts for each pixel (and also for the full image, if desired). We follow the procedure described in [7] to extract 2D segmentations. To output probability scores for a 3D point in the scene, we process all the images associated with this 3D point: the feature descriptors of all its 2D projections are averaged, and we process this average descriptor to output its associated probability scores. We associate a 3D point with one of the semantic concepts if its corresponding confidence score exceeds a fixed threshold.

6 Evaluation

In this section, we demonstrate our ability to learn semantic concepts shared across multiple landmarks. Specifically, we seek to answer the following questions:

  • Is WikiScenes suitable for learning these concepts?

  • How important is the 3D contrastive loss?

  • How well does our model generalize to Internet photos from never-before-seen locations?

We perform a variety of experiments to evaluate performance across multiple tasks, including classification, segmentation, and a caption-based image retrieval task that operates on the raw captions directly. These experiments are complemented with a visual analysis that highlights the unique characteristics and challenges of our data.

6.1 Implementation details

Data. Out of the 99 WikiScenes landmarks, 70 contain sufficient labeled data for training and evaluating our models (images are labeled using the approach described in Section 4.2). We create a 9:1 split at the landmark level, forming a test set of landmarks unseen during training (WS-U). For the 63 landmarks in the training set, we create a 9:1 split at the image level, forming a test set for known landmarks (WS-K) to evaluate how well our model can classify unseen images of familiar locations. Overall, we use almost 9K labeled images for training, with balanced class frequencies across the ten semantic concepts.

Training. We use a batch size of 32, corresponding to 16 image pairs. Only half of these are real pairs with shared keypoints, as we also want to consider labeled images that are not associated with any 3D reconstruction, possibly due to sparse sampling of views in those regions. Please refer to the supplementary material for additional implementation details.

Test Set  Model                        mAP   mAP*  facade window chapel organ nave  tower choir portal altar statue
WS-K      Baseline (w/o contrastive)   70.8  77.7  87.2   89.2   60.2   89.7  85.8  64.1  61.5  68.0   50.0  52.0
          Ours                         75.3  81.0  90.0   88.5   68.7   90.7  85.7  61.1  77.2  76.5   54.4  59.9
WS-U      Baseline (w/o contrastive)   48.3  64.0  71.0   92.2   10.7   57.3  71.0  53.4  43.6  31.1   25.8  27.1
          Ours                         52.0  67.3  77.7   93.4   16.5   49.4  77.3  46.1  44.1  35.2   39.9  40.0
Table 1: Classification performance. We report mean average precision (mAP; mAP* averages over all images rather than per class) and the average precision (AP) per distilled concept. Results of our model are compared against a model trained without our 3D contrastive loss. Performance is reported on images from known landmarks (WS-K) and unseen landmarks (WS-U). The best results are highlighted in bold.

6.2 Label quality

We assess the accuracy of our pseudo-labels by manually inspecting 50 randomly sampled training images for each concept, and identifying images with incorrect labels (i.e., the image does not picture all or part of the semantic concept). We found an accuracy greater than 98%, suggesting that our pseudo-labels are highly accurate. We found that most errors are due to images that contain schematic diagrams or scans of the concept (and not natural images capturing it). Please refer to the supplementary material for visualizations of our training samples.

6.3 3D-consistency guided classification

Next we evaluate to what extent semantic concepts can be learned across a multitude of landmarks, and the effect on classification results of the 3D consistency regularization enabled by our dataset. We perform an image classification evaluation using our pseudo-labels, which we treat as ground truth for evaluation purposes. We compare our model to a model with the same architecture, trained using the semantic classification loss but without our 3D contrastive loss, denoted as the baseline model, which is adapted from the one proposed by Araslanov et al. [7].

For each model, we report the overall mean average precision (mAP), as well as a breakdown of AP per concept, in Table 1. Results are reported for test images from known locations (WS-K) and unseen locations (WS-U). As the table illustrates, our model outperforms the baseline model on most of the concepts and yields significant gains in mAP, boosting overall performance by 4.5 and 3.7 points when evaluating on WS-K and WS-U, respectively (and an improvement of 3.3 points when averaging across images, which is less affected by class frequencies). We provide additional experiments and an analysis of errors in the supplementary material.

Figure 6: Segmenting images of unseen landmarks (input, our predictions, and ground-truth masks). Pixels are labeled facade, portal, organ, window, and tower, from left to right.

6.4 2D and 3D segmentation

Our framework learns pixel-wise features that are useful beyond classification, e.g., for producing segmentation maps for 2D images and 3D reconstructions. We show segmentation results for 2D images in Figure 6 and for 3D reconstructions in Figures 1 and 5.

We manually label a random subset of test images (from unseen landmarks) for evaluating 2D segmentation performance and report standard segmentation metrics in Table 2. Specifically, we labeled images spanning six concepts that have definite boundaries (facade, portal, window, organ, tower, and statue). The distributions across these classes are roughly uniform (with 24-50 images per class).

Table 2 shows the average intersection-over-union (IoU), precision, and recall on the manually labeled set. These results show that our 3D contrastive loss boosts performance on all metrics. Precision is significantly higher (80.8 vs. 68.6), with a modest increase in IoU and recall.

Model                        IoU   Precision  Recall
Baseline (w/o contrastive)   25.4  68.6       28.4
Ours                         27.2  80.8       29.6
Table 2: Image segmentation performance on the manually labeled set.

To evaluate 3D segmentation performance, as it is difficult to obtain ground-truth 3D segmentations for large-scale landmarks whose reconstructions span thousands of points, we design two proxy metrics to assess both the completeness and the accuracy of the 3D results: (i) the fraction of ambiguous points A, and (ii) the interior-exterior error E (both dependent on the pointwise confidence scores).

The fraction of ambiguous points A quantifies the extent to which the model associates concepts to 3D points with high confidence. To compute A, we measure the fraction of points that are not associated with any concept, averaging over all landmarks. For example, A = 0 means that the model's predictions for every point were consistent in 3D space across all images, and thus all points were successfully associated with concepts, while A = 1 means that all points are ambiguous in their semantic association.

Baseline  0.50  0.78  0.10  0.09  0.56  0.83  0.13  0.10
Ours      0.43  0.70  0.10  0.06  0.40  0.69  0.11  0.06
Table 3: 3D segmentation evaluation. Proxy metrics A (fraction of ambiguous points) and E (interior-exterior error) are described in detail in Section 6.4. For both metrics, lower is better.

Due to limited visual connectivity, 3D reconstructions of landmarks typically are broken into one or more exterior reconstructions and one or more interior reconstructions. Thus, we devise the interior-exterior error E to quantify to what extent concepts that should be exclusively found in either an exterior reconstruction or an interior reconstruction are mixed into a single reconstruction. For example, for the interior 3D reconstruction shown in Figure 5, we do not expect to see points labeled as "facade" or "tower", since those concepts appear outdoors. Interior concepts include "organ", "nave", "altar", and "choir", and exterior concepts include "portal", "facade", and "tower". For each 3D reconstruction R, the error is defined as

    E(R) = min( p_ext(R), 1 - p_ext(R) ),

where p_ext(R) is the probability of an exterior concept in the 3D reconstruction (normalized over the sum of exterior and interior concept probabilities in the reconstruction). We perform a weighted average over all the reconstructions, such that larger 3D reconstructions affect the average accordingly.
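Under the reading that the error is zero for a purely interior or purely exterior reconstruction and maximal for an even mix, the metric can be sketched as follows (an assumption-laden illustration of this interpretation, not necessarily the paper's exact formula):

```python
import numpy as np

def interior_exterior_error(p_ext):
    """Mixing error for one reconstruction: zero when all concept mass
    is exterior (p_ext = 1) or interior (p_ext = 0), maximal at 0.5."""
    return min(p_ext, 1.0 - p_ext)

def weighted_error(p_exts, sizes):
    """Average over reconstructions, weighted by reconstruction size."""
    errs = np.array([interior_exterior_error(p) for p in p_exts])
    w = np.asarray(sizes, dtype=float)
    return float((errs * w).sum() / w.sum())
```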

We report results for both A and E in Table 3 (note that all our qualitative results are generated using the same confidence threshold). As illustrated in the table, our model surpasses the baseline model (trained without the 3D contrastive loss) on both metrics, demonstrating that more points are consistently associated with concepts, and that each point cloud is more consistently segmented into exterior or interior concepts. Note that some structural parts are inherently more ambiguous (for example, a "statue" is often placed on a "facade"), hence many 3D points are not associated with concepts (also for our model). We explore this further in the supplementary material, showing a confusion matrix for our image classification model as well as the ancestor labels associated with each concept.

Model       R@1  R@5   R@10  S@1   S@5   S@10  S*@1  S*@5  S*@10
Pretrained  1.2  4.3   6.6   22.9  51.0  67.2  44.2  73.9  85.8
Baseline    3.2  11.9  19.2  51.9  80.6  88.0  69.2  89.3  94.6
Ours        4.0  13.9  22.5  64.0  81.9  91.2  76.0  91.2  96.3
Table 4: Caption-based image retrieval performance. We report performance using the standard recall metric (R@K) and our proposed semantic metric (S@K averaged per class; S*@K averaged over all images rather than per class). Results of our model are compared against a model trained without our 3D augmentations (baseline) and against the pretrained model [31] without finetuning. Performance is reported on images from unseen landmarks (WS-U). The best results are highlighted in bold.

“Statue of Saint Cecilia in the south transept of York Minster.”

“The organ in Exeter Cathedral in Devon.”

“York Minster as seen from across the street, York, England.”

Figure 7: Retrieving images from captions of unseen landmarks. For each caption above, we show the target image (left) next to the top three retrievals.

6.5 Learning semantics from raw captions

To explore the utility of the raw captions without first distilling concepts, we train a joint vision-language model on images and their raw captions and evaluate it on a caption-based image retrieval task. As with other tasks like classification, we explore the benefit of having 3D geometry in this experiment, showing that geometry can be used to perform data augmentation and boost retrieval performance.

We finetune a state-of-the-art multi-task joint visual and textual representation model [31] using the same landmarks-level splits as above, training on landmarks from WS-K and testing on unseen landmarks in WS-U. We compare models finetuned on two different subsets: (1) a baseline subset, provided with pairs of English-only captions and their corresponding images, and (2) a 3D-augmented subset, where, in addition to the real image-caption pairs, we create new image-caption pairs by associating images with captions from other images with a large visual overlap (measured by thresholding on an IoU ratio of 3D keypoints, set empirically to 0.3). Performing such 3D-aware augmentation enables use of additional images—for which a caption may be unavailable—but whose content is similar to the original image (while appearance and viewpoint may vary). Our 3D-augmentation strategy yields a training dataset with roughly 1.5K more images and 9K more image-caption pairs (the original training set contains nearly 20K pairs).
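The 3D-aware augmentation can be sketched as follows (illustrative names and data structures; the 0.3 IoU threshold on shared 3D keypoints is from the text above):

```python
def keypoint_iou(kps_a, kps_b):
    """IoU of the sets of 3D keypoints observed by two images."""
    union = kps_a | kps_b
    return len(kps_a & kps_b) / len(union) if union else 0.0

def augment_pairs(captioned, images, keypoints, iou_thresh=0.3):
    """Create new (image, caption) pairs by borrowing captions from
    images with large visual overlap in the 3D model.

    captioned : list of (image_id, caption) pairs.
    images    : image ids lacking captions.
    keypoints : image_id -> set of visible 3D point ids.
    """
    new_pairs = []
    for img, caption in captioned:
        for other in images:
            if keypoint_iou(keypoints[img], keypoints[other]) >= iou_thresh:
                new_pairs.append((other, caption))
    return new_pairs
```

An uncaptioned image thus inherits the caption of any captioned image that observes a sufficiently overlapping set of 3D points.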

Table 4 shows caption-based image retrieval performance using Recall@K (R1, R5, R10 in the table), a standard metric that measures the percentage of retrievals for which the target image is among the top-K results. Additionally, to quantify how semantically accurate these retrievals are, we use our semantic labels (obtained as described in Section 4.2) as a proxy and propose a semantic measure S: the percentage of retrievals containing at least one correctly labeled image. All metrics are reported for the two models as well as for the pretrained model [31] (without finetuning). For our semantic metric, we report an average per class and an average over all images in the test set.
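The two metrics can be sketched as follows (a hypothetical NumPy implementation, assuming a caption-to-image similarity matrix in which image i is the target for caption i):

```python
import numpy as np

def recall_at_k(similarity, k):
    """similarity[i, j]: score of image j for caption i, where image i is
    the ground-truth target for caption i. Returns the fraction of captions
    whose target image appears among the top-k retrievals."""
    ranks = np.argsort(-similarity, axis=1)
    return float(np.mean([i in ranks[i, :k] for i in range(len(similarity))]))

def semantic_at_k(similarity, labels, k):
    """Proxy semantic measure: the fraction of captions for which at least
    one of the top-k retrieved images shares the target image's semantic
    label (captions whose target is unlabeled are skipped)."""
    ranks = np.argsort(-similarity, axis=1)
    hits = [any(labels[j] == labels[i] for j in ranks[i, :k])
            for i in range(len(similarity)) if labels[i] is not None]
    return float(np.mean(hits))
```

The semantic measure is more forgiving than Recall@K: a retrieval that misses the exact target image but returns an image of the same concept still counts as a hit.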

Using 3D augmentations gives a boost in performance across all metrics. Figure 7 illustrates several retrieval results from our model. As shown in the bottom row, the model can also align general concepts to our images, such as what a cathedral should look like “from across the street”. We show additional qualitative results in the supplementary material.

7 Conclusion

We have presented a new large-scale dataset at the intersection of vision, language, and 3D. We demonstrated the use of our dataset for mining semantic concepts and for learning to associate these concepts with images and 3D models from never-before-seen locations. We showed that these tasks benefit from having access to 3D geometry, allowing robust distillation of semantics from noisy Internet collections.

Future applications. We believe our dataset could spark research into many new problems. Automatic captioning of images capturing tourist attractions is one interesting avenue for future research. The rich textual descriptions in our dataset could allow users to virtually explore any tourist attraction, serving as a virtual “tour guide”. Our dataset could also enable automatic generation of new 3D scenes and language-guided scene editing. While text-based 2D image generation is a very active research area [15, 39, 27], the problem of generating and modifying 3D scenes using language is largely unexplored. Finally, our focus was on discovery of well-supported concepts, but our dataset can also benefit zero- or few-shot settings via the detailed descriptions present in image captions, enabling rich conceptualization of general visual concepts.

Acknowledgments. This work was supported by the National Science Foundation (IIS-2008313), by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program, by the Zuckerman STEM leadership program, and by an AWS ML Research Award.


  • [1] Panos Achlioptas, Judy Fan, Robert Hawkins, Noah Goodman, and Leonidas J Guibas. ShapeGlot: Learning language for shape differentiation. In ICCV, 2019.
  • [2] Sameer Agarwal, Yasutaka Furukawa, Noah Snavely, Ian Simon, Brian Curless, Steven M Seitz, and Richard Szeliski. Building Rome in a day. Communications of the ACM, 54(10), 2011.
  • [3] Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. Analyzing the behavior of visual question answering models. arXiv preprint arXiv:1606.07356, 2016.
  • [4] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, 2018.
  • [5] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR, 2018.
  • [6] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, 2015.
  • [7] Nikita Araslanov and Stefan Roth. Single-stage semantic segmentation from image labels. In CVPR, 2020.
  • [8] Valts Blukis, Nataly Brukhim, Andrew Bennett, Ross A. Knepper, and Yoav Artzi. Following high-level navigation instructions on a simulated quadcopter with imitation learning. In Proceedings of the Robotics: Science and Systems Conference, 2018.
  • [9] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
  • [10] Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. ScanRefer: 3D object localization in RGB-D scans using natural language. arXiv preprint arXiv:1912.08830, 2019.
  • [11] Kevin Chen, Christopher B Choy, Manolis Savva, Angel X Chang, Thomas Funkhouser, and Silvio Savarese. Text2shape: Generating shapes from natural language by learning joint embeddings. In ACCV, 2018.
  • [12] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. PAMI, 40(4):834–848, 2018.
  • [13] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proc. Int. Conf. on Machine Learning. PMLR, 2020.
  • [14] David J Crandall, Lars Backstrom, Daniel Huttenlocher, and Jon Kleinberg. Mapping the world’s photos. In Proc. Int. Conf. on World Wide Web, 2009.
  • [15] Hao Dong, Simiao Yu, Chao Wu, and Yike Guo. Semantic image synthesis via adversarial learning. In ICCV, 2017.
  • [16] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 457–468, 2016.
  • [17] Yasutaka Furukawa, Brian Curless, Steven M Seitz, and Richard Szeliski. Towards internet-scale multi-view stereo. In CVPR, 2010.
  • [18] Stephan Gammeter, Lukas Bossard, Till Quack, and Luc Van Gool. I know what you did last summer: object-level auto-annotation of holiday snaps. In ICCV, pages 614–621, 2009.
  • [19] Michael Goesele, Noah Snavely, Brian Curless, Hugues Hoppe, and Steven M Seitz. Multi-view stereo for community photo collections. In ICCV, 2007.
  • [20] Tanmay Gupta, Arash Vahdat, Gal Chechik, Xiaodong Yang, Jan Kautz, and Derek Hoiem. Contrastive learning for weakly supervised phrase grounding. arXiv preprint arXiv:2006.09920, 2020.
  • [21] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020.
  • [22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [23] Richang Hong, Daqing Liu, Xiaoyu Mo, Xiangnan He, and Hanwang Zhang. Learning to compose and reason with language tree structures for visual grounding. PAMI, 2019.
  • [24] Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. Learning to reason: End-to-end module networks for visual question answering. In ICCV, pages 804–813, 2017.
  • [25] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [26] Noriyuki Kojima, Hadar Averbuch-Elor, Alexander M Rush, and Yoav Artzi. What is learned in visually grounded neural syntax acquisition. In ACL, 2020.
  • [27] Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, and Philip Torr. Controllable text-to-image generation. In NeurIPS, 2019.
  • [28] Zhengqi Li and Noah Snavely. Megadepth: Learning single-view depth prediction from internet photos. In CVPR, 2018.
  • [29] Zhengqi Li, Wenqi Xian, Abe Davis, and Noah Snavely. Crowdsampling the plenoptic function. In ECCV, 2020.
  • [30] David G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis., 60(2):91–110, 2004.
  • [31] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-task vision and language representation learning. In CVPR, 2020.
  • [32] Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In CVPR, 2017.
  • [33] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In CVPR, pages 11–20, 2016.
  • [34] Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In CVPR, 2021.
  • [35] Moustafa Meshry, Dan B Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, and Ricardo Martin-Brualla. Neural rerendering in the wild. In CVPR, pages 6878–6887, 2019.
  • [36] Dipendra Misra, John Langford, and Yoav Artzi. Mapping instructions and visual observations to actions with reinforcement learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1004–1015, 2017.
  • [37] Kaichun Mo, Shilin Zhu, Angel X Chang, Li Yi, Subarna Tripathi, Leonidas J Guibas, and Hao Su. Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding. In CVPR, 2019.
  • [38] Shuyo Nakatani. Language detection library for java, 2010.
  • [39] Seonghyeon Nam, Yunji Kim, and Seon Joo Kim. Text-adaptive generative adversarial networks: manipulating images with natural language. In NeurIPS, 2018.
  • [40] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
  • [41] Taesung Park, Alexei A Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for unpaired image-to-image translation. In ECCV. Springer, 2020.
  • [42] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
  • [43] James Philbin and Andrew Zisserman. Object mining using a matching graph on very large image collections. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 738–745. IEEE, 2008.
  • [44] Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. Stanza: A python natural language processing toolkit for many human languages. arXiv preprint arXiv:2003.07082, 2020.
  • [45] Till Quack, Bastian Leibe, and Luc Van Gool. World-scale mining of objects and events from community photo collections. In Proceedings of the 2008 international conference on Content-based image and video retrieval, pages 47–56, 2008.
  • [46] Bryan C Russell, Ricardo Martin-Brualla, Daniel J Butler, Steven M Seitz, and Luke Zettlemoyer. 3D Wikipedia: Using online text to automatically label and navigate reconstructed geometry. In SIGGRAPH, 2013.
  • [47] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In CVPR, pages 4510–4520, 2018.
  • [48] Johannes L Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In CVPR, 2016.
  • [49] Johannes Lutz Schönberger, True Price, Torsten Sattler, Jan-Michael Frahm, and Marc Pollefeys. A vote-and-verify strategy for fast spatial verification in image retrieval. In Asian Conference on Computer Vision (ACCV), 2016.
  • [50] Ian Simon and Steven M. Seitz. Scene segmentation using the wisdom of crowds. In ECCV, pages 541–553, 2008.
  • [51] Ian Simon, Noah Snavely, and Steven M. Seitz. Scene summarization for online image collections. In ICCV, 2007.
  • [52] Noah Snavely, Steven M Seitz, and Richard Szeliski. Photo tourism: Exploring photo collections in 3D. In SIGGRAPH, 2006.
  • [53] Dídac Surís, Dave Epstein, and Carl Vondrick. Globetrotter: Unsupervised multilingual translation from visual alignment. arXiv preprint arXiv:2012.04631, 2020.
  • [54] Mingzhe Wang, Mahmoud Azab, Noriyuki Kojima, Rada Mihalcea, and Jia Deng. Structured matching for phrase localization. In ECCV, pages 696–711, 2016.
  • [55] Tobias Weyand and Bastian Leibe. Discovering details and scene structure with hierarchical iconoid shift. In ICCV, 2013.
  • [56] Fanyi Xiao, Leonid Sigal, and Yong Jae Lee. Weakly-supervised visual grounding of phrases with linguistic structures. In CVPR, 2017.
  • [57] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention. In CVPR, 2016.
  • [58] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In ECCV, pages 69–85, 2016.
  • [59] Ye Yu, Abhimitra Meka, Mohamed Elgharib, Hans-Peter Seidel, Christian Theobalt, and William A. P. Smith. Self-supervised outdoor scene relighting. In ECCV, 2020.
  • [60] Ye Yu and William A. P. Smith. InverseRenderNet: Learning single image inverse rendering. In CVPR, pages 3155–3164, 2019.

Appendix A Dataset Visualizations and Details

We show captions with spatial connectors and their corresponding images in Figure 8 to illustrate the richness of part interactions contained within our dataset.

Data distributions. Figure 9 shows the distribution of captions by the number of words. Figure 10 shows the number of data samples by landmark identity sorted by size. Figure 11 shows the number of captions in the top 10 languages. The caption’s language is detected according to [38].

Figure 8: Example images from WikiScenes. Corresponding captions are: (a) Altar behind the main quire at Southwark Cathedral. (b) Bishop Gregorio Modrego over his tomb in cathedral of Barcelona, by Fredric Marès. (c) Went to the top of the bell tower to see the views looking over the city of Vienna. (d) The choir of Christ Church Cathedral in Dublin, Ireland, looking east towards the sanctuary. (e) Statues above the main entrance of Canterbury Cathedral: (left to right) Augustine of Canterbury, Lanfranc, Anselm of Canterbury and Thomas Cranmer. (f) The nave of Exeter Cathedral From the west end of the nave looking towards the crossing with its 17th century organ. (g) Amiens, France: Fassade detail of the Cathedrale of Amiens, showing the right group of sculptures under the rosette window. (h) Salisbury Cathedral Looking towards the West Front, from the Quire. (i) The Silbermann organ in Strasbourg cathedral, view from below with the nave windows. (j) The Dome of St Paul’s Cathedral viewed from the river bank below the Millennium Bridge. (k) Sandstone pulpit next to the north transept of Liverpool Anglican Cathedral. (l) Window with medieval glass painting behind the high altar in St. Stephen’s Cathedral, Vienna.

Figure 9: Distribution of image captions by number of words (counts are plotted on a log scale).

Figure 10: Number of images paired with textual descriptions per landmark (sorted). Counts are plotted on a log scale.

Figure 11: Number of captions in the top 10 languages. “Unknown” denotes captions that are not recognizable, such as dates, URLs, or null strings.


Appendix B Implementation Details

b.1 Dataset construction

We use COLMAP [48] version 3.6 for building 3D reconstructions. The SIFT [30] peak threshold is set to 0.03. To find image matches, we use vocabulary tree matching [49] with the pretrained vocabulary tree of 1M visual words. For landmarks that also have reconstructions in the MegaDepth dataset [28] (44 landmarks are shared), external images from that dataset were added to assist reconstruction. Original high-resolution images are used for reconstruction. However, for training purposes, we use resized images with the shorter dimension set to 200 pixels. We also release a higher-resolution version of the dataset, where the longer dimension is set to 1200 pixels.
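For concreteness, the two released resolutions correspond to scaling by whichever dimension is constrained; a hypothetical helper (not from the released code) that computes the target sizes:

```python
def resize_dims(width, height, shorter=None, longer=None):
    """Return (new_width, new_height) with either the shorter dimension
    set to `shorter` or the longer dimension set to `longer`."""
    if shorter is not None:
        scale = shorter / min(width, height)
    elif longer is not None:
        scale = longer / max(width, height)
    else:
        raise ValueError("specify shorter= or longer=")
    return round(width * scale), round(height * scale)

# Training resolution: shorter side 200 px; release: longer side 1200 px.
```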

b.2 Network architecture

Figure 12 shows the structure of our network, which closely follows the network proposed by Araslanov and Roth [7]. For completeness, we briefly summarize the architecture here. We use a ResNet-50 backbone (pretrained on ImageNet) to extract both low-level and high-level features. Atrous Spatial Pyramid Pooling (ASPP) [12] augments the ResNet features by gathering information at different scales. A Global Cue Injection (GCI) module [7] infuses global cues from deep layers into low-level features derived from the shallow layers of the ResNet. The stochastic gate [7] mitigates overfitting introduced by errors in the pseudo-ground truth used during training. The 3D consistency loss is computed on the features before the unnormalized score maps are computed. The classification score is computed following [7], summing a normalized Global Weighted Pooling (nGWP) term and a focal penalty term (Equations 3 and 5 in their paper).

Figure 12: Our classification network architecture.

b.3 Training details

Our models are implemented in PyTorch [42]. We train our model using the Adam optimizer [25] with weight decay and default Adam parameters. The model is trained for 25 epochs, with the learning rate decayed twice during training by a factor of 0.1. Following [7], we pretrain the model without the 3D consistency loss for 5 epochs. In all experiments using the 3D contrastive loss, the balancing coefficient is set to 0.3 (i.e., the 3D contrastive loss is multiplied by this coefficient). The temperature used in the 3D contrastive loss is set to the default value of 0.07, and the number of negatives is 16. All models are pretrained on ImageNet.
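The 3D contrastive loss follows the standard InfoNCE form; a minimal sketch for a single corresponding pixel pair (hypothetical helper, cosine similarity assumed), using the temperature of 0.07 stated above:

```python
import numpy as np

def contrastive_3d_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE-style loss for one 3D correspondence.
    anchor, positive: (D,) features of two pixels observing the same 3D
    point; negatives: (N, D) features of non-corresponding pixels."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

The loss is near zero when the corresponding pixels' features already agree and the negatives disagree, and grows large in the opposite case.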

As the images in WikiScenes are of varying resolution, we perform a random resized crop operation to convert each image into training samples. The scale factor of the random resized crop is sampled from a fixed range. Random horizontal flipping and color jittering are also performed to augment the data; the brightness, contrast, saturation, and hue parameters of the color jittering step are fixed across all experiments. We balance the size of the different classes by resampling; the balanced dataset contains roughly 900 images in each class.

b.4 Additional 2D segmentation details

Our model predicts both image-level classification scores and pixel-wise normalized segmentation score maps (see Figure 12), which include a background score in addition to scores for each of the semantic concepts. Following [7], the background score is weakened by a power function (with an exponent set empirically). We first take the maximal value among the classification scores to select the image-level label (we only consider images that contain a single label). The 2D segmentation mask then comprises all pixels whose score for the selected image-level concept surpasses the (weakened) background score.
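This decision rule can be sketched as follows (hypothetical code; the exponent value is illustrative, since for normalized scores in [0, 1] raising the background channel to a power above 1 shrinks it):

```python
import numpy as np

def segmentation_mask(y, label, beta=3.0):
    """y: (C+1, H, W) pixel-wise normalized score maps, channel 0 =
    background. A pixel joins the mask of the selected image-level
    `label` if its concept score beats the weakened background score.
    beta is an illustrative exponent, not the paper's empirical value."""
    weakened_bg = y[0] ** beta
    return y[label] > weakened_bg  # boolean (H, W) mask
```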

b.5 Additional 3D segmentation details

For each point in a 3D model, we first gather classification scores from its 2D projections in different views. The scores are averaged before applying a softmax function to obtain the classification score of the 3D point. Points with scores higher than a predetermined threshold are considered foreground; the remaining points are considered ambiguous and are therefore not rendered in our 3D visualizations. For quantitative evaluation, we provide results for two threshold values; our visualizations are rendered using a fixed threshold.
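A sketch of this per-point labeling (hypothetical function; the threshold value shown is illustrative):

```python
import numpy as np

def label_3d_point(view_scores, threshold=0.3):
    """view_scores: (V, C) classification scores gathered from the point's
    2D projections in V views. Scores are averaged across views, passed
    through a softmax over classes, and the best class is kept only if
    its probability clears the threshold."""
    mean = np.asarray(view_scores, dtype=float).mean(axis=0)
    e = np.exp(mean - mean.max())
    probs = e / e.sum()
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else None  # None = ambiguous
```

Averaging before the softmax lets confident views outvote ambiguous ones rather than each view contributing an independent probability.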

b.6 Caption-based image retrieval experiment

To perform the caption-based retrieval experiment, we fine-tune from the publicly released multi-task trained model [31]. We define two new tasks: one for the baseline model (which uses only original image-caption pairs) and another for the 3D-augmented model (which also uses 3D-augmented pairs). Both models are trained for 12 epochs using their (unmodified) configurations.

Regarding evaluation: to compute our proxy semantic measure S, we follow their retrieval evaluation and construct a batch of images from the validation set. In our case, this batch includes all labeled images from unseen landmarks, plus additional randomly-selected unlabeled images. We use these labels to evaluate whether or not the label of each retrieved image agrees with the label of the target image.

Test Set Model mAP facade window chapel organ nave tower choir portal altar statue
WS-K Baseline (w/o 3D loss) 70.8 87.2 89.2 60.2 89.7 85.8 64.1 61.5 68.0 50.0 52.0
w/ 71.4 86.4 88.3 53.1 89.4 86.1 65.7 62.0 69.7 52.5 60.3
w/ 72.1 88.5 90.5 55.6 86.0 86.4 66.5 65.0 68.4 50.2 63.4
w/ 73.3 90.4 87.1 62.9 90.3 85.8 62.1 75.9 68.4 52.8 57.1
w/ 75.3 90.0 88.5 68.7 90.7 85.7 61.1 77.2 76.5 54.4 59.9
WS-U Baseline (w/o 3D loss) 48.3 71.0 92.2 10.7 57.3 71.0 53.4 43.6 31.1 25.8 27.1
w/ 49.5 70.6 94.3 10.9 61.8 73.7 50.8 40.9 41.3 21.6 28.9
w/ 49.9 73.1 94.9 9.9 53.7 74.7 47.5 40.8 29.1 39.4 35.6
w/ 52.5 75.8 94.1 16.7 62.5 75.4 50.4 44.5 43.0 24.4 38.4
w/ 52.0 77.7 93.4 16.5 49.4 77.3 46.1 44.1 35.2 39.9 40.0
Table 5: Classification performance using different types of 3D-consistency regularizations. We report mean average precision (mAP) and average precision (AP) per distilled concept. Please refer to Section C for more details on the different configurations. Performance is reported on images from two different test sets corresponding to known landmarks (WS-K) and unseen landmarks (WS-U). The best result for each test set and column is highlighted in bold.

Appendix C Ablation Study

We perform an ablation study to analyze our design choices for the 3D-consistency regularization. We replace our 3D contrastive loss with the following alternatives:

3D MSE loss. We compute a simple MSE loss between the features of corresponding pixels: $\mathcal{L}_{\mathrm{MSE}} = \lVert f(p) - f(p^{+}) \rVert_2^2$, where $f(p)$ and $f(p^{+})$ denote the features of a corresponding pixel pair.

3D Triplet loss. We select one negative pixel $p^{-}$ and compute the following 3D loss: $\mathcal{L}_{\mathrm{tri}} = \max\left(0,\ \lVert f(p) - f(p^{+}) \rVert_2 - \lVert f(p) - f(p^{-}) \rVert_2 + m\right)$,

where $m$ is a margin value (set empirically).

3D intra-image contrastive loss. In the main paper, we introduce a 3D contrastive loss in which the negative pixels are sampled from other images in the batch. Here we change the sampling strategy so that all negative pixels are selected from other regions of the same image. Specifically, points are sampled uniformly over the image, outside a box around the anchor pixel.
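This intra-image sampling can be sketched with simple rejection sampling (hypothetical helper; the box half-width and negative count are illustrative assumptions):

```python
import random

def sample_intra_negatives(h, w, anchor, box=25, n=16, rng=random):
    """Rejection-sample n pixel coordinates uniformly over an h-by-w image,
    discarding samples that fall inside a square of half-width `box`
    around the anchor pixel (box size and count are illustrative)."""
    ay, ax = anchor
    negatives = []
    while len(negatives) < n:
        y, x = rng.randrange(h), rng.randrange(w)
        if abs(y - ay) > box or abs(x - ax) > box:
            negatives.append((y, x))
    return negatives
```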

Results are reported in Table 5. As illustrated in the table, we can improve classification performance using a variety of loss configurations. Our 3D contrastive loss, using either the inter-image or the intra-image sampling strategy, yields the most significant improvements.

Following prior work [7], our semantic classification loss is composed of two terms, including a self-supervised loss over pixelwise predictions that is applied starting at the 6th epoch. Classification performance is roughly the same when this self-supervised loss is not used: mAP increases from 52.0 to 53.8 for WS-U and decreases from 75.3 to 73.4 for WS-K. The gaps to the baseline model remain mostly unchanged (a 3.3% improvement for WS-U and a 1.8% improvement for WS-K).

Figure 13: Confusion matrix of our classification model on unseen landmarks (the WS-U test set). Ground-truth concept labels correspond to rows, and predicted concept labels to columns. Each row is normalized such that a cell indicates the probability of a classification given the ground-truth label.

Appendix D Additional Classification Results

Figure 13 shows a confusion matrix for our image classification model. We observe that many of the mistakes are understandable, given the hierarchical nature of our data. For example, both “tower” and “portal” are part of a “facade”, and an “altar” is often placed inside a “chapel”.

Figure 14: Associating images with labels and ancestor labels. Above we visualize (in log scale) the co-occurrence of concepts as labels and ancestor labels.

To further explore the hierarchical structure of semantics in our dataset, we associate images with ancestor labels by considering the concepts present in its hierarchy of WikiCategories. Unlike prior works that require manually annotating such hierarchical labels (e.g., [37]), we obtain these automatically, leveraging the hierarchical structure of Wikimedia Commons. In Figure 14, we visualize these hierarchical relationships. Many of these relationships can also be observed from the confusion matrix of our model in Figure 13. We also observe additional intuitive connections such as an image associated with “window” also being associated with larger structures such as “facade” and “nave”; a “statue” can be placed on various structures, and so on.
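Deriving ancestor labels from the category hierarchy can be sketched as a simple upward walk (hypothetical data structures; Wikimedia categories can in fact form a DAG with multiple parents, which a fuller version would handle):

```python
def ancestor_labels(category, parent, concepts):
    """Collect distilled concepts appearing among the ancestors of
    `category` in the WikiCategory hierarchy. `parent` maps each category
    to its parent category (None at the root); names are hypothetical."""
    found = []
    node = parent.get(category)
    while node is not None:
        if node in concepts:
            found.append(node)
        node = parent.get(node)
    return found
```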

Test Set | ResNet-50 [22] (w/o 3D loss) | ResNet-50 [22] (w/ 3D loss) | MobileNetV2 [47] (w/o 3D loss) | MobileNetV2 [47] (w/ 3D loss)
WS-K | 68.5 | 73.9 | 77.1 | 79.6
WS-U | 48.7 | 52.3 | 50.2 | 53.4
Table 6: Evaluating the effectiveness of our 3D contrastive loss on off-the-shelf classification models. For each model, we report mAP. The best results are highlighted in bold.

Finally, to further validate the effectiveness of our 3D loss, we take off-the-shelf networks dedicated to classification and repeat the experiment of testing classification performance with and without our 3D contrastive loss. For this experiment, all models are trained for 10 epochs with one learning rate decay step during training. Both ResNet-50 and MobileNetV2 are pretrained on ImageNet.

Results are reported in Table 6. As illustrated in the table, our 3D contrastive loss consistently boosts classification performance, even for off-the-shelf models.

Appendix E Additional Qualitative Results


Figure 15: Visualizing distances in feature space for unseen landmarks. For each image pair, we select a random pixel in the left image (marked in white) and visualize the distance from the selected pixel to all other pixels, with and without our 3D contrastive loss. Warmer colors correspond to smaller distances. As illustrated above, distances in feature space are more semantically meaningful for the model trained with the 3D contrastive loss (see, for instance, distances on the windows in the left pair). Our model is also more robust against large motion and appearance variations between the images (as illustrated on the right).

“Neogothic portal of Our Lady’s Cathedral, Antwerp, by Jean Baptist van Wint (1829-1906). The Cathedral of Our Lady is a Roman Catholic parish church in Antwerp, Belgium.”

“York Minster across the roof-tops of York, UK.”

“York city walls pathway Looking towards Lendal Bridge and the Minster beyond.”

“York Minster at night (2012)”

Figure 16: Retrieving images from captions of The Cathedral of Our Lady and York Minster (landmarks not seen during training). Above we show the top three retrievals next to the reference image (left, with black border) that corresponds to the query caption beneath. Note that this query image is not seen by the network, only the caption, so we show the image for reference only. In the bottom row, we demonstrate that our model is less sensitive to appearance-based descriptions; in this case, the retrieved images are not captured “at night”. This can be attributed to our 3D augmentations, which are unaware of appearance changes (thus allowing the model to focus on part-based scene semantics instead).



Figure 17: 2D Segmentations on correctly classified unseen images, segmented as “portal”, “choir”, “tower”, “chapel” and “nave”.





Figure 18: 2D segmentations on correctly classified unseen images. Highlighted pixels are segmented as “facade”, “altar”, “organ”, “statue” and “window”.


Notre-Dame de Strasbourg (exterior)

Cathedral of Barcelona (exterior)

Notre-Dame de Reims (exterior)

Saint Isaac’s Cathedral (exterior)

St. Stephen’s Cathedral (exterior)

Amiens Cathedral (exterior)

Santiago de Compostela Cathedral (exterior)

St Paul’s Cathedral (exterior)

Notre-Dame de Paris (exterior 1)

Notre-Dame de Paris (exterior 2)

Ulm Minster (exterior)

Duomo di Milano (interior)

Notre-Dame de Reims (interior)

Cathédrale Saint-André de Bordeaux (interior)

Saint Isaac’s Cathedral (interior)

St. Stephen’s Cathedral (interior)

Amiens Cathedral (interior)

Metz Cathedral (interior)

Notre-Dame de Paris (interior)

León Cathedral (interior)

Figure 19: Segmenting 3D reconstructions. Above we show segmentation results for landmarks seen during training. 3D points not associated with concepts are colored in gray. Color legend of segmented points: nave, chapel, organ, altar, choir, statue, portal, facade, tower, window.

We show additional image segmentation results on test images from the WS-K test set in Figure 17 and Figure 18. As illustrated in the figures, the model is more successful at segmenting certain concepts, such as “tower”, “portal” or “window”. Some concepts, such as “chapel”, yield noisier segmentation results. We show 3D segmentation results for landmarks in WS-K in Figure 19.

We visualize the learned features for two image pairs in Figure 15. As the figure illustrates, distances in feature space are more semantically meaningful on the model trained with the 3D contrastive loss. For example, only pixels on the windows yield small distances using our model (left image pair). Our model is also more robust against large motion and appearance variations between the images.

We show additional caption-based image retrieval results in Figure 16, mostly for images not labeled with one of the semantic concepts computed according to the method described in the main paper. As demonstrated in the figure, the model can also align more generic semantic concepts to our images. However, because we perform 3D augmentations, the model is less aware of appearance-based differences. For example, see the bottom row of the figure, where the retrieved images are not captured “at night”.