Places: An Image Database for Deep Scene Understanding

10/06/2016 · by Bolei Zhou, et al.

The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification at tasks such as object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories and attributes, comprising a quasi-exhaustive list of the types of environments encountered in the world. Using state of the art Convolutional Neural Networks, we provide impressive baseline performances at scene classification. With its high-coverage and high-diversity of exemplars, the Places Database offers an ecosystem to guide future progress on currently intractable visual recognition problems.


1 Introduction

What does it take to reach human-level performance with a machine-learning algorithm? In the case of supervised learning, the problem is two-fold. First, the algorithm must be suitable for the task, such as pattern classification in the case of object recognition [1, 2], pattern localization for object detection [3], or temporal connections between different memory units for natural language processing [4, 5]. Second, it must have access to a training dataset of appropriate coverage (a quasi-exhaustive representation of classes and variety of exemplars) and density (enough samples to cover the diversity of each class). The optimal space for these datasets is often task-dependent, but the rise of multi-million-item sets has enabled unprecedented performance in many domains of artificial intelligence.

The successes of Deep Blue in chess, Watson in “Jeopardy!”, and AlphaGo in Go against their expert human opponents may thus be seen as not just advances in algorithms, but the increasing availability of very large datasets: 700,000, 8.6 million, and 30 million items, respectively [6, 7, 8]. Convolutional Neural Networks [1, 9] have likewise achieved near human-level visual recognition, trained on 1.2 million object [10, 11, 12] and 2.5 million scene images [2]. Expansive coverage of the space of classes and samples allows getting closer to the right ecosystem of data that a natural system, like a human, would experience.

Here we describe the Places Database, a quasi-exhaustive repository of 10 million scene photographs, labeled with 476 scene semantic categories and attributes, comprising the types of visual environments encountered in the world. Image samples are shown in Fig. 1. In the context of Places, we explain the steps to create high-quality datasets enabling the remarkable feats of machine-learning algorithms.

Fig. 1: Image samples from various categories of the Places Database. The dataset contains three macro-classes: Indoor, Nature, and Urban.
Fig. 2: Image samples from four scene categories grouped by queries to illustrate the diversity of the dataset. For each query we show 9 annotated images.

2 Places Database

2.1 Coverage of the categorical space

The primary asset of a high-quality dataset is an expansive coverage of the categorical space we want to learn. The strategy of Places is to provide an exhaustive list of the categories of environments encountered in the world, bounded by spaces where a human body would fit (e.g. closet, shower). The SUN (Scene UNderstanding) dataset [13] provided that initial list of semantic categories. The SUN dataset was built around a quasi-exhaustive list of scene categories with different functionalities, namely categories with unique identities in discourse. Using WordNet [14], the SUN database team selected 70,000 words and concrete terms describing scenes, places and environments that can be used to complete the phrase “I am in a place”, or “let’s go to the/a place”. Most of the words referred to basic- and entry-level names [15], resulting in a corpus of 900 different scene categories after bundling together synonyms and separating classes described by the same word but referring to different environments (e.g. inside and outside views of churches). Details about the building of that initial corpus can be found in [13]. The Places Database inherited the same list of scene categories from the SUN dataset.

2.2 Construction of the database

2.2.1 Step 1: Downloading images using scene category and adjectives

From online image search engines (Google Images, Bing Images, and Flickr), candidate images were downloaded using a query word from the list of scene classes provided by the SUN database [13]. In order to increase the diversity of visual appearances in the Places dataset (see Fig. 2), each scene class query was combined with 696 common English adjectives (e.g., messy, spare, sunny, desolate, etc.). About 60 million images (color images of at least 200×200 pixels) with unique URLs were identified. Importantly, the Places and SUN datasets are complementary: PCA-based duplicate removal was conducted within each scene category in both databases so that they do not contain the same images.
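The query expansion above can be sketched as follows; `build_queries` is a hypothetical helper, not the authors' actual downloading code, shown only to illustrate how class names and adjectives combine:

```python
from itertools import product

# Hypothetical helper illustrating Step 1's query expansion:
# each scene class is queried on its own and paired with each
# of the common English adjectives to diversify the results.
def build_queries(scene_classes, adjectives):
    queries = list(scene_classes)  # plain class-name queries
    queries += [f"{adj} {cls}" for cls, adj in product(scene_classes, adjectives)]
    return queries

queries = build_queries(["bedroom", "cliff"], ["messy", "sunny", "desolate"])
print(len(queries))  # 2 plain queries + 2*3 adjective-qualified ones = 8
```

With the full lists (476 classes × 696 adjectives) this expansion yields hundreds of thousands of distinct queries, which is how roughly 60 million candidate URLs were gathered.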

2.2.2 Step 2: Labeling images with ground truth category

Fig. 3: Annotation interface in the Amazon Mechanical Turk for selecting the correct exemplars of the scene from the downloaded images. The left plot shows the instruction given to the workers in which we define positive and negative examples. The right plot shows the binary selection interface.

Image ground-truth label verification was done by crowdsourcing the task to Amazon Mechanical Turk (AMT). Fig. 3 illustrates the experimental paradigm used: AMT workers were each given instructions relating to a particular image category at a time (e.g. cliff), with a definition and samples of true and false images. Workers then performed a go/no-go categorical task (Fig. 3). The experimental interface displayed a central image, flanked by smaller versions of the images the worker had just responded to, on the left, and would respond to next, on the right. Experience from the construction of the SUN dataset suggested that, in the first iteration of labeling, more than 50% of the downloaded images would not be true exemplars of the category. As illustrated in Fig. 3, the default answer is therefore set to No (see the images with bold red contours), so the worker can simply press the space bar to move the majority of No images forward. Whenever a true category exemplar appears in the center, the worker presses a specific key to mark it as a positive exemplar (answering yes to the question “is this a [category name]?”). Reaction time from the moment the image is centrally placed to the space bar or key press is recorded. The interface also allows moving backwards to revise previous annotations. Each AMT HIT (Human Intelligence Task, one assignment for one worker) consisted of 750 images for manual annotation. A control set of 30 positive and 30 negative samples with ground-truth category labels from the SUN database was intermixed in each HIT. Only worker HITs with an accuracy of 90% or higher on these control images were kept.
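The control-image quality filter can be sketched as below; `hit_passes` and the data shapes are illustrative assumptions, not the authors' pipeline:

```python
# Sketch of the HIT quality filter: a worker's HIT is kept only if at
# least 90% of the 60 embedded control images (30 positive, 30 negative)
# are answered correctly. Names and data shapes are hypothetical.
def hit_passes(responses, controls, threshold=0.90):
    correct = sum(responses.get(img) == truth for img, truth in controls.items())
    return correct / len(controls) >= threshold

controls = {f"ctrl{i}": ("yes" if i < 30 else "no") for i in range(60)}
good_worker = {img: truth for img, truth in controls.items()}  # all controls right
bad_worker = {img: "no" for img in controls}                   # only negatives right
print(hit_passes(good_worker, controls), hit_passes(bad_worker, controls))
```

A worker who defaults to No on everything still gets the 30 negative controls right, so the 90% bar is what forces genuine engagement with the task.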

The positive images resulting from the first cleaning iteration were sent for a second iteration of cleaning. We used the same task interface but with the default answer set to Yes. In this second iteration, 25.4% of the images were relabeled as No. We tested a third iteration on a few exemplars but did not pursue it further as the percentage of images relabeled as No was not significant.

After the two iterations of annotation, we collected one scene label for 7,076,580 images pertaining to 476 scene categories. As expected, the number of images per scene category varies greatly (e.g., there are many more images of bedroom than of cave on the web). There were 413 scene categories that ended up with at least 1,000 exemplars, and 98 scene categories with more than 20,000 exemplars.
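Counting which categories clear a given exemplar threshold is a simple tally; the toy label list below is illustrative (the real one has about seven million entries):

```python
from collections import Counter

# Illustrative tally of exemplars per category after the two
# cleaning iterations, using a toy label list.
labels = ["bedroom"] * 2500 + ["cave"] * 800 + ["cliff"] * 1200
counts = Counter(labels)
over_1000 = sum(1 for c in counts.values() if c >= 1000)
print(over_1000)  # bedroom and cliff clear the 1,000-exemplar threshold
```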

2.2.3 Step 3: Scaling up the dataset using a classifier

Fig. 4: Annotation interface in Amazon Mechanical Turk for differentiating images from two similar categories. The left plot shows the instruction in which we give several typical examples in each category. The right plot shows the binary selection interface, in which the worker needs to select the shown image into either of the class or none.

As a result of the previous round of image annotation, there were 53 million remaining downloaded images not assigned to any of the 476 scene categories (e.g. a bedroom picture could have been downloaded when querying images for the living-room category, but marked as negative by the AMT worker). Therefore, a third annotation task was designed to re-classify and then re-annotate those images, using a semi-automatic bootstrapping approach.

A deep learning-based scene classifier, AlexNet [1], was trained to classify the remaining 53 million images: we first randomly selected 1,000 images per scene category as a training set and 50 images as a validation set (for the 413 categories which had more than 1,000 samples). AlexNet achieved 32% scene classification accuracy on the validation set after training and was then used to classify the 53 million images. We used the class scores predicted by the AlexNet to rank the images within each scene category as follows: for a given category with too few exemplars, the top-ranked images with a predicted class confidence higher than 0.8 were sent to AMT for a third round of manual annotation using the same interface shown in Fig. 3. The default answer was set to No.
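The bootstrapping selection step can be sketched as follows; `select_for_reannotation` and the prediction format are hypothetical stand-ins for the classifier's output:

```python
# Sketch of the bootstrapping selection: from the classifier's
# predictions over the leftover images, keep those assigned to the
# target category with confidence above 0.8, ranked by confidence,
# to be sent back to AMT (default answer: No).
def select_for_reannotation(predictions, category, threshold=0.8):
    candidates = [(img, conf) for img, (label, conf) in predictions.items()
                  if label == category and conf > threshold]
    candidates.sort(key=lambda pair: -pair[1])
    return [img for img, _ in candidates]

preds = {"a.jpg": ("bedroom", 0.95), "b.jpg": ("bedroom", 0.55),
         "c.jpg": ("cave", 0.90), "d.jpg": ("bedroom", 0.85)}
print(select_for_reannotation(preds, "bedroom"))  # ['a.jpg', 'd.jpg']
```

The 0.8 confidence cut keeps the AMT workload small while biasing the third round toward images that are likely true positives, which is why the default answer flips back to No.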

After completing the third round of AMT annotation, the distribution of the number of images per category flattened out: 401 scene categories had more than 5,000 images per category and 240 scene categories had more than 20,000 images. In total, about 3 million images were added to the dataset.

2.2.4 Step 4: Improving the separation of similar classes

Despite the initial effort to bundle synonyms from WordNet, the scene list from the SUN database still contained categories with very close synonyms (e.g. ‘ski lodge’ and ‘ski resort’, or ‘garbage dump’ and ‘landfill’). We identified 46 synonym pairs like these and merged their images into a single category.
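The merge can be sketched as a canonical-name mapping; the pair list below is a toy showing only two of the 46 identified pairs:

```python
# Minimal sketch of the synonym merge: each label in an identified
# synonym pair is mapped onto a single canonical category name.
SYNONYM_PAIRS = [("ski lodge", "ski resort"), ("garbage dump", "landfill")]

def merge_synonyms(labels, pairs):
    canonical = {b: a for a, b in pairs}
    return [canonical.get(label, label) for label in labels]

merged = merge_synonyms(["ski resort", "landfill", "bedroom"], SYNONYM_PAIRS)
print(merged)  # ['ski lodge', 'garbage dump', 'bedroom']
```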

Fig. 5: Boundaries between place categories can be blurry, as some images can be made of a mixture of different components. The images shown in this figure show a soft transition between a field and a forest. Although the extreme images can be easily classified as field and forest scenes, the middle images can be ambiguous.

Additionally, some scene categories are easily confused with blurry categorical boundaries, as illustrated in Fig. 5. This means that answering the question “Does image I belong to class A?” might be difficult. It is easier to answer the question “Does image I belong to class A or B?” In that case, the decision boundary becomes clearer for a human observer and it also gets closer to the final task that a computer system will be trained to solve.

Indeed, in the previous three steps of the AMT annotation, it became apparent that workers were confused by some pairs of scene categories, for instance, putting images of ‘canyon’ and ‘butte’ into ‘mountain’, putting ‘jacuzzi’ into ‘swimming pool indoor’, or mixing images of ‘pond’ and ‘lake’, ‘volcano’ and ‘mountain’, ‘runway’ and ‘landing deck’, ‘highway’ and ‘road’, ‘operating room’ and ‘hospital room’, etc. In the whole set of categories, we identified 53 such ambiguous pairs.

To further differentiate the images from the categories with shared content, we designed a new interface (Fig. 4) for a fourth step of annotation. We combined exemplar images from the two categories with shared content (such as art school and art studio), and asked the AMT workers to classify images into either of the categories or neither of them.

After the four steps of annotation, the Places database was finalized with over 10 million labeled exemplars (10,624,928 images) from 434 place categories.

2.3 Scene-Centric Datasets

Scene-centric datasets correspond to images labeled with a scene or place name, as opposed to an object name. Fig. 6 illustrates the differences among the number of images found in Places, ImageNet, and SUN for a set of scene categories common to all three datasets. The Places Database is the largest scene-centric image dataset so far.

Fig. 6: Comparison of the number of images per scene category for the common 88 scene categories in Places, ImageNet, and SUN datasets.

2.3.1 Defining the Benchmarks of Places

Here we describe four subsets of Places as benchmarks. Places205 and Places88 are from [2]. Two new benchmarks were added: from the 434 categories, we selected 365 categories with more than 4000 images each to create Places365-Standard and Places365-Challenge.

Places365-Standard has 1,803,460 training images with the image number per class varying from 3,068 to 5,000. The validation set has 50 images per class and the test set has 900 images per class. Note that the experiments in this paper are reported on Places365-Standard.

Places365-Challenge contains the same categories as Places365-Standard, but the training set is significantly larger, with a total of ~8 million training images. The validation set and test set are the same as in Places365-Standard. This subset was released for the Places Challenge 2016 (http://places2.csail.mit.edu/challenge.html), held in conjunction with the European Conference on Computer Vision (ECCV) 2016, as part of the ILSVRC Challenge.

Places205. Places205, described in [2], has 2.5 million images from 205 scene categories. The image number per class varies from 5,000 to 15,000. The training set has 2,448,873 total images, with 100 images per category for the validation set and 200 images per category for the test set.

Places88. Places88 contains the 88 common scene categories among the ImageNet [12], SUN [13] and Places205 databases. Note that Places88 contains only the images obtained in round 2 of annotations, from the first version of Places used in [2]. We call the corresponding subsets Places88, ImageNet88 and SUN88. These subsets are used to compare performances across different scene-centric databases, as the three datasets contain different exemplars per category. Note that finding correspondences between the classes defined in ImageNet and Places brings some challenges. ImageNet follows the WordNet definitions, but some WordNet definitions are not always appropriate for describing places. For instance, the class ’elevator’ in ImageNet refers to an object. In Places, ’elevator’ takes different meanings depending on the location of the observer: elevator door, elevator interior, or elevator lobby. Many categories in ImageNet do not differentiate between indoor and outdoor (e.g., ice-skating rink) while in Places, indoor and outdoor versions are separated as they do not necessarily afford the same function.

2.3.2 Dataset Diversity

Given the types of images found on the internet, some categories will be more biased than others in terms of viewpoints, types of objects, or even image style  [16]. However, bias can be compensated with a high diversity of images (with many appearances represented in the dataset). In the next section, we describe a measure of dataset diversity to compare how diverse images from three scene-centric datasets (Places88, SUN88 and ImageNet88) are.

Comparing datasets is an open problem. Even datasets covering the same visual classes show notable differences, yielding different generalization performance when used to train a classifier [16]. Beyond the number of images and categories, there are aspects that are important but difficult to quantify, like the variability in camera poses, in decoration styles, or in the types of objects that appear in the scene.

Although the quality of a database is often task-dependent, it is reasonable to assume that a good database should be dense (with a high degree of data concentration) and diverse (it should include a high variability of appearances and viewpoints). Imagine, for instance, a dataset composed of 100,000 images all taken within the same bedroom. This dataset would have very high density but very low diversity, as all the images would look very similar. An ideal dataset, expected to generalize well, should have high diversity as well. While one can achieve high density by collecting a large number of images, diversity is not an obvious quantity to estimate in image sets, as it assumes some notion of similarity between images. One way to estimate similarity is to ask the question: are these two images similar? However, similarity in the wild is a subjective and loose concept, as two images can be viewed as similar if they contain similar objects, and/or have similar spatial configurations, and/or have similar decoration styles, and so on. A way to circumvent this problem is to define relative measures of similarity for comparing datasets.

Several measures of diversity have been proposed, particularly in biology for characterizing the richness of an ecosystem (see [17] for a review). Here, we propose to use a measure inspired by the Simpson index of diversity [18]. The Simpson index measures the probability that two random individuals from an ecosystem belong to the same species. It is a measure of how well distributed the individuals across different species are in an ecosystem, and it is related to the entropy of the distribution. Extending this measure to evaluate the diversity of images within a category is non-trivial if there are no annotations of sub-categories. For this reason, we propose to measure the relative diversity of image datasets A and B based on the following idea: if set A is more diverse than set B, then two random images from set B are more likely to be visually similar than two random samples from A. The diversity of A with respect to B can then be defined as

Div_B(A) = 1 - p( d(a_1, a_2) < d(b_1, b_2) ),

where a_1, a_2 ∈ A and b_1, b_2 ∈ B are randomly selected, and d(·, ·) is a measure of visual dissimilarity between two images. With this definition of relative diversity, A is more diverse than B if, and only if, Div_B(A) > 0.5. For an arbitrary number of datasets A_1, …, A_N:

Div_{A_2, …, A_N}(A_1) = 1 - p( d(a_11, a_12) < min_{i=2,…,N} d(a_i1, a_i2) ),   (1)

where a_i1, a_i2 ∈ A_i are randomly selected.
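A minimal sketch of estimating the pairwise relative diversity Div_B(A) = 1 - p(d(a1,a2) < d(b1,b2)): the visual-dissimilarity measure d is assumed given, and the within-set pair distances are assumed precomputed (both lists below are illustrative):

```python
# Exact evaluation, over precomputed samples of within-set pair
# distances, of Div_B(A) = 1 - p(d(a1,a2) < d(b1,b2)).
# A larger value means A's random pairs are farther apart,
# i.e. A is the more diverse set.
def relative_diversity(a_pair_dists, b_pair_dists):
    wins = sum(da < db for da in a_pair_dists for db in b_pair_dists)
    return 1.0 - wins / (len(a_pair_dists) * len(b_pair_dists))

# A's random pairs are far apart (diverse); B's are close together.
div_a_wrt_b = relative_diversity([0.9, 0.8, 0.7], [0.2, 0.3, 0.1])
print(div_a_wrt_b > 0.5)  # True: A is more diverse than B
```

In the actual experiments d is implemented implicitly by human judgments (workers pick the most similar pair), but the same 0.5 decision threshold applies.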

We measured the relative diversities between SUN, ImageNet and Places using AMT. Workers were presented with different pairs of images and they had to select the pair that contained the most similar images. The pairs were randomly sampled from each database. Each trial was composed of 4 pairs from each database, giving a total of 12 pairs to choose from. We used 4 pairs per database to increase the chances of finding a similar pair and avoiding users having to skip trials. AMT workers had to select the most similar pair on each trial. We ran 40 trials per category and two observers per trial, for the 88 categories in common between ImageNet, SUN and Places databases. Fig. 7.a-b shows some examples of pairs from the diversity experiments for the scene categories playground (a) and bedroom (b). In the figure only one pair from each database is shown. We observed that different annotators were consistent in deciding whether a pair of images was more similar than another pair of images.

Fig. 7: Examples of pairs for the diversity experiment for a) playground and b) bedroom. Which pair shows the most similar images? The bottom pairs were chosen in these examples. c) Histogram of relative diversity per each category (88 categories) and dataset. Places (in blue line) contains the most diverse set of images, then ImageNet (in red line) and the lowest diversity is in the SUN database (in yellow line) as most images are prototypical of their class.

Fig. 7.c shows the histograms of relative diversity for all the 88 scene categories common to the three databases. If the three datasets were identical in terms of diversity, the average diversity would be 2/3 for each of them. Note that this measure of diversity is a relative measure between the three datasets. In the experiment, users selected pairs from the SUN database as the closest to each other most often, while pairs from the Places database were judged to be the most similar least often, with ImageNet pairs selected at an intermediate rate.

The results show that there is a large variation in terms of diversity among the three datasets: Places is the most diverse of the three, with the highest average relative diversity, followed by ImageNet and then SUN. To illustrate, the categories with the largest variation in diversity across the three datasets were playground, veranda and waiting room.

3 Convolutional Neural Networks for Scene Classification

Given the impressive performance of deep Convolutional Neural Networks (CNNs), particularly on the ImageNet benchmark [1, 12], we chose three popular CNN architectures, AlexNet [1], GoogLeNet [19], and the VGG 16-convolutional-layer CNN [20], and trained each of them on Places205 and Places365-Standard to create baseline CNN models. The trained networks are named PlacesSubset-CNN, e.g., Places205-AlexNet or Places365-VGG.

All the Places-CNNs presented here were trained using the Caffe package [21] on Nvidia Tesla K40 and Titan X GPUs (all the Places-CNNs are available at https://github.com/metalbubble/places365). Additionally, given the recent breakthrough performance of the Residual Network (ResNet) on ImageNet classification [22], we further fine-tuned ResNet152 on Places365-Standard (termed Places365-ResNet) and compared it with the other trained-from-scratch Places-CNNs for scene classification.

3.1 Results on Places205 and Places365

After training the various Places-CNNs, we used the final output layer of each network to classify the test set images of Places205 and SUN205 (see [2]). The classification results for Top-1 accuracy and Top-5 accuracy are listed in Table I. As a baseline comparison, we show the results of a linear SVM trained on ImageNet-CNN features of 5000 images per category in Places205 and 50 images per category in SUN205 respectively.

Places-CNNs perform much better than the ImageNet feature+SVM baseline and, as expected, Places205-GoogLeNet and Places205-VGG outperform Places205-AlexNet by a large margin due to their deeper structures. To date (Oct 2, 2016), the top-ranked result on the Places205 test-set leaderboard (http://places.csail.mit.edu/user/leaderboard.php) is 64.10% Top-1 accuracy and 90.65% Top-5 accuracy. Note that for the test set of SUN205, we did not fine-tune the Places-CNNs on the SUN205 training set; we directly evaluated them on the test set of SUN.

                               Test set of Places205       Test set of SUN205
                               Top-1 acc.  Top-5 acc.      Top-1 acc.  Top-5 acc.
ImageNet-AlexNet feature+SVM   40.80%      70.20%          49.60%      80.10%
Places205-AlexNet              50.04%      81.10%          67.52%      92.61%
Places205-GoogLeNet            55.50%      85.66%          71.60%      95.01%
Places205-VGG                  58.90%      87.70%          74.60%      95.92%
SamExynos*                     64.10%      90.65%          -           -
SIAT MMLAB*                    62.34%      89.66%          -           -
TABLE I: Classification accuracy on the test set of Places205 and the test set of SUN205. We use the class score averaged over 10 crops of each test image to classify the image. * marks the top two ranked results from the Places205 leaderboard.
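The 10-crop evaluation used throughout can be sketched as below; `ten_crop_top5` is an illustrative helper, and the score array is a toy stand-in for real network outputs:

```python
import numpy as np

# Sketch of 10-crop evaluation: class scores from the 10 crops of a
# test image (4 corners + center, each with its horizontal flip) are
# averaged, and classes are then ranked by the averaged score.
def ten_crop_top5(crop_scores):
    mean_scores = np.asarray(crop_scores).mean(axis=0)  # (num_classes,)
    return np.argsort(mean_scores)[::-1][:5]            # top-5 class indices

scores = np.zeros((10, 205))   # 10 crops x 205 classes (toy values)
scores[:, 7] = 0.9             # every crop votes strongly for class 7
scores[:, 3] = 0.4             # weaker vote for class 3
top5 = ten_crop_top5(scores)
print(top5[0])  # 7
```

Averaging over crops smooths out framing artifacts in any single crop, which is why both Top-1 and Top-5 numbers in Tables I and II are reported under this protocol.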

We further evaluated the baseline Places365-CNNs on the validation set and test set of Places365, shown in Table II. Places365-VGG and Places365-ResNet have similar top performance compared with the other two CNNs (the performance of the ResNet might result from fine-tuning or under-training, as the ResNet was not trained from scratch). Even though Places365 has 160 more categories than Places205, the Top-5 accuracy of the Places205-CNNs (trained on the previous version of Places [2]) on the test set only drops by ~2.5%.

Fig. 8 shows the responses to examples correctly predicted by the Places365-VGG. Most of the Top-5 responses are very relevant to the scene description. Some failure or ambiguous cases are shown in Fig. 9: broadly, we can identify two kinds of misclassification given the current label attribution of Places: 1) less-typical activities happening in a scene, such as taking a group photo on a construction site or camping in a junkyard; 2) images composed of multiple scene parts, for which one ground-truth scene label is not sufficient to describe the whole environment. These cases illustrate the need for multiple ground-truth labels to describe environments.

It is important to emphasize that for many scene categories the Top-1 accuracy might be an ill-defined measure: environments are inherently multi-label in terms of their semantic description. Different observers will use different terms to refer to the same place, or to different parts of the same environment, and all these labels might fit the description of the scene well. This is obvious in the examples of Fig. 9. Future development of the Places database, and the Places Challenge, will explore assigning multiple ground-truth labels or free-form sentences to images to better capture the richness of the visual descriptions inherent to environments.

Validation Set of Places365 Test Set of Places365
Top-1 acc. Top-5 acc. Top-1 acc. Top-5 acc.
Places365-AlexNet 53.17% 82.89% 53.31% 82.75%
Places365-GoogLeNet 53.63% 83.88% 53.59% 84.01%
Places365-VGG 55.24% 84.91% 55.19% 85.01%
Places365-ResNet 54.74% 85.08% 54.65% 85.07%
TABLE II: Classification accuracy on the validation set and test set of Places365. We use the class score averaged over 10-crops of each testing image to classify the image.
Fig. 8: The predictions given by the Places365-VGG for the images from the validation set. The ground-truth label (GT) and the top 5 predictions are shown. The number beside each label indicates the prediction confidence.
Fig. 9: Examples of predictions rated as incorrect in the validation set by the Places365-VGG. GT stands for the ground-truth label. Note that some of the top-5 responses are often not wrong per se, predicting semantic categories close to the GT category. See the text for details.

3.2 Web-demo for Scene Recognition

Based on the Places-CNN we trained, we created a web demo for scene recognition (http://places.csail.mit.edu/demo.html), accessible through a computer browser or mobile phone. People can upload photos to the web demo to predict the type of environment, with the 5 most likely semantic categories and relevant scene attributes. Two screenshots of the prediction results on a mobile phone are shown in Fig. 10. Note that people can submit feedback about the result. The top-5 recognition accuracy of our recognition web demo in the wild is about 72% (from the 9,925 anonymous feedback submissions dated from Oct. 19, 2014 to May 5, 2016), which is impressive given that people uploaded all kinds of real-life photos, not necessarily places-like photos (these results use Places205-AlexNet as the back-end prediction model in the demo).

Fig. 10: Two screenshots of the scene recognition demo based on the Places-CNN. The web-demo predicts the type of environment, the semantic categories, and associated scene attributes for uploaded photos.

3.3 Generic Visual Features from ImageNet-CNNs and Places-CNNs

We further used the activation from the trained Places-CNNs as generic features for visual recognition tasks using different image classification benchmarks. Activations from the higher-level layers of a CNN, also termed deep features, have proven to be effective generic features with state-of-the-art performance on various image datasets [23, 24]. But most of the deep features are from the CNNs trained on ImageNet, which is mostly an object-centric dataset.

Here we evaluated the classification performances of the deep features from object-centric CNNs and scene-centric CNNs in a systematic way. The deep features from several Places-CNNs and ImageNet-CNNs on the following scene and object benchmarks are tested: SUN397 [13], MIT Indoor67 [25], Scene15 [26], SUN Attribute [27], Caltech101 [28], Caltech256 [29], Stanford Action40 [30], and UIUC Event8 [31].

All of the experiments follow the standards in those papers. In the SUN397 experiment [13], the training set size is 50 images per category. Experiments were run on 5 splits of the training set and test set given in the dataset. In the MIT Indoor67 experiment [25], the training set size is 100 images per category. The experiment is run on the split of the training set and test set given in the dataset. In the Scene15 experiment [26], the training set size is 50 images per category. Experiments are run on 10 random splits of the training set and test set. In the SUN Attribute experiment [27], the training set size is 150 images per attribute. The reported result is the average precision. The splits of the training set and test set are given in the paper. In Caltech101 and Caltech256 experiment [28, 29], the training set size is 30 images per category. The experiments are run on 10 random splits of the training set and test set. In the Stanford Action40 experiment [30], the training set size is 100 images per category. Experiments are run on 10 random splits of the training set and test set. The reported result is the classification accuracy. In the UIUC Event8 experiment [31], the training set size is 70 images per category and the test set size is 60 images per category. The experiments are run on 10 random splits of the training set and test set.

Places-CNNs and ImageNet-CNNs share the same network architectures for AlexNet, GoogLeNet, and VGG, but they are trained on scene-centric and object-centric data respectively. For AlexNet and VGG, we used the 4096-dimensional feature vector from the activation of the fully connected layer (fc7) of the CNN. For GoogLeNet, we used the 1024-dimensional feature vector from the response of the global average pooling layer before the softmax that produces the class predictions. The classifier in all of the experiments is a linear SVM with the default parameter for all of the features.
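The feature+linear-SVM protocol can be sketched as below. This is not the authors' code: the features are random stand-ins for fc7 activations, and the SVM is a minimal hinge-loss subgradient trainer rather than the liblinear-style solver presumably used in the experiments:

```python
import numpy as np

# Minimal stand-in for the evaluation protocol: deep features
# (random toy vectors here, in place of fc7 activations) are
# classified by a linear SVM trained with subgradient descent
# on the regularized hinge loss. Labels y are in {-1, +1}.
def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                                  # hinge-loss violators
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.RandomState(0)
X = rng.randn(40, 64)                                       # toy "deep features"
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)                  # linearly separable labels
w, b = train_linear_svm(X, y)
train_acc = float(np.mean(np.sign(X @ w + b) == y))
```

The point of keeping the classifier this simple (and its parameters at defaults across all features) is that any accuracy difference in Table III reflects the features themselves, not classifier tuning.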

Deep Feature SUN397 MIT Indoor67 Scene15 SUN Attribute Caltech101 Caltech256 Action40 Event8 Average
Places365-AlexNet 56.12 70.72 89.25 92.98 66.40 46.45 46.82 90.63 69.92
Places205-AlexNet 54.32 68.24 89.87 92.71 65.34 45.30 43.26 94.17 69.15
ImageNet-AlexNet 42.61 56.79 84.05 91.27 87.73 66.95 55.00 93.71 72.26
Places365-GoogLeNet 58.37 73.30 91.25 92.64 61.85 44.52 47.52 91.00 70.06
Places205-GoogLeNet 57.00 75.14 90.92 92.09 54.41 39.27 45.17 92.75 68.34
ImageNet-GoogLeNet 43.88 59.48 84.95 90.70 89.96 75.20 65.39 96.13 75.71
Places365-VGG 63.24 76.53 91.97 92.99 67.63 49.20 52.90 90.96 73.18
Places205-VGG 61.99 79.76 91.61 92.07 67.58 49.28 53.33 93.33 73.62
ImageNet-VGG 48.29 64.87 86.28 91.78 88.42 74.96 66.63 95.17 77.05
Hybrid1365-VGG 61.77 79.49 92.15 92.93 88.22 76.04 68.11 93.13 81.48
TABLE III: Classification accuracy/precision on scene-centric databases (the first four datasets) and object-centric databases (the last four datasets) for the deep features of various Places-CNNs and ImageNet-CNNs. All values are top-1 accuracy/precision.

Table III summarizes the classification accuracy on various datasets for the deep features of the Places-CNNs and of the ImageNet-CNNs. Fig. 11 plots the classification accuracy for different visual features on the SUN397 database over different numbers of training samples per category. The classifier is a linear SVM with the same default parameters for the two deep feature layers (C=1) [32]. The Places-CNN features show impressive performance on scene-related datasets, outperforming the ImageNet-CNN features. Conversely, the ImageNet-CNN features show better performance on object-related image datasets. Importantly, our comparison shows that Places-CNNs and ImageNet-CNNs have complementary strengths on scene-centric and object-centric tasks, as expected from the types of datasets used to train these networks. Furthermore, the deep features from Places365-VGG achieve the best performance (63.24%) on the most challenging scene classification dataset, SUN397, while the deep features of Places205-VGG perform best on the MIT Indoor67 dataset. As far as we know, these are the state-of-the-art scores achieved by a single feature + linear SVM on those two datasets. Finally, we merged the 1,000 classes from ImageNet and the 365 classes from Places365-Standard to train a VGG (Hybrid1365-VGG). The deep feature from the Hybrid1365-VGG achieves the best score averaged over all eight image datasets.

Fig. 11: Classification accuracy on the SUN397 dataset. We compare the deep features of Places365-VGG, Places205-AlexNet (result reported in [2]), and ImageNet-AlexNet with hand-designed features. The deep features of Places365-VGG outperform the other deep features and the hand-designed features by large margins. Results for the hand-designed features/kernels are taken from [13].

3.4 Visualization of the Internal Units of the CNNs

Visualizing the unit responses at various network layers gives a better understanding of what is learned inside CNNs, and of how the object-centric CNN trained on ImageNet differs from the scene-centric CNN trained on Places when both share the same architecture (here, AlexNet). Following the methodology in [33], we estimated the receptive fields of the units in the Places-CNN and the ImageNet-CNN, then segmented the images with the highest unit activations using the estimated receptive fields. The resulting segmentations for units from different layers are shown in Fig. 12. From pool1 to pool5, the units detect visual concepts ranging from low-level edges and textures to high-level object and scene parts. Furthermore, in the pool5 layer of the object-centric ImageNet-CNN there are more units detecting object parts, such as dogs' and people's heads, while in the scene-centric Places-CNN there are more units detecting scene parts, such as beds, chairs, or buildings.

This specialization of the units in the object-centric and scene-centric CNNs explains the very different performance of their generic deep features on object-centric versus scene-centric recognition benchmarks in Table III.
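The segmentation step can be illustrated with a simplified numpy sketch: upsample a unit's activation map to image resolution, threshold it relative to its peak, and keep only the image region driving the unit. This is a toy stand-in, not the method of [33], which estimates each unit's receptive field empirically with occluded image patches.

```python
import numpy as np

def segment_by_unit(image, act_map, frac=0.5):
    """Mask an image with the upsampled activation map of a single unit.

    image:   H x W x 3 array; act_map: h x w activation map (h, w << H, W).
    Pixels where the upsampled activation falls below frac * max are zeroed,
    keeping roughly the region that drives the unit.
    """
    H, W = image.shape[:2]
    h, w = act_map.shape
    # Nearest-neighbour upsampling of the activation map to image size.
    rows = np.minimum(np.arange(H) * h // H, h - 1)
    cols = np.minimum(np.arange(W) * w // W, w - 1)
    up = act_map[np.ix_(rows, cols)]
    mask = up >= frac * up.max()
    return image * mask[:, :, None], mask

# Toy example: a 6x6 pool5-like map where the unit fires at one location.
rng = np.random.default_rng(1)
image = rng.uniform(0.1, 1.0, (48, 48, 3))
act = np.zeros((6, 6))
act[2, 3] = 5.0                       # a single strongly activated unit location
seg, mask = segment_by_unit(image, act)
print("kept pixels:", int(mask.sum()))
```

Each pool5 location maps back to an 8x8 image patch here, so the mask keeps exactly one such patch; on real images this produces crops like those in Fig. 12.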

Fig. 12: Visualization of the units' receptive fields at different layers of the ImageNet-CNN and the Places-CNN. A subset of units is shown for each layer; each row shows a unit's top 3 most activated images, segmented based on the unit's estimated receptive field. Here we compare ImageNet-AlexNet and Places205-AlexNet. See [33] for the detailed visualization methodology.

We further synthesized preferred input images for the Places-CNN using the image synthesis technique of [34]. This method uses a deep generator network, learned as an image prior, to generate images that maximize the final class activation or an intermediate unit activation of the Places-CNN. The synthetic images for 50 scene categories are shown in Fig. 13. These abstract images reveal the knowledge of each scene learned and memorized by the Places-CNN: for example, buses within a road environment for the bus station, and tents surrounded by forest-like features for the campsite. Here we used Places365-AlexNet (the other Places365-CNNs generated similar results). We also used the synthesis technique to generate the images preferred by the units of the pool5 layer of Places365-AlexNet. As shown in Fig. 14, the synthesized images closely resemble the image regions segmented by the units' estimated receptive fields.
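Stripped to its core, the synthesis idea is gradient ascent on the input to maximize a chosen output activation. The numpy sketch below does this for a toy linear "network" with an L2 penalty as a crude stand-in for the natural-image prior that the learned deep generator provides in [34]; all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in "network": one linear layer scoring 3 classes from a 16-D input.
W = rng.normal(0, 1, (3, 16))

def synthesize(c, steps=500, lr=0.5, l2=0.1):
    """Gradient ascent on the input to maximize class c's activation.

    Objective: W[c] @ x - (l2 / 2) * ||x||^2.  The closed-form optimum is
    x = W[c] / l2; ascent converges to it since the objective is concave.
    """
    x = rng.normal(0, 0.1, 16)          # random initial "image"
    for _ in range(steps):
        grad = W[c] - l2 * x            # gradient of the objective w.r.t. x
        x += lr * grad
    return x

x_star = synthesize(0)                  # synthesized preferred input for class 0
scores = W @ x_star
print("most activated class:", int(np.argmax(scores)))
```

In [34] the same ascent is performed not on raw pixels but on the latent code of a generator network, which is what keeps the synthesized scenes in Figs. 13-14 looking like natural image content rather than noise.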

Fig. 13: The synthesized images preferred by the final output of Places365-AlexNet for 50 scene categories.
Fig. 14: The synthesized images preferred by the pool5 units of Places365-AlexNet correspond to the images segmented by the receptive fields of those units. The synthetic images closely resemble the units' segmented image regions. Each row of segmented images corresponds to one unit.

4 Conclusion

From the Tiny Image dataset [35], to ImageNet [11] and Places [2], the rise of multi-million-item dataset initiatives and other densely labeled datasets [36, 37, 38, 39] has enabled data-hungry machine learning algorithms to reach near-human semantic classification of visual patterns such as objects and scenes. With its high coverage and high diversity of exemplars, Places offers an ecosystem of visual context to guide progress on currently intractable visual recognition problems. Such problems could include determining the actions happening in a given environment, spotting inconsistent objects or human behaviors in a particular place, and predicting future events or the cause of events given a scene.

Acknowledgments

The authors would like to thank Santani Teng, Zoya Bylinskii, Mathew Monfort and Caitlin Mullin for comments on the paper. Over the years, the Places project was supported by the National Science Foundation under Grants No. 1016862 to A.O. and No. 1524817 to A.T.; ONR N000141613116 to A.O.; as well as the MIT Big Data Initiative at CSAIL, Toyota, Google, Xerox and Amazon Awards, and a hardware donation from NVIDIA Corporation, to A.O. and A.T. B.Z. is supported by a Facebook Fellowship.

References

  • [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012.
  • [2] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning deep features for scene recognition using places database,” in Advances in Neural Information Processing Systems, 2014.
  • [3] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proc. CVPR, 2014.
  • [4] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, 1997.
  • [5] C. D. Manning and H. Schütze, Foundations of statistical natural language processing.   MIT Press, 1999.
  • [6] M. Campbell, A. J. Hoane, and F.-h. Hsu, “Deep blue,” Artificial intelligence, 2002.
  • [7] D. Ferrucci, A. Levas, S. Bagchi, D. Gondek, and E. T. Mueller, “Watson: beyond jeopardy!” Artificial Intelligence, 2013.
  • [8] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., “Mastering the game of go with deep neural networks and tree search,” Nature, 2016.
  • [9] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, 1998.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proc. CVPR, 2015.
  • [11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proc. CVPR, 2009.
  • [12] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, 2015.
  • [13] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, “Sun database: Large-scale scene recognition from abbey to zoo,” in Proc. CVPR, 2010.
  • [14] G. A. Miller, “Wordnet: a lexical database for english,” Communications of the ACM, vol. 38, no. 11, pp. 39–41, 1995.
  • [15] P. Jolicoeur, M. A. Gluck, and S. M. Kosslyn, “Pictures and names: Making the connection,” Cognitive psychology, 1984.
  • [16] A. Torralba and A. A. Efros, “Unbiased look at dataset bias,” in Proc. CVPR, 2011.
  • [17] C. Heip, P. Herman, and K. Soetaert, “Indices of diversity and evenness,” Oceanis, 1998.
  • [18] E. H. Simpson, “Measurement of diversity.” Nature, 1949.
  • [19] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” Proc. CVPR, 2015.
  • [20] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [21] Y. Jia, “Caffe: An open source convolutional architecture for fast feature embedding,” http://caffe.berkeleyvision.org/, 2013.
  • [22] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proc. CVPR, 2016.
  • [23] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, “Decaf: A deep convolutional activation feature for generic visual recognition,” 2014.
  • [24] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “Cnn features off-the-shelf: an astounding baseline for recognition,” arXiv preprint arXiv:1403.6382, 2014.
  • [25] A. Quattoni and A. Torralba, “Recognizing indoor scenes,” in Proc. CVPR, 2009.
  • [26] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” in Proc. CVPR, 2006.
  • [27] G. Patterson and J. Hays, “Sun attribute database: Discovering, annotating, and recognizing scene attributes,” in Proc. CVPR, 2012.
  • [28] L. Fei-Fei, R. Fergus, and P. Perona, “Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories,” Computer Vision and Image Understanding, 2007.
  • [29] G. Griffin, A. Holub, and P. Perona, “Caltech-256 object category dataset,” 2007.
  • [30] B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei, “Human action recognition by learning bases of action attributes and parts,” in Proc. ICCV, 2011.
  • [31] L.-J. Li and L. Fei-Fei, “What, where and who? classifying events by scene and object recognition,” in Proc. ICCV, 2007.
  • [32] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin, “LIBLINEAR: A library for large linear classification,” 2008.
  • [33] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Object detectors emerge in deep scene cnns,” International Conference on Learning Representations, 2015.
  • [34] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune, “Synthesizing the preferred inputs for neurons in neural networks via deep generator networks,” arXiv preprint arXiv:1605.09304, 2016.
  • [35] A. Torralba, R. Fergus, and W. T. Freeman, “80 million tiny images: A large data set for nonparametric object and scene recognition,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 2008.
  • [36] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European Conference on Computer Vision.   Springer, 2014, pp. 740–755.
  • [37] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, “Semantic understanding of scenes through the ade20k dataset,” arXiv preprint arXiv:1608.05442, 2016.
  • [38] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” Int’l Journal of Computer Vision, 2010.
  • [39] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” arXiv preprint arXiv:1604.01685, 2016.