Indoor Scene Recognition in 3D

02/28/2020 · by Shengyu Huang et al., ETH Zurich

Recognising in what type of environment one is located is an important perception task. For instance, for a robot operating indoors it is helpful to be aware whether it is in a kitchen, a hallway or a bedroom. Existing approaches attempt to classify the scene based on 2D images or 2.5D range images. Here, we study scene recognition from 3D point cloud (or voxel) data, and show that it greatly outperforms methods based on 2D bird's-eye views. Moreover, we advocate multi-task learning as a way of improving scene recognition, building on the fact that the scene type is highly correlated with the objects in the scene, and therefore with its semantic segmentation into different object classes. In a series of ablation studies, we show that successful scene recognition is not just the recognition of individual objects unique to some scene type (such as a bathtub), but depends on several different cues, including coarse 3D geometry, colour, and the (implicit) distribution of object categories. Moreover, we demonstrate that surprisingly sparse 3D data is sufficient to classify indoor scenes with good accuracy.




I Introduction

An autonomous agent’s behaviour strongly depends on the type of environment it is in: the knowledge that we are, say, in a kitchen and not in a bathroom changes what set of possible actions we consider, what objects we expect in the environment and where we search for them, and even how we navigate and move around. The fact that the type of environment is such a strong prior for elementary behaviours suggests that an autonomous robot should have the ability to determine what environment it is in, i.e., to perform scene recognition based on its sensory input.

Here we are concerned with indoor scene recognition, where the set of scenes is a list of different room types defined mostly by the function they serve, such as kitchen, bedroom, hallway, etc. (Indoor scene recognition is also useful for a robot that operates in a more diverse set of environments, since it is efficient to follow a coarse-to-fine hierarchy: e.g., first determine whether one is outdoors, in a residential building or in an industrial facility, then switch to dedicated, more fine-grained scene categories.) Especially indoors, the scene type depends on a variety of cues, including for instance the global geometry (e.g., corridors), the presence of specific objects (e.g., a washing machine), and the relative placement of objects (e.g., chairs around a dining table vs. a single chair at a desk). In most cases it is not obvious which visual properties of a scene are most discriminative: is it the size? The 3D shape? Specific textures? Individual objects?

Fig. 1: We demonstrate improved classification of indoor scenes by working with subsampled or voxelised 3D point clouds. Multi-task learning together with semantic segmentation further boosts performance.

Scene recognition was first explored as a 2D image classification problem [15, 41]. For indoor scenes, it has also been suggested to operate in 2.5D, using RGB-D images from range cameras or stereo rigs [16, 1]. In contrast, to our knowledge there have been no attempts to perform scene classification on 3D point cloud or voxel data. This is somewhat surprising, given that the scene, and in most cases also (part of) the sensory input of a robot, are 3-dimensional. We attribute this gap to two reasons. First, machine learning is computationally more expensive in 3D; in particular, efficient deep learning architectures for 3D point cloud or voxel data have only been developed in the last three years. Second, unlike for 2D images and 2.5D range scans, there are no suitable large public datasets. So far the only available benchmark we are aware of is ScanNet [6], with a modest 1613 scans covering 21 different scene types, a small fraction of the size of today's 2D image [8], range image [42], and video [21] datasets.

In this paper we take a first step to close that gap and study scene recognition based on 3D data. An important observation in this context is the following: what is “small” about the datasets is the number of scenes, and consequently the information content of the ground truth, on the order of 1000 integer labels. The dataset, designed also for semantic segmentation, additionally has much richer per-point labels. We argue that these pointwise labels constitute valuable, and much more plentiful, side information to guide the learning towards semantically meaningful features. To that end, we employ multi-task learning: semantic segmentation is learned as an auxiliary task during training, with a shared encoder for both the segmentation and scene classification tasks, see Fig. 1. For both the encoder and the semantic segmentation decoder we use sparse 3D learning architectures that allow for efficient processing on standard hardware, since 3D scenes have a high degree of sparsity (most of the volume is empty). Our 3D encoder-decoder network sets a greatly improved state of the art for scene type classification on ScanNet, compared to previous 2D methods. Moreover, we find that multi-task learning boosts the classification performance even further, to an overall accuracy of 90.3%. We also perform ablation studies to disentangle the influence of geometry, object semantics and colour, and the influence of 3D point density. We find that the different cues are somewhat complementary: without object information, scene geometry does the heavy lifting, although adding colour can improve the classification performance, in some cases significantly; on the other hand, the distribution of object classes in the scene, without any geometry, is also a surprisingly good predictor. Moreover, our experiments indicate that scene recognition is not simply the recognition of single, distinctive objects, as one might suspect. For most scene types, removing individual object classes has only a small effect. Somewhat unexpectedly, classification performance drops only a little if one severely downsamples the input point cloud: representing an entire room with 1024 randomly sampled points is sufficient to all but match the performance on the full point cloud. That the scene type can indeed be predicted quite well from a rough “overall glance” at the environment supports our high-level strategy to infer it early on, so as to customise more fine-grained perception and action tasks.

II Related work

II-A Scene recognition in 2D

Scene recognition from 2D images has been investigated since at least [15], and several datasets are available, including Scene15 [24], MIT Indoor67 [32], SUN397 [41] and Places476 [44]. Early works use handcrafted features to capture discriminative scene characteristics. Oliva et al. [27] propose a set of perceptual dimensions named the spatial envelope (e.g. naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Gokalp et al. [12] cluster the regions of an image partition to obtain a codebook of region types, then classify based on the bag of individual and paired region types. Bosch et al. [2] apply probabilistic Latent Semantic Analysis (pLSA) to discover “topics” from a bag of visual words and train a multi-way classifier on the topic distribution vectors. Li et al. [25] represent an image as a scale-invariant response map of a large number of pre-trained generic object detectors, named “Object Bank”.

Recent methods employ deep convolutional neural network (CNN) features. Zhou et al. [44] introduce a bigger scene-centric dataset, Places476, and use a CNN to learn features for scene recognition. Herranz et al. [19] efficiently combine scene-centric Places-CNNs and object-centric ImageNet-CNNs by taking the feature scales into consideration. Cheng et al. [4] rely on object detectors and represent images via the occurrence probabilities of discriminative objects contained in them.

II-B Scene recognition in 2.5D

Especially in indoor scenes, depth can be a valuable additional cue for scene recognition. Silberman et al. [33] introduce the view-centric NYU dataset of RGB-D images and show that the combination of depth and intensity significantly improves scene understanding. Song et al. [34] introduce the larger SUN RGB-D dataset to support data-hungry learning methods.

Like in the 2D case, early works use handcrafted features [16, 1], while the current state of the art relies on deep features from neural networks. Wang et al. [38] encode distributions of CNN features computed from RGB-D data using the Fisher vector representation, to allow for greater spatial flexibility. Zhu et al. [45] first extract CNN features from RGB and depth separately, then fuse them through a multi-modal layer which considers both inter- and intra-modality correlations and regularises the learned features. Song et al. [35] address the limited range of depth sensors by learning from RGB-D videos that contain comprehensive, accumulated depth information, using convolutional RNNs. Du et al. [10] formulate modality-specific scene recognition and cross-modal translation in a multi-task learning setting, explicitly regularising the recognition task by training a joint encoder.

II-C Deep learning in 3D

3D data can be represented by multiple images with different viewing directions [36, 20], as voxel grids [26, 30], point clouds [29, 31], or meshes [7, 11]. Here, we limit ourselves to point clouds and voxel grids.

Pointnet [29] was a seminal work for deep learning directly on point clouds. The basic idea is to apply a shared bank of MLPs to the coordinates and attributes of each individual point, as an approximation of point kernels, to extract point-wise features, then use global max-pooling to abstract these into a global representation in a permutation-invariant manner. By design, Pointnet does not capture local structures induced by the metric space the points live in, making it difficult to represent geometric detail. To overcome this limitation, the follow-up Pointnet++ [31] applies Pointnet recursively on a nested partitioning of the input point set, to hierarchically aggregate local information into a compact and fine-grained representation.

Voxels are a straightforward 3D generalisation of pixels, but point clouds from 3D sensors are sparse by nature. Instead of applying 3D convolutions to the full volumetric occupancy grid [26], sparse CNNs store the non-empty voxels as sparse tensors and only perform convolutions on such sparse coordinate lists. Graham [14] introduces a CNN that takes sparsity into account, but is limited to small voxel resolutions due to the decrease in sparsity after repeated convolutions. To deal with this dilation of non-zero activations, Graham and van der Maaten [13] advocate storing the convolution output only at occupied voxels. Hackel et al. [17] exploit feature sparsity by selecting only a fixed number of the highest activations. Choy et al. [5] introduce the MinkowskiEngine, an open-source auto-differentiation library that extends [13] to 4D spatio-temporal perception; it also proposes hybrid kernels with predefined sparsity patterns to mitigate the exponential increase in the number of parameters.

II-D Multi-task learning in 3D

Deep models contain millions of parameters and require a large number of samples to supervise the learning. However, it is not always possible to collect large training sets, especially in applications where data cannot be gleaned from the internet and/or cannot be annotated by non-experts, like for example medical image analysis [43]. 3D scene recognition also falls into the category of problems where massive supervision is difficult, due to the sheer effort of collecting 3D scans of many environments (where each scan only constitutes a single training example). In such cases, multi-task learning (MTL) [3] is a sensible way to exploit annotations for related tasks and transfer useful information to tasks with limited training data.

Multi-task learning aims to improve the performance of multiple related learning tasks by sharing information among them. Several authors jointly learn semantic segmentation and instance segmentation with a shared latent representation. For example, Wang et al. [39] jointly optimise semantic labeling and an embedding that discriminates individual object instances. In a related approach, Pham et al. [28] fuse semantic labels and instance embeddings in a conditional random field and make the final predictions with variational inference. Lahoud et al. [23] also tackle semantic and instance segmentation, with multi-task metric learning. To our knowledge, our work is the first to combine 3D scene recognition and semantic segmentation in a multi-task framework.

III Method

III-A Basic 3D learning framework

We treat scene recognition as a supervised classification problem and solve it with a neural network. The network consists of an encoder that transforms the input scene into a feature representation z, followed by a classification head that computes the class-conditional likelihood p(y|z) over scene types y. For the encoder, we explore two different options: networks that work with a subsampled version of the original point cloud as input (Pointnet [29], Pointnet++ [31], DGCNN [40]), and networks that work with a sparse voxel grid derived from the input points (Resnet14 [18] with sparse convolutions [5]).
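To make the shared-MLP-plus-pooling idea behind the point-based encoders concrete, the following pure-Python sketch implements a minimal Pointnet-style classifier with toy random weights (a single hidden layer; the actual networks [29, 31, 40] are of course much deeper and trained). The key property to note is that the global max-pooling makes the prediction invariant to the ordering of the input points:

```python
import math
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def linear(x, W, b):
    # W is a list of rows (out_dim x in_dim), b a bias vector
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def pointnet_classify(points, W1, b1, W2, b2):
    """Shared per-point MLP -> global max-pool -> linear head -> softmax.

    The same weights W1/b1 are applied to every point, and max-pooling
    aggregates the per-point features in a permutation-invariant way.
    """
    feats = [relu(linear(p, W1, b1)) for p in points]             # per-point features
    pooled = [max(f[j] for f in feats) for j in range(len(b1))]   # global max-pool
    return softmax(linear(pooled, W2, b2))                        # class likelihoods

# toy weights: 3 input coordinates -> 8 features -> 4 scene classes
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
b1 = [0.0] * 8
W2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]
b2 = [0.0] * 4

cloud = [(random.uniform(0, 5), random.uniform(0, 5), random.uniform(0, 2.4))
         for _ in range(128)]
probs = pointnet_classify(cloud, W1, b1, W2, b2)

shuffled = cloud[:]
random.shuffle(shuffled)
probs2 = pointnet_classify(shuffled, W1, b1, W2, b2)   # identical output
```

Shuffling the input cloud leaves the predicted distribution unchanged, which is exactly the permutation invariance discussed for Pointnet in Sec. II-C.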

III-B Multi-task learning

A limitation for scene recognition is the difficulty of collecting a large enough dataset for deep learning, where “large” should be understood in terms of the number of labels. Since an entire scene, in the indoor setting often a room, corresponds to a single example, it is hard to collect many examples per class; indeed, current datasets are limited to on average roughly 100 examples per class. Hence, we resort to multi-task learning: we train our network to simultaneously perform per-point semantic labeling of the point cloud, using the same shared latent representation z. While that auxiliary task is, arguably, at least as difficult as scene-level classification, it benefits from much stronger supervision through point-wise labels. It is therefore reasonable to hope that it will improve the latent feature encoding z such that the encoding better supports scene classification, too. For the semantic segmentation we extend the sparse Resnet14 variant of our network with a U-net style decoder that mirrors the encoder, but with a dense set of skip connections (and therefore twice the channel depth of the encoder). The full architecture is depicted in Fig. 2.

Fig. 2: The proposed multi-task network architecture has a single encoder, but two output branches, a semantic segmentation head (top) and a scene classification head (bottom).

III-C Optimisation

In our multi-task setting the loss function consists of two terms, a cross-entropy loss L_sem for the semantic segmentation and another cross-entropy loss L_cls for scene classification. The total loss is their weighted sum:

L = L_sem + λ · L_cls

Some care is required to optimise this combined loss. In principle the parameter λ balances the two terms. However, due to their vastly different magnitudes the optimisation is very sensitive to the choice of λ. To see why, consider the huge scale difference (per one scene label there are thousands of point labels) and the fact that the loss itself is unaware which points belong to which scene: depending on the value of λ, flipping a single scene type label can offset many mis-classified points, spread over an entire batch of scenes. For our task, we found it best to start with the extremes of the weighting, i.e., first train each task separately. First, we set λ = 0 and optimise only for semantic segmentation, where the supervision is much more fine-grained. This reliably finds a reasonable feature encoding in the bottleneck. Then, we freeze the weights of the encoder and train only the scene classification head, effectively treating the semantic segmentation network as a pre-trained feature extractor. Finally, we fine-tune the network end-to-end with a small learning rate. Empirically, this schedule yielded the best results.
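The scale mismatch between the two loss terms is easy to reproduce numerically. The sketch below uses hypothetical predicted probabilities for a single toy scene (4096 labelled points, one scene label) to contrast the magnitudes of the two cross-entropy terms, and encodes the combined objective together with the three-stage schedule from the text:

```python
import math

def cross_entropy(prob_true):
    # negative log-likelihood of the true class
    return -math.log(prob_true)

# hypothetical predictions: 4096 point labels vs. a single scene label.
# The segmentation loss sums over thousands of terms, the classification
# loss is a single term, so their raw magnitudes differ vastly.
point_probs = [0.7] * 4096     # predicted probability of the true class per point
scene_prob = 0.5               # predicted probability of the true scene type

L_sem = sum(cross_entropy(p) for p in point_probs)   # ~1461
L_cls = cross_entropy(scene_prob)                    # ~0.69

def total_loss(lam):
    # combined multi-task objective: L = L_sem + lam * L_cls
    return L_sem + lam * L_cls

# three-stage schedule from the text:
#   1) lam = 0: optimise semantic segmentation only
#   2) freeze the encoder, train the classification head alone
#   3) fine-tune end-to-end on the full loss with a small learning rate
stage1 = total_loss(0.0)
```

With these toy numbers the segmentation term outweighs the classification term by three orders of magnitude, which is why a naive choice of λ makes the optimisation so sensitive.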

IV Experiments

IV-A Dataset

ScanNet [6] is an RGB-D video dataset containing about 1500 scans. We use 1013 scans for training and 500 scans for validation. A further 100 scans are designated as the test set for the benchmark; for these the ground truth is withheld. Fig. 3 shows examples of different scene types. On average, one scan has 150k points and a spatial extent of 5.5 m × 5.1 m × 2.4 m. For the scene classification task there are in total 21 classes, of which only a subset of 13 classes is evaluated in the benchmark (but we always predict all 21 logits). Fig. 4 shows the class distribution for the reduced 13-class set. As it is quite imbalanced, we report both accuracy (fraction of correct classifications) and mean Intersection-over-Union.
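Both reported metrics can be computed from a class confusion matrix; the small self-contained sketch below (with a hypothetical 3-class toy example) shows the convention assumed here: per-class IoU is TP / (TP + FP + FN), averaged over classes that occur, which penalises errors on rare classes more than plain accuracy does:

```python
def confusion(y_true, y_pred, n_classes):
    # rows: true class, columns: predicted class
    M = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        M[t][p] += 1
    return M

def accuracy(M):
    correct = sum(M[i][i] for i in range(len(M)))
    total = sum(sum(row) for row in M)
    return correct / total

def mean_iou(M):
    n = len(M)
    ious = []
    for c in range(n):
        tp = M[c][c]
        fp = sum(M[r][c] for r in range(n)) - tp   # predicted c, but wrong
        fn = sum(M[c]) - tp                        # true c, but missed
        denom = tp + fp + fn
        if denom > 0:                              # skip classes absent from both
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# toy example: 6 scenes, 3 classes
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]
M = confusion(y_true, y_pred, 3)
```

On this toy example the accuracy is 5/6 while the mean IoU is only 7/9, illustrating how the two metrics diverge under class imbalance.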

(a) apartment (b) bedroom (c) living room (d) kitchen (e) conference room (f) copy room (g) library (h) office (i) hallway (j) storage (k) bathroom (l) laundry
Fig. 3: Examples of different scene types.
Fig. 4: Distribution of the 13 scene types within the training+validation sets.

IV-B Baselines

As a sanity check for the different deep learning models, we construct three baselines that completely ignore the spatial layout. First, we represent each scene by its 30-bin colour histogram and label the test scans with nearest-neighbour classification (which worked slightly better than a Random Forest classifier). This yields a classification accuracy of 57.1%, a fairly high success rate for such a naive scheme, which we regard as a lower bound for any more sophisticated approach. Second, we predict semantic labels with the U-net described above (without scene classification head), turn the maximum-likelihood labels into a normalised object class histogram and train a Random Forest classifier on those histograms. This achieves a strong 82.8% accuracy, indicating that the statistics over point labels, without any spatial information, encode the scene type rather well. Finally, as an upper bound for these “geometry-free” approaches we pretend to have an oracle for semantic segmentation and train the same Random Forest classifier on the ground truth label histograms. This yields 85.0% accuracy, meaning that the errors of the semantic segmentation only slightly degrade the performance.
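The first baseline can be sketched in a few lines. Everything below is a hypothetical toy (a single scalar colour channel, synthetic "kitchen"/"hallway" brightness distributions); the point is only to show how a nearest-neighbour rule on histograms that discard all spatial layout can still be predictive:

```python
import random

def histogram(values, n_bins=30, lo=0.0, hi=1.0):
    # normalised histogram: the scene descriptor ignores all spatial layout
    h = [0] * n_bins
    for v in values:
        b = min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
        h[b] += 1
    total = float(len(values))
    return [c / total for c in h]

def nearest_neighbour(query, train_hists, train_labels):
    # 1-NN under L1 distance between histograms
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    dists = [l1(query, h) for h in train_hists]
    return train_labels[dists.index(min(dists))]

# toy data: "kitchens" skew bright, "hallways" skew dark (entirely hypothetical)
random.seed(1)
kitchens = [[random.betavariate(5, 2) for _ in range(500)] for _ in range(3)]
hallways = [[random.betavariate(2, 5) for _ in range(500)] for _ in range(3)]
train = [histogram(s) for s in kitchens + hallways]
labels = ["kitchen"] * 3 + ["hallway"] * 3

query = histogram([random.betavariate(5, 2) for _ in range(500)])
pred = nearest_neighbour(query, train, labels)
```

In the paper the descriptor is a colour histogram of the real scans rather than a synthetic scalar channel, but the classification rule is the same.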

IV-C Deep learning models

IV-C1 Implementation details

Pointnets and DGCNN require a fixed number of points as input. Unless stated otherwise, we sample 4096 points per scan using Farthest Point Sampling for optimal coverage. For the sparse, voxel-based Resnet14 we fix the voxel size to 2 cm; when points with different labels/colours fall into the same voxel, we randomly pick one. All models are trained with batch size 16 using the Adam [22] optimiser. For vanilla Pointnet and DGCNN we use a cosine annealing learning rate schedule. All models are trained for 300 epochs on a single GeForce GTX 1080Ti, except DGCNN, for which we use 3 GPUs, as it is computationally expensive.
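The two subsampling schemes mentioned above can be sketched as follows: greedy Farthest Point Sampling for the point-based networks, and occupancy-grid voxelisation (keeping one point per occupied voxel) for the sparse network. The toy cloud uses integer coordinates and an unrealistically large voxel edge purely to keep the example exact; in the paper the voxel size is 2 cm:

```python
import math

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point farthest from the chosen set."""
    chosen = [0]                                  # start from an arbitrary point
    d = [dist2(p, points[0]) for p in points]     # distance to the chosen set
    while len(chosen) < k:
        idx = d.index(max(d))                     # farthest remaining point
        chosen.append(idx)
        for i, p in enumerate(points):
            d[i] = min(d[i], dist2(p, points[idx]))
    return [points[i] for i in chosen]

def voxelise(points, voxel):
    """Keep one (arbitrary) point per occupied voxel of the given edge length."""
    grid = {}
    for p in points:
        key = tuple(int(math.floor(c / voxel)) for c in p)
        grid.setdefault(key, p)                   # first point wins; could be random
    return list(grid.values())

# toy cloud: a 10 x 10 planar grid with unit spacing
cloud = [(float(i), float(j), 0.0) for i in range(10) for j in range(10)]
fps = farthest_point_sampling(cloud, 16)          # well-spread 16-point subset
vox = voxelise(cloud, voxel=2.0)                  # 2x2 cells -> 25 occupied voxels
```

Starting from the corner (0, 0, 0), FPS picks the opposite corner (9, 9, 0) second, which is what gives it the good coverage mentioned in the text.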

IV-C2 Data augmentation

As mentioned, the number of scenes is limited; for several scene types there are only around 25 training exemplars. Therefore, we use multiple forms of data augmentation: we randomly translate the point cloud, rotate it around the vertical axis, scale it by random factors between 0.8 and 1.25, add Gaussian noise to the point coordinates, and randomly remove 12.5% of the points. Finally, we also randomly cut away points in one corner of the bounding box, inspired by cutout regularisation [9] for images. To ensure reproducibility, we publish all our code.
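A minimal sketch of the per-scene augmentations listed above (rotation about the vertical axis, random scaling, translation, Gaussian jitter, and random point dropout). The noise standard deviation and translation range are illustrative choices, not the paper's exact values:

```python
import math
import random

def augment(points, rng):
    """One random augmentation pass over a list of (x, y, z) points."""
    theta = rng.uniform(0, 2 * math.pi)              # rotation about the vertical axis
    s = rng.uniform(0.8, 1.25)                       # random global scale
    tx, ty = rng.uniform(-1, 1), rng.uniform(-1, 1)  # random translation in the plane
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = []
    for x, y, z in points:
        if rng.random() < 0.125:                     # randomly drop 12.5% of points
            continue
        xr = cos_t * x - sin_t * y                   # rotate in the horizontal plane
        yr = sin_t * x + cos_t * y
        # scale, translate, and jitter with Gaussian noise (sigma is illustrative)
        out.append((s * xr + tx + rng.gauss(0, 0.01),
                    s * yr + ty + rng.gauss(0, 0.01),
                    s * z + rng.gauss(0, 0.01)))
    return out

rng = random.Random(0)
cloud = [(rng.uniform(0, 5), rng.uniform(0, 5), rng.uniform(0, 2.4))
         for _ in range(1000)]
aug = augment(cloud, rng)   # roughly 87.5% of the points survive
```

The corner-cutout augmentation is analogous in spirit to cropping a corner of the bounding box and is omitted here for brevity.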

IV-D Results

The overall results on our validation set are displayed in Table I. Pointnet in its basic form and DGCNN perform worst; in particular, Pointnet with only geometry and no colour information performs merely on par with the simple colour histogram baseline, and benefits markedly from adding colour. Variants of Pointnet++ and sparse Resnet14 achieve much better results, between 83.6% and 87.8%. Interestingly, Pointnet++ performs better without colour, while Resnet14 performs better with colour information. Note that the big gap between Pointnet and Pointnet++ indicates that locality and spatial layout play a role in scene recognition, as the global pooling of Pointnet is detrimental.

With 90.3% accuracy, our proposed multi-task learner achieves the best performance among these models. Its results can be compared directly to Resnet14, which has the same encoder and scene classification head, but without the auxiliary semantic segmentation decoder and loss function. We have also submitted results of our best network to the ScanNet benchmark. The numbers differ significantly from those on our validation set, presumably due to the small size of the withheld test set (100 scans spread across 13 classes). Detailed per-class results are shown in Table II. The only two prior submissions were based on 2D (bird's-eye) views and achieved an average recall of at most 49.8%, respectively a mean IoU of 35.5%. We outperform them by a large margin: with our sparse Resnet14, which solves the task directly in 3D, supported by multi-task learning, performance jumps to 70.0% recall and 64.6% mean IoU.

Method                          Input        Acc [%]  mIoU [%]  params [M]
Colour histogram                -            57.1     36.1      -
Point class histogram (pred)    -            82.8     70.2      -
Point class histogram (oracle)  -            85.0     73.9      -
Pointnet                        XYZ only     57.4     35.6      0.68
Pointnet                        XYZ+colour   70.8     52.2      0.68
DGCNN                           XYZ only     77.2     54.5      1.80
DGCNN                           XYZ+colour   77.0     62.1      1.81
Pointnet++                      XYZ only     87.8     75.1      1.74
Pointnet++                      XYZ+colour   84.8     69.1      1.74
Resnet14                        XYZ only     83.6     67.6      21.4
Resnet14                        XYZ+colour   87.4     70.6      21.5
Multi-task                      XYZ+colour   90.3     77.2      35.3
TABLE I: Results of deep learning models.
Method                apart. bathr. bedr.  conf.  copy   hallw. kitch. laund. libr.  living misc   office stor.  | avg
resnet50_scannet [6]  0.250  0.812  0.529  0.500  0.500  0.000  0.500  0.571  0.000  0.556  0.000  0.375  0.000  | 0.353
SE-ResNeXT-SSMA [37]  0.000  0.812  0.941  0.500  0.500  0.500  0.500  0.429  0.500  0.667  0.500  0.625  0.000  | 0.498
Ours                  0.500  1.000  0.882  0.500  1.000  1.000  0.500  1.000  1.000  0.778  0.000  0.938  0.000  | 0.700
TABLE II: Detailed per-class scene classification recall on the ScanNet test set (13 benchmark classes and their average). Numbers reproduced from the evaluation server.

V Analysis and discussion

We run a number of further experiments to analyse the 3D approach in more detail.

V-A Sparsity

The good performance of Pointnet-type methods, where the input is decimated to only 4096 points, raises the question of how densely a scene must be sampled in order to classify it. We study the influence of point density by training and testing with different numbers of input points (without the semantic segmentation branch, since labeling individual points requires dense point clouds). Fig. 5 visually illustrates different point counts (for Pointnet++ we directly change the subsampling; for Resnet14 we first downsample, then voxelise). Results for scene type classification with different counts are shown in Fig. 6. Unexpectedly, both Pointnet++ and Resnet14 achieve over 75% accuracy even with only 128 points, and between 512 and 1024 points performance saturates around 85%. The high accuracy with very sparse point clouds extends across most classes; larger drops are observed only for the rather rare storage and copy room classes, and to some degree for library, see Fig. 7. To conclude, very sparse point clouds that are difficult to even interpret visually are sufficient to classify the scene type with high accuracy, with or without colour. In the context of robotics this is particularly interesting: a “rough glance” at the environment is apparently enough to determine the scene type in many cases, supporting the strategy to perform scene recognition early on and then invoke scene-specific algorithms for detailed reconstruction or interaction. That sparse “first glance” could, for instance, come from SLAM keypoints, a low-resolution stereo matcher, or a fast overview scan.

V-B Geometry or semantics?

We point out that, while few points are enough for stand-alone scene recognition, an important limitation is that the result can then no longer be boosted with multi-task learning: semantic segmentation necessitates high point density. This is nicely illustrated by the geometry-free Random Forest baseline. If we train the semantic segmentation network on point clouds downsampled to only 128 points, its object class predictions are wildly off, and scene type prediction based on their frequency histograms reaches only 42% accuracy. While still far from chance level, this is clearly not a useful result. In contrast, predicting from 128 points with the true (oracle) object class labels reaches 81% accuracy. Note that at such low point counts the geometry-free approach has an advantage, since fewer samples are needed to capture the frequency distribution than to capture the scene shape. In summary, geometric layout goes a long way and is effective even with very small, sparse point clouds. Multi-task learning with semantic segmentation as auxiliary task can improve scene type classification further, but it needs high point density. It remains to be explored whether there are other auxiliary tasks that can operate at low density.

V-C Colour

Colour information is not always available for 3D scan data. Table I shows the difference between models trained with and without colour information for the points. For the stronger models we also show the influence of colour at different point densities in Fig. 6. While already a global colour histogram greatly beats random chance, and low-performing models like vanilla Pointnet get a significant boost from colour, the effect on the best-performing models is small (but rather consistent) – scene recognition does not seem to critically depend on colour. At least on ScanNet, colour information degrades the performance of Pointnet++, while it slightly improves Resnet14. A caveat is again that despite its limited impact, eschewing colour when it would be available may weaken multi-task learning, as semantic segmentation often does benefit significantly from colour.

(a) 32 (b) 128 (c) 512 (d) 1024 (e) 2048 (f) 4096 (g) 16184 (h) 81369
Fig. 5: Scene representation using different numbers of points.
Fig. 6: Scene classification results for different point densities.
Fig. 7: Recall per scene type with 128 and 4096 points as input.

V-D Influence of individual object classes

Intuitively, one might suspect that individual “marker” objects are enough to determine the scene type, e.g., bathtubs appear only in bathrooms, beds point to bedrooms or apartments, bookshelves to libraries, etc. To check to what degree scene recognition degenerates to detecting individual objects, we remove the points belonging to each object class in turn from the test data (without changing the model trained on complete scenes). In Fig. 8 we show the difference in recall for each scene type when removing points of a certain class. The average magnitude of the changes is only 2 percentage points; with few exceptions, the absence of an individual object class changes the recall by at most 5 percentage points.

There are a few examples of marker objects: there are pronounced drops for bedroom when removing the beds, and for laundry room when removing unknown furniture (likely due to some frequent object class that was not labeled separately, such as laundry baskets). Conversely, the hallway class benefits from removing chairs; seemingly this prevents confusions with other scene types. Coming back to a seemingly obvious example mentioned before, removing the bathtub(s) does not change the recall of bathrooms at all. Almost all classes suffer at least a bit from removing walls or floors, again pointing to the role of overall scene shape. While the presence of certain object types certainly plays a role, scene type classification is not only the search for a specific marker object, but appears to exploit more complex patterns such as co-occurrence and spatial layout of objects.

Fig. 8: Change of recall per scene type after removing all points with a certain object class label.
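The ablation procedure itself is straightforward to sketch: for each object class, re-classify every test scene with that class's points removed and record the change in accuracy. The classifier and the two two-point scenes below are hypothetical stand-ins (a rule that calls any scene with low-lying geometry a bedroom), used only to exercise the bookkeeping:

```python
def remove_class(points, labels, cls):
    """Return the point cloud (and labels) with one object class removed."""
    keep = [(p, l) for p, l in zip(points, labels) if l != cls]
    return [p for p, _ in keep], [l for _, l in keep]

def ablation_deltas(test_scenes, classify):
    """Per object class: change in accuracy when that class's points are removed.

    test_scenes: list of (points, per-point labels, ground-truth scene type);
    classify: any scene classifier that takes a point list.
    """
    base = sum(classify(pts) == gt for pts, lbls, gt in test_scenes) / len(test_scenes)
    deltas = {}
    all_classes = {l for _, lbls, _ in test_scenes for l in lbls}
    for cls in sorted(all_classes):
        acc = sum(
            classify(remove_class(pts, lbls, cls)[0]) == gt
            for pts, lbls, gt in test_scenes
        ) / len(test_scenes)
        deltas[cls] = acc - base
    return deltas

# toy data: (points, per-point labels, ground-truth scene type)
scenes = [
    ([(0, 0, 0.3), (1, 1, 1.5)], ["bed", "wall"], "bedroom"),
    ([(0, 0, 1.5), (1, 1, 1.5)], ["wall", "wall"], "hallway"),
]

def toy_classifier(points):
    # hypothetical rule: low-lying geometry suggests a bed, hence a bedroom
    return "bedroom" if any(z < 0.5 for _, _, z in points) else "hallway"

deltas = ablation_deltas(scenes, toy_classifier)
```

For this toy classifier the bed acts as a true marker object (removing it costs accuracy), while removing walls changes nothing; the experiments above show that for the trained network such clear-cut marker dependencies are the exception.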

V-E Completeness

Mobile robots often cannot capture the complete scene due to occlusions or non-panoramic sensors. While an exploration that takes into account sensor-specific or application-specific occlusion patterns is beyond the scope of this paper, we do study tolerance to incomplete scenes with a simple cropping scheme: we cut out an axis-aligned corner of the scene bounding box with varying crop ratio, where the latter is defined as the fraction of the x- and y-axes that is retained (so, e.g., a crop ratio of 0.7 means that we retain a box that spans 70% of the bounding box in x- and y-direction, over the entire height of the scene). As can be seen in Fig. 9, accuracy drops to 58% at a crop ratio of 0.5, corresponding to a scene of which only a quarter has been observed, and grows approximately linearly to reach 85% for the complete scene. In other words, there is some robustness: e.g., at crop ratio 0.8 (keeping 64% of the original scene) the performance penalty is about 5%. But overall, completeness matters. When larger coherent portions of the scene are missing, the classification error increases fairly quickly. It is preferable to capture a coarse but complete 3D view of the room rather than a detailed but partial one. This further supports our case for working in 3D rather than classifying based on 2D images or 2.5D range images. Note that we again only modified the test scenes. While cutout regularisation is somewhat similar to the described cropping, it may be possible to improve robustness against incomplete scenes with more sophisticated data augmentation schemes.

Fig. 9: Scene classification results w.r.t. the crop ratio of the scene.
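The corner-cropping scheme can be sketched directly from its definition: retain an axis-aligned box spanning the crop ratio of the x- and y-extent (anchored here, as one illustrative choice, at the minimum corner) over the full height, so a ratio r keeps roughly r² of a uniformly sampled scene:

```python
def crop_corner(points, ratio):
    """Keep points inside an axis-aligned box spanning `ratio` of the x- and
    y-extent of the scene (full height), anchored at the minimum corner."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    xmax = x0 + ratio * (x1 - x0)
    ymax = y0 + ratio * (y1 - y0)
    return [p for p in points if p[0] <= xmax and p[1] <= ymax]

# on a uniform grid, a crop ratio r retains r^2 of the points
cloud = [(x * 0.1, y * 0.1, 0.0) for x in range(100) for y in range(100)]
kept = crop_corner(cloud, 0.5)
frac = len(kept) / len(cloud)   # a quarter of the scene remains
```

This matches the accounting in the text: crop ratio 0.5 leaves a quarter of the scene, crop ratio 0.8 leaves 64%.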

V-F Hard samples

In total, we have 500 validation samples, 50 of which cannot be classified correctly by our best setup. From the confusion matrix in Fig. 10 we can see that most of the mis-classified scenes are from the classes apartment (often mistaken for bedroom), living room and misc. The latter is an open rejection class for scenes of undefined type or whose reconstruction failed, and thus difficult to learn. Moreover, the mis-classifications are often visually plausible, and several of them are “near misses” where the correct class has the second-highest score (see Fig. 11).

Fig. 10: Confusion matrix calculated on validation dataset.
(a) (b) (c) (d) (e) (f) (g) (h)
Fig. 11: Mis-classified samples. The first column shows the scene; the second column shows the class scores with the true and predicted scene types.

VI Conclusion

We have, to our knowledge for the first time, performed a systematic study of indoor scene recognition with 3D scene representations (point clouds or voxels), and found that working in 3D greatly improves scene recognition. It turns out that two different cues both achieve high accuracy: on the one hand, global scene geometry alone reaches up to 87% accuracy on ScanNet; on the other hand, the distribution of semantic object labels by itself reaches 82.8%. Combining the two cues in a multi-task learning framework boosts the accuracy to 90.3%. This suggests two main modes of operation. When classifying purely based on geometry, a coarse representation with few points is sufficient, making it possible to achieve a very reasonable accuracy quickly and with little data. When aiming for the best possible accuracy, a dense sampling of the scene (and per-point labels for the training data) is required to leverage point-wise semantic segmentation as an auxiliary task. While our results support the use of 3D representations, it remains an open question whether one must go all the way to irregular point clouds or voxels. It may be interesting to explore alternative representations that capture the global scene structure in an efficient 2D projection, such as (sparse?) depth panoramas.
