Visual Graphs from Motion (VGfM): Scene understanding with object geometry reasoning

07/16/2018 ∙ by Paul Gay, et al. ∙ Istituto Italiano di Tecnologia

Recent approaches to visual scene understanding attempt to build a scene graph -- a computational representation of objects and their pairwise relationships. Such a rich semantic representation is very appealing, yet difficult to obtain from a single image, especially when considering complex spatial arrangements in the scene. In contrast, an image sequence conveys useful information through the multi-view geometric relations arising from camera motion: in such cases, object relationships are naturally tied to the 3D scene structure. To this end, this paper proposes a system that first computes the geometric location of objects in a generic scene and then efficiently constructs scene graphs from video by embedding such geometric reasoning. This compelling representation is obtained with a new model in which geometric and visual features are merged using an RNN framework. We report results on a dataset we created for the task of 3D scene graph generation in multiple views.


1 Introduction

Figure 1:

Overview of the Visual Graphs from Motion (VGfM) approach for 3D scene graph generation from multiple views. As input, an object detector extracts and matches 2D bounding boxes from objects in multiple images. The 3D position and occupancy of each object are estimated and, in parallel, visual features are extracted from each bounding box. These elements are then used to predict a 3D graph where each edge defines the semantic relationship between a pair of ellipsoids

The ability to automatically generate semantic relationships between objects in a scene is useful in numerous fields. As such, in recent years there has been a significant amount of research toward this goal [22, 25, 36, 8, 21] leading to the proposal of encoding relationships using a scene graph [16].

Common approaches for constructing scene graphs utilize visual appearance to guide the process, relying mainly on extracted Convolutional Neural Network (CNN) features. However, CNN visual features fail to encode spatial relationships due to their invariance properties [28]. This is further compounded when considering complex 3D scenes, where relationship predicates can become ambiguous and not easily resolvable from a single view. This is exemplified in Fig. 1: considering the image alone, the plant could be 'near the wall' or 'supported by the wall'. This ambiguity can be resolved through the understanding provided by adjacent images, recovering the camera pose and predicting the near predicate – 'plant near the wall' – as the support predicate would instead relate to the shelf. In this paper we aim to encode the required information using the knowledge of the 3D geometry of the objects in the scene.

We therefore propose to combine the advantages of both visual and geometric information to efficiently predict the spatial relations between objects, as shown in Fig. 1. Given a set of 2D object bounding box detections matched across a sequence of images, we use multi-view relations to compute the 3D locations and occupancies of the objects, described as a set of 3D ellipsoids. At the same time, we extract visual features from each object detection to model its visual appearance. These two representations are given as input to a Recurrent Neural Network (RNN) which has learned to predict a coherent scene graph where the objects are vertices and their relationships are edges. Overall, our Visual Graphs from Motion (VGfM) approach is appealing as it combines geometric and semantic understanding of an image, which has been a long term goal in computer vision [1, 29, 26, 31, 4, 33, 13]. We demonstrate the effectiveness of such a representation by creating a new dataset for 3D scene graph evaluation that is derived from the data provided in ScanNet [6]. To summarize, our contributions in this paper are:

  • To define the problem and the model for computing 3D scene graph representations across multiple views;

  • To extract reliable geometric information in multiple views, we propose an improved geometric method able to estimate objects' position and occupancy in 3D, modelled as a set of quadrics;

  • Finally, to provide a new real-world dataset, built over ScanNet, which can be used to learn and evaluate 3D scene graph generation in multiple views (code and data can be found at: https://github.com/paulgay/VGfM).

The paper is structured as follows. Literature relevant to the VGfM approach is reviewed in Sec. 2. We outline our refined strategy for estimating object 3D position and occupancy in Sec. 3, then present VGfM and its learning procedure in Sec. 4. The dataset is described in Sec. 5, together with a detailed evaluation of VGfM's performance and of the benefit of the geometry refinement. We conclude the paper in Sec. 6.

2 Related work

We now review the three topics related to our approach: scene graph generation from images, classification from videos, and 3D object occupancy estimation.

Early works on visual relation detection trained classifiers to detect each relation in an image independently from the others [10, 22]. However, a scene graph often contains chains of relationships, for instance: a man HOLDING a hand BELONGING TO a girl. Intuitively, a model able to leverage this fact should obtain more coherent scene graphs. To account for this, Xu et al. [34] proposed a model which explicitly defines a 2D scene graph. The framework naturally deals with chains of relations because inference is performed globally over all the objects and their potential relations. To this end, a message passing framework was developed using standard RNNs. This is in line with current approaches which combine the strengths of graphical models and neural networks [20, 37]. Each object and relation is represented by a node in a two-layer graph and is modelled by the hidden state of an RNN. The state of each node is refined by the messages sent from adjacent nodes. This architecture has the flexibility of graphical models and thus can be used to merge heterogeneous sources of information such as text and images [19]. We utilize this mechanism in our model while extending it to incorporate 3D geometry. To the best of our knowledge, this is the first time that geometric reasoning is exploited for scene graph generation.

Object detection in video sequences still largely relies on temporal confidence aggregation across image detections or on applying an RNN as a temporal memory [32]. Given the difficulty of predicting confidence within a CNN [24], these approaches rely on detection consistency. Alternatively, the more advanced video tubelets of T-CNN [17] are optimized for detection confidence. In a similar way, we exploit multi-view information within our model by including a fusion mechanism based on message passing across images.

Recently, new techniques have emerged to estimate the 3D spatial layout of objects as well as their occupancy [27, 11, 2]. These techniques rely on the quality of deep learning object detectors [27, 11] or on the use of additional range data [2]. Similarly, volumetric approaches have been used to construct the layout of objects in rooms, or to reconstruct objects and regress their positioning [33]. These strategies provide alternative representations for scene graph generation since they associate object labels to the 3D structure of the scene, but they lack the relationships required to construct a scene graph. In particular, the Localisation from Detection (LfD) approach [27] leverages 2D object detector information to obtain the 3D position and occupancy of a set of objects represented through quadrics. Although ellipsoids are an approximation of the region occupied by an object, they provide the necessary support for spatial reasoning in a closed form which can be efficiently computed. However, in the current methods [27, 12], there is no explicit constraint enforcing the quadric to be a valid ellipsoid. As a consequence, low baselines and inaccurate bounding boxes might result in degenerate quadrics. In the next section, we present an extension named LfD with Constraints (LfDC), based on linear constraints on the quadric centers. It has the advantage of being a fast closed-form solution while being more robust than LfD [27].

3 Robust object representation with 3D quadrics

Even if they are an approximate representation of objects, ellipsoids (or, formally, quadrics) can be embedded in the graph effectively with multiple views (as described in Sec. 4). In this section, we briefly review the prior work for generating quadrics from multi-view images, then resolve its limitations, making the approach more suitable for scene graph construction.

Let us consider a set of F image frames representing a 3D scene under different viewpoints. A set of rigid objects is placed in arbitrary positions, and we assume that each object is detected in at least three images. Each detection of object j in frame f is given by a symmetric 3×3 matrix C_fj which represents an ellipse inscribed in the bounding box, as shown in Fig. 1 (left & top middle). The aim is to estimate the symmetric 4×4 matrix Q_j representing the 3D ellipsoid whose projections onto the image planes best fit the measured 2D ellipses C_fj. The relationship between Q_j and its reprojected conics is defined by the perspective camera matrices P_f, which are assumed to be known (i.e. the cameras are calibrated). The LfD method described in [27] solves the problem in the dual space, where it can be linearized as:

β_fj c_fj = G_fj v_j    (1)

where β_fj is a scaling factor, the 6-vector c_fj is the vectorised dual conic of object j in image f, the 10-vector v_j is the vectorised dual quadric, and the 6×10 matrix G_fj contains the elements of the camera projection matrix P_f after linearization (the supplemental material provides more mathematical details about this step). Then, stacking Eq. (1) column-wise for f = 1, …, F, we obtain:

M_j w_j = 0_6F    (2)

where 0_6F denotes a column vector of zeros of length 6F, and the matrix M_j and the vector w_j are defined as follows:

M_j = [ G_j | −C_j ],   w_j = [ v_j ; β_j ]    (3)

where G_j stacks the matrices G_fj for f = 1, …, F, C_j is the block-diagonal matrix formed by the vectors c_fj, and β_j = [β_1j, …, β_Fj]^T contains the scale factors of the object j for the different frames.
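To make the per-frame measurements concrete, the sketch below (an illustration in NumPy, not the authors' code) builds the dual conic of the ellipse inscribed in a detection box and extracts its 6-vector:

```python
import numpy as np

def dual_conic_from_bbox(x1, y1, x2, y2):
    """Dual (adjoint) conic of the ellipse inscribed in a bounding box."""
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # ellipse center
    a, b = (x2 - x1) / 2.0, (y2 - y1) / 2.0     # semi-axes
    T = np.array([[1.0, 0.0, cx], [0.0, 1.0, cy], [0.0, 0.0, 1.0]])
    # dual conic of an axis-aligned ellipse, translated to (cx, cy)
    return T @ np.diag([a**2, b**2, -1.0]) @ T.T

def vectorise_conic(C):
    """6-vector of the symmetric 3x3 conic (upper triangle, row-major)."""
    i, j = np.triu_indices(3)
    return C[i, j]
```

Note that the ellipse center can be read back from the dual conic as (C[0,2]/C[2,2], C[1,2]/C[2,2]); those two entries are exactly the 3rd and 5th components of the vectorised conic, the property exploited by the center constraints of LfDC below.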

Since the object detector can be inaccurate, it makes sense to find the quadric by solving the following minimization problem:

min_{w_j} ‖M_j w_j‖²  subject to  ‖w_j‖ = 1    (4)

where the equality constraint avoids the trivial zero solution. The solution of this problem consists in computing the Singular Value Decomposition (SVD) of the matrix M_j and taking the right singular vector associated with the minimum singular value.
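This SVD step is a generic homogeneous least-squares solve; a minimal sketch (not tied to the authors' implementation):

```python
import numpy as np

def solve_homogeneous(M):
    """Solve min ||M w||^2 s.t. ||w|| = 1: the right singular vector
    associated with the smallest singular value of M."""
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]
```

The returned vector is defined up to sign, which is harmless here since the quadric matrix is itself defined up to scale.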

Figure 2: (a) shows example ellipse reprojections of LfD [27] in red and LfDC in blue, with a cross and a triangle respectively marking the centers. (b, c) show the point cloud, the 3D quadrics (same color labelling) and the poses of two cameras; the ground truth is shown in green. It can be seen that the proposed solution overcomes the limitation of [27].

However, the algebraic minimization in Eq. (4) does not enforce the obtained quadric to be a valid ellipsoid. As can be seen in Fig. 2, fitted ellipsoids can be inaccurate despite giving a reasonable 2D projection. The proposed LfDC solution generates ellipsoids as in Fig. 2c, and in turn improves overall performance.
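A quadric can be tested for being a real ellipsoid from its symmetric 4×4 matrix; the following sketch is our own check (the paper does not specify its exact test), based on the 3×3 spatial block being definite and the surface being non-empty:

```python
import numpy as np

def is_valid_ellipsoid(Q, tol=1e-9):
    """True iff the symmetric 4x4 (primal) quadric matrix Q is a real
    ellipsoid: definite 3x3 block and non-empty real surface."""
    Q = (Q + Q.T) / 2.0                  # symmetrise for safety
    A, b, c = Q[:3, :3], Q[:3, 3], Q[3, 3]
    ev = np.linalg.eigvalsh(A)
    if ev[-1] < 0:                       # normalise the overall sign
        A, b, c, ev = -A, -b, -c, -ev[::-1]
    if ev[0] <= tol:                     # 3x3 block must be positive definite
        return False
    # complete the square: real ellipsoid iff c - b^T A^{-1} b < 0
    return bool(c - b @ np.linalg.solve(A, b) < -tol)
```

Quadrics failing this test (hyperboloids, cones, imaginary quadrics) are the degenerate outputs that LfDC aims to avoid.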

A common indication that the estimated quadric is degenerate is the location of the estimated ellipse center: if the center lies outside the boundaries of the estimated ellipse contour, this clearly points to a degenerate configuration. Given this observation, rather than directly constraining the solution to lie in a valid ellipsoid parameter space, we include a set of equations imposing that the reprojection of the center of the 3D ellipsoid be close to the centers of the ellipses. This can be done by adding an additional set of rows to the matrix used in Eq. (2).

This constraint can be added by observing that the center parameters of the vectorised dual quadric v_j appear separately in linear terms (we refer to the supplemental material for further mathematical details) at positions 4, 7 and 9 of the vector. The same holds for the vectorised conic c_fj at positions 3 and 5 (we omit the indexes to simplify the notation):

e = [c_3, c_5]^T,   E = [v_4, v_7, v_9]^T    (5)

where e and E contain the centers of the ellipse and the ellipsoid respectively. We can use this fact to directly include the equations which enforce the ellipsoid center to be projected onto the centers of the ellipses. Given a frame f and an object j, the constrained equations are:

T_fj v_j = 0    (6)

with the 2×10 matrix T_fj non-zero only in the columns corresponding to the (homogeneous) center parameters of v_j:

T_fj = [ 0 0 0 (p_11 − x p_31) 0 0 (p_12 − x p_32) 0 (p_13 − x p_33) (p_14 − x p_34)
         0 0 0 (p_21 − y p_31) 0 0 (p_22 − y p_32) 0 (p_23 − y p_33) (p_24 − y p_34) ]

where each value p_kl corresponds to an element of the camera matrix P_f and (x, y) is the measured ellipse center e_fj. These equations are included in the system of Eq. (2) by replacing its matrix with an augmented matrix M̂_j such that:

M̂_j w_j = 0    (7)

where the matrix M̂_j is defined as follows:

M̂_j = [ M_j ; T_j 0 ]    (8)

where T_j stacks the matrices T_fj for f = 1, …, F and the zero block accounts for the scale factors β_j.

The solution of this new system can then be obtained with the SVD of the augmented matrix, as done for the minimization problem described in Eq. (4). This method, named LfDC, has the effect of both reducing the number of degenerate quadrics (i.e. localizing more objects in the scene) and improving the quality of the object localization and occupancy estimates, as will be shown in the experimental section. For these reasons, LfDC also improves performance when estimating scene graphs using multi-view relations.
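The extra center rows can be formed as below. This is our own sketch, assuming the homogeneous ellipsoid center occupies the 4th, 7th, 9th and 10th entries of the vectorised dual quadric (the paper's exact layout is in its supplemental material); the two rows are the point-projection equations (P_1 − x P_3)·X = 0 and (P_2 − y P_3)·X = 0, which are linear in those entries:

```python
import numpy as np

def center_constraint_rows(P, ellipse_center):
    """Two extra rows over the 10-dim vectorised dual quadric, enforcing
    that the projection of the ellipsoid center hits the ellipse center.
    Assumes the homogeneous center occupies (0-based) slots 3, 6, 8, 9."""
    x, y = ellipse_center
    r1 = P[0] - x * P[2]      # (P1 - x P3) . Xc = 0
    r2 = P[1] - y * P[2]      # (P2 - y P3) . Xc = 0
    T = np.zeros((2, 10))
    T[:, [3, 6, 8, 9]] = np.vstack([r1, r2])
    return T
```

Stacking these rows for every frame, padded with zeros over the scale variables, under the matrix of Eq. (2) yields the augmented system whose SVD gives the LfDC estimate.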

4 Scene graphs from multiple images

The VGfM approach models the scene graph within a tri-partite graph which takes as input both visual and geometric features (from Sec. 3) and outputs the prediction of the object labels and predicates, as illustrated in Fig. 3. The graph merges geometric and visual information and jointly refines the state of all the objects and their relationships. This process is performed iteratively over each of the images of the sequence.

Therefore, let G = (V, E) denote the tri-partite graph of the current image. We define V = V_g ∪ V_o ∪ V_r as the set of nodes, related to geometry, objects and relationships respectively, while E refers to the pairwise edges which connect each object with its relations. The set of object nodes V_o models the semantic states of the objects. Similarly, V_r models the semantic states of the relationships. Finally, V_g is the set of geometric nodes, constructed over the previously computed quadrics and expressing the geometric state of each relation (see Sec. 4.1 for their construction).

The states of the graph are then iteratively refined by message passing among the nodes, exchanging information about their respective hidden states (see Sec. 4.2). The hidden states of the relation and object nodes are modelled with Gated Recurrent Units (GRU) [3]. This allows each node to refine its state by exploiting incoming messages from its neighbors. Differently from the object and relation nodes, each geometric node is considered as an observation and its state is fixed; this allows the reliability of the geometric information to be enforced. If the geometric nodes are removed from the graph, we obtain the framework of [34].

After the iterations of message passing, the hidden states of the object and relation nodes are used to compute the classification decision, i.e. the object and relation labels, as provided by a final fully connected layer. This layer takes as input the hidden state of a relation node and produces a distribution over the relation labels through a softmax; the same step is performed to compute the object labels. We treat predicate labels in a multi-label scenario, where a predicate is detected for a given relation if its softmax score is higher than that of the label indicating its absence. We further outline the training specifics of the model in Sec. 4.4. Once the scene graph is created, the next image in the sequence is processed.
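The multi-label decision rule can be sketched as follows (an illustration; placing the 'absent' label at index 0 is our assumption):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # shift for numerical stability
    return e / e.sum()

def detect_predicates(logits, absence_idx=0):
    """A predicate is detected when its softmax score beats the score of
    the label indicating its absence."""
    p = softmax(np.asarray(logits, dtype=float))
    return [k for k in range(len(p)) if k != absence_idx and p[k] > p[absence_idx]]
```

Since softmax is monotone, this is equivalent to comparing raw logits against the absence logit, but the probabilistic form matches the description above.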

As our goal is to share information between images, we can encourage sharing beyond object and relation nodes and pass messages between images within the sequence. This can simply be performed by connecting the tri-partite graph nodes across images; this process is explained in Sec. 4.3.

Figure 3: Our scene graph generation algorithm takes as input a sequence of images with a set of object proposals (as ellipsoids). In addition, visual features are extracted for each of the bounding boxes, these features are then fed to initialise the GRU object and relation nodes. A tri-partite graph connects the object, relation and geometric nodes and iterative message passing updates the hidden states of the object and relation nodes. At the conclusion of the message passing the scene graph is predicted by the network and then the next image of the sequence is processed.

4.1 Construction of the geometric nodes

As described in Sec. 3, we obtain a set of ellipsoids from the object detections. We then extract the 3D coordinates of the center of each quadric and the six points at the extremities of its main axes. Finally, the geometric encoder takes as input the coordinates extracted from the two ellipsoids of a relation in order to produce the state of the corresponding geometric node. This encoder consists of a multi-layer perceptron with two fully connected layers, whose sizes were identified empirically to give the network enough capacity to link the quadric positions and occupancies with the complexity of the final labels. We additionally experimented with a bag-of-words based encoding, proposed in [25], and found similar performance.
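The per-ellipsoid descriptor fed to the encoder can be sketched as below (our own illustration; `axes_dirs` and `axes_len` stand for the principal directions and semi-axis lengths recovered from the quadric, e.g. by eigendecomposition):

```python
import numpy as np

def ellipsoid_keypoints(center, axes_dirs, axes_len):
    """Center plus the six extremity points of the three principal axes:
    a (7, 3) geometric descriptor for one ellipsoid."""
    center = np.asarray(center, dtype=float)
    pts = [center]
    for d, l in zip(axes_dirs, axes_len):
        d = np.asarray(d, dtype=float)
        pts.append(center + l * d)   # positive extremity
        pts.append(center - l * d)   # negative extremity
    return np.stack(pts)
```

Concatenating the two descriptors of a relation gives a fixed-size input for the two-layer MLP encoder.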

4.2 Message passing between nodes

The refinement of the hidden states is carried out via message passing. At each inference iteration, messages are sent along the graph edges. Each relation node is linked by undirected edges to its two object nodes and to the corresponding geometric node. We use the message pooling scheme proposed in [34].

At each iteration, an object node with hidden state h_i receives the following message:

m_i = Σ_{r ∈ R(i)} σ(v_1^T [h_i, h_r]) h_r + Σ_{r ∈ R'(i)} σ(v_2^T [h_i, h_r]) h_r    (9)

where [·, ·] denotes the concatenation operator, σ is the sigmoid function, R(i) (resp. R'(i)) is the set of all the relations where object i is present at the left (resp. right) of the predicate, and the weights v_1 and v_2 are learned. The relation nodes are also updated: each relation node with hidden state h_r, linked to objects i and j and to the geometric state s_g, receives the following message:

m_r = σ(w_1^T [h_i, h_r]) h_i + σ(w_2^T [h_j, h_r]) h_j + σ(w_3^T [s_g, h_r]) s_g    (10)

where w_1, w_2 and w_3 are learned parameters.
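The pooled relation message of Eq. (10) can be sketched as below (illustrative NumPy; in the real model the states are GRU hidden vectors and the gating weights are learned):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relation_message(h_rel, h_subj, h_obj, s_geo, w1, w2, w3):
    """Attention-weighted sum of the two object states and the geometric
    state, each gated by its compatibility with the relation state."""
    m = sigmoid(w1 @ np.concatenate([h_subj, h_rel])) * h_subj
    m = m + sigmoid(w2 @ np.concatenate([h_obj, h_rel])) * h_obj
    m = m + sigmoid(w3 @ np.concatenate([s_geo, h_rel])) * s_geo
    return m
```

Each gate is a scalar in (0, 1), so the incoming states are softly weighted rather than hard-selected.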

As with loopy belief propagation, this can be seen as an approximation of an exact global optimization, enabling the refinement of each hidden state based on its context. In contrast to a classic message passing scheme, the final inference decision on the label values is not performed within the tri-partite graph but by a last fully connected layer. On average in our experiments, the inference time is 0.25 seconds per image on a Tesla K80.

4.3 Sharing information among multiple images

We now extend the proposed single-image model to fuse information among the images of the sequence. In this case, the visual features can be shared, and the network benefits from taking into account potential appearance changes as well as from improved consistency among the views. To this end, we rely on the message passing mechanism and include cross-image links which connect the tri-partite graphs of the different images. As shown in Fig. 4, each relation node receives messages from all the nodes modelling the same relation in the other images. The same principle is applied to the object nodes.

Figure 4: This figure displays how the graphs operating on a single frame (same as shown on Fig. 3) are linked by the fusion mechanism.

We extend the notation so that it refers to nodes and messages image by image. Let us denote by m_{r,f} (resp. m_{o,f}) the message that the relation node r (resp. the object node o) appearing in image f receives from the other images. Then, we compute the messages with the following equations:

m_{r,f} = Σ_{f' ≠ f} σ(u_1^T [h_{r,f}, h_{r,f'}]) h_{r,f'}    (11)
m_{o,f} = Σ_{f' ≠ f} σ(u_2^T [h_{o,f}, h_{o,f'}]) h_{o,f'}    (12)

where u_1 and u_2 are learned weights. This formulation can be seen as a weighted average of the visual features, where the weights are learned as an attention mechanism. This cross-image message is then added to the local one described in Sec. 4.2 to form the final message.
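The cross-image messages of Eqs. (11)-(12) amount to a learned attention sum over the same node's states in the other frames; a minimal sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_image_message(h_self, h_others, u):
    """Sum of the same node's hidden states in the other frames, each
    gated by a learned compatibility with the current state."""
    m = np.zeros_like(h_self)
    for h in h_others:
        m = m + sigmoid(u @ np.concatenate([h_self, h])) * h
    return m
```

The same function serves both relation and object nodes, with a separate weight vector u for each node type.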

4.4 Learning

Our model is trained with a cross-entropy loss. We also use hyper-parameters similar to Xu et al. [34] for the learning rate and the number of message passing iterations. Batches of 8 images were used for the single-image system. For the multi-image approach described in Sec. 4.3, each batch corresponds to one image sequence, reduced to 10 uniformly selected images to save memory. In contrast to [34], we retain all region proposals, as the ellipsoid estimation already prunes the per-frame object proposals. We extract visual features from VGG-16 [30] pretrained on MS-COCO and use them to initialize the hidden states of the RNNs. The RNNs are trained while keeping the weights of the visual feature extractor fixed. Two sets of shared weights are optimized during training: one for the objects and one for the relations. The GRU states have the same dimension for both input and output.

5 Dataset description and experimental evaluation

Prior datasets for the scene graph generation problem are based on single images with relationship annotations; in general they do not have the multi-view image sequences necessary to exploit the proposed model. We thus create GraphScanNet by manually extending and upgrading the ScanNet dataset [6] with relationships between the annotated objects. The ScanNet dataset provides 2.5 million views in more than 1500 scans, annotated with semantic and instance-level segmentation. 3D camera poses are also provided, estimated by an online 3D reconstruction pipeline (BundleFusion [7]) run on the RGB-D images. Since VGfM does not require depth, we also tried a visual SLAM algorithm [23], but we found that the results were not accurate enough.

Although around one thousand object categories are present, we refine the list of objects to resolve annotator errors and account for the frequency of object occurrences in the sequences, resulting in a refined list of object categories. Our annotations are a set of view-independent compositional relationships between pairs of 3D objects. Our proposed predicates are inspired by Visual Genome [18], but we opt for a concise set that is loosely aimed to encompass many relationships that can occur within the sequences. Indeed, the ScanNet class labels show that when annotators are given expressive freedom, cultural or personal bias can make annotations implausible for learning systems, with many objects labelled by synonyms or localized vernaculars. Our predicates are as follows:

Part-of: A portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; e.g. a shelf is part-of a bookcase.

Support: To bear or hold up (a load, mass, structure, part, etc.); serve as a foundation for. Hyponyms could be considered, such as support from behind, below or hidden; in our case 'below' is the most prevalent.

Same-plane: Belonging to the same or a near-similar vertical plane with regard to the ground normal. As an example, a table might be same-plane as a chair.

Same-set: Belonging to a group with similar properties or function. The objects could define a region, e.g. in Fig. 6 the table, chair and plate belong to the same set whereas the shoes on the floor are separated. This is similar to the concept of scenario recently studied in [9], where it was shown to be a powerful cue for scene understanding.

As the relationships are derived from images, the number of instances differs across predicates: Same-set and Same-plane each appear around 30,000 times in the images, whereas Support appears around 3,000 times and Part-of only around 600 times. This has an impact on performance, as explained in the evaluation.

The 3D object segmentation enables us to construct a 3D ground truth (GT) by fitting ellipsoids to each object mesh. 2D object bounding boxes are also computed by projecting each object point cloud into the image and fitting a rectangle that encloses the set of 2D points. We then automatically extracted 2000 sequences coming from 700 different rooms, with at least 4 objects in each of them. These sequences are challenging for 3D reconstruction, since the rooms were recorded by rotating the camera with limited translational motion, so the angle spanned by the camera trajectory is small on average.
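The 2D ground-truth boxes are obtained as the rectangle enclosing the projected object points; a minimal sketch (assuming all points lie in front of the camera):

```python
import numpy as np

def bbox_from_pointcloud(P, points3d):
    """Project 3D points with the 3x4 camera matrix P and return the
    enclosing rectangle (x1, y1, x2, y2)."""
    points3d = np.asarray(points3d, dtype=float)
    X = np.hstack([points3d, np.ones((len(points3d), 1))])  # homogeneous
    uv = (P @ X.T).T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective division
    x1, y1 = uv.min(axis=0)
    x2, y2 = uv.max(axis=0)
    return x1, y1, x2, y2
```

In practice the box would additionally be clipped to the image bounds and points behind the camera discarded.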

5.1 Evaluation of the quadric estimation

Figure 5: Evaluation in terms of accuracy, translation and axes length errors for the LfD [27] method and our proposed approach LfDC.

We first evaluate how accurate the quadrics obtained from the different methods are. We run the original LfD method [27] on the extracted sequences and compare with the results of our LfDC approach. On these sequences, we measured that only 48% of the quadrics estimated by LfD are valid ellipsoids. This number rises to 60% when we use our LfDC method. This validates our initial hypothesis that the additional equations are useful to avoid non-valid quadrics. In the following, we evaluate the accuracy of the ellipsoids by considering only the ones found valid by all methods.

One of the main limitations of LfD is its sensitivity when the image sequences have a short baseline (i.e. a short camera path and/or very few image frames). To study this effect, the error and accuracy values are plotted as a function of the maximum angle spanned by the camera during the sequence in which the object is recorded. In Fig. 5, we compare the methods according to three metrics: the volume overlap (intersection over union between the proposed and the GT quadrics), the translation error and the axis-length error.
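The volume overlap between two ellipsoids has no simple closed form, but it can be approximated by Monte-Carlo sampling; a sketch (our own, for ellipsoids given as a center c and a shape matrix A with (x − c)ᵀ A (x − c) ≤ 1):

```python
import numpy as np

def ellipsoid_iou(c1, A1, c2, A2, n=100000, seed=0):
    """Monte-Carlo estimate of the volume IoU of two ellipsoids."""
    rng = np.random.default_rng(seed)
    # conservative axis-aligned box covering both ellipsoids
    r1 = 1.0 / np.sqrt(np.linalg.eigvalsh(A1).min())   # largest semi-axis
    r2 = 1.0 / np.sqrt(np.linalg.eigvalsh(A2).min())
    lo = np.minimum(c1 - r1, c2 - r2)
    hi = np.maximum(c1 + r1, c2 + r2)
    p = rng.uniform(lo, hi, size=(n, 3))
    in1 = np.einsum('ni,ij,nj->n', p - c1, A1, p - c1) <= 1.0
    in2 = np.einsum('ni,ij,nj->n', p - c2, A2, p - c2) <= 1.0
    union = (in1 | in2).sum()
    return (in1 & in2).sum() / union if union else 0.0
```

The estimate sharpens with the number of samples n; for evaluation purposes a fixed seed makes the metric deterministic.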

We can see that LfDC outperforms the previous LfD method on all three metrics: volume overlap, translation and axis-length error. The constraints on the centers are beneficial on all three aspects, since the solution is still computed globally for all the quadric parameters. Secondly, we observe that, although relatively small on average, the improvements are important in the case of a short baseline.

5.2 Evaluation on the scene graph classification task

We evaluate our systems on the tasks of object and relation classification, i.e. given the bounding boxes of a pair of objects, the system must predict both the object classes and the predicate of their relation. These two tasks encompass the problem of scene graph generation when performed over all object pairs in an image, with predicates annotated in a multi-label fashion, i.e. presence or absence. We selected 400 rooms for training, 150 as a validation set and 150 for testing.

We first study the influence of the quadric estimation algorithms. We run our VGfM and use as input the ellipsoids provided by LfD, LfDC and GT quadrics. Results are reported in Table 1.

               GT     LfD [27]  LfDC
Object label   76%    75%       75%
Same-plane     75%    72%       74%
Same-set       62%    59%       61%
Support        69%    64%       67%
Part-of        69%    65%       69%

Table 1: Comparison of the use of different quadrics to classify the scene graphs. Bold numbers indicate the best result between LfD and LfDC.

We can see that the differences between the methods are relatively small, but still coherent with the accuracies reported in Fig. 5: LfD obtains the worst results and the best performing method is LfDC. Overall, the use of GT quadrics brings an additional improvement, but its accuracy remains relatively close to the other methods.

We now study the influence of the different components of the system in an ablation study in Table 2. The baseline [34] uses only visual appearance. The method VGfM-2D corresponds to a variation of our method without 3D information, where we compute the geometric states from the coordinates of the 2D bounding boxes instead of using the ellipsoids. To evaluate the potential of using geometry alone, we also report results using only the geometric encoder described in Sec. 4.1: a softmax layer is appended to this encoder in order to use it as a classifier, and the resulting network is trained from scratch for the tasks of predicting predicates and object labels. VGfM + Fusion corresponds to the addition of the fusion mechanism over multiple images described in Sec. 4.3.

               [34]   VGfM-2D  Geometric encoder  VGfM   VGfM + Fusion
Object label   74%    74%      58%                75%    76%
Same-plane     74%    74%      70%                74%    78%
Same-set       58%    59%      55%                61%    62%
Support        62%    64%      85%                67%    64%
Part-of        68%    69%      80%                69%    59%

Table 2: Accuracy for the prediction of each predicate and of the object labels. The numbers in bold are the best performing methods.

The results of the baseline method do not exceed 74% accuracy, which suggests that this task is difficult, especially for the high-level Same-set predicate. As shown in the second column, augmenting the appearance with the 2D coordinates allows VGfM-2D to obtain a small improvement, as is commonly observed in computer vision for this kind of feature augmentation.

The results of the geometric encoder show large differences between the tasks. The low performance on object classification is not surprising, as a 3D bounding box alone carries little information about the object label. Regarding the results for predicate prediction, we tested the same architecture with the GT ellipsoids as input and found only a small difference depending on the predicates. It is thus possible that some errors are due to partly segmented objects in the annotations, resulting in inaccurate bounding boxes. We also observe that the results are higher than any other method for the predicates Support and Part-of. One explanation is that these two predicates are less frequent in the dataset (respectively 3000 and 600 instances, compared to around 30000 for the other classes). In these cases with less training data, having a simpler, shallower architecture with a reduced number of parameters helps. Unfortunately, standard data augmentation techniques such as cropping or shifting cannot be directly applied to augment the number of samples, as they would introduce incoherences with the 3D geometry.

The proposed single-image VGfM method has better or similar accuracy compared to the methods which do not use 3D information. This suggests that the information contained in the ellipsoids is beneficial for predicting relationships and that our model is able to use it. We can draw similar conclusions for the fusion mechanism: it shows improvements for the predicates which are common in the dataset. In these cases, the model successfully manages to leverage the different sources of information, reaching an accuracy improvement of up to 4 points with respect to the initial baseline. However, it fails to improve for the predicates which have only a few training examples. This effect is more pronounced for the fusion mechanism since, in this case, one training sample corresponds to a sequence of 10 images, so the number of instances in the training data is roughly divided by 10.

Fig. 6 shows some qualitative results of two image sequences coming from the same room.

Figure 6: The top row shows images extracted from two sequences, together with the bounding box detections in yellow and, in white, the conics used to estimate the ellipsoids. The second row shows the resulting ellipsoids of these two sequences as well as the global object layout of the room. The third row shows the corresponding scene graphs obtained with our proposed approach. We do not display all the relations, to ease the visualization. Predicates in bold brown font are missed detections and those in bold green font with a dashed line are false alarms (best viewed in color).

On the left, the model successfully identified the two sets of objects, and it detects that the two cabinets are on the same plane. Since perspective effects are strong, this reasoning would be difficult with 2D features only. The right part shows a complex scene with many overlapping objects. Although some errors are still present, leveraging multiple views provides, as a 3D graph, a rich description of the scene which could enable further high-level reasoning.

6 Conclusions

We addressed the problem of generating a 3D scene graph from multiple views of a scene. The VGfM approach leverages both geometry and visual appearance; it learns to refine the features globally and to merge the different sources of information through a message passing framework. We evaluated it on a new dataset which focuses on relationships in 3D and showed that our method outperforms a 2D baseline method.

This paper addresses for the first time the problem of creating a scene graph in both 2D and 3D from multiple views, and many directions remain to be explored to enhance performance. First, other sources of knowledge could be used. In particular, [35] shows that the manifold of scene graphs is rather low-dimensional, as many of them contain recurrent patterns. This suggests that a strong prior could be built to encode this topology. Secondly, the knowledge about visual appearance and semantic relationships could be used to refine the geometric nodes by improving the quality of the ellipsoids. Last but not least, the case of dynamic scenes could be investigated: as the predictions of our model are made per image, it can readily be applied in this setting.

References

  • [1] Bao, S.Y., Bagra, M., Chao, Y.W., Savarese, S.: Semantic structure from motion with points, regions, and objects. In: Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2703–2710. IEEE (2012)

  • [2] Chen, X., Ma, H., Wan, J., Li, B., Xia, T.: Multi-view 3d object detection network for autonomous driving. In: Conference on Computer Vision and Pattern Recognition (CVPR). pp. 211–219. IEEE (2017)
  • [3] Cho, K., Van Merriënboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 (2014)
  • [4] Choy, C.B., Xu, D., Gwak, J., Chen, K., Savarese, S.: 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In: European Conference on Computer Vision (ECCV). Springer (2016)
  • [5] Crocco, M., Rubino, C., Del Bue, A.: Structure from motion with objects. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 782–788. IEEE (2016)
  • [6] Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: Scannet: Richly-annotated 3d reconstructions of indoor scenes. In: Computer Vision and Pattern Recognition. (CVPR). pp. 2075–2084. IEEE (2017)
  • [7] Dai, A., Nießner, M., Zollöfer, M., Izadi, S., Theobalt, C.: Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface re-integration. Transactions on Graphics 2017 (TOG) (2017)
  • [8] Dai, B., Zhang, Y., Lin, D.: Detecting visual relationships with deep relational networks. In: Conference on Computer Vision and Pattern Recognition. (CVPR). pp. 3298–3308 (2017)
  • [9] Daniels, Z.A., Metaxas, D.N.: Scenarios: A new representation for complex scene understanding. arXiv preprint arXiv:1802.06117 (2018)
  • [10] Desai, C., Ramanan, D., Fowlkes, C.: Discriminative models for static human-object interactions. In: Computer vision and pattern recognition workshops (CVPR). pp. 9–16. IEEE (2010)
  • [11] Dong, J., Fei, X., Soatto, S.: Visual inertial semantic scene representation for 3d object detection. In: Conference on Computer Vision and Pattern Recognition. (CVPR). pp. 782–790. IEEE (2017)
  • [12] Gay, P., Rubino, C., Bansal, V., Del Bue, A.: Probabilistic structure from motion with objects (psfmo). In: International Conference on Computer Vision. (ICCV). pp. 3075–3084. IEEE (2017)
  • [13] Hane, C., Zach, C., Cohen, A., Pollefeys, M.: Dense semantic 3d reconstruction. Pattern Analysis and Machine Intelligence. (PAMI) 39(9), 1730–1743 (2017)
  • [14] Hartley, R., Zisserman, A.: Multiple view geometry in computer vision. Cambridge university press (2003)
  • [15] Henderson, H.V., Searle, S.: Vec and vech operators for matrices, with some uses in jacobians and multivariate statistics. CJS (1979)
  • [16] Johnson, J., Krishna, R., Stark, M., Li, L.J., Shamma, D., Bernstein, M., Fei-Fei, L.: Image retrieval using scene graphs. In: Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3668–3678. IEEE (2015)

  • [17] Kang, K., Ouyang, W., Li, H., Wang, X.: Object detection from video tubelets with convolutional neural networks. In: Conference on Computer Vision and Pattern Recognition. (CVPR). pp. 1744–1756. IEEE (2016)
  • [18] Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., Bernstein, M.S., Fei-Fei, L.: Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision. (IJCV) 123(1), 32–73 (2017)
  • [19] Li, Y., Ouyang, W., Zhou, B., Wang, K., Wang, X.: Scene graph generation from objects, phrases and region captions. In: Conference on Computer Vision and Pattern Recognition. (CVPR). pp. 1261–1270. IEEE (2017)
  • [20] Liang, X., Shen, X., Feng, J., Lin, L., Yan, S.: Semantic object parsing with graph lstm. In: European Conference on Computer Vision. (ECCV). pp. 125–143. Springer (2016)
  • [21] Liao, W., Shuai, L., Rosenhahn, B., Yang, M.Y.: Natural language guided visual relationship detection. arXiv preprint arXiv:1711.06032 (2017)
  • [22] Lu, C., Krishna, R., Bernstein, M., Fei-Fei, L.: Visual relationship detection with language priors. In: European conference in computer vision. (ECCV). pp. 852–869. Springer (2016)
  • [23] Mur-Artal, R., Montiel, J.M.M., Tardos, J.D.: Orb-slam: a versatile and accurate monocular slam system. Transactions on Robotics 31(5), 1147–1163 (2015)
  • [24] Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In: Computer Vision and Pattern Recognition (CVPR). pp. 1544–1556. IEEE (2015)
  • [25] Peyre, J., Laptev, I., Schmid, C., Sivic, J.: Weakly-supervised learning of visual relations. In: International Conference on Computer Vision (ICCV). pp. 3194–3204. IEEE (2017)

  • [26] Reddy, N.D., Singhal, P., Chari, V., Krishna, K.M.: Dynamic body vslam with semantic constraints. In: International Conference on Intelligent Robots and Systems (IROS). pp. 1897–1904. IEEE (2015)
  • [27] Rubino, C., Crocco, M., Del Bue, A.: 3d object localisation from multi-view image detections. Pattern Analysis and Machine Intelligence. (PAMI) 40(6), 1281–1294 (2018)
  • [28] Sabour, S., Frosst, N., Hinton, G.E.: Dynamic routing between capsules. In: Neural Information Processing Systems. NIPS, pp. 3856–3866 (2017)
  • [29] Sengupta, S., Greveson, E., Shahrokni, A., Torr, P.H.S.: Urban 3d semantic modelling using stereo vision. In: International Conference on Robotics and Automation. pp. 580–585. IEEE (2013)
  • [30] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • [31] Sung, M., Kim, V.G., Angst, R., Guibas, L.: Data-driven structural priors for shape completion. ACM Transactions on Graphics (TOG) 34(6),  175 (2015)
  • [32] Tripathi, S., Lipton, Z.C., Belongie, S.J., Nguyen, T.Q.: Context matters: Refining object detection in video with recurrent neural networks. In: British Machine vision conference. (BMVC). pp. 1723–1731. BMVA (2016)
  • [33] Tulsiani, S., Gupta, S., Fouhey, D., Efros, A.A., Malik, J.: Factoring shape, pose, and layout from the 2d image of a 3d scene. In: Computer Vision and Pattern Recognition (CVPR). IEEE (2018)
  • [34] Xu, D., Zhu, Y., Choy, C.B., Fei-Fei, L.: Scene graph generation by iterative message passing. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3097–3106. IEEE (2017)
  • [35] Zellers, R., Yatskar, M., Thomson, S., Choi, Y.: Neural motifs: Scene graph parsing with global context. In: Conference on Computer Vision and Pattern Recognition. CVPR. pp. 3294–3304. IEEE (2018)
  • [36] Zhang, H., Kyaw, Z., Chang, S.F., Chua, T.S.: Visual translation embedding network for visual relation detection. In: Conference on Computer Vision and Pattern Recognition. (CVPR). pp. 4–12. IEEE (2017)
  • [37] Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.H.: Conditional random fields as recurrent neural networks. In: International Conference on Computer Vision. (ICCV). pp. 1529–1537. Springer (2015)

Appendix 0.A LfD approach: from primal to dual space

As stated in the paper, we are interested in finding the 3D ellipsoids whose projections onto the image planes best fit the 2D ellipses. Each ellipse C_ij corresponds to the detection of the object j in the image i, with i = 1 ... F and j = 1 ... N. The relationship between the ellipsoids and their reprojected conics is defined by the perspective camera matrices P_i, which are assumed to be known. The projection of conics from 3D to 2D is a complex operation in the primal space; however, this operation is highly simplified in the dual space. We now describe the steps showing that the problem can be linearised.

The dual quadric is defined by the matrix Q*_j = adj(Q_j), where adj is the adjoint operator, and the dual conic is defined by C*_ij = adj(C_ij) [14]. Considering that the dual conic, like the primal one, is defined up to an overall scale factor β_ij, the relation between a dual quadric and its dual conic projections can be written as:

β_ij C*_ij = P_i Q*_j P_i^T    (13)
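As a numerical sanity check of Eq. (13), the sketch below (a minimal numpy example; the ellipsoid, camera and variable names are illustrative, not from the paper) projects the dual quadric of an axis-aligned ellipsoid centred on the optical axis and reads the outline ellipse off the resulting dual conic.

```python
import numpy as np

# Dual quadric of an ellipsoid with semi-axes (2, 1, 1) centred at
# (0, 0, 5): Q* = Z diag(a1^2, a2^2, a3^2, -1) Z^T with Z a translation.
Z = np.eye(4)
Z[:3, 3] = [0.0, 0.0, 5.0]
Q_star = Z @ np.diag([4.0, 1.0, 1.0, -1.0]) @ Z.T

# Canonical perspective camera P = [I | 0].
P = np.hstack([np.eye(3), np.zeros((3, 1))])

# Eq. (13): the dual conic of the outline is P Q* P^T, up to scale.
C_star = P @ Q_star @ P.T
C_star = C_star / -C_star[2, 2]          # fix beta so that c33 = -1

x_c, y_c = -C_star[0, 2], -C_star[1, 2]              # ellipse centre
a2 = C_star[0, 0] - x_c**2                           # squared semi-axes
b2 = C_star[1, 1] - y_c**2
```

The recovered half-widths, sqrt(1/6) ≈ 0.408 and sqrt(1/24) ≈ 0.204, agree with the tangent lines from the camera centre to the ellipsoid, confirming that the projected dual conic is the exact outline.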

In order to recover Q*_j in closed form from the set of dual conics C*_ij, Eq. (13) is re-arranged into a linear system. Let us define c_ij = vech(C*_ij) and v_j = vech(Q*_j) as the vectorisations of the symmetric matrices C*_ij and Q*_j respectively (the vech operator serialises the elements of the lower triangular part of a symmetric matrix, so that for a symmetric n x n matrix it returns a vector of n(n+1)/2 elements; hence c_ij has 6 elements and v_j has 10). Then, let us arrange the products of the elements of P_i in a unique matrix G_i as follows [15]:

G_i = D^+ (P_i ⊗ P_i) E    (14)

where ⊗ is the Kronecker product, D^+ denotes the pseudo-inverse, and D and E are the two duplication matrices such that vec(A) = D vech(A) and vec(B) = E vech(B) respectively, where A and B are two symmetric matrices of size 3 x 3 and 4 x 4 (the vec operator serialises all the elements of a generic matrix).
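The construction of Eq. (14) can be checked numerically. In this sketch (helper names are ours), D and E are built for the column-major lower-triangular vech ordering, and we verify that G vech(Q*) equals vech(P Q* P^T) for a generic symmetric matrix and a generic camera.

```python
import numpy as np

def vech(A):
    """Column-major serialisation of the lower triangle of A."""
    return np.concatenate([A[j:, j] for j in range(A.shape[1])])

def duplication(n):
    """D such that vec(A) = D @ vech(A) for any symmetric n x n A."""
    D = np.zeros((n * n, n * (n + 1) // 2))
    k = 0
    for j in range(n):
        for i in range(j, n):
            D[j * n + i, k] = 1.0    # entry (i, j) in column-major vec
            D[i * n + j, k] = 1.0    # its symmetric twin (j, i)
            k += 1
    return D

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 4))                  # generic camera
S = rng.standard_normal((4, 4))
Q_star = S + S.T                                 # generic symmetric 4x4

# Eq. (14): G = D^+ (P kron P) E, a 6 x 10 matrix.
G = np.linalg.pinv(duplication(3)) @ np.kron(P, P) @ duplication(4)

lhs = G @ vech(Q_star)
rhs = vech(P @ Q_star @ P.T)                     # Eq. (13) with beta = 1
```

The identity relies on vec(P Q P^T) = (P ⊗ P) vec(Q), which holds for the column-major vec used by the duplication matrices above.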

Given G_i, we can rewrite Eq. (13) as [27]:

G_i v_j = β_ij c_ij    (15)

which corresponds to Eq. (1) used in our ACCV 2018 submission.
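Stacking Eq. (15) over all views of an object yields a homogeneous linear system in the unknowns v_j and β_ij, solvable in the least-squares sense with an SVD. The sketch below (synthetic cameras and conics, names of our choosing) recovers a dual quadric up to scale from three noiseless views; each G_i is built column by column by projecting the symmetric basis matrices, which is equivalent to Eq. (14).

```python
import numpy as np

def vech(A):
    return np.concatenate([A[j:, j] for j in range(A.shape[1])])

def G_matrix(P):
    """Columns of G are vech(P B_k P^T) for the symmetric basis B_k,
    so that G @ vech(Q) = vech(P Q P^T) for every symmetric Q."""
    cols = []
    for j in range(4):
        for i in range(j, 4):
            B = np.zeros((4, 4))
            B[i, j] = B[j, i] = 1.0
            cols.append(vech(P @ B @ P.T))
    return np.column_stack(cols)

rng = np.random.default_rng(1)

# Ground-truth dual quadric: an ellipsoid with a translated pose.
Z = np.eye(4)
Z[:3, 3] = [1.0, -2.0, 8.0]
Q_true = Z @ np.diag([4.0, 1.0, 0.25, -1.0]) @ Z.T
v_true = vech(Q_true)

# Observed dual conics, each with an arbitrary unknown scale.
cams = [rng.standard_normal((3, 4)) for _ in range(3)]
conics = [rng.uniform(0.5, 2.0) * vech(P @ Q_true @ P.T) for P in cams]

# Stacked system M w = 0 with w = [v; beta_1 .. beta_F].
F = len(cams)
M = np.zeros((6 * F, 10 + F))
for i, (P, c) in enumerate(zip(cams, conics)):
    M[6 * i:6 * i + 6, :10] = G_matrix(P)
    M[6 * i:6 * i + 6, 10 + i] = -c

w = np.linalg.svd(M)[2][-1]    # right singular vector of the smallest value
v_est = w[:10]

# v_est matches v_true up to a global scale.
scale = (v_est @ v_true) / (v_true @ v_true)
rel_err = np.linalg.norm(v_est - scale * v_true) / np.linalg.norm(scale * v_true)
```

With noisy detections the same SVD gives the least-squares estimate of the quadric, which is how the ellipsoids used by VGfM are obtained in practice.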

Appendix 0.B Derivation of the constraints on the conics/quadrics centers in LfDC

0.B.1 Selecting conic translational components

We first consider the expression of the dual conic C*_ij. Every conic can be obtained from a conic centred in the image centre with normalised axis lengths through the transformation matrix T_ij, as follows:

C*_ij = T_ij Ĉ* T_ij^T    (16)

with:

Ĉ* = diag(1, 1, -1),    T_ij = [ a_ij  0  x_ij ;  0  b_ij  y_ij ;  0  0  1 ]    (17)

where x_ij and y_ij are the coordinates of the ellipse centre and a_ij, b_ij are the two semi-axes of the ellipse. Using Eqs. (16) and (17) we can express the vectorised conic as:

c_ij = vech(C*_ij) = [ a_ij² - x_ij²,  -x_ij y_ij,  -x_ij,  b_ij² - y_ij²,  -y_ij,  -1 ]^T.

We can see that the third and fifth components of the vector, i.e. (-x_ij, -y_ij), contain the two translation parameters, as stated in Eq. (5) of the ACCV 2018 submission.
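A quick numerical check of this selection (our own sketch, assuming the centring convention in which the unit dual circle diag(1, 1, -1) is mapped by T = [a 0 x; 0 b y; 0 0 1]): the third and fifth entries of the vectorised dual conic are exactly -x and -y.

```python
import numpy as np

def vech(A):
    return np.concatenate([A[j:, j] for j in range(A.shape[1])])

# Ellipse with centre (3, -2) and semi-axes (1.5, 0.5).
x_c, y_c, a, b = 3.0, -2.0, 1.5, 0.5
T = np.array([[a,   0.0, x_c],
              [0.0, b,   y_c],
              [0.0, 0.0, 1.0]])
C_hat = np.diag([1.0, 1.0, -1.0])        # unit dual circle at the origin
C_star = T @ C_hat @ T.T                 # dual conic of the ellipse

c = vech(C_star)
# c = [a^2 - x^2, -x y, -x, b^2 - y^2, -y, -1]: entries 3 and 5
# (1-based) carry the centre, up to sign.
```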

0.B.2 Selecting quadric translational components

The same procedure can be applied to the quadrics in dual space, this time defining the matrix Z_j and giving:

Q*_j = Z_j Q̂*_j Z_j^T    (18)

where Q̂*_j is an ellipsoid centred at the origin with its axes aligned to the 3D coordinate axes and Z_j is a homogeneous transformation accounting for an arbitrary rotation and translation. The matrices Q̂*_j and Z_j can be written respectively as:

Q̂*_j = diag(a_1j², a_2j², a_3j², -1),    Z_j = [ R(θ_j)  t_j ;  0^T  1 ]    (19)

where t_j is the 3D translation vector, R(θ_j) is the rotation matrix, function of the Euler angles θ_j, and a_1j, a_2j, a_3j are the three semi-axes of the ellipsoid (their positivity grants that Q*_j represents an ellipsoid and not a generic quadric). Therefore, we can express every ellipsoid in terms of the nine parameters (t_j, θ_j, a_1j, a_2j, a_3j). Now, defining the vector v_j as vech(Q*_j), we can evaluate a functional form of the vector as follows:

v_j = [ Σ_k a_kj² r_1k² - t_1j²,  Σ_k a_kj² r_1k r_2k - t_1j t_2j,  Σ_k a_kj² r_1k r_3k - t_1j t_3j,  -t_1j,  Σ_k a_kj² r_2k² - t_2j²,  Σ_k a_kj² r_2k r_3k - t_2j t_3j,  -t_2j,  Σ_k a_kj² r_3k² - t_3j²,  -t_3j,  -1 ]^T

where the terms r_mk are the entries of the rotation matrix R(θ_j). We can see again that the fourth, seventh and ninth components of the vector contain the translation parameters, i.e. (-t_1j, -t_2j, -t_3j), as defined by Eq. (5) of the ACCV 2018 submission.
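The analogous check for the quadric (again a sketch with values of our choosing): with Q̂* = diag(a1², a2², a3², -1) and Z a rotation plus translation, entries 4, 7 and 9 of vech(Q*) are -t1, -t2, -t3.

```python
import numpy as np

def vech(A):
    return np.concatenate([A[j:, j] for j in range(A.shape[1])])

t = np.array([1.0, -2.0, 4.0])           # ellipsoid centre
axes = np.array([2.0, 1.0, 0.5])         # semi-axes
th = 0.3                                 # rotation about z, for simplicity
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

Z = np.eye(4)
Z[:3, :3], Z[:3, 3] = R, t
Q_star = Z @ np.diag([*axes**2, -1.0]) @ Z.T     # Eqs. (18)-(19)

q = vech(Q_star)
# Entries 4, 7 and 9 (1-based) carry the translation, up to sign.
```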

0.B.3 Selecting matrix components

Similarly, we can extract the components of the matrix G_i related to the centres of both conics and quadrics. Before that, notice that applying the centring and normalisation given in Eq. (18) has an effect on the projection relation in Eq. (13), so that the form of the matrix G_i must be modified accordingly [5]. We then need to select the components related to the translational elements of the conics and quadrics, i.e. rows 3 and 5. In practice, this results in selecting specific elements of the projective camera matrix P_i.

In order to be consistent with the solution of the linear system (which is the objective function of LfD):

M_j w_j = 0    (20)

where,

M_j = [ G_1  -c_1j  0  ...  0 ;  G_2  0  -c_2j  ...  0 ;  ... ;  G_F  0  0  ...  -c_Fj ],    w_j = [ v_j ;  β_1j ;  ... ;  β_Fj ]    (21)

the final matrix is obtained by keeping, for each view, the rows of G_i and the entries of c_ij selected above. Plugging the selected elements into the matrix of Eq. (21) gives the reduced system matrix:

M'_j = [ G'_1  -c'_1j  0  ...  0 ;  ... ;  G'_F  0  0  ...  -c'_Fj ]    (22)

The solution of the linear system:

M'_j w_j = 0    (23)

provides the solution of the LfDC problem.