Accurate and Robust Object-oriented SLAM with 3D Quadric Landmark Construction in Outdoor Environment

10/18/2021 ∙ by Rui Tian, et al. ∙ Northeastern University

Object-oriented SLAM is a popular technology in autonomous driving and robotics. In this paper, we propose a stereo visual SLAM system with a robust quadric landmark representation method. The system consists of four components: deep learning detection, object-oriented data association, dual quadric landmark initialization and object-based pose optimization. State-of-the-art quadric-based SLAM algorithms face observation-related problems and are sensitive to observation noise, which limits their application in outdoor scenes. To solve this problem, we propose a quadric initialization method based on the decoupling of the quadric parameters, which improves robustness to observation noise. The proposed object data association algorithm and object-oriented optimization with multiple cues enable highly accurate object pose estimation that is robust to local observations. Experimental results show that the proposed system is more robust to observation noise and significantly outperforms current state-of-the-art methods in outdoor environments. In addition, the proposed system demonstrates real-time performance.



I Introduction

Simultaneous Localization and Mapping (SLAM) is a fundamental technique for robots to perceive the environment. Compared with classic SLAM methods that use only the geometry of the scene, object-based SLAM has recently focused on creating maps with both geometry and high-level semantic objects in the environment [15, 12, 19, 20, 18, 11, 10, 5, 8]. This semantically-enriched information can help robots with target-oriented tasks such as obstacle avoidance, robust relocalization and human-robot interaction. The improvement in the accuracy of semantic information acquisition, driven by deep learning networks [14, 2, 7], has led to object detection and semantic segmentation being increasingly introduced into visual SLAM systems to build semantically enriched maps and enhance the perception ability of robots.

Fig. 1: Our proposed method uses 3D quadric landmarks to build an object-oriented map in outdoor environments. Yellow quadrics illustrate the accuracy of the orientation and shape of the estimated ellipsoids when projected onto the current image frame. Object IDs are marked, and the magenta lines show the centers of the ellipsoids in previous frames projected onto the image frame, indicating the accuracy of the object data association. The red bounding box marks an object detected as dynamic, for which no ellipsoid is constructed. Finally, the object maps are also provided.

Accurate object representation is a key issue in object-oriented SLAM research, and 3D object models [16], cubic boxes [18, 20, 19] and ellipsoids [11, 10, 5] are among the common object representations. Prior work such as [20] and [18] uses a cubic box to represent the object, whose pose can be estimated by vanishing points and rotation sampling. Compared with the cubic box, the ellipsoid can also accurately represent the position, orientation and size of the object and has a more concise mathematical representation [11]. In projective geometry, a quadric can be represented by a symmetric matrix [4]; its compact perspective projection model and the closed surface of the ellipsoid make it well suited for object landmarks.

The accuracy and robustness of current quadric-based SLAM systems are not ideal. In particular, the quadric initialization process is limited by the parameter coupling of the direct linear solution method [11] or by the need for point cloud fitting [10, 8]. QuadricSLAM [11] is a recently proposed object-oriented SLAM system that represents objects as quadrics and introduces a dual quadric observation model based on object detection. However, the closed-form constrained dual quadric parameterization and the lack of observation angles under the planar trajectory of a mobile vehicle make quadric initialization difficult and sensitive to observation noise. In [8], multiple constraints combining points, surfaces and quadrics are used in the optimization framework, but the prior shape of the object is estimated by deep learning, which incurs high computational complexity and is not robust. In [12], texture plane and shape prior constraints are added to the quadric estimation, which mitigates the poor estimation performance when observation angles change in road driving scenes. However, the assumption that the texture plane is parallel to the image plane during quadric initialization makes the estimation sensitive to noise.

In addition, data association methods have been proposed in prior work such as [11, 13], although they are typically not robust in outdoor scenes. Dynamic objects such as moving cars and pedestrians are a challenge for quadric estimation, since false object associations lead to false quadric initialization results.

To solve the aforementioned problems, we propose a robust and accurate quadric landmark initialization method based on the decoupling of quadric parameters (DQP) and an object data association (ODA) algorithm for outdoor scenes. The robustness of DQP to observation noise is improved by independently estimating the quadric centroid translation under a yaw-rotation constraint, which is satisfied by vehicles on road planes in most cases. An ellipsoid with improved accuracy is then obtained by a nonlinear optimizer combining the observation error, the texture plane error and the prior object size. For data association, we propose a multiple-cue algorithm combined with the Hungarian assignment algorithm [9], which improves the robustness of object pose estimation.

We demonstrate the performance of the proposed system both in a simulation environment and on the KITTI Raw Data dataset [6]. The experimental results show that the proposed system is more robust to observation noise than existing methods and improves the accuracy of the estimated position, orientation and size of objects in outdoor environments.

The main contributions of this work are:

  • To effectively overcome observation noise, we propose an accurate and robust quadric landmark initialization method based on the DQP algorithm, which decouples the translation and rotation of quadric centroids.

  • We propose an ODA algorithm that combines the semantic inlier distribution, Kalman-based motion prediction, and ellipsoidal projection to achieve accurate object data association and object pose estimation.

  • Based on the proposed algorithms, we implement a real-time stereo visual SLAM system with accurate and robust ellipsoid representations of objects, aiming to build an object-oriented and semantically-enhanced map for outdoor navigation.

II System Overview

II-A Mathematical Representation of a Quadric Model

For convenience of description, the notations used in this paper are as follows:

  • The world coordinate frame, the camera coordinate frame, the reference camera coordinate frame of the object, and the quadric center frame.

  • K - The intrinsic matrix of a pinhole camera model.

  • T - The transformation from the world frame to the camera frame, composed of a rotation R and a translation t.

  • P = K[R | t] - The camera projection matrix, containing the intrinsic and extrinsic camera parameters.

  • BB - The 2D object detection bounding box (BBox).

  • The segmentation instance mask, the detection instance, and the object instance.

  • The detected object instance assigned to an object, and the class labels of the detected instance and the object instance, respectively.

  • The check of whether image points are located inside the detection box.

  • Q - The quadric matrix in 3D space; Q* denotes the dual quadric matrix.

  • π - A 3D plane in homogeneous coordinates; every plane tangent to the quadric fulfils πᵀ Q* π = 0.

  • q - The 9-D vector representing the attributes of the quadric, including axial lengths, translation and rotation.

When a dual quadric Q* is projected onto an image plane, it creates a dual conic C*, following the rule C* = P Q* Pᵀ. For more specific properties of the quadric, please refer to [11].
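The projection rule C* = P Q* Pᵀ can be sketched in a few lines of numpy. All camera numbers below are made up for illustration: an ellipsoid with semi-axes (2, 1, 1) at the world origin, viewed by a camera 10 m away; the tangent-line geometry of the resulting dual conic yields the 2D bounding box of the silhouette.

```python
import numpy as np

axes = np.array([2.0, 1.0, 1.0])
Q_dual = np.diag(np.concatenate([axes**2, [-1.0]]))   # dual ellipsoid at origin

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), [[0.0], [0.0], [10.0]]])   # world-to-camera [R | t]
P = K @ Rt                                            # 3x4 projection matrix

C_dual = P @ Q_dual @ P.T        # dual conic of the projected silhouette
C_dual /= -C_dual[2, 2]          # normalize so that C*[2, 2] = -1

# Tangent vertical lines u = c satisfy c^2 + 2*C*[0,2]*c - C*[0,0] = 0,
# which gives the box centre and half-width directly:
u_c = -C_dual[0, 2]                                   # box centre (u)
half_w = np.sqrt(C_dual[0, 2]**2 + C_dual[0, 0])      # box half-width
```

With these numbers the conic is centred on the principal point (u_c = 320) and its half-width is about 100.5 px, slightly more than the naive pinhole value f·a/z = 100 px because the tangent cone of the ellipsoid is wider than its projected centre slice.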

Fig. 2: Overview of our proposed system. There are two key modules: 1) the detection thread takes images and acquires semantic and detection results; 2) the tracking thread initializes quadrics with the DQP method, and the ODA algorithm associates detected object quadrics with the mapped objects for further optimization. Finally, the object map is stored with ellipsoid representations.

II-B System Architecture

The proposed system is shown in Fig. 2. We implement our algorithms on the basis of ORB-SLAM3 [3], and a stereo camera is used to obtain the metric scale of the estimated trajectory in autonomous driving scenes, avoiding the scale ambiguity of monocular SLAM [17]. However, we also highlight that our method can be used with monocular SLAM. There are two key modules: the visual SLAM module and the detection module. The visual SLAM module consists of parallel threads, including the tracking thread and the local mapping thread. Finally, the camera pose is estimated and a semantically-enhanced object map is stored in the map database.

(1) The detection thread uses YOLACT [2] to acquire semantic information from the left images of the stereo pair. The outputs are object detection BBoxes and instance segmentation masks.

(2) The tracking thread takes images and estimates the camera pose from consecutive frames. Meanwhile, the thread waits for the detection instances and either associates them with existing objects in the object map database or creates new objects, using the ODA algorithm. In addition, if the current frame is a keyframe and an observation satisfies the quadric initialization condition, the DQP algorithm is used for robust and accurate quadric initialization.

(3) The local mapping thread optimizes the map points of keyframes with local bundle adjustment. In addition, when the objects are observed by newly inserted keyframes, the new observation can be added to the object optimizer for nonlinear optimization of the ellipsoidal representation of objects.

(4) The map database stores the final maps, including the geometric information of map points and the object-oriented map with ellipsoids.

III Decoupling of Quadric Parameters Initialization Algorithm

III-A Decoupling of Quadric Central Translation

We present a mathematical analysis of the dual quadric parameters to illustrate the effect of the translation component on the estimation of rotation and shape. The dual form of the ellipsoid can be decomposed by eigen-decomposition in the reference camera coordinates of the object:

Q* = [ R D Rᵀ − t tᵀ, −t ; −tᵀ, −1 ]    (1)

where D is the diagonal matrix composed of the squares of the quadric axial lengths, R is the rotation matrix, and t is the quadric centroid translation in the reference camera coordinates. The upper-left block of the matrix couples the rotation and translation of the quadric. Since the magnitude of the quadric centroid translation is much larger than that of the rotation and axes, small errors in the estimated translation have a significant impact on the accurate estimation of the dual quadric matrix, which is why QuadricSLAM [11] is sensitive to observation noise.

We can also see from Eq. 1 that the translation parameters enter the dual form independently. Therefore, we estimate the translation component independently to eliminate the effect of the coupled parameters, which is a key aspect of our approach. We triangulate the center of the 2D detection box to obtain a triangulated map point, which lies close to the quadric center in outdoor scenes; this assumption is validated by the experiments in VI-A. Observations of the detection centers in two or more frames form an overdetermined system for the quadric center:

(x_d P_3 − P_1) T_q = 0,   (y_d P_3 − P_2) T_q = 0    (2)

where (x_d, y_d) is the 2D detection center, P_i is the i-th row of the projection matrix P, and T_q is the quadric center in homogeneous coordinates.
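The overdetermined triangulation of detection-box centers can be sketched as standard linear (DLT) triangulation; the camera matrices and the 3D point below are illustrative numbers only, not values from the paper.

```python
import numpy as np

def triangulate_center(projections, centers):
    """Linear (DLT) triangulation of a detection-box centre seen in several
    frames, stacking the rows (x*P3 - P1) X = 0 and (y*P3 - P2) X = 0."""
    rows = []
    for P, (x, y) in zip(projections, centers):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Hypothetical two-view stereo-like setup.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-2.0], [0.0], [0.0]]])
point = np.array([1.0, 2.0, 8.0, 1.0])          # ground-truth centre
c1 = (P1 @ point)[:2] / (P1 @ point)[2]
c2 = (P2 @ point)[:2] / (P2 @ point)[2]
t_q = triangulate_center([P1, P2], [c1, c2])    # recovers (1, 2, 8)
```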

III-B Decoupling of Quadric Rotation and Axial Length

The rotation and the quadric axial lengths are considered after the quadric centroid translation has been estimated independently. We assume that the ellipsoid of an object such as an autonomous vehicle or robot is constrained to yaw rotation, while the pitch and roll are constant at zero. This is satisfied for vehicles on the road in outdoor scenes. Therefore, we can replace the rotation matrix in Eq. 1 by:

R(θ) = [ cos θ, −sin θ, 0 ; sin θ, cos θ, 0 ; 0, 0, 1 ]    (3)

where θ is the yaw angle; the elements t₁, t₂, t₃ of the quadric centroid translation vector remain fixed to the values estimated in III-A.

We can simplify the linear form in [11], which uses the landmark BBox observations and the corresponding dual quadric planes, by substituting this constrained rotation:

(4)
(5)

The decoupled linear form of Eq. 5 can be solved by singular value decomposition (SVD) [11], where the unknown vector collects the remaining elements of the dual quadric to be estimated.
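Solving a homogeneous linear system by SVD, as used for the decoupled form above, can be sketched generically: the least-squares solution of A q = 0 under ||q|| = 1 is the right singular vector of the smallest singular value. The synthetic matrix below is constructed to have a known null vector, purely for illustration.

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A q = 0 subject to ||q|| = 1: the right
    singular vector associated with the smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# Synthetic check with a known null vector.
rng = np.random.default_rng(0)
q_true = rng.normal(size=4)
q_true /= np.linalg.norm(q_true)
M = rng.normal(size=(8, 4))
A = M - np.outer(M @ q_true, q_true)   # every row is orthogonal to q_true
q_est = solve_homogeneous(A)           # equals q_true up to sign
```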

Finally, the 9-D vector q of the quadric, containing the orientation, translation and axial lengths of the ellipsoid, can be obtained from the estimated dual quadric matrix Q*:

(6)

(7)
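Recovering the quadric attributes from a dual ellipsoid matrix can be sketched via eigen-decomposition, assuming the block form of Eq. 1; the ground-truth pose and axes below are made-up numbers used only to verify the round trip.

```python
import numpy as np

def quadric_params(Q_dual):
    """Recover centroid translation, rotation and axial lengths from a dual
    ellipsoid matrix written in the block form of Eq. 1 (a sketch)."""
    Q = Q_dual / -Q_dual[3, 3]           # enforce Q[3, 3] = -1
    t = -Q[:3, 3]                        # centroid translation
    E = Q[:3, :3] + np.outer(t, t)       # = R D R^T
    w, R = np.linalg.eigh(E)             # eigenvalues: squared semi-axes
    if np.linalg.det(R) < 0:
        R = -R                           # keep a proper rotation
    return t, R, np.sqrt(np.maximum(w, 0.0))

# Build a dual ellipsoid from known parameters, then recover them.
theta = np.deg2rad(30.0)                 # yaw-only rotation
R_gt = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0, 0.0, 1.0]])
t_gt = np.array([5.0, -2.0, 1.0])
Z = np.eye(4); Z[:3, :3] = R_gt; Z[:3, 3] = t_gt
Q = Z @ np.diag([4.0, 1.0, 9.0, -1.0]) @ Z.T   # semi-axes (2, 1, 3)
t, R, axes = quadric_params(Q)
```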

IV 3D Object Observation Constraints Optimization

In the local mapping thread, we optimize the quadrics using odometry factors and landmark factors combined with the observations of local keyframes. We define a set of detected objects and a set of mapped objects. By minimizing the observation error between an observed instance and its associated mapped instance, the quadric parameters can be optimized under the following constraint:

(8)

The Huber kernel is used to enhance robustness against outlier observations, and a nonlinear least-squares algorithm is used to optimize the target cost function.

IV-A The 2D detection error

The 2D detection error measures the distance between the projected 2D BBox of the object quadric and the detected BBox in the keyframe. Detection results near the image border are ignored in order to eliminate the effect of occlusion.

(9)

IV-B Prior axial length error

The prior axial length error is the distance between the prior axial length for the object class and the axial lengths of the object quadric.

(10)

IV-C Texture plane error

Similar to the method proposed in [12], the texture plane error is the minimum distance between the fitted texture plane and the quadric landmark. The plane parameters, a normal vector and a plane distance, are obtained by Delaunay triangulation of the object's map points. The texture plane distance error is calculated as:

(11)
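The plane-fitting step can be sketched as a least-squares fit over the object's map points. Note this is a simplification: the paper obtains the plane via Delaunay triangulation of map points, while the sketch below uses a plain SVD fit, and the sample points are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: unit normal n and offset d,
    with n.x + d = 0 for points x lying on the plane."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                 # direction of least variance
    return n, -n @ centroid

def point_plane_distance(x, n, d):
    """Unsigned distance from a point to the plane (n, d)."""
    return abs(n @ x + d)

# Hypothetical map points lying exactly on the plane z = 3.
xs, ys = np.meshgrid(np.arange(3.0), np.arange(3.0))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.full(9, 3.0)])
n, d = fit_plane(pts)
dist = point_plane_distance(np.array([0.0, 0.0, 5.0]), n, d)   # = 2.0
```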

V The Object Data Association Algorithm

Multi-view geometry information is used for object landmark initialization, while the object detection results are obtained from single-frame images. Therefore, the detected instances of the same object must be correctly associated within the map. We propose the ODA algorithm to integrate multiple cues for data association. The Hungarian algorithm [9] is used to complete the assignment with the minimum distance error. Three different distance metrics serve as affinity functions to compute the elements of the cost matrix; their weighting parameters are experimentally set to 0.8, 1, and 0.8, respectively.

(12)
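The assignment step can be sketched as follows. The cost values are made up, and a brute-force search over permutations stands in for the Hungarian algorithm [9] to keep the sketch dependency-free; in practice a polynomial-time solver (e.g. SciPy's `linear_sum_assignment`) would be used.

```python
import itertools
import numpy as np

def min_cost_assignment(cost):
    """Minimum-cost one-to-one assignment between detections (rows) and
    mapped objects (columns); brute force here, Hungarian in practice."""
    n = cost.shape[0]
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))

# Toy 3x3 cost matrix combining the three (made-up) cue distances.
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.7, 0.1]])
match = min_cost_assignment(cost)   # detection i -> object match[i]
```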

V-A Semantic Inlier Points Distance

To overcome the overlap of detection masks, we use bi-directional optical flow (BODF) to track the keypoints within the detection mask from the last keyframe, obtaining a keypoint set. We then calculate the ratio of inlier keypoints corresponding to the same object class, where |·| denotes the number of elements in a set:

(13)

V-B Intersection over Union Distance

To calculate the intersection over union distance, we use the intersection ratio between the projected 2D BBox of the quadric landmark and the 2D detection result of the object instance.

(14)
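The intersection ratio of two axis-aligned boxes can be sketched in a few lines; the corner convention (x1, y1, x2, y2) is an assumption for illustration.

```python
def bbox_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```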

V-C Prior Object Size Distance

For each object instance, the motion prediction method based on the Kalman filter [1] is adopted to predict the state of the detection in the image frame. Given the predicted 2D BBox of the object, the prior object size distance is defined by:

(15)
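The Kalman-based motion prediction can be sketched as a constant-velocity predict step in the spirit of SORT [1]. This is a minimal sketch, not the full filter: the state layout [u, v, du, dv] and all numbers are assumptions for illustration.

```python
import numpy as np

# Constant-velocity transition for a bounding-box centre and its velocity.
F = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

def kf_predict(x, P, Q):
    """One Kalman predict step: propagate the state mean and covariance."""
    return F @ x, F @ P @ F.T + Q

x = np.array([100.0, 50.0, 4.0, -2.0])   # centre (100, 50), velocity (4, -2)
x_pred, P_pred = kf_predict(x, np.eye(4), 0.01 * np.eye(4))
# x_pred[:2] is the predicted centre for the next frame: (104, 48)
```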

VI Experiments

The proposed system consists of two modules: the SLAM module and the detection module. The overall system architecture is described in Fig. 2. To evaluate the performance of the proposed method, we build an experimental simulation environment based on OpenGL to compare robustness and accuracy against other state-of-the-art techniques. The KITTI Raw Data dataset [6] is adopted as the real-world benchmark to demonstrate the effectiveness of our method in outdoor scenes. All experiments are conducted on an Intel Core i7-9750H CPU @ 2.6 GHz with 16 GB memory and an Nvidia GTX 1080 Ti.

We define the following criteria for evaluation:

(1) IoU: The intersection ratio between the ground-truth (GT) detection and the projected detection of the estimated quadric.

IoU = area(BB_GT ∩ BB_est) / area(BB_GT ∪ BB_est)    (16)

(2) e_t: The error of the quadric centroid translation between the GT ellipsoid and the estimate, indicating the accuracy of the ellipsoid position estimation.

e_t = ||t_GT − t_est||₂    (17)

(3) e_a: The error of the ellipsoid axial lengths between the GT ellipsoid and the estimate in the world coordinate frame, indicating the accuracy of the object shape estimation.

e_a = ||a_GT − a_est||₂    (18)

VI-A Quantitative Evaluation of Simulation

The simulation provides GT object positions and makes it easy to test the robustness of the methods under different types of disturbance. We create a synthetic dataset with OpenGL in which five cameras are evenly deployed along circular arcs to simulate camera observations in an outdoor environment. An ellipsoid with varying shape and yaw rotation is deployed, and the GT 2D object BBox and position are provided. The yaw rotation of the ellipsoid is randomly sampled to simulate objects with rotation. To avoid the influence of random errors on the experimental results, for each type of noise, 10 ellipsoids are generated with Gaussian noise from 10 seeds, resulting in a total of 100 trials.

To test the effect of different types of noise on the quadric initialization methods, the relative camera poses are perturbed with zero-mean Gaussian noise of increasing standard deviation to simulate trajectory error. In addition, detection BBox noise is simulated by adding zero-mean Gaussian noise to the GT boxes.

We compare quadric initialization methods including (a) Nicholson et al. [11], denoted Q-SLAM, (b) Rubino et al. [15], denoted Conic-method, (c) the proposed method with only the decoupling of the quadric central translation, denoted Tri, and (d) the full proposed initialization method, denoted Tri+Yaw.

Quantitative evaluation results of the initialization methods under different types of noise are visualized in Fig. 3. The plots show the trend of each evaluation criterion as the noise increases. As can be seen from Fig. 3, with zero noise the results of all methods are consistent with the GT, demonstrating the correctness of all methods. The performance of all methods degrades as noise increases.

The Q-SLAM method is clearly the most sensitive to noise among all the techniques. When the translation noise reaches 15%, the rotation noise reaches 20%, or the detection BBox noise reaches 2%, Q-SLAM fails to construct ellipsoids.

Meanwhile, the Conic-method maintains relatively good results, showing robustness to translation and rotation noise. On the other hand, under the influence of detection BBox noise, the translation and axial-length errors of the Conic-method increase rapidly. When the detection BBox error exceeds 4%, the Conic-method fails to initialize the ellipsoids, indicating that it is also sensitive to detection noise. The performance of the proposed method, however, remains stable: under translation and rotation noise, the maximum axial error is 0.45 m and the maximum translation error is 0.89 m. These metrics are also influenced, within a small range, by the detection BBox error, with maximum axial and translation errors of 1.02 m and 2.10 m. These results show that our proposed method significantly improves the robustness of initialization, with the smallest growth trend under noise. The visualization results of quadric initialization are shown in Fig. 4, where the red ellipsoid is the GT and the green ellipsoid is the estimate. Our proposed method outperforms all compared methods.

Fig. 3: The initialization performance of the methods under different types of noise; the curves show the trend of each criterion as the noise increases.
Sequence Ours Conic[15] Q-SLAM[11]
09 0.6912 0.4468 0.2706
22 0.6512 0.3333 0.2923
23 0.6230 0.3949 0.2829
36 0.6047 0.4096 0.3514
59 0.5625 0.3556 0.2558
93 0.4815 0.2826 0.3617
Average 0.6023 0.3705 0.3024
TABLE I: Success rate comparison in KITTI Raw Data dataset.
Sequence Ours Conic[15] Q-SLAM[11]
09 0.7335 0.7252 0.7031
22 0.7629 0.7791 0.7662
23 0.7509 0.7529 0.6959
36 0.7604 0.7558 0.7127
59 0.6508 0.6878 0.6500
93 0.7232 0.6751 0.7433
Average 0.7303 0.7293 0.7119
TABLE II: IoU comparison in KITTI Raw Data dataset.

VI-B Evaluation on KITTI Raw Data Dataset

To evaluate the performance of the proposed method in outdoor environments, we select the KITTI Raw Data dataset [6], in particular sequences 09, 22, 23, 36, 59 and 93, which were recorded in urban and residential areas containing vehicles. The dataset provides GT for vehicles, including the 3-DoF object size and the 6-DoF object pose. With the extrinsic parameters of the sensors, we can transform the object poses into camera coordinates.

Table I shows the success rate of initialization and ellipsoid construction by different methods using different sequences. Tables II, III and IV show the experimental results of successfully constructed ellipsoids under different evaluation criteria.

From Table I, we can see that our method constructs ellipsoids for 60.2% of the vehicles, an increase of 62.6% (from 37.05% to 60.23%) and 99.2% (from 30.24% to 60.23%) in success rate compared with the Conic-method and Q-SLAM, respectively, confirming the effectiveness of our initialization method. For the IoU metric, larger values indicate better construction results. As can be seen from Table II, our method outperforms the other methods in sequences 09 and 36, with the overall best average of 73.03%. The compared methods give better results on individual sequences because they discard some detection results that fail to be initialized. For the translation and axial-length errors, smaller values indicate better construction results. As can be seen from Tables III and IV, our method outperforms the compared methods in all cases except sequence 22, with an average ellipsoid centroid translation error of 2.127 m, a reduction of nearly 52.2%. In addition, our average axial length error is 0.642 m, a 50.8% reduction, compared with 1.369 m and 0.948 m for the other techniques. These experimental results show the robustness and effectiveness of the proposed method for ellipsoid representation in outdoor scenes.

Finally, we show the constructed object maps in Fig.1. The yellow ellipsoids in the map represent static vehicles and the yellow quadrics illustrate the orientation and shape of the estimated ellipsoids when projected onto the image frame. The magenta lines show the center of the ellipsoids in previous frames projected onto the current image frame, demonstrating the accuracy of the ODA algorithm. The red BBox represents the vehicles that are detected as dynamic objects and are not contained in the map.

Sequence Ours Conic[15] Q-SLAM[11]
09 2.5456 2.7819 3.6110
22 2.1769 1.9552 1.9651
23 2.3341 5.6605 8.7088
36 1.8594 2.7175 6.9814
59 1.3276 1.6883 1.3874
93 2.5226 5.4130 4.0654
Average 2.1277 3.3694 4.4532
TABLE III: Translation error comparison in KITTI Raw Data Dataset.

VII Conclusion

In this work, a novel pipeline for real-time object-oriented stereo visual SLAM with 3D quadric landmarks is presented. A quadric initialization method based on the DQP algorithm is proposed to improve the robustness and success rate of ellipsoid construction. Data association is solved by the ODA algorithm, which ensures highly accurate object pose estimation. Extensive experiments show that the proposed system is accurate, robust to observation noise, and significantly outperforms other methods in outdoor environments.

In future work, we will explore semantic relationships between object ellipsoids and use the semantic information of the object map for localization and re-localization.

Sequence Ours Conic[15] Q-SLAM[11]
09 0.6271 1.2618 1.1400
22 0.5565 0.7233 0.8356
23 0.5494 1.7886 0.6837
36 0.7121 1.2797 0.8799
59 0.6706 1.8467 1.2908
93 0.7357 1.3156 0.8574
Average 0.6419 1.3693 0.9479
TABLE IV: Axial length error comparison in KITTI Raw Data Dataset.
Fig. 4: Visualization of the initialization performance of the methods under different types of noise. Our method initializes ellipsoids accurately and robustly.

References

  • [1] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft (2016) Simple online and realtime tracking. In 2016 IEEE International Conference on Image Processing (ICIP).
  • [2] D. Bolya, C. Zhou, F. Xiao, and Y. J. Lee (2019) YOLACT: real-time instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9157–9166.
  • [3] C. Campos, R. Elvira, J. Rodríguez, J. Montiel, and J. Tardós (2020) ORB-SLAM3: an accurate open-source library for visual, visual-inertial and multi-map SLAM.
  • [4] G. Cross and A. Zisserman (1998) Quadric reconstruction from dual-space geometry. In Sixth International Conference on Computer Vision, pp. 25–31.
  • [5] V. Gaudillière, G. Simon, and M. O. Berger (2020) Perspective-2-ellipsoid: bridging the gap between object detections and 6-DoF camera pose. IEEE Robotics and Automation Letters 5 (4), pp. 5189–5196.
  • [6] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. International Journal of Robotics Research (IJRR).
  • [7] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969.
  • [8] M. Hosseinzadeh, K. Li, Y. Latif, and I. Reid (2019) Real-time monocular object-model aware sparse SLAM. In 2019 International Conference on Robotics and Automation (ICRA), pp. 7123–7129.
  • [9] H. W. Kuhn (1955) The Hungarian method for the assignment problem. Naval Research Logistics Quarterly 2 (1-2), pp. 83–97.
  • [10] Z. Liao, W. Wang, X. Qi, X. Zhang, and R. Wei (2020) Object-oriented SLAM using quadrics and symmetry properties for indoor environments.
  • [11] L. Nicholson, M. Milford, and N. Sünderhauf (2018) QuadricSLAM: dual quadrics from object detections as landmarks in object-oriented SLAM. IEEE Robotics and Automation Letters 4 (1), pp. 1–8.
  • [12] K. Ok, K. Liu, K. Frey, J. P. How, and N. Roy (2019) Robust object-based SLAM for high-speed autonomous navigation. In 2019 International Conference on Robotics and Automation (ICRA), pp. 669–675.
  • [13] Z. Qian, K. Patath, J. Fu, and J. Xiao (2021) Semantic SLAM with autonomous object-level data association. In 2021 IEEE International Conference on Robotics and Automation (ICRA).
  • [14] J. Redmon and A. Farhadi (2018) YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767.
  • [15] C. Rubino, M. Crocco, and A. Del Bue (2017) 3D object localisation from multi-view image detections. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (6), pp. 1281–1294.
  • [16] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, and A. J. Davison (2013) SLAM++: simultaneous localisation and mapping at the level of objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1352–1359.
  • [17] R. Tian, Y. Zhang, D. Zhu, S. Liang, S. Coleman, and D. Kerr (2021) Accurate and robust scale recovery for monocular visual odometry based on plane geometry. In 2021 IEEE International Conference on Robotics and Automation (ICRA).
  • [18] Y. Wu, Y. Zhang, D. Zhu, Y. Feng, S. Coleman, and D. Kerr (2020) EAO-SLAM: monocular semi-dense object SLAM based on ensemble data association. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4966–4973.
  • [19] S. Yang and S. Scherer (2018) Monocular object and plane SLAM in structured environments.
  • [20] S. Yang and S. Scherer (2019) CubeSLAM: monocular 3-D object SLAM. IEEE Transactions on Robotics 35 (4), pp. 925–938.