CAPRICORN: Communication Aware Place Recognition using Interpretable Constellations of Objects in Robot Networks

10/19/2019, by Benjamin Ramtoula, et al.

Using multiple robots for exploring and mapping environments can provide improved robustness and performance, but it can be difficult to implement. In particular, limited communication bandwidth is a considerable constraint when a robot needs to determine if it has visited a location that was previously explored by another robot, as it requires robots to share descriptions of places they have visited. One way to compress this description is to use constellations, groups of 3D points that correspond to the estimate of a set of relative object positions. Constellations maintain the same pattern from different viewpoints and can be robust to illumination changes or dynamic elements. We present a method to extract from these constellations compact spatial and semantic descriptors of the objects in a scene. We use this representation in a 2-step decentralized loop closure verification: first, we distribute the compact semantic descriptors to determine which other robots might have seen scenes with similar objects; then we query matching robots with the full constellation to validate the match using geometric information. The proposed method requires less memory, is more interpretable than global image descriptors, and could be useful for other tasks and interactions with the environment. We validate our system's performance on a TUM RGB-D SLAM sequence and show its benefits in terms of bandwidth requirements.

I Introduction

Simultaneous Localization and Mapping (SLAM) has applications in a variety of scenarios, from aerial robots in GPS-denied environments to vacuum cleaners and self-driving cars, supporting many different types of missions. In some of these missions, a multi-robot SLAM system can improve performance, provide robustness to individual robot failures, and allow the use of heterogeneous robots with different means of movement and sensors [39].

Multi-robot SLAM generally follows the same process as single-robot SLAM: each robot uses sensor measurements to perform odometry and has a place recognition system to close loops and correct odometry drift. The robots estimate global trajectories and generate maps by solving an optimization problem that reconciles measurements and loop closures.

Robots perform odometry (the first step) locally, unaffected by the other robots collaborating on the SLAM task. Odometry can, therefore, follow a method from the mature field of single-robot SLAM [41]. However, during place recognition and optimization, the robots need to consider information gathered from other robots working in parallel to take advantage of the multi-robot system and build a global map. For these steps, major adaptations of single-robot SLAM techniques are necessary to cope with challenges and constraints inherent to multi-robot systems [39].

There are two extremes when considering how to share information among robots: a centralized system, in which a single entity communicates with every robot, performing operations as well as giving feedback [13, 38, 30]; and a decentralized system, in which robots communicate directly. Centralized systems rely on the central entity to always be functional, and their computation and bandwidth requirements scale with the number of robots. On the other hand, decentralized systems do not rely on a single entity and avoid the bottlenecks of centralized systems [6].

Fig. 1: Size of a place recognition query over the full team depending on the number of robots composing it. “Broadcast constellation” corresponds to sending the full constellation to all robots. 1-hop communication between all robots is considered here. Our solution also scales better if more hops are required to maintain full connectivity.
Fig. 2: Illustration of our full system, in a simplified case with 4 robots. In this example, the detector can predict 8 classes, and R4 is making a query. (a) R4 creates a constellation. (b) R4 generates an associated semantic descriptor. (c) R4 splits and distributes the descriptor among robots responsible for classes seen: R1 (book), and R2 (bottle and chair). (d) R1 and R2 save the sub-descriptors, compare them against all previously received sub-descriptors, and respond with match candidates: R1 responds that R3 at frame 1728 had sent a similar sub-descriptor. (e) R4 sends its full constellation to R3 since it matches best semantically. (f) R3 performs data association between constellations using our geometric surroundings descriptors and produces a final loop closure score. Details of the process including descriptors and comparisons are presented in Section III.

In this paper, we present a method for Communication Aware Place Recognition using Interpretable Constellations of Objects in Robot Networks (CAPRICORN). We propose a method to extract and compare compact spatial and semantic descriptors for place recognition using constellations, a representation of the environment which is particularly well suited for the constraints of multi-robot systems. We also present a mechanism to decentralize it efficiently (Fig. 1).

Based on previous ideas [12, 15], and considering the recent advances in real-time object detection methods, RGB-D sensors and embedded AI systems, we build 3D constellations of points to represent scenes, where each point approximates the position of an object. The constellation is human understandable and capable of supporting other tasks requiring interaction with the environment. Its shape and scale are robust to viewpoint changes. Since a constellation builds on an object detector module, it can inherit robustness properties which can be associated with the detector, such as invariance to illumination changes. Our method benefits from the fact that object representations require significantly less memory and bandwidth than other representations based on visual features or point clouds [4].

Constellations allow us to perform a decentralized loop closure check in 2 steps. To find loop closures, we first use a semantic descriptor of a constellation which is extremely compact and contains only information concerning object labels. Inspired by the method in [7], each robot shares the descriptor with all other robots, each of which potentially finds similar scenes based on specific object labels and returns candidate scene matches. From these results, the querying robot sends its constellation to the potential matching robots. These, in turn, compute an overall score considering both “which” semantic elements are in the two frames (labels) and “how” they are organized in space (relative distances), allowing accurate loop closure detection. The process is illustrated in Fig. 2 and detailed in Section III, while experiments and final results are presented in Sections IV and V, respectively. The source code is provided on our repository: https://github.com/MISTLab/CAPRICORN, and a brief summary of this work is presented in [34].

II Related Work

II-A Visual Place Recognition

A major part of a visual place recognition module is the description of a place [27]. Techniques using solely visual features, either global [33, 46] or local [26, 2], which can be quantized with bag-of-words [10, 42], are the most commonly used. However, an important challenge is for the descriptors to exhibit robustness to extreme viewpoint changes as well as dynamic changes in the environment. These changes can severely affect the visual appearance of an image, making methods relying only on visual features fail. Advances in deep learning, specifically convolutional neural networks (CNNs) [21], allow the extraction of more robust and high-level features from images [35, 25, 37, 24]. Recent works propose various methods to make use of CNNs to improve place recognition performance. Some of these [3, 16, 17, 45, 31] make use of learned feature maps of CNNs trained for other tasks such as place categorization or object proposals, which can be robust to viewpoint and condition changes [44]. NetVLAD [1] is a CNN architecture aimed at creating a global descriptor from an image, trained end-to-end for place recognition. While these methods improve robustness, they lack interpretability and do not consider computation and communication constraints, which can be significant in a multi-robot system.

II-B Semantic entities for mapping and place recognition

Using semantic entities in the environment for mapping has been gaining popularity in recent years. SLAM++ is an early work which proposed a full SLAM solution based on a set of predefined objects whose 3D models are available [40].

Subsequent works avoided the need for object models by discovering objects from point cloud data [5], by creating 3D models on the fly from 2D object detections [28], or by fitting 3D quadrics to objects [32, 19, 20]. These SLAM solutions require precise estimates of the objects’ poses for loop closures.

Several solutions consider loop closures using graphs or constellations of entities. Finman et al. detect objects using primitive kernels to build a graph [11], from which they perform place recognition. Gawel et al. propose to build a graph from semantically segmented images for multi-view localization [18]. Frey et al. consider constellations of landmarks which can be matched geometrically to perform data association and map merging in semantic SLAM [15].

II-C Decentralized place recognition

The problem of decentralized place recognition requires a mechanism to share descriptions efficiently. When full connectivity cannot be assumed, some methods rely on sharing information only with neighboring robots [9, 29, 14] to deduce global associations. For fully connected teams, Cieslewski and Scaramuzza proposed a way to share a bag-of-words descriptor that is split among every robot such that the amount of data exchanged for a place recognition query is independent of the number of robots [7]. They extended their work using NetVLAD [1] as a global descriptor, with a mechanism allowing each descriptor to be sent to only one specific robot [8]. These solutions usually make use of descriptors which are hardly interpretable and not reusable for other tasks.

III Methodology

Our decentralized place recognition method (Fig. 2) relies on the creation of a spatially and semantically meaningful descriptor (a 3D constellation of objects) associated with each frame coming from an on-board RGB-D camera. Similarly to the visual-descriptor-based work that inspired us [7], the sharing process requires pre-assigning all possible classes of detectable objects among the robots in the system.

Overall, for each RGB-D frame, we execute the following steps, as illustrated in Fig. 2:

  1. A querying robot QR creates a constellation (III-A)

  2. QR generates an associated semantic descriptor (III-B)

  3. QR splits the descriptor and distributes the parts among a subset of the other robots, based on the objects in the scene (III-C)

  4. The receiving robots save the descriptors, compute their scores against all previously received descriptors, and respond with match candidates (III-C)

  5. QR sends its full constellation to the robots with the closest-matching global semantic descriptors (III-D)

  6. The receiving robots selected in step 5 perform data association between the constellations (III-E) and combine the results with global semantic descriptors to make a place recognition decision (III-F)

III-A 3D Constellations of Objects

Our system consists of a robot set $\mathcal{R} = \{R_1, \dots, R_N\}$, where each robot $R_i$ has registered a set of frames $\mathcal{F}_i$. Each frame $F_i^t \in \mathcal{F}_i$ contains 3 color channels (RGB) and one depth channel $D$.

From each frame $F_i^t$, an object-to-point detection system detects visible objects and associates each of them with a 3D point expressed in the camera frame. The implementation of this system is left as a design choice depending on the types of environments, computing power, and algorithms available.

Each point generated from a detected object in frame $t$ of robot $R_i$ is associated with a label $l$ taken within $\mathcal{L}$, the set of all detectable label indexes, which depends on the object-to-point semantic segmentation system. A point also has a 3D position $p \in \mathbb{R}^3$. We define an object as the pair $o = (l, p)$. Therefore, for a frame $t$ seen by robot $R_i$ we obtain the constellation $C_i^t = \{o_1, \dots, o_m\}$.

Since the object-to-point detection system is exchangeable, the proposed approach is straightforward to adapt to new environments with different classes of objects, or simply to improve as state-of-the-art detection tools advance.
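For illustration, a minimal Python sketch of how a constellation could be stored; the class and field names below are our own and are not prescribed by the method.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Obj:
    label: int                             # class index l from the detector's label set
    position: Tuple[float, float, float]   # 3D point p in the camera frame (meters)

# A constellation is simply the set of objects detected in one RGB-D frame.
Constellation = List[Obj]

example: Constellation = [
    Obj(label=39, position=(0.42, -0.10, 1.35)),   # e.g., a bottle
    Obj(label=56, position=(-0.25, 0.05, 1.80)),   # e.g., a chair
]
```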

III-B Semantic descriptor

We design a semantic descriptor that does not use the 3D information of the constellations and that we use to check for semantic consistency. It compresses information and simplifies how information is split among robots. This simplified descriptor can be seen as a histogram in which each bin corresponds to a possible class in $\mathcal{L}$. A bin’s value is the number of instances of the class in the scene: for constellation $C_i^t$, the semantic descriptor $v_i^t$ has, in the bin corresponding to class index $l$, the value $v_i^t[l] = |\{o \in C_i^t : l_o = l\}|$. This semantic descriptor is very sparse, since most of the classes detectable by standard object detectors do not appear in the same scene.
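A minimal sketch of the histogram construction, reusing the Obj and Constellation types from the previous sketch; representing the sparse descriptor as a dictionary is an assumption on our part.

```python
from collections import Counter
from typing import Dict

def semantic_descriptor(constellation: Constellation) -> Dict[int, int]:
    """Sparse histogram: class index -> number of instances of that class in the scene."""
    return dict(Counter(obj.label for obj in constellation))

# For the example constellation above: {39: 1, 56: 1}
```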

III-C Distributed semantics

To check for similar places seen by other robots in an efficient, scalable, and decentralized way, we use a technique inspired by previous work [7]. The idea is to split the global descriptor into partial descriptors which only consider specific classes, and assign them to different robots.

During a query of frame $F_i^t$ from robot $R_i$, each robot $R_k$, with preassigned label indexes $\mathcal{L}_k \subset \mathcal{L}$, is responsible for the partial descriptor restricted to $\mathcal{L}_k$. Robot $R_k$ only receives the non-zero values of the partial descriptor to reduce communication bandwidth, benefiting from the sparsity of the semantic descriptor. The receiving robots store the received partial descriptor and compare it to all previously received ones. The comparison of two partial descriptors of $F_i^t$ and $F_j^u$ assigned to robot $R_k$ is performed using:

$$s_k(F_i^t, F_j^u) = \frac{\sum_{l \in \mathcal{L}_k} \min\left(v_i^t[l],\, v_j^u[l]\right)}{\sum_{l \in \mathcal{L}_k} \max\left(v_i^t[l],\, v_j^u[l]\right)} \qquad (1)$$

which corresponds to the well-known Jaccard index of the descriptors, also known as Intersection over Union (IoU), commonly used in object detection [36].

Each robot $R_k$ then finds the frames producing the highest scores and returns the corresponding frame and robot IDs to the querying robot $R_i$. The number of highest-scoring frames returned can be adjusted to the available bandwidth and the number of robots. Note that the querying robot is also assigned a subset of labels and has registered previous queries related to those labels. Hence, it also needs to compare these to its own current query, which requires no external communication.
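The following sketch illustrates the partial-descriptor handling under the same assumptions: restriction of the sparse descriptor to one robot's labels, the generalized (multiset) Jaccard comparison of Eq. (1), and the selection of the k highest-scoring stored frames; the helper names are ours.

```python
from typing import Dict, List, Set, Tuple

def partial_descriptor(descriptor: Dict[int, int], labels: Set[int]) -> Dict[int, int]:
    """Non-zero bins of the descriptor, restricted to the labels assigned to one robot."""
    return {l: v for l, v in descriptor.items() if l in labels and v > 0}

def jaccard(d1: Dict[int, int], d2: Dict[int, int]) -> float:
    """Generalized Jaccard index (intersection over union) of two histograms."""
    keys = set(d1) | set(d2)
    inter = sum(min(d1.get(l, 0), d2.get(l, 0)) for l in keys)
    union = sum(max(d1.get(l, 0), d2.get(l, 0)) for l in keys)
    return inter / union if union > 0 else 0.0

def best_partial_matches(query: Dict[int, int],
                         stored: List[Tuple[Tuple[int, int], Dict[int, int]]],
                         k: int) -> List[Tuple[int, int]]:
    """stored holds ((robot_id, frame_id), partial_descriptor) pairs previously received.
    Return the (robot_id, frame_id) of the k highest-scoring stored partial descriptors."""
    scored = sorted(stored, key=lambda item: jaccard(query, item[1]), reverse=True)
    return [ids for ids, _ in scored[:k]]
```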

III-D Deciding on the best global semantic matches

The querying robot $R_i$ receives robot and frame IDs from all robots detecting partial semantic matches. It must decide which robots to further query for a semantic and geometric comparison of the constellations. Similarly to the geometric check step used in previous works [8, 7], we choose to send the full constellation to the robots which provided the largest number of the highest-scoring returned frames, as these are more likely to have observed similar scenes. The robots receiving the full constellation compare it to their own recorded constellations.

Again, the number of robots that should be further queried can be controlled and adjusted to the available bandwidth and the number of robots.
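A possible reading of this selection step, sketched under our assumptions: count, for each robot, how many of the returned candidate frames it contributed, and keep the robots contributing the most.

```python
from collections import Counter
from typing import List, Tuple

def select_robots_to_query(candidates: List[Tuple[int, int]], max_robots: int) -> List[int]:
    """candidates: (robot_id, frame_id) pairs returned by the partial semantic queries.
    Keep the robots that contributed the most candidate frames."""
    counts = Counter(robot_id for robot_id, _ in candidates)
    return [robot_id for robot_id, _ in counts.most_common(max_robots)]
```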

III-E Data association

Once a robot $R_j$ receives a querying constellation $C_i^t$ from another robot $R_i$, it compares it to every constellation it has previously recorded. The first step when comparing constellations $C_i^t$ and $C_j^u$ is to perform data association: the process of matching points of one constellation to points in the other, in other terms, matching specific objects detected in one frame to objects detected in another.

Labels appearing in both constellations are defined as $\mathcal{L}_\cap$. For each object $o$ whose label is in $\mathcal{L}_\cap$, we define a geometric surroundings descriptor $g_o$. Elements of this vector corresponding to labels not appearing in both constellations are given a constant default value. Otherwise, elements are given by the Euclidean distance to the closest instance of the corresponding label in the constellation:

$$g_o[l] = \min_{o' \in C,\ o' \neq o,\ l_{o'} = l} \lVert p_o - p_{o'} \rVert, \qquad l \in \mathcal{L}_\cap \qquad (2)$$

where $C$ is the constellation containing $o$. This vector represents how far the closest instances of each shared label are from a given object. We use this vector to find the most likely matches between $C_i^t$ and $C_j^u$, that is, points with the same label and the closest vectors:

$$\hat{m}(o_a) = \operatorname*{arg\,min}_{o_b \in C_j^u,\ l_{o_b} = l_{o_a}} \lVert g_{o_a} - g_{o_b} \rVert \qquad (3)$$

We then keep the match between the object of index $a$ in $C_i^t$ and the object of index $b$ in $C_j^u$ only if they both match each other and their vectors are sufficiently close according to a threshold $\tau$:

$$\hat{m}(o_a) = o_b, \quad \hat{m}(o_b) = o_a, \quad \text{and} \quad \lVert g_{o_a} - g_{o_b} \rVert < \tau \qquad (4)$$

The value of $\tau$ depends on the estimation noise introduced when reducing an object to a point: it must be high enough so that the noise does not prevent matches, but small enough to reject incorrect matches. It can be tuned with prior experiments by measuring the variance of the estimated point position associated with an object when the viewpoint changes.
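The sketch below illustrates the data-association step as we understand it: geometric surroundings vectors computed over the shared labels (excluding the object itself, which is our assumption), mutual nearest-neighbour matching among objects of the same class, and the threshold $\tau$ on the vector distance. It reuses the Obj and Constellation types defined earlier.

```python
import numpy as np

def surroundings_descriptor(obj: Obj, constellation: Constellation, shared_labels: list) -> np.ndarray:
    """For each shared label, distance from obj to the closest other instance of that label."""
    desc = np.zeros(len(shared_labels))
    for i, label in enumerate(shared_labels):
        dists = [np.linalg.norm(np.array(obj.position) - np.array(o.position))
                 for o in constellation if o.label == label and o is not obj]
        desc[i] = min(dists) if dists else 0.0   # fallback value when no other instance exists (assumption)
    return desc

def associate(ca: Constellation, cb: Constellation, tau: float):
    """Mutual nearest-neighbour matching of same-label objects using surroundings descriptors."""
    shared = sorted({o.label for o in ca} & {o.label for o in cb})
    ga = [surroundings_descriptor(o, ca, shared) for o in ca]
    gb = [surroundings_descriptor(o, cb, shared) for o in cb]

    def best(i, src, src_desc, dst, dst_desc):
        # Closest same-label object in the other constellation, by descriptor distance.
        cands = [(np.linalg.norm(src_desc[i] - dst_desc[j]), j)
                 for j, o in enumerate(dst) if o.label == src[i].label]
        return min(cands) if cands else (np.inf, None)

    matches = []
    for i, oa in enumerate(ca):
        if oa.label not in shared:
            continue
        d_ab, j = best(i, ca, ga, cb, gb)
        if j is None:
            continue
        _, i_back = best(j, cb, gb, ca, ga)
        if i_back == i and d_ab < tau:   # mutual match, descriptors close enough
            matches.append((i, j))
    return matches
```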

Our proposed method combines semantic and geometric information while being independent of the reference frame in which the 3D positions of points are expressed. It can detect valid matches and filter out incorrect ones while not requiring additional descriptors associated with each element of the constellations.

III-F Loop closure detection

The matching process (Section III-E) acts as a geometric filter on valid semantic matches: only objects of the same class and with similar geometries of surrounding objects are matched. We use this to evaluate a score between two constellations. To obtain a full comparison of two constellations, we compute the Jaccard index over the full semantic descriptors of both constellations (Eq. 1, but with the sums taken over all of $\mathcal{L}$, using $v_i^t$ and $v_j^u$). This provides a semantic comparison $s_{sem}$, which we multiply by $s_{geo}$, the fraction of objects successfully matched over the number of common objects in the two constellations. Using this measure, for a place to be recognized we need two constellations to exhibit similar semantic entities, resulting in a high $s_{sem}$, but also for those entities to have similar geometric relationships, producing a high $s_{geo}$:

$$s_{sem} = \frac{\sum_{l \in \mathcal{L}} \min\left(v_i^t[l],\, v_j^u[l]\right)}{\sum_{l \in \mathcal{L}} \max\left(v_i^t[l],\, v_j^u[l]\right)} \qquad (5)$$

$$s_{geo} = \frac{\text{number of successfully matched objects}}{\text{number of objects with labels in } \mathcal{L}_\cap} \qquad (6)$$

$$s = s_{sem} \cdot s_{geo} \qquad (7)$$
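A sketch of the final score, reusing the helpers defined above; counting matched objects on both sides of the pair when computing $s_{geo}$ is our interpretation of "the fraction of objects successfully matched over the number of common objects".

```python
def loop_closure_score(ca: Constellation, cb: Constellation, tau: float) -> float:
    """Combine full-descriptor semantic similarity with the fraction of geometrically matched objects."""
    s_sem = jaccard(semantic_descriptor(ca), semantic_descriptor(cb))

    shared = {o.label for o in ca} & {o.label for o in cb}
    common = [o for o in ca if o.label in shared] + [o for o in cb if o.label in shared]
    matches = associate(ca, cb, tau)
    s_geo = 2 * len(matches) / len(common) if common else 0.0   # each match involves two objects
    return s_sem * s_geo
```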

IV Experiments

IV-A Dataset used and ground truth

To ensure that our method is able to perform place recognition while also being data-efficient, we test it using the freiburg3_long_office_household sequence of the TUM RGB-D SLAM dataset [43]. To the best of our knowledge, no other datasets provide RGB-D data of environments containing objects and loop closures. As a proof of concept, our experiments are based on one sequence, since loop closures had to be manually annotated.

The freiburg3_long_office_household sequence consists of 2585 frames recorded with a handheld camera circling around two back-to-back desks with a variety of different objects on them. The alignment of the reference frame with the desks allowed us to combine the ground-truth pose of the camera with manual labeling in order to obtain a ground truth of recognized places. We use the pose of the camera and the depth values to estimate the position of the scene observed in each frame. We can then use these scene positions to estimate distances between the scenes observed in each pair of frames, and we obtain a ground-truth loop closure score based on these distances. We transform the distances to obtain scores between 0, when scenes are very far apart, and 1, when the scenes’ positions overlap. As consecutive frames can trivially be detected as loop closures, we ignore closures between frames less than 12 seconds (200 frames) apart.
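The exact distance-to-score mapping is not reproduced here; the sketch below uses a simple clipped linear mapping as a placeholder assumption, only to illustrate turning scene-to-scene distances into scores in [0, 1] and masking temporally close frames.

```python
import numpy as np

def ground_truth_scores(scene_positions: np.ndarray, max_dist: float, min_gap: int = 200) -> np.ndarray:
    """scene_positions: (N, 3) estimated scene position per frame.
    Returns an (N, N) matrix of scores in [0, 1]; frame pairs closer than min_gap in time are masked out."""
    diffs = scene_positions[:, None, :] - scene_positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    scores = np.clip(1.0 - dists / max_dist, 0.0, 1.0)   # placeholder mapping: 1 when overlapping, 0 when far
    n = len(scene_positions)
    idx = np.arange(n)
    scores[np.abs(idx[:, None] - idx[None, :]) < min_gap] = 0.0
    return scores
```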

IV-B Choices specific to our experiment

IV-B1 Object-to-point detection system

Given a frame $F_i^t$, the RGB channels are given to the real-time object detector YOLOv3 [36] with a zero confidence threshold, to detect as many objects as possible. We use the weights supplied by the authors, obtained from training on the COCO dataset [23], which can detect 80 classes of objects. Using the depth channel $D$, we compute the median 3D position of the points within each detected bounding box to estimate a point associated with each object.
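A sketch of this object-to-point step under our assumptions: the depth pixels inside a detection's bounding box are back-projected with a standard pinhole model and reduced to a per-axis median; the intrinsics and function names are ours, and YOLOv3 itself is not reproduced.

```python
import numpy as np

def bbox_to_point(depth: np.ndarray, bbox, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Estimate a 3D point (camera frame) for a detection from the depth pixels inside its bounding box.
    depth: HxW depth image in meters; bbox: (x_min, y_min, x_max, y_max) in pixels."""
    x0, y0, x1, y1 = bbox
    patch = depth[y0:y1, x0:x1]
    vs, us = np.mgrid[y0:y1, x0:x1]
    valid = patch > 0                      # discard missing depth readings
    z = patch[valid]
    if z.size == 0:
        return np.full(3, np.nan)          # no valid depth inside the box
    u, v = us[valid], vs[valid]
    # Back-project each pixel with the pinhole model, then take the per-axis median.
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    return np.median(pts, axis=0)
```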

IV-B2 Preassigning labels

In this work we use a straightforward approach to assign labels to different robots: each robot is responsible for a block of consecutive label indexes, so that all robots have the same number of labels. We leave the study of more complex assignment strategies for future work.
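A minimal sketch of this assignment, assuming label indexes 0..|L|-1 and robots numbered from 0; numpy's array_split handles the case where the classes do not divide evenly.

```python
import numpy as np

def assign_labels(num_classes: int, num_robots: int):
    """Assign consecutive blocks of label indexes to robots, as evenly as possible."""
    blocks = np.array_split(np.arange(num_classes), num_robots)
    return {robot_id: set(block.tolist()) for robot_id, block in enumerate(blocks)}

# Example: assign_labels(80, 10) gives robot 0 the labels {0, ..., 7}, robot 1 {8, ..., 15}, etc.
```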

IV-B3 Parameter choice

Based on our experiments, we allow each robot to return only a small number of the highest-matching frames when performing the partial semantic comparison (Section III-C), and we send the full constellation to only a few of the best-matching robots for the geometric and semantic check (Section III-F). These limits presented a good balance between performance and communication. We used a fixed value, in meters, for $\tau$ to approve a match while performing data association.

IV-C Evaluation

We simulate our method in a computer program processing all the frames of the freiburg3_long_office_household sequence. The program simulates all the communications which would appear between robots as well as the computations happening on each robot. It outputs the final loop closure scores for each pair of frames. We perform the evaluation of our system in centralized and decentralized cases to verify that our decentralized mechanism does not compromise performance.

In the centralized system, every constellation obtained can be compared to every other constellation registered during the previous frames of the sequence, and only the full comparison of two constellations is performed (III-F).

For the decentralized case, we use a similar approach to [8] by splitting the sequence into sub-trajectories, each assigned to a different robot. An example of a split sequence is shown in Fig. 3. Moreover, instead of running the method on multiple processes, we simulate the data each robot would have access to and would communicate. We evaluate both performance and the data exchanged between robots. The data used by our representation is shown in Table I.

Fig. 3: Example of the freiburg3_long_office_household trajectory split between 10 robots. Red lines indicate detected loop closures when the camera was facing the same area.
TABLE I: Data required for our representations in bytes.
  • Value in the semantic descriptor
  • Label index
  • Distributed semantic query (scales with the number of classes seen)
  • Robot index
  • Frame index
  • Returned robot and frame IDs
  • Position value (x, y, or z)
  • Full check query, i.e., the constellation (scales with the number of objects)

V Results

Fig. 4: Similarity matrices obtained for the centralized solution and the decentralized solution with 10 robots. The diagonals are empty due to the removal of neighboring frames during the search. Points show the pairs producing the best loop closure scores. Loop closure scores below a threshold were removed.

The similarity matrices obtained in both the centralized and decentralized cases are shown in Fig. 4. Overall, we can confirm that similar regions of frame pairs produce detected loop closures in both cases. The areas match what we expect from the sequence: the first desk is seen at the beginning, with various points of view, until frame 450, and then again after frame 1550. The second desk is visible, again with various points of view, between frames 450 and 1550, which also corresponds to a region with consistent matches. Most of the ground-truth scores (Section IV-A) associated with the detected loop closures are close to 1 and concentrated in particular regions, which confirms their validity. Yet, some smaller regions of detected closures are associated with a lower ground-truth score. Even though a high ground-truth score is a good confirmation of the match, a lower score does not necessarily mean that the matches are incorrect. A lower score means that the estimated regions the camera was facing in the two frames were considered to be far apart. However, we have no way of automatically and accurately checking whether parts of two frames correspond to the same scene. It is possible that the camera was aimed at areas we computed to be relatively far apart while part of the same scene was still visible in both frames. This could lead our system to recognize parts of constellations which are visible in both frames, even though the ground-truth score is low.

Fig. 5: Precision-recall curves obtained in both centralized and decentralized cases (10 robots), compared with NetVLAD.

The performance of our system is confirmed by the precision-recall curves shown in Fig. 5. Our centralized system performs very well, obtaining a high area under the curve (AUC). The decentralized mechanism slightly affects the performance, reducing the AUC by a small margin; this drop is comparable to the one caused by the decentralized mechanism in the solution based on NetVLAD [8]. Additionally, we evaluate a centralized solution using the NetVLAD implementation from DSLAM [6], which performs slightly better than our decentralized solution but not as well as our centralized one. The best recall obtained with the decentralized solution is not as high as the one obtained with the centralized solution. This can be explained as a cost of the decentralization mechanism: it only performs full checks on pairs of frames which were selected through a series of partial semantic comparisons, whereas the centralized version checks every pair. As a result, it is possible that some potentially good matches are ignored in the decentralized case because they do not stand out in the partial semantic comparisons. This can happen, for example, if two scenes contain common combinations of objects: such objects are seen often, so the two scenes do not produce particularly good partial semantic scores and are not explored further. In the centralized case, those scenes are still checked completely and produce high loop closure scores when considering data association and the geometric score.

We also compare the data used by our decentralized method with previous works [8, 7], as well as with the case where each robot broadcasts its constellation to every other robot, in Fig. 1. For previous works, the query size is independent of the sequence used, hence we can estimate the amount of data based on the mechanisms proposed. Clearly, all methods making use of a decentralization mechanism do not consume more data as robots are added. Moreover, we achieve query sizes of around 490 B, which is comparable to previous work using NetVLAD [8], requiring 512 B. However, note that the close score between our solution and the NetVLAD-based one is due to the fact that, in the latter, the descriptor is sent to only one robot. In cases where robots cannot communicate directly with every other robot, multi-hop communication may be necessary. Sending the 512-byte NetVLAD descriptor of one frame through several robots will then quickly increase the necessary bandwidth. In our solution, only very small messages are sent to every robot in the distributed semantic comparison and when performing the full comparison (Table I). In cases where multi-hop communication is required, our solution will therefore scale better with the number of hops.

Fig. 6: Distribution of classes of detected objects during the full sequence with 10 robots. Bars of the same color represent labels assigned to the same robot.

Fig. 6 shows that the classes observed in the sequence are unbalanced, which generates a higher load for specific robots with our solution. For example, bottles (index 39) are more frequent in our sequence. However, this distribution is meaningful and interpretable: specific relations between objects are common in the environment. For example, keyboards are usually close to screens, and chairs to tables. This may make our method fragile against semantic perceptual aliasing, but recent robust optimization techniques help in that regard [22]. Moreover, certain areas may be associated with specific objects, as acknowledged by research on place categorization [47]. We believe that using knowledge about the distribution of objects in the environment to improve load balancing is a promising direction for future work. We would also like to underline that the use of constellations can, in theory, allow relative pose estimation between two frames. This step is currently performed through visual features, which turns out to be the most bandwidth-consuming module of DSLAM [6]. Constellations have the potential to greatly improve this step. However, in our experiments, the accuracy and consistency of the object-to-point detector were not sufficient to produce relative pose estimates of satisfying precision.

VI Conclusions

We present a new method to perform decentralized place recognition using 3D constellations of objects. These constellations have the advantages of being compact, meaningful, and inheriting the robustness and invariance properties of the underlying object detectors. We verify that our solution maintains performance when used with a decentralized mechanism on a sequence of the TUM RGB-D SLAM dataset, and that it matches state-of-the-art results in terms of required bandwidth.

References

  • [1] R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic (2016) NetVLAD: CNN architecture for weakly supervised place recognition. Cited by: §II-A, §II-C.
  • [2] H. Bay, T. Tuytelaars, and L. Van Gool (2006) SURF: Speeded Up Robust Features. In Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz (Eds.), Vol. 3951, pp. 404–417 (en). External Links: ISBN 978-3-540-33832-1 978-3-540-33833-8, Document Cited by: §II-A.
  • [3] Z. Chen, F. Maffra, I. Sa, and M. Chli (2017-09) Only look once, mining distinctive landmarks from ConvNet for visual place recognition. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, pp. 9–16 (en). External Links: ISBN 978-1-5386-2682-5, Document Cited by: §II-A.
  • [4] S. Choudhary, L. Carlone, C. Nieto, J. Rogers, Z. Liu, H. I. Christensen, and F. Dellaert (2017) Multi Robot Object-Based SLAM. In 2016 International Symposium on Experimental Robotics, D. Kulić, Y. Nakamura, O. Khatib, and G. Venture (Eds.), Vol. 1, pp. 729–741 (en). External Links: ISBN 978-3-319-50114-7 978-3-319-50115-4, Document Cited by: §I.
  • [5] S. Choudhary, A. J. Trevor, H. I. Christensen, and F. Dellaert (2014) SLAM with object discovery, modeling and mapping. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pp. 1018–1025. Cited by: §II-B.
  • [6] T. Cieslewski, S. Choudhary, and D. Scaramuzza (2018-05) Data-Efficient Decentralized Visual SLAM. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2466–2473. External Links: Document Cited by: §I, §V, §V.
  • [7] T. Cieslewski and D. Scaramuzza (2017-04) Efficient Decentralized Visual Place Recognition Using a Distributed Inverted Index. IEEE Robotics and Automation Letters 2 (2), pp. 640–647 (en). External Links: ISSN 2377-3766, 2377-3774, Document Cited by: §I, §II-C, §III-C, §III-D, §III, §V.
  • [8] T. Cieslewski and D. Scaramuzza (2017-12) Efficient decentralized visual place recognition from full-image descriptors. In 2017 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), Los Angeles, CA, pp. 78–82 (en). External Links: ISBN 978-1-5090-6309-3, Document Cited by: §II-C, §III-D, §IV-C, §V, §V.
  • [9] A. Cunningham, V. Indelman, and F. Dellaert (2013-05) DDF-SAM 2.0: Consistent distributed smoothing and mapping. In 2013 IEEE International Conference on Robotics and Automation, pp. 5220–5227. External Links: Document Cited by: §II-C.
  • [10] L. Fei-Fei and P. Perona (2005) A Bayesian Hierarchical Model for Learning Natural Scene Categories. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 2, San Diego, CA, USA, pp. 524–531 (en). External Links: ISBN 978-0-7695-2372-9, Document Cited by: §II-A.
  • [11] R. Finman, T. Whelan, M. Kaess, and J. J. Leonard (2014-05) Efficient incremental map segmentation in dense RGB-D maps. In 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, pp. 5488–5494 (en). External Links: ISBN 978-1-4799-3685-4, Document Cited by: §II-B.
  • [12] R. Finman, T. Whelan, L. Paull, and J. Leonard (2014) Physical Words for Place Recognition in RGB-D Maps. In International Conference on Robotics and Automation Workshop on Place Recognition in Changing Environments, Cited by: §I.
  • [13] C. Forster, S. Lynen, L. Kneip, and D. Scaramuzza (2013-11) Collaborative monocular SLAM with multiple Micro Aerial Vehicles. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, pp. 3962–3970 (en). External Links: ISBN 978-1-4673-6358-7 978-1-4673-6357-0, Document Cited by: §I.
  • [14] A. Franchi, L. Freda, G. Oriolo, and M. Vendittelli (2009-04) The Sensor-based Random Graph Method for Cooperative Robot Exploration. IEEE/ASME Transactions on Mechatronics 14 (2), pp. 163–175. External Links: ISSN 1083-4435, Document Cited by: §II-C.
  • [15] K. M. Frey, T. J. Steiner, and J. P. How (2018-09) Efficient Constellation-Based Map-Merging for Semantic SLAM. arXiv:1809.09646 [cs] (en). Note: arXiv: 1809.09646Comment: Submitted to IEEE International Conference on Robotics and Automation (ICRA) 2019 Cited by: §I, §II-B.
  • [16] S. Garg, N. Suenderhauf, and M. Milford (2018-05) Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3645–3652. External Links: Document Cited by: §II-A.
  • [17] S. Garg, N. Suenderhauf, and M. Milfort (2018) LoST? appearance-invariant place recognition for opposite viewpoints using visual semantics. Proceedings of Robotics: Science and Systems XIV. Cited by: §II-A.
  • [18] A. Gawel, C. D. Don, R. Siegwart, J. Nieto, and C. Cadena (2018-07) X-View: Graph-Based Semantic Multi-View Localization. IEEE Robotics and Automation Letters 3 (3), pp. 1687–1694 (en). External Links: ISSN 2377-3766, 2377-3774, Document Cited by: §II-B.
  • [19] M. Hosseinzadeh, Y. Latif, T. Pham, N. Suenderhauf, and I. Reid (2018-04) Structure Aware SLAM using Quadrics and Planes. arXiv:1804.09111 [cs]. Note: arXiv: 1804.09111Comment: Accepted to ACCV 2018 Cited by: §II-B.
  • [20] M. Hosseinzadeh, K. Li, Y. Latif, and I. Reid (2018-09) Real-Time Monocular Object-Model Aware Sparse SLAM. arXiv:1809.09149 [cs]. Note: arXiv: 1809.09149Comment: Submitted to ICRA 2019 (for video demo look at https://youtu.be/UMWXd4sHONw) Cited by: §II-B.
  • [21] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Eds.), pp. 1097–1105. Cited by: §II-A.
  • [22] P. Lajoie, S. Hu, G. Beltrame, and L. Carlone (2019-04) Modeling Perceptual Aliasing in SLAM via Discrete–Continuous Graphical Models. IEEE Robotics and Automation Letters 4 (2), pp. 1232–1239. External Links: ISSN 2377-3766, Document Cited by: §V.
  • [23] T. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár (2014-05) Microsoft COCO: Common Objects in Context. arXiv:1405.0312 [cs]. Note: arXiv: 1405.0312Comment: 1) updated annotation pipeline description and figures; 2) added new section describing datasets splits; 3) updated author list Cited by: §IV-B1.
  • [24] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) SSD: Single Shot MultiBox Detector. In Computer Vision – ECCV 2016, Vol. 9905, pp. 21–37 (en). External Links: ISBN 978-3-319-46447-3 978-3-319-46448-0, Document Cited by: §II-A.
  • [25] J. Long, E. Shelhamer, and T. Darrell (2015-06) Fully convolutional networks for semantic segmentation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, pp. 3431–3440 (en). External Links: ISBN 978-1-4673-6964-0, Document Cited by: §II-A.
  • [26] D.G. Lowe (1999) Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, pp. 1150–1157 vol.2 (en). External Links: ISBN 978-0-7695-0164-2, Document Cited by: §II-A.
  • [27] S. Lowry, N. Suenderhauf, P. Newman, J. J. Leonard, D. Cox, P. Corke, and M. J. Milford (2016-02) Visual Place Recognition: A Survey. IEEE Transactions on Robotics 32 (1), pp. 1–19 (en). External Links: ISSN 1552-3098, 1941-0468, Document Cited by: §II-A.
  • [28] J. Mccormac, R. Clark, M. Bloesch, A. Davison, and S. Leutenegger (2018-09) Fusion++: Volumetric Object-Level SLAM. In 2018 International Conference on 3D Vision (3DV), pp. 32–41. External Links: Document Cited by: §II-B.
  • [29] E. Montijano, R. Aragues, and C. Sagüés (2013-12) Distributed Data Association in Robotic Networks With Cameras and Limited Communications. IEEE Transactions on Robotics 29 (6), pp. 1408–1423. External Links: ISSN 1552-3098, Document Cited by: §II-C.
  • [30] J. G. Morrison, D. Gálvez-López, and G. Sibley (2016) MOARSLAM: Multiple Operator Augmented RSLAM. In Distributed Autonomous Robotic Systems, N. Chong and Y. Cho (Eds.), Vol. 112, pp. 119–132. External Links: ISBN 978-4-431-55877-4 978-4-431-55879-8, Document Cited by: §I.
  • [31] T. Naseer, G. L. Oliveira, T. Brox, and W. Burgard (2017-05) Semantics-aware visual localization under challenging perceptual conditions. In 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, Singapore, pp. 2614–2620 (en). External Links: ISBN 978-1-5090-4633-1, Document Cited by: §II-A.
  • [32] L. Nicholson, M. Milford, and N. Sünderhauf (2019-01) QuadricSLAM: Dual Quadrics From Object Detections as Landmarks in Object-Oriented SLAM. IEEE Robotics and Automation Letters 4 (1), pp. 1–8. External Links: ISSN 2377-3766, Document Cited by: §II-B.
  • [33] A. Oliva and A. Torralba (2006) Chapter 2 Building the gist of a scene: the role of global image features in recognition. In Progress in Brain Research, Vol. 155, pp. 23–36 (en). External Links: ISBN 978-0-444-51927-6, Document Cited by: §II-A.
  • [34] B. Ramtoula, R. De Azambuja, and G. Beltrame (2019-08) Data-efficient decentralized place recognition with 3d constellations of objects [extended abstract]. In 2019 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), New Brunswick, NJ (en). Cited by: §I.
  • [35] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016-06) You Only Look Once: Unified, Real-Time Object Detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779–788 (en). External Links: ISBN 978-1-4673-8851-1, Document Cited by: §II-A.
  • [36] J. Redmon and A. Farhadi (2018-04) YOLOv3: An Incremental Improvement. arXiv:1804.02767 [cs]. Note: arXiv: 1804.02767Comment: Tech Report Cited by: §III-C, §IV-B1.
  • [37] S. Ren, K. He, R. Girshick, and J. Sun (2017-06) Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6), pp. 1137–1149 (en). External Links: ISSN 0162-8828, 2160-9292, Document Cited by: §II-A.
  • [38] L. Riazuelo, J. Civera, and J.M.M. Montiel (2014-04) C2TAM: A Cloud framework for cooperative tracking and mapping. Robotics and Autonomous Systems 62 (4), pp. 401–413 (en). External Links: ISSN 09218890, Document Cited by: §I.
  • [39] S. Saeedi, M. Trentini, M. Seto, and H. Li (2016-01) Multiple-Robot Simultaneous Localization and Mapping: A Review: Multiple-Robot Simultaneous Localization and Mapping. Journal of Field Robotics 33 (1), pp. 3–46 (en). External Links: ISSN 15564959, Document Cited by: §I, §I.
  • [40] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, and A. J. Davison (2013) Slam++: Simultaneous localisation and mapping at the level of objects. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 1352–1359. Cited by: §II-B.
  • [41] D. Scaramuzza and F. Fraundorfer (2011-12) Visual Odometry [Tutorial]. IEEE Robotics & Automation Magazine 18 (4), pp. 80–92 (en). External Links: ISSN 1070-9932, Document Cited by: §I.
  • [42] Sivic and Zisserman (2003) Video Google: a text retrieval approach to object matching in videos. In Proceedings Ninth IEEE International Conference on Computer Vision, Nice, France, pp. 1470–1477 vol.2 (en). External Links: ISBN 978-0-7695-1950-0, Document Cited by: §II-A.
  • [43] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers (2012-10) A benchmark for the evaluation of RGB-D SLAM systems. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, pp. 573–580 (en). External Links: ISBN 978-1-4673-1736-8 978-1-4673-1737-5 978-1-4673-1735-1, Document Cited by: §IV-A.
  • [44] N. Suenderhauf, S. Shirazi, F. Dayoub, B. Upcroft, and M. Milford (2015-09) On the performance of ConvNet features for place recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4297–4304. External Links: Document Cited by: §II-A.
  • [45] N. Suenderhauf, S. Shirazi, A. Jacobson, F. Dayoub, E. Pepperell, B. Upcroft, and M. Milford (2015-07) Place Recognition with ConvNet Landmarks: Viewpoint-Robust, Condition-Robust, Training-Free. In Robotics: Science and Systems XI, (en). External Links: ISBN 978-0-9923747-1-6, Document Cited by: §II-A.
  • [46] I. Ulrich and I. Nourbakhsh (2000) Appearance-based place recognition for topological localization. In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), Vol. 2, San Francisco, CA, USA, pp. 1023–1029 (en). External Links: ISBN 978-0-7803-5886-7, Document Cited by: §II-A.
  • [47] J. Wu, H. I. Christensen, and J. M. Rehg (2009) Visual Place Categorization: Problem, Dataset, and Algorithm. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS’09, Piscataway, NJ, USA, pp. 4763–4770. Note: event-place: St. Louis, MO, USA External Links: ISBN 978-1-4244-3803-7 Cited by: §V.