I Introduction
Using several robots instead of one can accelerate many tasks such as exploration and mapping, or enable heterogeneous teams of robots in which each robot has a specialization. Multi-robot simultaneous localization and mapping (SLAM) is an essential component of any team of robots operating in an environment without absolute positioning, as it relates the current state estimate of each robot to all present and past state estimates of all robots. Because cameras are cheap, lightweight and versatile sensors, we seek to implement a visual SLAM system. Visual multi-robot SLAM can be solved in a centralized manner, where a single entity collects all data and solves SLAM for all robots, but this relies on a central entity that is always reachable, never fails, and scales to the size of the robot team, both in computation and in bandwidth. Decentralized systems do not have this bottleneck, but are more challenging to implement.
Visual SLAM systems typically consist of three components: 1) a visual odometry algorithm that provides an initial state estimate, 2) a place recognition system that relates the currently observed scene to previously observed scenes, and 3) an optimization back-end that consistently integrates the place matches from the place recognition system with the full stream of state estimates. The end product is a map, and that map feeds back into place recognition and optimization. It is this feedback that makes decentralized SLAM challenging, especially if one is concerned about communication bandwidth. Visual odometry is not involved in the feedback loop and is thus trivial to distribute: it can run on each robot independently and feed its output to the rest of the decentralized SLAM system.
In previous work, we have separately proposed state-of-the-art decentralized place recognition [1] and optimization [2] systems. Both systems focus on data efficiency: they achieve performance similar to a centralized system while minimizing the data that needs to be exchanged. In this work, we integrate both systems, along with a data-efficient method for visual feature association [3], into a full decentralized visual SLAM system. The system is evaluated on publicly available data, and the code is provided open source.
II Related Work
In the following, we independently consider related work pertaining to decentralized optimization and to decentralized place recognition. We conclude with a review of existing integrated decentralized SLAM systems.
II-A Decentralized Optimization
Distributed estimation in multi-robot systems is an active field of research, with special attention being paid to communication constraints [4], heterogeneity [5, 6], consistency [7], and robust data association [8]. The literature offers distributed implementations of different estimation techniques, including Kalman filters [9], information filters [10], particle filters [11, 12], and distributed smoothers [13, 2]. In multi-robot systems, maximum-likelihood trajectory estimation can be performed by collecting all measurements at a centralized inference engine, which performs the optimization [14, 15, 5, 16, 8]. However, it is not practical to collect all measurements at a single inference engine, since doing so requires a large communication bandwidth. Furthermore, solving trajectory estimation over a large team of robots can be too demanding for a single computational unit.
These reasons triggered interest in distributed trajectory estimation, in which the robots exploit only local communication in order to reach a consensus on the trajectory estimate [17, 18, 19, 20, 21]. Recently, Cunningham et al. [21, 13] used Gaussian elimination and developed an approach, called DDF-SAM, in which robots exchange Gaussian marginals over the separators (i.e., the variables observed by multiple robots).
While Gaussian elimination has become a popular approach, it has two major shortcomings. First, the marginals to be exchanged among the robots are dense, hence the communication cost is quadratic in the number of separators. This motivated the use of sparsification techniques [4]. Second, Gaussian elimination is performed on a linearized version of the problem, hence these approaches require good linearization points and complex bookkeeping to ensure consistent linearization across the robots [13]. The need for a linearization point also characterizes gradient-based techniques [20]. An alternative to Gaussian elimination is the Gauss-Seidel approach of Choudhary et al. [2], whose communication is linear in the number of separators. The distributed Gauss-Seidel approach only requires communicating the latest estimates of the poses involved in inter-robot measurements, and therefore needs neither consistent linearization nor complex bookkeeping. It also avoids double counting of information, since it does not communicate marginals.
II-B Decentralized Place Recognition and Pose Estimation
In order to solve decentralized SLAM, measurements that relate poses between different robots (inter-robot measurements) need to be established. A popular type of inter-robot measurement is the direct measurement of another robot [22], such as time-of-flight distance measurements [4] or vision-based relative pose estimation [15]. The latter is typically aided by markers deployed on the robots. To the best of our knowledge, most types of direct measurements require specialized hardware (which can precisely measure time-of-flight, for example, or visual markers). Furthermore, many types of direct measurements require the robots to be in line of sight, which, in many environments, imposes a major limitation on the set of relative poses that can be established, see Fig. 2. Limiting relative measurements in this way translates into limitations for higher-level applications that would like to use decentralized SLAM as a tool.
We prefer, instead, to use indirect relative measurements. These are established through registration of observations that two robots make of the same scene [23, 3]. This requires no additional hardware: the sensors already used for odometry can be reused for inter-robot measurements. Since establishing inter-robot measurements in this way is equivalent to establishing loop closures in single-robot SLAM, indirect measurements are widely used in centralized visual SLAM algorithms [24, 25, 26]. The establishment of indirect measurements is still limited by the communication range of the robots, but in practice it is feasible to establish a communication range that exceeds line of sight. Furthermore, if two robots cannot communicate directly, they might still communicate through a multi-hop network. A disadvantage of indirect measurements, then, is that they require bandwidth to communicate. If all robots share all data with every other robot, the amount of data exchanged is quadratic in the number of robots. One way to mitigate the bandwidth is to exchange data only with immediate neighbours, but this has the same reduced recall as discussed in Fig. 2. Another way to reduce bandwidth is to use compact representations that allow inter-robot measurements to be established with a minimal amount of data exchange. Visual feature-based maps can be compressed by compressing feature descriptors [27], substituting feature descriptors with the corresponding cluster centers of a visual vocabulary [3], removing landmarks that prove not to be necessary for localization [28, 29], or using compact full-image descriptors [30]. In previous work, we have shown another way to reduce the bandwidth of place recognition: data exchange can be reduced by a factor of the robot count if the problem can be cast as key-value lookup [31]. In this work, we use [1], an advanced version of [31] which at the same time makes use of a compact, state-of-the-art place representation, NetVLAD [30].
The method of [3] is used to associate visual features for relative pose estimation.
II-C Integrated Decentralized SLAM
Several systems have previously combined decentralized optimization and relative pose estimation into decentralized SLAM. Several optimization works cited in Section II-A, such as [4, 2], use direct relative measurements and can thus be considered full decentralized SLAM systems, with the recall limitation illustrated in Fig. 2. DDF-SAM [21] has been extended to a decentralized SLAM system in [32] by adding a data association system which matches planar landmark points. This system has been validated in the real world, but the landmarks consisted of manually placed poles that were detected by a laser range finder. [16] establishes relative poses by exchanging and aligning 2D laser scan edges. [2] has been extended to a decentralized SLAM system in [33], where data association is provided by identification and pose estimation of commonly observed objects. This approach relies on the existence of unique, pre-trained objects in the environment. The majority of decentralized SLAM systems we have found in our literature review are not vision-based. A first decentralized visual SLAM system has been proposed in [34], but it relies on exchanging the full maps between robots. Furthermore, it is only evaluated on an L-shaped trajectory and with only two robots, with images obtained from a simulated environment. In contrast, we evaluate our system on a large-scale scenario, with more robots, on real data, and our system uses state-of-the-art algorithms which exchange far less than the full maps.
III Methodology
The proposed decentralized SLAM system consists of robots which each execute the system illustrated in Fig. 3. The images taken by a camera are fed to both NetVLAD [30] and a visual odometry (VO). NetVLAD produces a compact image representation used for decentralized visual place recognition (DVPR) [1], which itself produces candidate place recognition matches. The VO produces a sparse feature map that can be used for localization. The relative pose estimation module (RelPose) extracts the parts of the sparse map that are observed at the candidate place matches and uses them to establish relative poses between the robots' trajectories, or to reject candidate matches. The decentralized optimization module (DOpt) [2] obtains the initial guess from the map, intra-robot relative pose measurements from the VO and inter-robot relative pose measurements from RelPose, and updates the map. The system works continuously as new images are acquired.
III-A Intra-Robot Measurements
Intra-robot measurements are obtained from a visual odometry (VO) algorithm. Any VO algorithm can be used in our system as long as it fulfills the following requirements:
1) It produces a pose graph. The VO of robot α defines frames F_α = {f_α^1, f_α^2, ...} and provides the intra-robot measurements

M_α = { m_α^(t,t+1) | f_α^t, f_α^(t+1) ∈ F_α }  (1)
m_α^(t,t+1) = ( R_α^(t,t+1), p_α^(t,t+1) ) ∈ SO(3) × ℝ³  (2)

describing pose transforms between subsequent frames.
2) Each pose is associated to an image to be used for place recognition. We denote the image associated to frame f_α^t by I_α^t. Our current system does not yet take full advantage of a multi-camera setup, which could be a useful future extension. The image capture time is denoted as τ_α^t.¹

¹ A common time reference is hard to establish, and, strictly speaking, not well defined, so τ should be considered a Lamport timestamp [35]. τ is not actually used in the implementation, so this detail is not particularly important.
3) The VO provides absolute scale, as implied by (2). Monocular VO typically estimates the pose only up to scale, due to scale ambiguity. Scale can be obtained from a stereo camera or by fusion with inertial measurements.
In our implementation, we use ORB-SLAM [36] in stereo configuration for visual odometry.
III-B Inter-Robot Measurements
The inter-robot measurements are

M_αβ = { m_αβ^(t,t') }  (3)

with m_αβ^(t,t') defined like (2). They are established in two phases: given its latest image I_α^t, robot α first tries to determine whether there is any image I_β^t' of the same scene previously captured by another robot β. This is achieved using decentralized visual place recognition, detailed in Section III-C. Then, if such an I_β^t' exists, α tries to establish m_αβ^(t,t') using the method detailed in Section III-D. This two-phase approach is pursued because, as we will see, establishing relative poses consumes more data than recognizing whether a matching image exists. The first phase reduces the amount of data to be exchanged by determining which robot to send the relative pose query to, and by determining whether it is worthwhile to send it in the first place, using a minimal amount of data.
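The two-phase flow can be summarized in a few lines of code. This is a minimal sketch with hypothetical function names (place_query, pose_query) standing in for the actual DVPR and RelPose message exchanges:

```python
def establish_inter_robot_measurement(v, place_query, pose_query):
    """Two-phase establishment of an inter-robot measurement.

    v: compact place descriptor (e.g. a NetVLAD vector) of the latest image.
    place_query(v) -> (robot, frame) or None: cheap first phase
    (place recognition, one descriptor per frame).
    pose_query(robot, frame) -> relative pose or None: expensive second
    phase (landmarks and word identifiers, only sent on demand).
    """
    match = place_query(v)           # phase 1: decide IF and WHOM to ask
    if match is None:
        return None                  # no candidate: skip the expensive phase
    robot, frame = match
    return pose_query(robot, frame)  # phase 2: geometric verification
```

The point of the split is visible in the control flow: the per-frame cost is a single descriptor, and the heavier payload is only ever sent to one specific robot, and only when a candidate match exists.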
III-C Decentralized Visual Place Recognition
The first phase is accomplished by matching the image I_α^t to the images previously captured by the other robots, { I_β^t' | β ≠ α, τ_β^t' < τ_α^t }, using the full-image descriptor NetVLAD [30]. NetVLAD uses a neural network to calculate a vector v_α^t ∈ ℝ^d from I_α^t. In rough terms, it is trained such that ‖ v_α^t − v_β^t' ‖ is lower if I_α^t and I_β^t' observe the same scene than if they observe different scenes. For more details, see [30]. NetVLAD is well-suited for decentralized place recognition, as v_α^t is all that robot α needs to send to robot β to establish whether a matching image exists among the images of β. Concretely, we seek the image I_β^t' whose NetVLAD vector has the shortest distance to v_α^t and which satisfies

‖ v_α^t − v_β^t' ‖ < θ  (4)

where θ is a threshold parameter. I_β^t' can be found by sending v_α^t to all other robots, but we use a method that requires less data exchange by a factor of the robot count [1], which is particularly interesting for scaling to large robot teams. In this method, each robot δ is pre-assigned a cluster center c_δ, and each robot knows the cluster centers of all other robots. The query v_α^t is sent only to the robot δ whose cluster center is closest to v_α^t, and δ replies with the identifier of

(β, t') = argmin_{(γ,s) : v_γ^s ∈ V_δ} ‖ v_α^t − v_γ^s ‖  (5)

if v_β^t' also satisfies (4), where V_δ is the Voronoi cell of c_δ. The query is executed for every frame of every robot, so at time τ_α^t this query has already been executed for all previously captured images, and δ is aware of all { v_γ^s ∈ V_δ } by virtue of having received the previous queries. It can thus provide (5) without any extra data exchange. This method approximates the exhaustive search; the two coincide whenever the best match lies in the Voronoi cell that v_α^t is assigned to.
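A minimal sketch of this key-value routing follows. The class and member names are hypothetical, and networking, timestamps and concurrency are stripped away; only the routing logic is kept: a query goes to the robot owning the nearest cluster center, which answers from the queries it has already received.

```python
import numpy as np

def assign_cluster(v, centers):
    """Index of the cluster center closest to NetVLAD vector v."""
    return int(np.argmin(np.linalg.norm(centers - v, axis=1)))

class DVPRNode:
    """One robot's decentralized place-recognition endpoint (simplified)."""
    def __init__(self, robot_id, centers, theta):
        self.robot_id = robot_id  # this robot owns centers[robot_id]'s cell
        self.centers = centers    # cluster centers, known to all robots
        self.theta = theta        # acceptance threshold of Eq. (4)
        self.db = []              # (robot, frame, vector) triples seen so far

    def query(self, sender, frame, v):
        """Answer a query for vector v; returns (robot, frame) or None."""
        best, best_d = None, self.theta
        for (r, f, w) in self.db:
            d = np.linalg.norm(v - w)
            if r != sender and d < best_d:   # only images of OTHER robots
                best, best_d = (r, f), d
        # previous queries double as database entries: no extra exchange
        self.db.append((sender, frame, v))
        return best
```

A query for vector v is routed with `target = assign_cluster(v, centers)` and answered by `nodes[target].query(sender, frame, v)`; because every frame of every robot is queried this way, the answering node's database fills up as a side effect of the queries themselves.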
In the method, the cluster centers are determined using k-means clustering on a training dataset, in our case the Oxford RobotCar dataset [37]. The properties of the method are discussed in more detail in [1].

III-D Relative Pose Estimation
Once and if a match (β, t') has been established, α sends a relative pose estimation request to β. The relative pose estimation (RelPose) can and should depend heavily on the visual odometry (VO), since it can benefit from reusing by-products of the VO, such as the 3D positions of landmarks. In our method, we use ORB-SLAM [36] for VO and, accordingly, our RelPose implementation imitates its loop-closure method. However, it does not imitate [36] exactly, since the latter makes use of the data surrounding the match, which in decentralized RelPose would translate into a lot of data exchange. We want to avoid this, and thus we combine the relative pose estimation method from [36] with the visual data association method from [3], which reduces the data to be sent from β to α to a set of landmarks and visual word identifiers:

D_β^t' = { (w_k, p_k) }  (6)

where k indexes the keypoints observed in I_β^t', w_k is the identifier of the visual word associated to keypoint k, and p_k is the 3D position of the landmark corresponding to keypoint k, expressed in the camera frame of I_β^t'. Visual words represent Voronoi cells in the keypoint descriptor space, here the space of ORB descriptors [38]. They provide a quantization of the high-dimensional descriptor space, and since similar descriptors are likely to be assigned to the same word, word identifiers can be used for matching keypoints between images [3]. Since this is less precise than using raw descriptors, we provide an option to exchange full descriptors instead. Given D_β^t', α establishes pairs of matching keypoints (k, k'). Given the corresponding pairs of landmark positions (p_k, p_k'), RANSAC is used to determine an initial estimate of m_αβ^(t,t') and the corresponding inlier matches. A place match is rejected if the inlier count is too low. Otherwise, the inlier landmark position pairs are used to refine m_αβ^(t,t') = (R, p) to the pose that minimizes a robust 3D registration error

(R, p) = argmin_{R,p} Σ_{(k,k') inliers} ρ( ‖ p_k − (R p_k' + p) ‖² )  (7)

where ρ is a robust loss function used to reduce the weight of remaining keypoint match outliers.
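The association and registration steps can be sketched as follows. This is not the ORB-SLAM implementation: it substitutes a generic Kabsch/Horn SVD alignment and a plain RANSAC loop, with hypothetical names and thresholds, to illustrate word-ID matching followed by robust 3D-3D registration.

```python
import numpy as np

def match_by_word(words_a, words_b):
    """Pair keypoints whose visual-word identifiers agree (first match wins)."""
    lookup = {}
    for j, w in enumerate(words_b):
        lookup.setdefault(w, j)
    return [(i, lookup[w]) for i, w in enumerate(words_a) if w in lookup]

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t (Kabsch/Horn via SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1., 1., np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_relative_pose(P, Q, iters=100, tol=0.05, min_inliers=3, seed=0):
    """RANSAC over 3-point samples; returns (R, t, inlier_mask) or None."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inl = err < tol
        if best is None or inl.sum() > best[2].sum():
            best = (R, t, inl)
    if best is None or best[2].sum() < min_inliers:
        return None  # reject the place match
    # plain least-squares refinement on inliers (Eq. (7) uses a robust loss)
    R, t = rigid_transform(P[best[2]], Q[best[2]])
    return R, t, best[2]
```

Here P and Q are the matched landmark positions in the two camera frames; the rejection branch corresponds to discarding a place match whose geometric verification fails.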
To avoid relative pose outliers, we use a consistency check: a new measurement m_αβ^(t,t') is only accepted if either a) there is another already-accepted measurement m_αβ^(s,s') such that f_α^s is within a given distance of f_α^t and the two measurements are consistent, or b) no such accepted measurement exists, but a previous candidate fulfills the same conditions. m_αβ^(t,t') and m_αβ^(s,s') are considered consistent if the two ways of calculating the relative position between f_α^t and f_β^t' (directly with m_αβ^(t,t'), or via the odometry to f_α^s, m_αβ^(s,s') and the odometry from f_β^s') result in positions that are within a given distance of each other.
RelPose still constitutes a significant amount of data, so we introduce a parameter, the minimum distance between geometric verifications, which allows us to throttle geometric verifications. Geometric verifications are skipped for matches which are close to already-established relative poses: I_α^t is only sent for geometric verification to β if there is no other frame of α already matched to a frame of β within that minimum distance of f_α^t. This assumes that the VO and RelPose are consistent enough that the resulting subset of retrievable relative poses suffices to produce a globally consistent state estimate.
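The throttling reduces to a simple predicate over the distances to already-verified frames. A sketch with hypothetical names, where min_dist is the minimum distance between geometric verifications:

```python
def needs_geometric_verification(candidate_pos, verified_positions, min_dist):
    """True if no accepted relative pose already exists near the candidate.

    candidate_pos: (x, y, z) position estimate of the frame to be verified.
    verified_positions: positions of this robot's frames that already have
    an accepted relative pose to the queried robot.
    """
    return all(
        ((candidate_pos[0] - p[0]) ** 2
         + (candidate_pos[1] - p[1]) ** 2
         + (candidate_pos[2] - p[2]) ** 2) ** 0.5 >= min_dist
        for p in verified_positions)
```

Frames failing the predicate reuse the nearby verified relative pose instead of triggering a new (and comparatively expensive) data exchange.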
III-E Decentralized Optimization
In our decentralized optimization, each robot estimates its own trajectory using the available measurements, and leveraging occasional communication with other robots. The pose of frame f_α^t is denoted x_α^t = (R_α^t, p_α^t), where R_α^t ∈ SO(3) represents the 3D rotation and p_α^t ∈ ℝ³ represents the 3D translation. The trajectory of robot α is then denoted as x_α = [x_α^1, x_α^2, ...].
While our classification of the measurements (inter vs. intra) is based on the robots involved in the measurement process, all relative measurements can be framed within the same measurement model. Since all measurements correspond to noisy observations of the relative pose between a pair of poses, say x_α^t and x_β^t', a general measurement model is

R̄_αβ^(t,t') = (R_α^t)ᵀ R_β^t' R_ε,   p̄_αβ^(t,t') = (R_α^t)ᵀ (p_β^t' − p_α^t) + p_ε  (8)

where the relative pose measurement m_αβ^(t,t') = (R̄_αβ^(t,t'), p̄_αβ^(t,t')) includes the relative rotation measurement R̄_αβ^(t,t'), which describes the attitude of x_β^t' with respect to the reference frame of x_α^t, "plus" a random rotation R_ε (measurement noise), and the relative position measurement p̄_αβ^(t,t'), which describes the position of x_β^t' in the reference frame of x_α^t, plus random noise p_ε.
Assuming that the translation noise is distributed according to a zero-mean Gaussian with information matrix ω_p² I₃, while the rotation noise follows a Von Mises distribution with concentration parameter ω_R², it is possible to demonstrate [39] that the maximum-likelihood estimate can be computed as the solution of the following optimization problem:

min_{x} Σ_{m ∈ M} ω_p² ‖ p_β^t' − p_α^t − R_α^t p̄_αβ^(t,t') ‖² + ω_R² ‖ R_β^t' − R_α^t R̄_αβ^(t,t') ‖_F²  (9)
As proposed in [33], we use a two-stage approach to solve the optimization problem in a distributed manner: we first solve a relaxed version of (9) to obtain an estimate of the rotations of all robots, and then we recover the full poses and top off the result with a Gauss-Newton (GN) iteration.
In both stages, we need to solve a linear system where the unknown vector can be partitioned into subvectors, such that each subvector contains the variables associated to a single robot in the team. For instance, in the first stage we can partition the vector of rotations as r = [r_α, r_β, ...], such that r_α describes the rotations of robot α. Similarly, in the second stage we can partition the vector of linearized poses x = [x_α, x_β, ...], such that x_α describes the trajectory of robot α. Therefore, the linear systems in the two stages can be framed in the general form

A y = b  (10)

where we want to compute the vector y given the (known) block matrix A and the (known) block vector b, partitioned according to the block structure of y.
The linear system (10) can be rewritten as

Σ_β A_αβ y_β = b_α   ∀α.

Taking the contribution of y_α out of the sum, we get

A_αα y_α = b_α − Σ_{β≠α} A_αβ y_β   ∀α  (11)

The set of equations (11) is the same as the original system (10), but clearly exposes the contribution of the variables associated to each robot.
The distributed Gauss-Seidel algorithm [40] starts at an arbitrary initial estimate y^(0) and applies the following update rule, for each robot α:

y_α^(k+1) = (A_αα)⁻¹ ( b_α − Σ_{β ∈ P_α} A_αβ y_β^(k+1) − Σ_{β ∈ S_α} A_αβ y_β^(k) )  (12)

where P_α is the set of robots that already computed the (k+1)-th estimate, while S_α is the set of robots that still have to perform the update (12), excluding α (intuitively: each robot uses the latest available estimate of every other robot). If the sequence produced by the iterations (12) converges to a fixed point, then that point satisfies (11), and hence solves the original linear system (10).
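A sketch of the block update (12) follows. The synchronous round-robin schedule is a simplification: in the real system each robot updates asynchronously with whatever neighbour estimates are the latest available. Names are hypothetical.

```python
import numpy as np

def distributed_gauss_seidel(A_blocks, b_blocks, iters=50):
    """Block Gauss-Seidel on the partitioned system (10)-(11).

    A_blocks[(a, b)]: block coupling robots a and b; b_blocks[a]: robot a's
    right-hand side. Each robot only ever inverts its own diagonal block
    and otherwise just receives the latest neighbour estimates y_b.
    """
    robots = sorted(b_blocks)
    y = {a: np.zeros_like(b_blocks[a]) for a in robots}
    for _ in range(iters):
        for a in robots:                 # robots take turns, as in Eq. (12)
            rhs = b_blocks[a].copy()
            for b in robots:
                if b != a and (a, b) in A_blocks:
                    rhs -= A_blocks[(a, b)] @ y[b]  # latest available estimate
            y[a] = np.linalg.solve(A_blocks[(a, a)], rhs)
    return y
```

Note that the off-diagonal products A_αβ y_β only involve poses shared between robot pairs, which is why the communication cost is linear in the number of separators.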
III-F Making Optimization Work Continuously
Since the optimization consists of two stages, and the map estimate at the end of the first stage and between iterations is not a consistent map state, the map estimate handled by DOpt cannot be considered the best estimate at any given time. Furthermore, the optimization is not laid out to incorporate new measurements between iterations. As a consequence, the map estimate is concurrently modified by the VO and DOpt, which would lead to inconsistencies if left unchecked. This can be solved using optimistic concurrency control (OCC). In OCC, each of the processes that concurrently operate on some data operates on its own copy. Once a process is done, its copy is merged back into the reference state. OCC is a central concept in version control tools such as SVN and Git, and has recently been applied to distributed robotic systems in [41] and [42]. In our method, the handling of VO and place recognition is considered one process, and decentralized optimization another. We let VO and place recognition operate directly on the reference state. Decentralized optimization, as described in Section III-E, is executed episodically. At the beginning of each episode, the robots agree on a reference time τ_ref. During the episode, only the poses x_α^t which satisfy τ_α^t ≤ τ_ref are optimized. Once the optimization has finished, these poses in the reference state are replaced with the newly optimized estimates x̂_α^t. The remaining poses, for which τ_α^t > τ_ref, are corrected with

x̂_α^t = x̂_α^t̄ (x̌_α^t̄)⁻¹ x̌_α^t  (13)

which satisfies

(x̂_α^t̄)⁻¹ x̂_α^t = (x̌_α^t̄)⁻¹ x̌_α^t  (14)

where x̌ denotes pose estimates before the optimization and t̄ refers to the last frame recorded before τ_ref.
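The correction amounts to re-anchoring the un-optimized tail of the trajectory on the last optimized pose, so that the VO relative transforms of the tail are preserved. A sketch with poses as 4x4 homogeneous matrices and hypothetical names (frames keyed by integer time):

```python
import numpy as np

def propagate_correction(optimized, odometry, t_ref):
    """Re-anchor un-optimized poses after an optimization episode.

    optimized: dict frame -> 4x4 optimized pose, for frames t <= t_ref.
    odometry: dict frame -> 4x4 VO pose estimate, for all frames.
    Poses after t_ref keep their VO relative transforms but are expressed
    relative to the last optimized pose.
    """
    t_bar = max(t for t in optimized if t <= t_ref)  # last optimized frame
    # anchor maps VO coordinates onto optimized coordinates at t_bar
    anchor = optimized[t_bar] @ np.linalg.inv(odometry[t_bar])
    out = dict(optimized)
    for t, x in odometry.items():
        if t > t_ref:
            out[t] = anchor @ x
    return out
```

Because the same rigid anchor is applied to every tail pose, the relative transform between any two tail frames (and between the tail and the last optimized frame) is exactly the VO one.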
IV Experiments
In the experiments, we seek to determine how much data the individual components and the overall system use, and how accurate the state estimate is as time progresses. This gives us an idea of the applicability and scalability of the system, and lets us identify which components are the most worthwhile to optimize in the future.
IV-A Data Exchange Evaluation
The components of our system that exchange data are decentralized visual place recognition (DVPR), relative pose estimation (RelPose) and decentralized optimization (DOpt). Each of these components could potentially be further developed to use less data; thus, we record data transfer for each component individually. Per event, the transferred data consists of:

DVPR: one NetVLAD vector per query,  (15)
RelPose: D_β^t' from (6) per geometric verification, plus the poses sent for the relative pose consistency check,  (16)
DOpt: the estimates of the separator poses exchanged at each iteration.  (17)

The size of the data is calculated accordingly from the sizes of the constituent elements: poses, rotations, NetVLAD vectors, robot indices, frame indices, visual word indices and landmark positions.
IV-B Accuracy Evaluation
As is typically done for SLAM systems, we evaluate the accuracy of our system. Unlike other SLAM system evaluations, however, we evaluate the evolution of accuracy over time, which allows us to better characterize the system. Accuracy is measured using the average trajectory error (ATE). The ATE requires alignment to the ground truth trajectory, which is only meaningful for connected components; thus, we report the ATE for each connected component individually.
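A sketch of the ATE computation for one connected component; we assume here the common convention of a rigid (rotation plus translation) least-squares alignment via Horn's method, followed by the mean of the residual norms:

```python
import numpy as np

def ate(estimate, ground_truth):
    """Average trajectory error after least-squares rigid alignment.

    estimate, ground_truth: (N, 3) arrays of corresponding positions.
    The estimate is first aligned to the ground truth with the best-fitting
    rotation and translation, then the mean residual norm is reported.
    """
    ce, cg = estimate.mean(axis=0), ground_truth.mean(axis=0)
    H = (estimate - ce).T @ (ground_truth - cg)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1., 1., np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T               # best-fitting rotation (Horn/Kabsch)
    t = cg - R @ ce                  # best-fitting translation
    aligned = estimate @ R.T + t
    return float(np.mean(np.linalg.norm(aligned - ground_truth, axis=1)))
```

The alignment is why the metric is only meaningful per connected component: positions of robots that are not yet connected by inter-robot measurements live in unrelated reference frames and cannot share one alignment.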
IV-C Parameter Studies
A parameter that we find of particular interest is the minimum distance between geometric verifications introduced in Section III-D, which determines the distance between an established relative pose and the next frame of α to be sent to β for geometric verification. It can significantly reduce the data to be transmitted for RelPose and DOpt, but it is important to verify that this does not come at the cost of accuracy. We vary this parameter and observe how it affects data transmission and ATE.
Another interesting parameter is the used NetVLAD dimension d. Since the last layer of NetVLAD performs principal component analysis, d can be tuned arbitrarily, up to the full output dimension of the network. The lower d, the lower the DVPR traffic, but also the lower its precision.

V Results
We evaluate our system on the KITTI dataset [43]. Like in [31], we split the dataset into parts to simulate trajectories recorded by different robots.
Fig. 4 shows the used subtrajectories of KITTI 00 and the end result of running our system with the parameter values shown in Table I.
The corresponding data transmission is shown in Fig. 5. Note that we plot the cumulative data transmission up to a given time. This directly exhibits the full amount of transmitted data and makes time-varying bandwidth easy to see (the slope). DOpt and RelPose require the most data transmission, but as we will see, DOpt traffic can be significantly reduced, while RelPose traffic remains high.
Fig. 6 shows the total data transmission between pairs of robots. This data transmission is highly uneven and could require consideration in multi-hop networks, but that is outside the scope of this paper. DOpt and RelPose traffic highly depend on the trajectory overlap between robots: the higher the overlap, the more data is exchanged by these components. Decentralized visual place recognition traffic is also uneven, but in a different way; its unevenness is due to the clustering that assigns NetVLAD cluster centers to robots. What we see in Fig. 6 is that robots 2, 6 and 7 are assigned the cluster centers that represent the majority of NetVLAD descriptors in KITTI 00. This unevenness can be mitigated either by randomly assigning several clusters per robot, or by training the clustering on a dataset that is very similar to the data at hand in terms of the scope of visual appearances of keypoints. A detailed study of this problem is provided in [1].
Fig. 7 illustrates how the trajectories of the different robots fuse as time progresses, as well as the accuracy of the resulting connected components. Seven merge events of connected components are apparent around seconds 15, 23, 32, 33, 39, 46 and 47, the remaining two merges occur at the beginning of the experiment and are not visible in Fig. 7. At each merge event, there is a jump in connected component size and the corresponding lines in the plot get fused. In early merge events, the ATE grows as the drift of individual components is compounded with noise in relative pose measurements. This is most pronounced at the merge event around second 39. In later merge events, and as more relative measurements are added, loops are closed and the ATE decreases again, until it stabilizes around a value of 4 meters. This is well below 1% of the overall trajectory length.
Fig. 8 shows how increasing the minimum distance between geometric verifications can significantly reduce bandwidth requirements without negatively affecting accuracy. Fig. 8(a) shows the place matches that remain, and Fig. 8(b) the data transmission over time, when this throttling is enabled.
Fig. 9 shows how varying the NetVLAD dimension affects the size of the transmitted data. Unsurprisingly, DVPR traffic is quasi-linear in the NetVLAD dimension. However, too low a NetVLAD dimension results in lower precision, which in turn results in more failed relative pose estimates. Since the geometric verification throttling only suppresses relative pose queries if a previous relative pose query has been successful, this leads to a significant increase of RelPose data transmission. Conversely, increasing the NetVLAD dimension beyond a certain point does not increase precision significantly. In Fig. 9 we can see where an efficient trade-off is reached.
We also evaluated our system on one of the sequences of the MIT Stata Center dataset [44]. Fig. 10 shows the ground-truth trajectory. Similarly to KITTI 00, we split the sequence into parts to simulate trajectories recorded by different robots. Fig. 11 shows the sub-trajectories estimated with our method before and after the final loop closure. As we can see, the final estimate given by our method is not visually similar to the ground truth (Fig. 10). The main reason for this mismatch is that the graph structure of the MIT Stata Center dataset is closer to a chain topology. Due to the chain topology, decentralized optimization requires more iterations to reach consensus among the robot trajectories at either end of the chain, and therefore struggles to converge to the centralized estimate. Finding sub-trajectories which follow graph topologies favorable for decentralized optimization (grid rather than chain) could be a useful future extension.
VI Conclusion
We have presented a new integrated decentralized visual SLAM algorithm that is based on state-of-the-art components. The system has been characterized using publicly available data, and we have explored how data transmission can be reduced to a minimum. Based on our results, we believe that future developments of decentralized visual SLAM will focus, on the one hand, on even more data-efficient data association, and on the other hand, on more robust decentralized optimization.
References
 [1] T. Cieslewski and D. Scaramuzza, "Efficient decentralized visual place recognition from full-image descriptors," in IEEE Intl. Symp. on Multi-Robot and Multi-Agent Systems (MRS), 2017.
 [2] S. Choudhary, L. Carlone, C. Nieto, J. Rogers, H. I. Christensen, and F. Dellaert, "Distributed trajectory estimation with privacy and communication constraints: a two-stage distributed Gauss-Seidel approach," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2016.
 [3] D. Tardioli, E. Montijano, and A. R. Mosteo, "Visual data association in narrow-bandwidth networks," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2015.
 [4] L. Paull, G. Huang, M. Seto, and J. Leonard, "Communication-constrained multi-AUV cooperative SLAM," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2015.
 [5] T. Bailey, M. Bryson, H. Mu, J. Vial, L. McCalman, and H. Durrant-Whyte, "Decentralised cooperative localisation for heterogeneous teams of mobile robots," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2011.
 [6] V. Indelman, P. Gurfil, E. Rivlin, and H. Rotstein, "Graph-based distributed cooperative navigation for a general multi-robot measurement model," Intl. J. of Robotics Research, vol. 31, no. 9, August 2012.
 [7] A. Bahr, M. Walter, and J. Leonard, "Consistent cooperative localization," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2009.

 [8] J. Dong, E. Nelson, V. Indelman, N. Michael, and F. Dellaert, "Distributed real-time cooperative localization and mapping using an uncertainty-aware expectation maximization approach," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2015.
 [9] S. Roumeliotis and G. Bekey, "Distributed multirobot localization," IEEE Trans. Robot. Automat., August 2002.
 [10] S. Thrun and Y. Liu, "Multi-robot SLAM with sparse extended information filters," in Intl. Symp. of Robotics Research (ISRR), Siena, Italy, 2003.
 [11] A. Howard, "Multi-robot simultaneous localization and mapping using particle filters," Intl. J. of Robotics Research, 2006.
 [12] L. Carlone, M. K. Ng, J. Du, B. Bona, and M. Indri, "Simultaneous localization and mapping using Rao-Blackwellized particle filters in multi robot systems," J. of Intelligent and Robotic Systems, 2011.
 [13] A. Cunningham, V. Indelman, and F. Dellaert, "DDF-SAM 2.0: Consistent distributed smoothing and mapping," in IEEE Intl. Conf. on Robotics and Automation (ICRA), Karlsruhe, Germany, May 2013.
 [14] L. Andersson and J. Nygards, "C-SAM: Multi-robot SLAM using square root information smoothing," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2008.
 [15] B. Kim, M. Kaess, L. Fletcher, J. Leonard, A. Bachrach, N. Roy, and S. Teller, "Multiple relative pose graphs for robust cooperative mapping," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2010.
 [16] M. Lazaro, L. Paz, P. Pinies, J. Castellanos, and G. Grisetti, "Multi-robot SLAM using condensed measurements," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2013, pp. 1069–1076.

 [17] E. Nerurkar, S. Roumeliotis, and A. Martinelli, "Distributed maximum a posteriori estimation for multi-robot cooperative localization," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2009.
 [18] M. Franceschelli and A. Gasparri, "On agreement problems with Gossip algorithms in absence of common reference frames," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2010.
[19] R. Aragues, L. Carlone, G. Calafiore, and C. Sagues, "Multi-agent localization from noisy relative pose measurements," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2011, pp. 364–369.
[20] J. Knuth and P. Barooah, "Collaborative localization with heterogeneous inter-robot measurements by Riemannian optimization," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2013, pp. 1534–1539.
[21] A. Cunningham, M. Paluri, and F. Dellaert, "DDF-SAM: Fully distributed SLAM using constrained factor graphs," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2010.
[22] X. Zhou and S. Roumeliotis, "Multi-robot SLAM with unknown initial correspondence: The robot rendezvous case," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2006, pp. 1785–1792.
[23] R. Aragues, E. Montijano, and C. Sagues, "Consistent data association in multi-robot systems with limited communications," in Robotics: Science and Systems, 2011, pp. 97–104.
[24] C. Forster, S. Lynen, L. Kneip, and D. Scaramuzza, "Collaborative monocular SLAM with multiple micro aerial vehicles," in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2013.
[25] L. Riazuelo, J. Civera, and J. Montiel, "C2TAM: A cloud framework for cooperative tracking and mapping," Robotics and Autonomous Systems, 2014.
[26] P. Schmuck and M. Chli, "Multi-UAV collaborative monocular SLAM," in IEEE Int. Conf. on Robotics and Automation, 2017.
[27] S. Lynen, T. Sattler, M. Bosse, J. Hesch, M. Pollefeys, and R. Siegwart, "Get out of my lab: Large-scale, real-time visual-inertial localization," in Robotics: Science and Systems, 2015.
[28] M. Dymczyk, S. Lynen, T. Cieslewski, M. Bosse, R. Siegwart, and P. Furgale, "The gist of maps - summarizing experience for lifelong localization," in IEEE Int. Conf. on Robotics and Automation, 2015.
[29] L. Contreras and W. Mayol-Cuevas, "O-POCO: Online point cloud compression mapping for visual odometry and SLAM," in IEEE Int. Conf. on Robotics and Automation, 2017.
[30] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, "NetVLAD: CNN architecture for weakly supervised place recognition," in IEEE Conf. on Computer Vision and Pattern Recognition, 2016.
[31] T. Cieslewski and D. Scaramuzza, "Efficient decentralized visual place recognition using a distributed inverted index," IEEE Robotics and Automation Letters, 2017.
[32] A. Cunningham, K. Wurm, W. Burgard, and F. Dellaert, "Fully distributed scalable smoothing and mapping with robust multi-robot data association," in IEEE Intl. Conf. on Robotics and Automation (ICRA), St. Paul, MN, 2012.
[33] S. Choudhary, L. Carlone, C. Nieto, J. Rogers, H. I. Christensen, and F. Dellaert, "Distributed mapping with privacy and communication constraints: Lightweight algorithms and object-based models," arXiv preprint arXiv:1702.03435, 2017.
[34] G. Bresson, R. Aufrère, and R. Chapuis, "Consistent multi-robot decentralized SLAM with unknown initial positions," in 16th International Conference on Information Fusion (FUSION), 2013.
[35] L. Lamport, "Time, clocks, and the ordering of events in a distributed system," Communications of the ACM, 1978.
[36] R. Mur-Artal, J. Montiel, and J. D. Tardós, "ORB-SLAM: A versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, 2015.
[37] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, "1 year, 1000 km: The Oxford RobotCar dataset," The Intl. J. of Robotics Research, 2017.
[38] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Int. Conf. on Computer Vision, 2011.
[39] L. Carlone, D. Rosen, G. Calafiore, J. Leonard, and F. Dellaert, "Lagrangian duality in 3D SLAM: Verification techniques and optimal solutions," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2015.
[40] D. Bertsekas and J. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[41] T. Cieslewski, S. Lynen, M. Dymczyk, S. Magnenat, and R. Siegwart, "Map API - scalable decentralized map building for robots," in IEEE Int. Conf. on Robotics and Automation, 2015.
[42] M. Gadd and P. Newman, "Checkout my map: Version control for fleet-wide visual localisation," in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2016.
[43] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The Intl. J. of Robotics Research, 2013.
[44] M. F. Fallon, H. Johannsson, M. Kaess, and J. J. Leonard, "The MIT Stata Center dataset," Intl. J. of Robotics Research, vol. 32, pp. 1695–1699, 2013.