1 Introduction
Estimating accurate poses of cameras and locations of 3D scene points from a collection of images obtained by the cameras is a classic problem in computer vision, referred to as structure from motion (SfM). Optimizing for the camera parameters and scene points using the corresponding points in images, known as Bundle Adjustment (BA), is an important component of SfM [7, 8, 21].
Many recent approaches for BA can be divided into three categories: (a) those that pose BA as nonlinear least squares [10, 13, 21], (b) those that decouple the problem in each camera using a triangulation-resection procedure for estimation [15, 18], and (c) those that pose and solve BA in a linear algebraic formulation [6]. Important considerations for these methods are reducing the computational complexity by exploiting the structure of the problem [1, 4, 13], incorporating robustness to outlier observations or correspondence mismatches [2, 26], distributing the computations or making the algorithm incremental [5, 9, 11, 23, 24], and making the algorithm insensitive to initial conditions [6].
In this paper, we develop robust distributed BA over cameras and scene points. Our approach is ideally suited for applications where image acquisition and processing must be distributed, such as in a network of unmanned aerial vehicles (UAVs). We assume that each UAV in the network has a camera and a processor; each camera acquires an image of the 3D scene, and the processors in the different UAVs cooperatively estimate the 3D point cloud from the images. Therefore, we use the terms camera, processor, and UAV in an equivalent sense throughout the paper. We also assume that corresponding points from the images are available (possibly estimated using a different distributed algorithm), and we are only concerned with estimating the 3D scene points given the correspondences.
Robust approaches, such as [2, 26], are typically used to protect world point and camera parameter estimates from the effects of outliers, which for BA are incorrect point correspondences that have gone undetected. In contrast, we use robust formulations to accelerate consensus in the distributed formulation. Depending on how distribution is achieved, every processor performing computation may see only a small portion of the total data, and must attempt to infer its local parameters from it. Small-sample means can be extreme, even when the original sample is well-behaved (i.e., even when reprojection errors are truly Gaussian). In the limiting case, each processor may base its computation on only one data point, and then outliers are guaranteed to occur (from the point of view of individual processors) as an artifact of distributing the computation. Hence we hypothesize that using robust losses for penalizing reprojection errors, and quadratic losses for enforcing consensus, improves performance.
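The statistical point above can be checked directly: the mean of a size-$k$ sample of Gaussian errors has standard deviation $\sigma/\sqrt{k}$, so a processor holding a single observation sees far more extreme values than one holding many. A small illustrative simulation (all names and sizes are our own choices, not from the paper):

```python
import numpy as np

# Claim being illustrated: means of very small samples can be extreme
# even when the underlying errors are well-behaved Gaussians.
rng = np.random.default_rng(0)
errors = rng.normal(loc=0.0, scale=1.0, size=100_000)  # stand-in "reprojection errors"

def spread_of_sample_means(data, sample_size, n_trials=10_000):
    """Std. dev. of means computed over random samples of a given size."""
    idx = rng.integers(0, len(data), size=(n_trials, sample_size))
    return data[idx].mean(axis=1).std()

# A processor holding a single observation sees roughly sqrt(50) times
# the spread of one holding 50 (std of a size-k mean is sigma/sqrt(k)).
assert spread_of_sample_means(errors, 1) > 5 * spread_of_sample_means(errors, 50)
```

This is exactly the regime created by distributing one observation per processor, motivating a robust misfit penalty.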
Our proposed robust BA approach supports a natural distributed parallel implementation. We distribute the world points and camera parameters as illustrated for a simple case in Figure 1. The algorithm is developed using the distributed alternating direction method of multipliers (D-ADMM) [3]. Each processor updates its copy of a set of parameters, while the updated estimates and dual variables ensure consensus. Distributing both the world points and the camera parameters yields iterations requiring $O(N)$ operations in a serial setting, where $N$ is the total number of 2D observations. In a fully parallel setting, it is possible to bring the time complexity down to $O(1)$ per iteration, a vast improvement compared to traditional and sparse versions of BA, whose per-iteration complexities are $O((m+n)^3)$ and $O(m^3 + mn)$ respectively [13] (with $m$ and $n$ the number of cameras and 3D scene points). We also exploit the sparsity of the camera network, since not all cameras observe all scene points.
Another optimization-based distributed approach for BA was recently proposed [5]; the initial version of our method was proposed at the same time as [5]. The authors of [5] distributed camera parameters, and performed synthetic experiments using an existing 3D point cloud reconstruction, perturbing it with moderate noise and generating image points using known camera models. We go further, distributing both world points and camera parameters in a flexible manner, and we implement the entire BA pipeline for 3D reconstruction: performing feature detection, matching corresponding points, and applying the robust distributed D-ADMM BA technique in real data settings.
2 Background
2.1 The camera imaging process
We denote the camera parameter vectors by $\{y_j\}_{j=1}^m$, the 3D scene points by $\{x_i\}_{i=1}^n$, and the 2D image points by $\{z_{ij}\}$. Each 2D image point $z_{ij}$ is obtained by the transformation and projection of the 3D scene point $x_i$ by camera $j$. BA is an inverse problem, where the camera parameters and 3D world points are estimated from the observations $z_{ij}$. The forward model is a nonlinear camera transformation function $f(x_i, y_j)$. The number of image points $N$ is typically much smaller than $mn$, since not all cameras image all scene points. The camera parameter vector $y_j$ usually includes position, Euler angles, and focal length. In this discussion, we assume the focal length is known for simplicity, so that $y_j$ comprises the Euler angles $(\alpha_j, \beta_j, \gamma_j)$ and the translation vector $t_j$.
Denote the diagonal focal length matrix by $F$, with the first two diagonal elements set to the focal length and the last element set to $1$. The rotation matrix is represented as $R_j = R_x(\alpha_j) R_y(\beta_j) R_z(\gamma_j)$, where $R_x, R_y, R_z$ are rotations about the three coordinate axes. The camera transformation is now given as $v_{ij} = F R_j (x_i - t_j)$. The final 2D image point is obtained by a perspective projection, with coordinates given by
$$z_{ij} = f(x_i, y_j) = \left(\frac{[v_{ij}]_1}{[v_{ij}]_3}, \; \frac{[v_{ij}]_2}{[v_{ij}]_3}\right)^T. \qquad (1)$$
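The forward model of Section 2.1 can be sketched in a few lines of code; the rotation order, the sign convention for the translation, and all identifiers here are our own assumptions for illustration, not the paper's exact conventions:

```python
import numpy as np

def euler_to_rotation(alpha, beta, gamma):
    """Rotation matrix as a product of rotations about the x, y, z axes."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(x, cam, f=1.0):
    """Forward model: rotate/translate scene point x, then perspective-project."""
    alpha, beta, gamma, t = cam
    F = np.diag([f, f, 1.0])                      # diagonal focal-length matrix
    v = F @ euler_to_rotation(alpha, beta, gamma) @ (x - t)
    return v[:2] / v[2]                           # perspective division

# A point on the optical axis of an untransformed camera projects to the origin.
cam = (0.0, 0.0, 0.0, np.zeros(3))
z = project(np.array([0.0, 0.0, 5.0]), cam)
assert np.allclose(z, [0.0, 0.0])
```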
2.2 Bundle adjustment
Given the 2D points in multiple images that represent the same scene point, BA is typically formulated as a nonlinear least squares problem:
$$\min_{\{x_i\}, \{y_j\}} \; \sum_{(i,j) \in S} \big\| z_{ij} - f(x_i, y_j) \big\|_2^2 \qquad (2)$$
The set $S$ contains $(i,j)$ if scene point $i$ is imaged by camera $j$. The number of unknowns in this objective is $3n + 6m$, and hence it is necessary to have at least this many observations to obtain a good solution; in practice the number of observations is much larger. Problem (2) is solved iteratively, with the descent direction $(\delta x_i, \delta y_j)$ found by replacing $f$ in (2) by its linearization
$$f(x_i + \delta x_i, y_j + \delta y_j) \approx f(x_i, y_j) + J_{ij} \begin{bmatrix} \delta x_i \\ \delta y_j \end{bmatrix},$$
where $J_{ij}$ is the Jacobian of $f$ at the current estimate. The Levenberg-Marquardt (LM) algorithm [16] is often used for BA.
The naive LM algorithm requires $O((m+n)^3)$ operations for each iteration, and memory on the order of $O((m+n)^2)$, since we must invert a $(6m+3n) \times (6m+3n)$ matrix at each iteration. However, exploiting matrix structure and using the Schur complement approach proposed in [13], the number of arithmetic operations can be reduced to $O(m^3 + mn)$, and memory use to $O(mn)$. Further reduction can be achieved by exploiting secondary sparse structure [10]. The conjugate gradient approaches in [1, 4] can reduce the time complexity to $O(m)$ per iteration, making it essentially linear in the number of cameras.
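The Schur complement reduction of [13] can be sketched on a small system. In real BA the point block $V$ is block-diagonal (one $3 \times 3$ block per scene point), which is what makes its inversion cheap; here we use a small dense stand-in with illustrative sizes:

```python
import numpy as np

# Normal equations of one LM step have the 2x2 block form
#   [U  W ] [dc]   [bc]
#   [W' V ] [dp] = [bp]
# with dc the camera update and dp the point update. Eliminating dp
# leaves only the small "reduced camera system" to solve densely.
rng = np.random.default_rng(1)
nc, npts3 = 6, 30                       # camera-parameter dims, 3 * (num points)
A = rng.normal(size=(60, nc + npts3))
H = A.T @ A + np.eye(nc + npts3)        # SPD stand-in for J'J + damping
b = rng.normal(size=nc + npts3)

U, W, V = H[:nc, :nc], H[:nc, nc:], H[nc:, nc:]
Vinv = np.linalg.inv(V)                 # block-diagonal in real BA: cheap
S = U - W @ Vinv @ W.T                  # Schur complement (reduced camera matrix)
dc = np.linalg.solve(S, b[:nc] - W @ Vinv @ b[nc:])
dp = Vinv @ (b[nc:] - W.T @ dc)

# The block elimination reproduces the solution of the full system.
assert np.allclose(H @ np.concatenate([dc, dp]), b)
```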
Another popular approach to reducing the computational complexity involves decoupling the optimization by explicitly estimating the scene points using back-projection in the intersection step and estimating the camera parameters in the resection step [18]. The resection step decouples into independent problems for each camera, and hence the overall procedure is cheap per iteration. A similar approach, but with minimization of the $\ell_\infty$ norm of the reprojection error, was proposed in [15]. It was shown to be more reliable, and to degrade more gracefully with noise, than $\ell_2$-based BA algorithms. Recently, Wu proposed an incremental approach for bundle adjustment [23], where a partial BA or a full BA is performed after adding each camera and its associated scene points to the set of unknown parameters, again with essentially linear time complexity. We use the ADMM framework to develop our approach.
2.3 Alternating Direction Method of Multipliers
ADMM is a simple and powerful procedure well-suited for distributed optimization [12]; see also [3]. In order to understand D-ADMM, consider the objective $\min_x \sum_{i=1}^p f_i(x)$. We introduce local variables $x_i$, a consensus variable $z$, and consensus equality constraints:
$$\min_{\{x_i\}, z} \; \sum_{i=1}^p f_i(x_i) \qquad (3)$$
$$\text{subject to} \quad x_i = z, \quad i = 1, \dots, p.$$
To solve this problem, we first write down an augmented Lagrangian [19]:
$$L_\rho(\{x_i\}, z, \{u_i\}) = \sum_{i=1}^p \Big[ f_i(x_i) + u_i^T (x_i - z) + \rho \, d(x_i, z) \Big] \qquad (4)$$
where $\rho$ is the penalty parameter, $u_i$ is the Lagrangian multiplier for the $i$th constraint, and $d(x_i, z)$ is the augmentation term that measures the distance between the individual variables $x_i$ and the consensus variable $z$. We then find a saddle point using three steps that update $x_i$, $z$, and $u_i$ in turn. Typically $d$ is chosen to be the squared Euclidean distance, in which case (4) becomes the proximal Lagrangian [19], but other distance or divergence measures can also be used.
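The three-step update can be made concrete on the simplest consensus problem, minimizing $\sum_i (x_i - a_i)^2$ subject to $x_i = z$, whose solution is the mean of the $a_i$. A minimal sketch in the scaled dual form (the data, penalty value, and variable names are our own illustrative choices):

```python
import numpy as np

# Minimal consensus ADMM: minimize sum_i (x_i - a_i)^2 subject to x_i = z.
# The optimum is z* = mean(a). Each x_i-update sees only its own datum a_i.
a = np.array([1.0, 4.0, 7.0, 10.0])
p, rho = len(a), 1.0
x = np.zeros(p)        # local variables
y = np.zeros(p)        # scaled dual variables (multipliers divided by rho)
z = 0.0                # consensus variable

for _ in range(100):
    # x-step: argmin_x (x - a_i)^2 + (rho/2)(x - z + y_i)^2, in closed form
    x = (2 * a + rho * (z - y)) / (2 + rho)
    # z-step: average of local estimates plus duals
    z = np.mean(x + y)
    # dual ascent on the consensus constraint
    y = y + x - z

assert abs(z - a.mean()) < 1e-6
```

The same pattern, with the quadratic misfit replaced by the reprojection error and two families of consensus variables, is what Section 3.1 applies to BA.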
3 Algorithmic formulation
3.1 Distributed estimation of scene points and camera parameters
We distribute the estimation among both the scene points and the camera parameters, as illustrated in Figure 1. We estimate a camera parameter vector $y_{ij}$ and a scene point $x_{ij}$ corresponding to each image point $z_{ij}$ independently, and then impose appropriate equality constraints through consensus variables $\bar{x}_i$ and $\bar{y}_j$. Eqn. (2) can be written as
$$\min_{\{x_{ij}\}, \{y_{ij}\}, \{\bar{x}_i\}, \{\bar{y}_j\}} \; \sum_{(i,j) \in S} \phi\big(z_{ij} - f(x_{ij}, y_{ij})\big) \qquad (5)$$
$$\text{subject to} \quad x_{ij} = \bar{x}_i \quad \text{for all } (i,j) \in S, \qquad (6)$$
$$\hphantom{\text{subject to}} \quad y_{ij} = \bar{y}_j \quad \text{for all } (i,j) \in S, \qquad (7)$$
where choosing the misfit penalty $\phi$ as the squared Euclidean loss recovers (2).
The augmented Lagrangian, with dual variables $\lambda_{ij}$ and $\gamma_{ij}$, is given by
$$L = \sum_{(i,j) \in S} \Big[ \phi\big(z_{ij} - f(x_{ij}, y_{ij})\big) + \lambda_{ij}^T (x_{ij} - \bar{x}_i) + \rho_x \, d_x(x_{ij}, \bar{x}_i) + \gamma_{ij}^T (y_{ij} - \bar{y}_j) + \rho_y \, d_y(y_{ij}, \bar{y}_j) \Big]. \qquad (8)$$
Here $d_x$ measures the distance between the distributed world points and their consensus estimates, and $d_y$ measures the distance between the distributed camera parameters and their consensus estimates. For $\phi$ we compare squared Euclidean and Huber losses, while $d_x$ and $d_y$ are always the squared Euclidean loss.
The ADMM iteration is given by
$$(x_{ij}^{k+1}, y_{ij}^{k+1}) = \arg\min_{x_{ij}, y_{ij}} \; \phi\big(z_{ij} - f(x_{ij}, y_{ij})\big) + (\lambda_{ij}^k)^T (x_{ij} - \bar{x}_i^k) + \rho_x \, d_x(x_{ij}, \bar{x}_i^k) + (\gamma_{ij}^k)^T (y_{ij} - \bar{y}_j^k) + \rho_y \, d_y(y_{ij}, \bar{y}_j^k) \qquad (9)$$
$$\bar{x}_i^{k+1} = \arg\min_{\bar{x}_i} \sum_{j : (i,j) \in S} (\lambda_{ij}^k)^T (x_{ij}^{k+1} - \bar{x}_i) + \rho_x \, d_x(x_{ij}^{k+1}, \bar{x}_i) \qquad (10)$$
$$\bar{y}_j^{k+1} = \arg\min_{\bar{y}_j} \sum_{i : (i,j) \in S} (\gamma_{ij}^k)^T (y_{ij}^{k+1} - \bar{y}_j) + \rho_y \, d_y(y_{ij}^{k+1}, \bar{y}_j) \qquad (11)$$
$$\lambda_{ij}^{k+1} = \lambda_{ij}^k + \rho_x (x_{ij}^{k+1} - \bar{x}_i^{k+1}) \qquad (12)$$
$$\gamma_{ij}^{k+1} = \gamma_{ij}^k + \rho_y (y_{ij}^{k+1} - \bar{y}_j^{k+1}) \qquad (13)$$
Equation (9) has to be solved for all $(i,j) \in S$, and it can be trivially distributed across multiple processors. When $\phi$ is the squared Euclidean distance, (9) can be solved using the Gauss-Newton method [17], where we repeatedly linearize $f$ around the current solution and update. When $\phi$ is the Huber loss, we use limited-memory BFGS (L-BFGS) [17] to update the distributed scene points. Upon convergence, we obtain the consensus estimates $\bar{x}_i$ and $\bar{y}_j$ for all scene points and cameras.
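The structure of the local subproblem (9) with a Huber misfit can be sketched as follows. This is a gradient-descent stand-in for the L-BFGS solver the paper uses, with a linear "projection", step size, and variable names that are purely illustrative assumptions; only the scene-point block is updated:

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Gradient of the Huber penalty w.r.t. the residual r (clipped residual)."""
    return np.clip(r, -delta, delta)

# Local subproblem for one observation (orthographic stand-in for f):
#   minimize huber(z_obs - P x) + lam.(x - x_bar) + (rho/2)||x - x_bar||^2
P = np.array([[1.0, 0, 0], [0, 1.0, 0]])   # toy linear "projection"
z_obs = np.array([1.0, 2.0])
x_bar, lam, rho = np.zeros(3), np.zeros(3), 1.0

x = np.zeros(3)
for _ in range(500):
    r = z_obs - P @ x
    g = -P.T @ huber_grad(r) + lam + rho * (x - x_bar)
    x -= 0.1 * g                            # fixed-step gradient descent

# The consensus term pulls x toward x_bar; the robust misfit caps the
# influence of the residual at delta (here 1), unlike a squared loss.
assert np.linalg.norm(-P.T @ huber_grad(z_obs - P @ x) + lam + rho * (x - x_bar)) < 1e-4
```

The capped gradient is why a single wild local estimate cannot drag its consensus variable arbitrarily far, which is the mechanism behind the robustness hypothesis of Section 1.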
3.1.1 Convergence Analysis
We show that, under certain assumptions, the proposed D-ADMM algorithm of Section 3.1 converges, using the nonconvex and nonsmooth framework developed in [22].
Theorem 1. The D-ADMM algorithm proposed in Section 3.1 converges to a stationary point of the augmented Lagrangian in (8) when:
- $f$ is the perspective camera projection model, and
- $\rho_x$ and $\rho_y$ are sufficiently large.
Proof. Let $w$ be the stack of all distributed variables $x_{ij}$ and $y_{ij}$, and let $\bar{w}$ be the corresponding stack of the consensus variables $\bar{x}_i$ and $\bar{y}_j$. Then $w$ and $\bar{w}$ are respectively equivalent to $x$ and $z$ in [22]. We show that the five assumptions (A1-A5) of [22, Thm. 1] are satisfied.

- Given our assumptions, the objective function in (5) is coercive, i.e., it tends to $\infty$ as $\|w\| \to \infty$ (A1).
- The feasibility and sub-minimization path conditions are also satisfied, since the constraint matrices are easily seen to be full rank (A2-A3).
- Our objective with respect to the consensus variables is identically 0, which is trivially regular (A5).
3.1.2 Time Complexity
Optimizing (9) takes $O(N)$ time for each round of updates, since (9) must be solved $N$ times, with each solve requiring constant time. The time complexities of the consensus steps for the world points and camera parameters, given by (10) and (11), are also $O(N)$, since each observation contributes to exactly one consensus average of each kind. For the Lagrangian parameter updates given by (12) and (13), the time complexity is again $O(N)$. Hence the dominant time complexity of the proposed algorithm is $O(N)$ for each round. Since the algorithm can be trivially parallelized, the complexity can be brought down to $O(1)$ for each round if we distribute all the observations to individual processors.
3.1.3 Communication Overhead
Considering a sparse UAV network, assume that each world point is imaged by $c$ cameras. Each such camera needs to maintain a copy of the consensus world point $\bar{x}_i$. Therefore, to update $\bar{x}_i$ using (10), each camera needs to obtain the individual estimates $x_{ij}$ of the other cameras, and to send its own version of $x_{ij}$ to the other $c-1$ cameras. The values $\lambda_{ij}$ can be updated locally in each camera, given $x_{ij}$, $\bar{x}_i$, and the previous version of $\lambda_{ij}$, using (12). Hence, for each world point we have a communication overhead of $O(c^2)$ floating-point values per iteration (each world point is a 3D vector). Hence for $n$ world points, the communication overhead is $O(nc^2)$ floating-point values per iteration, where $c$ depends on the distance of the cameras from the scene.
3.1.4 Generalized Distributed Estimation
Problem (9) requires each processor to estimate all of its local parameters (a scene point and a camera parameter vector) from a single 2D observation. To control the variability of individual estimates as the algorithm proceeds, we generalize the approach to use more than one observation, and hence more than one scene point and camera vector, during each update step. This generalized step provides the flexibility to adjust the number of 3D scene points and cameras based on the computational capability of each thread in a CPU or a GPU. We solve
$$\{(x_{ij}^{k+1}, y_{ij}^{k+1})\}_{(i,j) \in S_g} = \arg\min \sum_{(i,j) \in S_g} L_{ij}(x_{ij}, y_{ij}), \qquad (14)$$
where $S_g \subseteq S$ is the group of observations assigned to processor $g$, and
$$L_{ij}(x_{ij}, y_{ij}) = \phi\big(z_{ij} - f(x_{ij}, y_{ij})\big) + (\lambda_{ij}^k)^T (x_{ij} - \bar{x}_i^k) + \rho_x \, d_x(x_{ij}, \bar{x}_i^k) + (\gamma_{ij}^k)^T (y_{ij} - \bar{y}_j^k) + \rho_y \, d_y(y_{ij}, \bar{y}_j^k). \qquad (15)$$
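The grouping itself is simple bookkeeping: the observation set is partitioned so that each processor owns several $(i,j)$ pairs per update. A sketch with an illustrative round-robin assignment (the function name and group sizes are our own assumptions, not the paper's scheme):

```python
import numpy as np

def partition_observations(S, n_groups):
    """Round-robin assignment of observations (i, j) to processor groups."""
    groups = [[] for _ in range(n_groups)]
    for k, obs in enumerate(S):
        groups[k % n_groups].append(obs)
    return groups

# 4 scene points seen by 3 cameras each -> 12 observations, 4 processors;
# each processor then solves (14) over its own group S_g.
S = [(i, j) for i in range(4) for j in range(3)]
groups = partition_observations(S, 4)

assert sum(len(g) for g in groups) == len(S)   # every observation assigned once
assert all(len(g) == 3 for g in groups)        # balanced load in this example
```

Larger groups reduce per-processor variance (more observations per local solve) at the cost of less parallelism, which is the trade-off Section 4 explores.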
4 Experiments
We perform several experiments with synthetic and real data to show the convergence of the reprojection error and the parameter estimates. We also compare the performance of the proposed approach with a centralized BA algorithm that we implemented using LM. The LM iteration stops when the reprojection error drops below a fixed threshold, or when the regularization parameter grows beyond a fixed maximum. We implement our distributed approach on a single multicore computer rather than on a sparse UAV network, but our architecture is well-suited for a networked UAV application.
4.1 Synthetic Data
We simulate a realistic scenario, with smooth camera pose transitions and noise parameters consistent with real-world sensor errors. Using the simulation, we evaluate the error in the estimated 3D scene point cloud and the camera parameters, and investigate how the estimation error of the camera poses affects the final tie-point triangulation.
The camera positions are sampled around an orbit, with an average radius of 1000 m and altitude of 1500 m, with the cameras directed towards a specific area. A random translation and rotation is added to each camera pose, since a real observer cannot move in a perfect circle while steadily aiming in the same exact direction. The camera path and the 3D scene points for an example scenario are shown in Figure 2. In practice, tie points are usually visible only within a small subset of the available views, and it is generally not practical to try to match all key points within each possible pair of frames. Instead, points are matched within adjacent frames. In our synthetic data, we create artificial occlusions or misdetections so that each point is only visible in a few consecutive frames.
4.2 Convergence and Runtime
We investigate the convergence of the reprojection error and the parameters for D-ADMM BA, comparing the convergence when $\phi$ in (5) is the squared Euclidean loss vs. the Huber loss, with the consensus penalties $d_x$ and $d_y$ always squared Euclidean. We fix the standard deviation of the additive Gaussian noise used to initialize the camera angles and positions, and vary the standard deviation of the noise on the scene points. Introducing a robust loss as the misfit penalty helps the convergence of the reprojection error significantly; see Figure 3, (a) vs. (c). The same behavior is observed in the convergence of the scene points, see Figure 3, (b) vs. (d), and of the camera parameters. The Huber penalty is used to guard against outliers; here, outliers come from processors working with limited information. The performance degrades gracefully with noise; see Figure 3, (c) and (d).
We also compare D-ADMM BA with the centralized LM BA and present the results in Figure 4 (a) and (b). The numbers of camera parameters and 3D scene points are fixed, with the number of observations increasing as shown on the x-axis of Figure 4. In most settings, D-ADMM BA achieves a better parameter MSE than centralized LM BA. The runtime of the proposed approach with respect to the number of observations and parallel workers is shown in Figure 4 (c). The parallel workers are configured in MATLAB; the runtime grows linearly with the number of observations and is reduced with increasing workers. Our implementation is a simple demonstration of the capability of the algorithm; a fully parallel implementation in a fast language such as C can realize its full potential.
4.3 Real Data
To demonstrate the performance of D-ADMM BA, we conducted experiments on real datasets with different settings. All experiments are done with MATLAB on a PC with a 2.7 GHz CPU and 16 GB RAM.
In our SfM pipeline, SIFT feature points [14] are used for detection and matching. A relative fundamental matrix is estimated for each pair of images with sufficient corresponding points, and is used to estimate the relative camera pose and 3D structure. Next, the relative parameters are used to generate the global initial values for BA. The datasets were downloaded from the Princeton Vision Group and the EPFL Computer Vision Lab [20].
Since no ground-truth 3D structure is available for the real datasets, we compare dense reconstruction results obtained using the method of [25]. The first dataset has five images; a sample image is shown in Figure 5 (a). After keypoint detection and matching, centralized LM BA and D-ADMM BA are given the same input. Figure 5 (c) and (d) show that the dense reconstruction qualities of LM and D-ADMM are similar. Figure 5 (b) shows the convergence of the reprojection error for the D-ADMM algorithm. Figure 6 (a) shows the convergence of the reprojection error for different values of the penalty parameter; setting it to a high value accelerates convergence.
We also estimate the camera parameters and scene points by applying the approach of Section 3.1.4 to the same data set. Figure 6 (b) shows that the runtime decreases as the number of scene points per iteration increases, with the largest setting giving the fastest convergence; see Figure 6 (c). Figure 6 (d) compares reprojection errors for different numbers of cameras in each iteration. The initial values are the same as in the castle-P30 experiment (Figure 11), and the number of scene points in each iteration is fixed. Reprojection errors decrease faster as the number of cameras in each iteration increases.
We also perform distributed BA on the Herz-Jesu data set provided in [20], using the approach in Section 3.1.4. This data set has seven images, 1140 world points, and 2993 observations. In this experiment, the LM BA algorithm with the same settings as in the previous experiments does not converge to a small reprojection error; therefore, its dense reconstruction result is not presented. D-ADMM BA with eight scene points in each update step converges to a small final reprojection error. Figure 7 (b) shows the dense 3D point cloud estimated with D-ADMM BA.
Additional results on the other datasets (fountain-P11, entry-P10, Herz-Jesu-P25, and castle-P30) are presented in Table 1 and Figures 8, 9, 10, and 11. The last column of Table 1 reports the mean reprojection error. Figures 8-11 present different perspectives of the dense reconstruction results to show the robustness of the 3D parameter estimates.
Dataset  Images  Scene pts  Obs  Mean reproj. err.
fountain-P11  11  1346  3859  0.5
entry-P10  10  1382  3687  0.7
Herz-Jesu-P25  25  2161  5571  0.87
castle-P30  30  2383  6453  0.84
Settings are fixed across experiments, and the maximum number of iterations is fixed. The experiments on the fountain-P11 and Herz-Jesu-P25 datasets (Figures 8 and 10) yield better dense reconstruction results, since more images cover the same regions. The real-data experiments show that D-ADMM BA achieves similar objective values (mean reprojection errors, Table 1) as the number of observations increases; it is not necessary to increase the number of iterations as the size of the data increases. D-ADMM BA scales linearly with the number of observations and can be parallelized on GPU clusters.
5 Conclusions
We presented a new distributed algorithm for bundle adjustment, D-ADMM BA, which compares well to centralized approaches in terms of performance and scales well for SfM. Experimental results demonstrated the importance of robust formulations for improved convergence in the distributed setting. Even when there are no outliers in the initial data, robust losses are helpful, because the estimates of processors working with limited information can stray far from the aggregate estimates; see Figure 3. Formulation design for distributed optimization may yield further improvements; this is an interesting direction for future work.
Results obtained with D-ADMM BA are comparable to those obtained with state-of-the-art centralized LM BA, and D-ADMM BA scales linearly in runtime with respect to the number of observations. Our approach is well-suited for use in a networked UAV system, where distributed computation is an essential requirement.
References
 [1] S. Agarwal, N. Snavely, S. M. Seitz, and R. Szeliski. Bundle adjustment in the large. In Computer Vision–ECCV 2010, pages 29–42. Springer, 2010.
 [2] A. Aravkin, M. Styer, Z. Moratto, A. Nefian, and M. Broxton. Student’s t robust bundle adjustment algorithm. In Image Processing (ICIP), 2012 19th IEEE International Conference on, pages 1757–1760. IEEE, 2012.

 [3] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
 [4] M. Byröd and K. Åström. Conjugate gradient bundle adjustment. In Computer Vision–ECCV 2010, pages 114–127. Springer, 2010.
 [5] A. Eriksson, J. Bastian, T.-J. Chin, and M. Isaksson. A consensus-based framework for distributed bundle adjustment. In Computer Vision and Pattern Recognition, 2015. CVPR 2015. IEEE Conference on. IEEE, 2015.
 [6] A. Fusiello and F. Crosilla. Solving bundle block adjustment by generalized anisotropic procrustes analysis. ISPRS Journal of Photogrammetry and Remote Sensing, 102:209–221, 2015.
 [7] R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
 [8] J. Heinly, J. L. Schonberger, E. Dunn, and J.M. Frahm. Reconstructing the world in six days (as captured by the yahoo 100 million image dataset). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3287–3295, 2015.
 [9] V. Indelman, R. Roberts, C. Beall, and F. Dellaert. Incremental light bundle adjustment. In Proceedings of the British Machine Vision Conference (BMVC 2012), pages 3–7, 2012.
 [10] K. Konolige and W. Garage. Sparse sparse bundle adjustment. In BMVC, pages 1–11. Citeseer, 2010.
 [11] J. Kopf, M. F. Cohen, and R. Szeliski. Firstperson hyperlapse videos. ACM Transactions on Graphics (TOG), 33(4):78, 2014.
 [12] P.L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM Journal on Numerical Analysis, 16(6):964–979, 1979.
 [13] M. I. Lourakis and A. A. Argyros. Sba: A software package for generic sparse bundle adjustment. ACM Transactions on Mathematical Software (TOMS), 36(1):2, 2009.
 [14] D. G. Lowe. Object recognition from local scaleinvariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on, volume 2, pages 1150–1157. IEEE, 1999.
 [15] K. Mitra and R. Chellappa. A scalable projective bundle adjustment algorithm using the l infinity norm. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP’08. Sixth Indian Conference on, pages 79–86. IEEE, 2008.
 [16] J. J. Moré. The levenbergmarquardt algorithm: implementation and theory. In Numerical analysis, pages 105–116. Springer, 1978.
 [17] J. Nocedal and S. Wright. Numerical optimization. Springer Series in Operations Research. Springer, 1999.
 [18] M. D. Pritt. Fast orthorectified mosaics of thousands of aerial photographs from small uavs. In Applied Imagery Pattern Recognition Workshop (AIPR), 2014 IEEE, pages 1–8. IEEE, 2014.
 [19] R. T. Rockafellar and R. J.B. Wets. Variational analysis, volume 317. Springer Science & Business Media, 2009.
 [20] C. Strecha, W. von Hansen, L. V. Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multiview stereo for high resolution imagery. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
 [21] B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon. Bundle adjustment  a modern synthesis. In Vision algorithms: theory and practice, pages 298–372. Springer, 2000.
 [22] Y. Wang, W. Yin, and J. Zeng. Global convergence of admm in nonconvex nonsmooth optimization. arXiv preprint arXiv:1511.06324, 2015.
 [23] C. Wu. Towards lineartime incremental structure from motion. In 3D Vision3DV 2013, 2013 International Conference on, pages 127–134. IEEE, 2013.
 [24] C. Wu, S. Agarwal, B. Curless, and S. M. Seitz. Multicore bundle adjustment. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 3057–3064. IEEE, 2011.
 [25] J. Xiao, J. Chen, D.Y. Yeung, and L. Quan. Learning twoview stereo matching. In Computer Vision–ECCV 2008, pages 15–27. Springer, 2008.
 [26] J. Zhang, M. Boutin, and D. G. Aliaga. Robust bundle adjustment for structure from motion. In Image Processing, 2006 IEEE International Conference on, pages 2185–2188. IEEE, 2006.