Decentralization and Acceleration Enables Large-Scale Bundle Adjustment

05/11/2023
by Taosha Fan, et al.

Scaling to arbitrarily large bundle adjustment problems requires data and compute to be distributed across multiple devices. Prior centralized methods can solve only small or medium-sized problems because of computation and communication overhead. In this paper, we present a fully decentralized method that alleviates computation and communication bottlenecks to solve arbitrarily large bundle adjustment problems. We achieve this by reformulating the reprojection error and deriving a novel surrogate function that decouples optimization variables from different devices. This surrogate makes it possible to use majorization-minimization techniques and reduces bundle adjustment to independent optimization subproblems that can be solved in parallel. We further apply Nesterov's acceleration and adaptive restart to improve convergence while maintaining theoretical guarantees. Despite limited peer-to-peer communication, our method has provable convergence to first-order critical points under mild conditions. On extensive benchmarks with public datasets, our method converges much faster than decentralized baselines with similar memory usage and communication load. Compared to centralized baselines using a single device, our method, while being decentralized, yields more accurate solutions with significant speedups of up to 953.7x over Ceres and 174.6x over DeepLM. Code: https://github.com/facebookresearch/DABA.
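To illustrate the algorithmic template the abstract describes, the sketch below applies majorization-minimization (MM) with Nesterov's acceleration and adaptive restart to a deliberately simple toy problem: robust scalar location estimation under a Huber loss, where the quadratic majorizer is the classic IRLS surrogate. This is a hypothetical stand-in for intuition only; the paper's actual surrogate is derived from the reprojection error and additionally decouples camera and point variables across devices, which this toy does not show.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss, quadratic near zero and linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def irls_weights(r, delta=1.0):
    # Quadratic-majorizer weights: huber(s) <= 0.5 * w * s^2 + const,
    # with equality at s = r (the standard IRLS majorization).
    a = np.maximum(np.abs(r), 1e-12)
    return np.where(a <= delta, 1.0, delta / a)

def mm_nesterov(a, iters=50, delta=1.0):
    """MM with Nesterov acceleration and function-value adaptive restart."""
    x_prev = x = np.median(a)          # robust initial guess
    t = 1.0                            # Nesterov momentum parameter
    f_best = huber(x - a, delta).sum()
    for _ in range(iters):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolated point
        w = irls_weights(y - a, delta)
        x_new = np.sum(w * a) / np.sum(w)             # minimize surrogate at y
        f_new = huber(x_new - a, delta).sum()
        if f_new > f_best:
            # Adaptive restart: discard momentum and take a plain,
            # monotone MM step from the current iterate instead.
            t_next = 1.0
            w = irls_weights(x - a, delta)
            x_new = np.sum(w * a) / np.sum(w)
            f_new = huber(x_new - a, delta).sum()
        x_prev, x, t = x, x_new, t_next
        f_best = min(f_best, f_new)
    return x

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(3.0, 0.5, 100), [50.0, -40.0]])  # with outliers
print(mm_nesterov(data))   # estimate near 3.0, largely ignoring the outliers
```

In the paper's decentralized setting, the surrogate minimization step splits into independent per-device subproblems solved in parallel, while the momentum and restart logic play the same role as in this single-variable sketch.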

