BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration

04/05/2016
by Angela Dai, et al.

Real-time, high-quality 3D scanning of large-scale scenes is key to mixed-reality and robotics applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results but suffer from: (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation, resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking and instead continually localize to the globally optimized frames. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle-adjusted) poses in real time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real time to ensure global consistency, all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par with offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.
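The abstract's pose-estimation strategy aligns frames against a globally optimized set of poses using sparse feature correspondences (alongside dense geometric and photometric terms). A minimal sketch of the rigid alignment that underlies such a sparse-correspondence solve is the closed-form Kabsch/Procrustes fit between matched 3D points; this is an illustrative building block, not BundleFusion's full hierarchical optimizer, and the function name `rigid_align` is an assumption:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of matched 3D feature positions.
    Minimizes sum_i || R @ src[i] + t - dst[i] ||^2 via SVD (Kabsch).
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In a full system this per-pair alignment would seed a joint optimization over all camera poses (bundle adjustment), with dense residuals refining the sparse estimate.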


