CompletionFormer: Depth Completion with Convolutions and Vision Transformers

04/25/2023
by Zhang Youmin, et al.

Given a sparse depth map and the corresponding RGB image, depth completion aims to spatially propagate the sparse measurements throughout the whole image to obtain a dense depth prediction. Despite the tremendous progress of deep-learning-based depth completion methods, the locality of convolutional layers and graph models makes it hard for a network to model long-range relationships between pixels. While recent fully Transformer-based architectures have reported encouraging results thanks to their global receptive field, performance and efficiency gaps with respect to well-developed CNN models remain because Transformers degrade local feature details. This paper proposes a Joint Convolutional Attention and Transformer block (JCAT), which deeply couples a convolutional attention layer and a Vision Transformer into one block, as the basic unit for constructing our depth completion model in a pyramidal structure. This hybrid architecture naturally benefits from both the local connectivity of convolutions and the global context of the Transformer in a single model. As a result, our CompletionFormer outperforms state-of-the-art CNN-based methods on the outdoor KITTI Depth Completion benchmark and the indoor NYUv2 dataset, while achieving significantly higher efficiency (nearly 1/3 of the FLOPs) compared to pure Transformer-based methods. Code is available at <https://github.com/youmi-zym/CompletionFormer>.
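The core idea of the hybrid block can be illustrated with a minimal sketch: one block that sums a local convolution pathway and a global self-attention pathway over the same feature map. This is an illustrative approximation, not the authors' implementation; the function names, the depthwise 3x3 convolution, and the single-head, projection-free attention are simplifying assumptions made for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_conv_branch(x, kernel):
    # Local pathway: depthwise 3x3 convolution with zero padding.
    # x: (H, W, C) feature map; kernel: (3, 3), shared across channels.
    H, W, C = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3, :]        # (3, 3, C) neighborhood
            out[i, j] = np.einsum('klc,kl->c', patch, kernel)
    return out

def global_attention_branch(x):
    # Global pathway: single-head self-attention over all H*W tokens.
    # Identity Q/K/V projections are an assumption to keep the sketch short.
    H, W, C = x.shape
    tokens = x.reshape(H * W, C)
    attn = softmax(tokens @ tokens.T / np.sqrt(C))  # (H*W, H*W) weights
    return (attn @ tokens).reshape(H, W, C)

def jcat_like_block(x, kernel):
    # Hedged sketch of the JCAT idea: couple the local (conv) and
    # global (attention) pathways inside one block by summation.
    return local_conv_branch(x, kernel) + global_attention_branch(x)
```

In the paper, stacks of such blocks form a pyramidal encoder, so each resolution level sees both fine local detail (from convolution) and scene-level context (from attention); the sketch above shows only the single-block coupling.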


Related research

Depthformer: Multiscale Vision Transformer For Monocular Depth Estimation With Local Global Information Fusion (07/10/2022)
Attention-based models such as transformers have shown outstanding perfo...

Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation (08/14/2023)
Vision transformers are effective deep learning models for vision tasks,...

In Defense of Classical Image Processing: Fast Depth Completion on the CPU (01/31/2018)
With the rise of data driven deep neural networks as a realization of un...

SideRT: A Real-time Pure Transformer Architecture for Single Image Depth Estimation (04/29/2022)
Since context modeling is critical for estimating depth from a single im...

High-Fidelity Pluralistic Image Completion with Transformers (03/25/2021)
Image completion has made tremendous progress with convolutional neural ...

Learning Joint 2D-3D Representations for Depth Completion (12/22/2020)
In this paper, we tackle the problem of depth completion from RGBD data....

TFill: Image Completion via a Transformer-Based Architecture (04/02/2021)
Bridging distant context interactions is important for high quality imag...
