HiMODE: A Hybrid Monocular Omnidirectional Depth Estimation Model

04/11/2022
by   Masum Shah Junayed, et al.

Monocular omnidirectional depth estimation is receiving considerable research attention due to its broad applications in sensing 360° surroundings. Existing approaches in this field suffer from limitations in recovering small object details and in handling data lost during ground-truth depth map acquisition. In this paper, a novel monocular omnidirectional depth estimation model, namely HiMODE, is proposed based on a hybrid CNN+Transformer (encoder-decoder) architecture whose modules are efficiently designed to mitigate distortion and computational cost without performance degradation. Firstly, we design a feature pyramid network based on the HNet block to extract high-resolution features near the edges. The performance is further improved by self- and cross-attention layers and spatial/temporal patches in the Transformer encoder and decoder, respectively. In addition, a spatial residual block is employed to reduce the number of parameters. By jointly passing the deep features extracted from an input image at each backbone block, along with the raw depth maps predicted by the Transformer encoder-decoder, through a context adjustment layer, our model can produce depth maps with better visual quality than the ground-truth. Comprehensive ablation studies demonstrate the significance of each individual module. Extensive experiments conducted on three datasets, Stanford3D, Matterport3D, and SunCG, demonstrate that HiMODE can achieve state-of-the-art performance for 360° monocular depth estimation.
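To make the high-level idea of a hybrid CNN+Transformer encoder-decoder more concrete, the sketch below shows a toy depth-estimation network in PyTorch: a small CNN stem extracts feature maps, the feature map is flattened into spatial patches that a Transformer encoder attends over globally, and a lightweight convolutional head upsamples back to a one-channel depth map. This is an illustrative sketch only, not the authors' HiMODE implementation; it omits the HNet-based feature pyramid, cross-attention decoder, spatial residual block, and context adjustment layer, and all module names (ConvBackbone, HybridDepthNet) and hyperparameters are hypothetical placeholders.

```python
# Illustrative sketch only (not the authors' released code): a toy hybrid
# CNN + Transformer encoder-decoder for monocular depth estimation.
# Module names and all hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn


class ConvBackbone(nn.Module):
    """Small CNN stem that extracts feature maps from an RGB panorama."""

    def __init__(self, out_channels=128):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.layers(x)  # (B, C, H/4, W/4)


class HybridDepthNet(nn.Module):
    """CNN features -> Transformer encoder over spatial patches -> depth map."""

    def __init__(self, embed_dim=128, num_heads=4, num_layers=4):
        super().__init__()
        self.backbone = ConvBackbone(out_channels=embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Simple decoder head: upsample back to input resolution, predict 1-channel depth.
        self.head = nn.Sequential(
            nn.Conv2d(embed_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        feats = self.backbone(x)                   # (B, C, h, w)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, h*w, C): spatial patches as tokens
        tokens = self.encoder(tokens)              # global self-attention over patches
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(feats)                    # (B, 1, H, W) depth map


if __name__ == "__main__":
    model = HybridDepthNet()
    rgb = torch.randn(1, 3, 256, 512)              # equirectangular-style input
    depth = model(rgb)
    print(depth.shape)                             # torch.Size([1, 1, 256, 512])
```

The sketch illustrates the general design choice the abstract describes: convolutional layers capture local, high-resolution detail cheaply, while the Transformer stage provides global context across the panorama before a decoder head produces the dense depth prediction.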


