
Perceptual deep depth super-resolution

by Oleg Voinov et al.

RGBD images, combining high-resolution color and lower-resolution depth from various types of depth sensors, are increasingly common. One can significantly improve the resolution of depth maps by taking advantage of color information; deep learning methods make combining color and depth information particularly easy. However, fusing these two sources of data may lead to a variety of artifacts. If depth maps are used to reconstruct 3D shapes, e.g., for virtual reality applications, the visual quality of upsampled images is particularly important. The main idea of our approach is to measure the quality of depth map upsampling using renderings of resulting 3D surfaces. We demonstrate that a simple visual appearance-based loss, when used with either a trained CNN or simply a deep prior, yields significantly improved 3D shapes, as measured by a number of existing perceptual metrics. We compare this approach with a number of existing optimization and learning-based techniques.
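The idea of comparing renderings of the reconstructed surface, rather than raw depth values, can be illustrated with a minimal sketch. The snippet below is an assumption-based illustration, not the paper's implementation: it estimates surface normals from depth gradients, shades them with a single Lambertian directional light, and takes the mean absolute difference between the two renderings. The function names `shade` and `appearance_loss` and the single-light setup are illustrative; the actual method operates on renderings of the full 3D surface with a learned or perceptual comparison.

```python
import numpy as np

def shade(depth, light=(0.3, 0.3, 0.9)):
    """Render a depth map as a Lambertian-shaded grayscale image.

    Normals are estimated from depth gradients; the dot product n . l
    is a simple proxy for the visual appearance of the 3D surface.
    """
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    # Normal of the surface z = depth(x, y) is (-dz/dx, -dz/dy, 1), normalized.
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)

def appearance_loss(pred_depth, ref_depth):
    """Mean absolute difference between shaded renderings of two depth maps."""
    return np.abs(shade(pred_depth) - shade(ref_depth)).mean()
```

A loss of this shape is insensitive to constant depth offsets (shading depends only on gradients) but strongly penalizes the high-frequency surface noise and edge artifacts that matter visually, which is the motivation the abstract gives for rendering-based evaluation.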

