Surface Normals in the Wild

04/10/2017
by Weifeng Chen et al.

We study the problem of single-image depth estimation for images in the wild. We collect human-annotated surface normals and use them to train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth and our own dataset demonstrate that our approach significantly improves the quality of depth estimation in the wild.
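The abstract does not spell out the two loss functions, but one common way to supervise depth prediction with normal annotations is to derive per-pixel normals from the predicted depth by finite differences and penalize the angular disagreement with the annotated normals. The sketch below illustrates that general idea in NumPy; the function names are hypothetical and this is not necessarily the paper's formulation.

```python
import numpy as np

def normals_from_depth(depth):
    """Derive per-pixel surface normals from a depth map via finite
    differences (an illustrative construction, not the paper's exact one)."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    # The surface z = depth(x, y) has normal proportional to (-dz/dx, -dz/dy, 1).
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def normal_cosine_loss(depth_pred, normals_gt):
    """Mean (1 - cos angle) between normals derived from the predicted
    depth and the annotated (unit-length) ground-truth normals."""
    n_pred = normals_from_depth(depth_pred)
    cos = np.sum(n_pred * normals_gt, axis=-1)
    return np.mean(1.0 - cos)

# Example: a flat, fronto-parallel surface should incur zero loss against
# annotations pointing straight at the camera.
depth = np.zeros((4, 4))
gt = np.zeros((4, 4, 3)); gt[..., 2] = 1.0
print(normal_cosine_loss(depth, gt))  # -> 0.0
```

A loss of this form is differentiable in the predicted depth, so it can supervise a depth network directly from normal annotations without any ground-truth depth.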


Related research

10/07/2022 - IronDepth: Iterative Refinement of Single-View Depth using Surface Normal and its Uncertainty
Single image surface normal estimation and depth estimation are closely ...

06/25/2018 - Learning Single-Image Depth from Videos using Quality Assessment Networks
Although significant progress has been made in recent years, depth estim...

10/03/2019 - A Neural Network for Detailed Human Depth Estimation from a Single Image
This paper presents a neural network to estimate a detailed depth map of...

10/11/2021 - Omnidata: A Scalable Pipeline for Making Multi-Task Mid-Level Vision Datasets from 3D Scans
This paper introduces a pipeline to parametrically sample and render mul...

04/13/2016 - Single-Image Depth Perception in the Wild
This paper studies single-image depth perception in the wild, i.e., reco...

07/26/2020 - OASIS: A Large-Scale Dataset for Single Image 3D in the Wild
Single-view 3D is the task of recovering 3D properties such as depth and...

12/13/2020 - GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement for Joint Depth and Surface Normal Estimation
In this paper, we propose a geometric neural network with edge-aware ref...
