AutoLay: Benchmarking amodal layout estimation for autonomous driving

08/20/2021
by Kaustubh Mani, et al.

Given an image or a video captured from a monocular camera, amodal layout estimation is the task of predicting semantics and occupancy in bird's eye view. The term amodal implies that we also reason about entities in the scene that are occluded or truncated in image space. While several recent efforts have tackled this problem, there is a lack of standardization in task specification, datasets, and evaluation protocols. We address these gaps with AutoLay, a dataset and benchmark for amodal layout estimation from monocular images. AutoLay encompasses driving imagery from two popular datasets: KITTI and Argoverse. In addition to fine-grained attributes such as lanes, sidewalks, and vehicles, we also provide semantically annotated 3D point clouds. We implement several baselines and bleeding-edge approaches, and release our data and code.
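As a concrete illustration of the kind of evaluation protocol the abstract refers to, the sketch below scores a predicted bird's eye view layout against ground truth using per-class intersection-over-union. It is a minimal sketch, not AutoLay's exact protocol: the grid resolution (128x128), the four-class label set, and the layout_iou helper are illustrative assumptions.

import numpy as np

def layout_iou(pred, gt, num_classes):
    """Per-class intersection-over-union between two bird's eye view
    layout grids, each an (H, W) array of integer class labels."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        # A class absent from both grids contributes NaN, so it is
        # excluded from the mean rather than counted as a perfect score.
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Illustrative usage on random grids; a real benchmark would rasterize
# ground-truth annotations into the same BEV frame as the predictions.
# Hypothetical classes: {0: empty, 1: road, 2: sidewalk, 3: vehicle}.
pred = np.random.randint(0, 4, size=(128, 128))
gt = np.random.randint(0, 4, size=(128, 128))
print("mIoU:", np.nanmean(layout_iou(pred, gt, num_classes=4)))

Averaging the per-class scores while ignoring classes absent from both grids yields the mean IoU figure commonly reported for layout estimation.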


Related research

02/19/2020 · MonoLayout: Amodal scene layout from a single image
In this paper, we address the novel, highly challenging problem of estim...

04/30/2022 · ONCE-3DLanes: Building Monocular 3D Lane Detection
We present ONCE-3DLanes, a real-world autonomous driving dataset with la...

03/16/2021 · RackLay: Multi-Layer Layout Estimation for Warehouse Racks
Given a monocular colour image of a warehouse rack, we aim to predict th...

03/17/2023 · A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving
The task of estimating 3D occupancy from surrounding view images is an e...

05/25/2021 · SBEVNet: End-to-End Deep Stereo Layout Estimation
Accurate layout estimation is crucial for planning and navigation in rob...

12/09/2019 · Learning a Layout Transfer Network for Context Aware Object Detection
We present a context aware object detection method based on a retrieve-a...

11/30/2022 · MVRackLay: Monocular Multi-View Layout Estimation for Warehouse Racks and Shelves
In this paper, we propose and showcase, for the first time, monocular mu...
