DeepBbox: Accelerating Precise Ground Truth Generation for Autonomous Driving Datasets

by Govind Rathore, et al.

Autonomous driving requires various computer vision algorithms, such as object detection and tracking. Precisely-labeled datasets (i.e., objects are fully contained in bounding boxes with only a few extra pixels) are preferred for training such algorithms, so that the algorithms can detect the exact locations of objects. However, it is very time-consuming, and hence expensive, to generate precise labels for image sequences at scale. In this paper, we propose DeepBbox, an algorithm that corrects loose object labels into tight bounding boxes to reduce human annotation effort. We use the Cityscapes dataset to show the annotation efficiency and accuracy improvements obtained with DeepBbox. Experimental results show that, with DeepBbox, we can increase the number of object edges that are labeled automatically (within 1% error) by 50%, reducing manual annotation time.
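To illustrate the geometric idea behind correcting a loose label, the sketch below tightens a loose bounding box around the foreground pixels it contains. This is only a minimal illustration under the assumption that a foreground mask for the object is available; DeepBbox itself infers the tight edges from image content rather than from a given mask, and the function name `tighten_bbox` is hypothetical.

```python
import numpy as np

def tighten_bbox(mask, loose_box):
    """Shrink a loose bounding box to the extent of the foreground mask.

    mask:      2D boolean array over the full image, True at object pixels.
    loose_box: (x0, y0, x1, y1) loose annotation, inclusive-exclusive.
    Returns a tight (x0, y0, x1, y1) box, or the loose box unchanged if
    the crop contains no foreground pixels.
    """
    x0, y0, x1, y1 = loose_box
    crop = mask[y0:y1, x0:x1]          # restrict the search to the loose box
    ys, xs = np.nonzero(crop)          # coordinates of object pixels in the crop
    if len(xs) == 0:
        return loose_box
    # Translate crop-local extents back into image coordinates.
    return (x0 + int(xs.min()), y0 + int(ys.min()),
            x0 + int(xs.max()) + 1, y0 + int(ys.max()) + 1)

# Example: an object occupying rows 3-5 and columns 4-7 of a 12x12 image,
# annotated with a loose box that includes several extra pixels per side.
mask = np.zeros((12, 12), dtype=bool)
mask[3:6, 4:8] = True
print(tighten_bbox(mask, (2, 1, 10, 9)))  # → (4, 3, 8, 6)
```

The same per-side shrinking can be applied independently to each of the four box edges, which is why accuracy is naturally reported per edge (as in the "within 1% error" figure above) rather than per box.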
