
WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving

05/04/2019
by   Senthil Yogamani, et al.
Valeo

Fisheye cameras are commonly employed for obtaining a large field of view in surveillance, augmented reality and, in particular, automotive applications. In spite of their prevalence, there are few public datasets for detailed evaluation of computer vision algorithms on fisheye images. We release the first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, who invented the fisheye camera in 1906. WoodScape comprises four surround-view cameras and nine tasks including segmentation, depth estimation, 3D bounding box detection and soiling detection. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images, and annotations for the other tasks are provided for over 100,000 images. We would like to encourage the community to adapt computer vision models to fisheye cameras instead of relying on naive rectification.




1 Introduction

Fisheye lenses provide a large field of view (FOV) using a highly non-linear mapping instead of the standard perspective projection. However, this comes at the cost of strong radial distortion. Fisheye cameras are so named because they relate to the view of the world that a fish has when observing the water surface from below, a phenomenon known as Snell’s window. Robert Wood originally coined the term in 1906 [55] and constructed a basic fisheye camera by taking a pin-hole camera and filling it with water; the water was later replaced with a hemispherical lens [3]. To pay homage to the original inventor and coiner of the term “fisheye”, we have named our dataset WoodScape.

Large FOV cameras are necessary for various computer vision application domains including video surveillance [27] and augmented reality [44], and have been of particular interest in autonomous driving [19]. In automotive, rear-view fisheye cameras are commonly deployed in existing vehicles for dashboard viewing and reverse parking. While commercial autonomous driving systems typically make use of narrow-FOV forward-facing cameras at present, full surround perception is now being investigated for handling more complex use cases. In spite of this growing interest, relatively little literature and few datasets are available. Among the few datasets that include fisheye imagery are: Visual SLAM ground truth for indoor scenes with omni-directional cameras in [5]; SphereNet [7], containing 1,200 labelled images of parked cars captured with cameras that are not strictly fisheye; and, in automotive, the Oxford RobotCar dataset [35], a large-scale relocalization dataset with fisheye cameras.

WoodScape is a comprehensive dataset for sensing around a vehicle using the four fisheye cameras shown in Figure 2. It aims at complementing the range of already existing automotive datasets where only narrow FOV image data is present: among those, KITTI [15] was the first pioneering dataset with a variety of tasks, which drove a lot of research for autonomous driving; Cityscapes [8] provided the first comprehensive semantic segmentation dataset and Mapillary [36] provided a significantly larger dataset; Apolloscape [22] and BDD100k [56] are more recent datasets that push the annotation scale further. WoodScape is unique in that it provides fisheye image data, along with a comprehensive range of annotation types. A comparative summary of these different datasets is provided in Table 1. The main contributions of WoodScape are as follows:

  1. First fisheye dataset comprising over 10,000 images with instance-level semantic annotation.

  2. Four-camera, nine-task dataset designed to encourage unified multi-task and multi-camera models.

  3. Introduction of a novel soiling detection task and release of the first dataset of its kind.

  4. Proposal of an efficient metric for the 3D box detection task which improves training time by 95x.

The paper is organized as follows. Section 2 provides an overview of the fisheye camera model, undistortion methods and fisheye adaptation of vision algorithms. Section 3 discusses the details of the dataset including its goals, capture infrastructure and dataset design. Section 4 presents the list of supported tasks and baseline experiments. Finally, Section 5 summarizes and concludes the paper.

Figure 2: Sample images from the surround-view camera network showing near field sensing and wide field of view.

2 Overview of Fisheye Camera Projections

Fisheye cameras offer a distinct advantage for automotive applications. Given their extremely wide field of view, they can observe the full surrounding of a vehicle with a minimal number of sensors, with just four cameras typically being required for full 360° coverage (Figure 2). This advantage comes with the drawback of significantly more complex projection geometry: images from fisheye cameras display severe distortion, which is evident in all of the sample images used in this paper.

Typical camera datasets consist of narrow FOV camera data where a simple pinhole projection model is commonly employed. In case of fisheye camera images, it is imperative that the appropriate camera model is well understood either to handle distortion in the algorithm or to warp the image prior to processing. This section is intended to highlight to the reader that the fisheye camera model requires specific attention. We provide a brief overview and references for further details, and discuss the merits of operating on the raw fisheye versus linearization of the image prior to processing.

2.1 Fisheye Camera Models

Figure 3: Comparison of fisheye models.

Fisheye distortion is modelled by a radial mapping function r(θ), where r is the distance on the image from the centre of distortion and θ is the angle of the incident ray against the optical axis of the camera system. Stereographic projection [20] is the simplest model, using a mapping from a sphere to a plane. More recent projection models are the Unified Camera Model (UCM) [1, 5] and the Enhanced UCM (eUCM) [26]. A more detailed analysis of the accuracy of various projection models is provided in [23]. These models are not a perfect fit for fisheye cameras as they encode a specific geometry, and errors arising in the model are compensated by an added distortion correction component.

In WoodScape, we provide a more generic fisheye intrinsic calibration that is independent of any specific projection model and does not require the added step of distortion correction. Our model is based on a fourth-order polynomial mapping the incident angle to the image radius in pixels, r(θ) = a₁θ + a₂θ² + a₃θ³ + a₄θ⁴. In our experience, higher orders provide no additional accuracy.
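For concreteness, the following is a minimal sketch of projecting a 3D point through such a polynomial fisheye model; the coefficients and principal point below are illustrative placeholders, not the calibration values shipped with the dataset.

```python
import numpy as np

def project_fisheye(points_cam, k, cx, cy):
    """Project 3D points (camera frame, z along the optical axis) into fisheye pixels.

    Radial mapping: r(theta) = k1*theta + k2*theta^2 + k3*theta^3 + k4*theta^4,
    where theta is the angle of the incident ray against the optical axis and
    r is the distance in pixels from the centre of distortion (cx, cy).
    """
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)                      # angle of incidence
    r = k[0]*theta + k[1]*theta**2 + k[2]*theta**3 + k[3]*theta**4
    phi = np.arctan2(y, x)                                     # direction in the image plane
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

# Placeholder intrinsics for a ~1 MPx image -- not the dataset's calibration.
k_poly = [330.0, -1.0, 2.0, -0.5]
uv = project_fisheye(np.array([[1.0, 0.2, 2.5]]), k_poly, cx=640.0, cy=483.0)
```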

As a comparison, to give the reader an understanding of how different models behave, Figure 3 shows the mapping function for five different projection models: Polynomial, Rectilinear, Stereographic, UCM and eUCM. The parameters of the fourth-order polynomial are taken from a calibration of our fisheye lens. We optimized the parameters of the other models to match this model over a range of 0° to 120°. The plot indicates that the difference to the original fourth-order polynomial is about four pixels for UCM and one pixel for eUCM at low incident angles. For larger incident angles, these models are less precise.

Figure 4: Linearizing the fisheye image: (a) Rectilinear correction; (b) Piecewise linear correction; (c) Cylindrical correction.

2.2 Linearization vs. Adaptation

Standard computer vision models do not generalize easily to fisheye cameras because of the large non-linear distortion. For example, translation invariance is lost for a standard CNN. The naïve way to develop algorithms for fisheye cameras is to perform rectilinear correction so that standard models can be applied. The simplest linearization is to re-warp pixels to a rectilinear image as shown in Figure 4 (a). But there are two major issues. Firstly, the FOV is greater than 180°, hence there are rays incident from behind the camera and it is not possible to establish a complete mapping to a rectilinear viewport. This leads to a loss of FOV, seen via the missing yellow pillars in the corrected image. Secondly, there is resampling distortion, which is more pronounced near the periphery of the image where a smaller region gets mapped to a larger region.

The missing FOV can be recovered by using multiple linear viewports as shown in Figure 4 (b). However, there are issues in the transition region from one plane to another; this can be viewed as a piecewise linear approximation of the fisheye lens manifold. Figure 4 (c) demonstrates a quasi-linear correction using a cylindrical viewport, which is linear in the vertical direction so that straight vertical objects like pedestrians are preserved. However, there is a quadratic distortion along the horizontal axis. In many scenarios it provides a reasonable trade-off, but it still has limitations. In the case of learning algorithms, such a parametric transform can be optimized jointly for the accuracy of the target application.
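As an illustration of the cylindrical viewport described above, the sketch below builds an OpenCV remap look-up table from the fourth-order polynomial model of Section 2.1; the intrinsics, viewport size and vertical scaling are assumed placeholder values, not those of the released cameras.

```python
import cv2
import numpy as np

def cylindrical_remap_lut(out_w, out_h, hfov_deg, k, cx, cy):
    """Remap LUT that warps a raw fisheye image onto a cylindrical viewport.

    Columns sample the horizontal angle linearly; the vertical direction stays
    linear, so upright objects such as pedestrians remain straight (at the cost
    of a quadratic distortion along the horizontal axis).
    """
    phi = np.deg2rad(np.linspace(-hfov_deg / 2, hfov_deg / 2, out_w))   # azimuth per column
    v_n = np.linspace(-0.5, 0.5, out_h) * (out_h / out_w) * np.deg2rad(hfov_deg)
    phi, v_n = np.meshgrid(phi, v_n)
    dx, dy, dz = np.sin(phi), v_n, np.cos(phi)                 # ray direction on the cylinder
    theta = np.arctan2(np.hypot(dx, dy), dz)                   # angle against the optical axis
    r = k[0]*theta + k[1]*theta**2 + k[2]*theta**3 + k[3]*theta**4   # polynomial model
    psi = np.arctan2(dy, dx)
    map_x = (cx + r * np.cos(psi)).astype(np.float32)
    map_y = (cy + r * np.sin(psi)).astype(np.float32)
    return map_x, map_y

# Usage sketch:
# fisheye = cv2.imread("fisheye.png")
# mx, my = cylindrical_remap_lut(1280, 800, 180.0, [330.0, -1.0, 2.0, -0.5], 640.0, 483.0)
# cyl = cv2.remap(fisheye, mx, my, interpolation=cv2.INTER_LINEAR)
```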

Because of the fundamental limitations of linearization, an alternative approach is to adapt the algorithm itself to incorporate the fisheye model, which could be the optimal solution. In the case of classical geometric algorithms, an analytical version of the non-linear projection can be incorporated; for example, Kukelova et al. [30] extend homography estimation by incorporating a radial distortion model. In the case of deep learning algorithms, a possible solution is to train the CNN model to learn the distortion. However, the translation invariance assumption of CNNs fundamentally breaks down due to the spatially variant distortion, and thus it is not efficient to let the network learn it implicitly. This has led to many adaptations of CNNs to handle spherical images, such as [49] and [7]. However, spherical models do not provide an accurate fit for fisheye lenses, and this remains an open problem.

3 Overview of WoodScape Dataset

3.1 High-Level Goals

Fisheye: One of the main goals of this dataset is to encourage the research community to develop vision algorithms natively on fisheye images without undistortion. There are very few public fisheye datasets and none of them provide semantic segmentation annotation. Fisheye is particularly beneficial for automotive low-speed manoeuvring scenarios such as parking, where accurate full-coverage near-field sensing can be achieved with just four cameras.

Multi-camera: Surround-view systems have at least four cameras rigidly connected to the body of the car. Pless [39] did pioneering work in deriving a framework for modeling a network of cameras as one; this approach is useful for geometric vision algorithms like visual odometry. However, for semantic segmentation algorithms, there is no literature on joint modeling of rigidly connected cameras.

Multi-task: Autonomous driving has various vision tasks and most of the work has been focused on solving individual tasks independently. However, there is a recent trend [29, 50, 48, 6] to solve tasks using a single multi-task model to enable efficient reuse of encoder features and also provide regularization while learning multiple tasks. However, in these cases, only the encoder is shared and there is no synergy among decoders. Existing datasets are primarily designed to facilitate task-specific learning and they don’t provide simultaneous annotation for all the tasks. We have designed our dataset so that simultaneous annotation is provided for various tasks where possible.

Figure 5: SLAM point cloud top-view of a parking lot.

3.2 Dataset Acquisition

Our diverse dataset originates from three distinct geographical locations: USA, Europe, and China. While the majority of data was obtained from saloon vehicles, there is a significant subset from a sports utility vehicle, ensuring a strong mix of sensor mechanical configurations. Driving scenarios are divided across highway, urban driving and parking use cases. Intrinsic and extrinsic calibrations are provided for all sensors, as well as timestamp files to allow synchronization of the data. Relevant vehicle mechanical data (e.g. wheel circumference, wheel base) are included. High-quality data is ensured via quality checks at all stages of the data collection process. Annotation data undergoes rigorous quality assurance by highly skilled reviewers. The sensors recorded for this dataset are listed below:

  • 4x 1 MPx RGB fisheye cameras (190° hFOV)

  • 1x LiDAR rotating at 20Hz (Velodyne HDL-64E)

  • 1x GNSS/IMU (NovAtel Propak6 & SPAN-IGM-A1)

  • 1x GNSS Positioning with SPS (Garmin 18x)

  • Automotive grade Radar and Ultrasonics

  • Odometry signals from the vehicle bus

| Task / Info | Quantity | KITTI [15] | Cityscapes [8] | Mapillary [36] | nuScenes [57] | ApolloScape [22] | BDD100k [56] | WoodScape (Ours) |
|---|---|---|---|---|---|---|---|---|
| Capture Information | Year | 2012/14/15 | 2016 | 2017 | 2018 | 2018 | 2018 | 2018/19 |
| | States/cities | 1/1 | 2/50 | 50+/100+ | 2/2 | 1/4 | 1/4 | 5+/10+ |
| | Other sensors | 1 LiDAR, GPS | - | - | 1 LiDAR, 5 RADAR, GPS & IMU | 2 LiDAR, GNSS & IMU | GPS & IMU | LiDAR, RADAR & ULS, GNSS & IMU |
| Camera Information | Cameras | 4 | - | - | 6 | 6 | 1 | 4 |
| | Tasks | 6 | 1 | 1 | 1 | 4 | 2 | 9 |
| Segmentation | Classes | 8 | 30 | 66 | - | 25 | 40 | 40 |
| | Frames | 400 | 5k | 25k | - | 140k | 5.7k | 10k |
| 2D Bounding Box | Classes | 3 | - | - | - | - | 10 | 7 |
| | Frames | 15k | - | - | - | - | 5.7k | 10k |
| 3D Bounding Box | Classes | 3 | - | - | 25 | 1 | - | 3 |
| | Frames | 15k | - | - | 40k | 5k+ | - | 10k |
| Depth Estimation | Frames | 93k | - | - | - | - | - | 400k |
| Motion Segmentation | Frames | 1.6k | - | - | - | - | - | 10k |
| Soiling Detection | Frames | - | - | - | - | - | - | 5k |
| Visual SLAM/Odometry | Videos | 33 | - | - | - | - | - | 50 |
| End-to-end Driving | Videos | - | - | - | - | - | - | 500 |
| Synthetic Data | Frames | - | - | - | - | - | - | 10k |
Table 1: Summary of various autonomous driving datasets containing semantic annotation

Our WoodScape dataset provides labels for several autonomous driving tasks including semantic segmentation, monocular depth estimation, object detection (2D & 3D bounding boxes), visual odometry, Visual SLAM, motion segmentation, soiling detection and end-to-end driving (driving controls). In Table 1, we compare several properties of popular datasets against WoodScape. In addition to providing fisheye data, we provide data for many more tasks than is typical (nine in total), including completely novel tasks such as soiled lens detection. Images are provided at 1 MPx resolution, and videos are uncompressed 1 MPx at 30 fps, ranging in duration from 30 s to 120 s. The dataset also provides a set of synthetic data using accurate models of the real cameras, enabling investigations of additional tasks. The laser scanner point cloud provided in our dataset is accurately preprocessed using a commercial SLAM algorithm to provide a denser point cloud ground truth for tasks such as depth estimation and Visual SLAM, as shown in Figure 5. In terms of recognition tasks, we provide labels for forty classes; the distribution of the main classes is shown in Figure 6. Note that, for the purposes of display in this paper, we have merged some of the classes in Figure 6 (e.g. two_wheelers is a merge of bicycles and motorcycles).

Figure 6: Distribution of instances of semantic classes in WoodScape. Min size of instance is 300 pixels.

3.3 Dataset Design

The design of a dataset for machine learning is a very complex task. Unfortunately, due to the overwhelming success of deep learning, it has recently not received as much attention as, in our opinion, it still deserves. Deep learning was shown to be quite resistant to label noise [41]. However, at the same time, it was shown that careful inspection of the training sets for outliers improves the robustness of deep neural networks [34], especially with regard to adversarial examples. Therefore, we believe that whenever a new dataset is released, significant effort should be spent not only on the data acquisition but also on careful consistency checks and on splitting the database for the needs of training, model selection and testing. In this sub-section, we describe the efforts made in the design of WoodScape.

Sampling strategy: Let us first define some notation and naming conventions (we follow the definitions provided in [4]). A population P is the set of all existing feature vectors. A subset of the population collected during some process is called a sample set S. A representative set S* is significantly smaller than S, while capturing most of the information from S (compared to any different subset of the same size), and has low redundancy among the representatives it contains.

In an ideal world, we would like our training set to be equal to S*. This is extremely difficult to achieve in practice, and we strive to get as close as possible. There are several ways of accomplishing this. One such approach is the concept of the minimal consistent subset of a training set: given a training set T, we are interested in the smallest subset T* ⊆ T such that Acc(T*) = Acc(T), where Acc denotes the selected accuracy measure (e.g. the Jaccard index). Note that the computation of accuracy implies having the ground truth labels. The purpose is to reduce the size of the training set by removing non-informative samples, which do not contribute to improving the learned model, and thereby ease the annotation effort.

There are several ways of obtaining T*. One frequently used approach is instance selection [37, 33, 24]. There are two main groups of instance selection methods: wrappers and filters. Wrapper-based methods use a selection criterion based on the accuracy of the constructed classifier, while filter-based methods use a selection criterion based on an unrelated selection function. The concept of a minimal consistent subset is crucial for our setup, where data is obtained by recording image data from video cameras. Collecting frames at a rate of 30 fps, particularly at low speeds, ultimately leads to significant image overlap; therefore, an effective sampling strategy to distill the dataset is critical.
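As a toy illustration of a filter-style selection criterion (not the pipeline actually used for WoodScape), the sketch below greedily drops frames whose colour histogram is too similar to the last kept frame; the distance threshold is an arbitrary assumption.

```python
import cv2

def distill_frames(frame_paths, min_dist=0.25):
    """Greedy filter-style frame selection: keep a frame only if its colour-histogram
    distance to the last kept frame exceeds a threshold. A crude stand-in for the
    instance-selection step described above."""
    kept, last_hist = [], None
    for path in frame_paths:
        img = cv2.imread(path)
        hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, None).flatten()
        if last_hist is None or cv2.compareHist(last_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > min_dist:
            kept.append(path)
            last_hist = hist
    return kept
```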

Data splitting and class balancing: After the collection of images, an instance selection algorithm is applied to remove redundancy. The dataset is then split into three chunks: training, validation, and testing. For classical (non-learning) algorithms, all the data can be used for testing. As the names suggest, the training part serves for training purposes only; the validation part can either be joined with the training set (e.g. when the sought model does not require hyper-parameter selection) or be used for model selection; and the testing set is used for model evaluation purposes only. The dataset supports correct hypothesis evaluation [52], therefore multiple splits are provided. Depending on the particular task (see Section 4 for the full list), class imbalance may be an issue [17]; therefore, task-specific splits are also provided. Full control of the splitting mechanism is provided, allowing each class to be represented equally within each split (i.e. stratified sampling).
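A minimal sketch of such a stratified split, assuming a single dominant class label per frame and a 30% held-out test portion; scikit-learn is shown for illustration only and it is not implied that the official splits were produced this way.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy stand-in: 1000 frames, each tagged with one dominant class used only
# to balance the split.
rng = np.random.default_rng(0)
frames = np.arange(1000)
labels = rng.integers(0, 5, size=1000)

splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_val_idx, test_idx = next(splitter.split(frames.reshape(-1, 1), labels))
```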

GDPR challenges: The recent General Data Protection Regulation (GDPR) in Europe has given rise to challenges in making our data publicly available. More than one third of our dataset is recorded in Europe and is therefore GDPR sensitive due to visible faces of pedestrians and license plates. There are several ways to deal with this issue. The typical approach is blurring of the sensitive regions in the image (i.e. human faces and license plates). However, such methods are unacceptable for machine learning or image processing algorithms in general: the blurred region potentially removes valuable information, which may be crucial for a specific algorithm to work with the desired precision. Another, more appropriate, approach is to replace these regions with automatically generated proposals, for example exchanging all faces for automatically generated ones (www.thispersondoesnotexist.com) [25]. Such anonymized data does not violate GDPR rules, as the original person is no longer identifiable. We aim to ensure that all instances of human faces and license plates are modified via this technique, with the goal of negligible impact on accuracy.

4 Tasks, Metrics and Baseline experiments

Due to limited space, we briefly describe the metrics and baseline experiments for each task; they are summarized in Table 2. The test dataset for each task consists of 30% of the respective number of annotated samples listed in Table 1. Code will be shared via GitHub, and sample video results are provided in the supplementary material.

4.1 Semantic Segmentation

Semantic segmentation networks have been successfully trained directly on fisheye images in [10, 43]. Due to the absence of fisheye datasets, these works make use of artificially warped Cityscapes images for training, while testing is performed on fisheye images. However, artificial warping cannot increase the originally captured FOV. Our semantic segmentation dataset provides pixel-wise labels for 40 object categories, whereas the Cityscapes dataset [8], for example, provides 30. Figure 6 illustrates the distribution of the main classes. We use ENet [38] to generate our baseline results. We fine-tune the model for our dataset by training with a categorical cross-entropy loss and the Adam [28] optimizer. We chose the Intersection over Union (IoU) metric [14] to report the baseline results shown in Table 2, achieving a mean IoU of 51.4 on our test set. Figure 7 shows sample segmentation results on fisheye images from our test set. The four camera images are treated identically without any normalization; however, it would be interesting to explore customization of the model for each camera. The dataset also provides instance segmentation labels to explore training of panoptic segmentation models [32].
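For reference, a minimal sketch of the per-class and mean IoU computation from a confusion matrix (standard formulation, not the dataset's own evaluation script):

```python
import numpy as np

def confusion(gt, pred, num_classes):
    """Accumulate a confusion matrix from flattened label maps (rows: GT, cols: prediction)."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def mean_iou(conf_matrix):
    """Per-class IoU = diag / (row_sum + col_sum - diag); mean over classes."""
    inter = np.diag(conf_matrix)
    union = conf_matrix.sum(axis=0) + conf_matrix.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou, iou.mean()
```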

4.2 2D Bounding Box Detection

Our 2D object detection dataset is obtained by extracting bounding boxes from the instance segmentation labels for different object categories including pedestrians, vehicles, cyclists and motorcyclists. We use Faster R-CNN [40] with ResNet101 [18] as the encoder, initialized with ImageNet [9] pre-trained weights. We fine-tune our detection network by training on both the KITTI [16] and our object detection datasets. Performance of 2D object detection is reported in terms of mean average precision (mAP), counting a detection as correct when the IoU between the predicted and ground-truth bounding boxes exceeds 0.5. We achieve a mAP score of 31, which is significantly less than the accuracy achieved on other datasets. This was expected, as bounding box detection is a difficult task on fisheye images: the orientation of objects in the periphery of the image is very different from that in the central region. To quantify this further, we also tested a network pre-trained on conventional images for the person class, and it achieved a considerably poorer mAP score than the model trained on our dataset. Sample results of the fisheye-trained model are illustrated in Figure 7. We observe that it is necessary to incorporate the fisheye geometry explicitly, and this remains an open research problem.
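For clarity, a minimal sketch of the axis-aligned box IoU used to decide true positives at the 0.5 threshold (standard formulation, not the dataset's evaluation code):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2); a detection counts as a
    true positive for mAP when its IoU with a ground-truth box exceeds 0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)
```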

Figure 7: Sample baseline results of Segmentation using ENet (top) and Object detection using Faster RCNN (bottom)

4.3 Camera Soiling Detection

| Task | Model | Metric | Value |
|---|---|---|---|
| Segmentation | ENet | IoU | 51.4 |
| 2D Bounding Box | Faster R-CNN | mAP (IoU > 0.5) | 31 |
| Soiling Detection | ResNet10* | Category (%) | 84.5 |
| Soiling Detection | ResNet10* | Severity (%) | 81 |
| Depth Estimation | Eigen* | RMSE | 7.7 |
| Motion Segmentation | MODNet | IoU | 45 |
| Visual Odometry | ResNet50* | Translation (<5 mm) | 51 |
| Visual Odometry | ResNet50* | Rotation (<0.1°) | 71 |
| Visual SLAM | LSD-SLAM* | Relocalization (%) | 61 |

3D Bounding Box Detection - Complex-YOLO:

| Metric for Training | AP (%) | AOS (%) | Runtime (ms) |
|---|---|---|---|
| 3D-IoU | 64.38 | 85.60 | 95 |
| SRTs | 62.46 | 88.43 | 1 |

Table 2: Summary of results of baseline experiments. (* customized models)

The task of soiling detection was, to the best of our knowledge, first defined in [53]. Unlike the front camera, which sits behind the windshield, the surround-view cameras are usually directly exposed to adverse environmental conditions. Therefore, one cannot avoid situations where, for example, a splash of mud or other kind of dirt hits the camera. Another, even more common example is heavy rain, when water drops frequently hit the camera lens surface. As the functionality of visual perception degrades significantly, detection of soiled cameras is necessary for achieving higher levels of automated driving. As it is a novel task, we discuss it in more detail below.

We treat the camera soiling detection task as a mixed multilabel-categorical classification problem, i.e. we are interested in a classifier which jointly classifies a single image with a binary indicator array, where each 0 or 1 corresponds to a missing or present class respectively, and simultaneously assigns a categorical label. The classes to detect are opaque and transparent soiling. Typically, opaque soiling arises from mud and dust (Figure 8, right image), and transparent soiling arises from water and ice (Figure 8, left image). However, in practice it is common to see water producing “opaque” regions in the camera image. The categorical labels are one-hot encoded interval ranges of the soiling severity. The interval ranges are beginning-exclusive and ending-inclusive, with the first category reserved for a completely clean image. A label vector thus indicates, for example, that an image contains the transparent class only, with its severity falling in one particular interval.

Annotation for 5k images is performed by drawing polygons to separate soiled from unsoiled regions, so that the problem can also be modeled as a segmentation task if necessary. We evaluate the soiling classifier’s performance via an example-based accuracy measure for each sub-task separately, i.e. the average Jaccard index over the testing set, (1/N) Σᵢ |yᵢ ∧ ŷᵢ| / |yᵢ ∨ ŷᵢ|, where yᵢ ∈ {0,1}^L denotes the label vector of length L for the i-th testing sample, ŷᵢ denotes the classifier’s prediction, and N denotes the cardinality of the testing set. We use a small baseline network (a ResNet10 encoder with a shallow decoder) and achieved a precision of 84.5% for the multilabel classification and 81% for the severity classification.
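A minimal sketch of the example-based Jaccard measure described above; treating an all-empty label pair as a perfect match is an assumption made for illustration.

```python
import numpy as np

def example_based_jaccard(y_true, y_pred):
    """Average per-example Jaccard index for binary multilabel vectors:
    |y AND y_hat| / |y OR y_hat|, averaged over the test set."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    inter = (y_true & y_pred).sum(axis=1)
    union = (y_true | y_pred).sum(axis=1)
    # Count an all-empty pair (no soiling predicted, none present) as a perfect match.
    return np.where(union == 0, 1.0, inter / np.maximum(union, 1)).mean()

# e.g. classes = [opaque, transparent]:
score = example_based_jaccard([[1, 0], [0, 1]], [[1, 1], [0, 1]])  # -> (0.5 + 1.0) / 2
```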

4.4 3D Bounding Box Detection

3D box annotation is provided for 10k frames with 3 classes, namely pedestrians, vehicles and cyclists. In general, 3D IoU [16] is used to evaluate 3D bounding box predictions, but it has drawbacks, especially for rotated objects: two boxes can reach a high 3D IoU score while overlapping completely but with opposite headings. Additionally, an exact IoU calculation in 3D space is time consuming. To avoid these problems, we introduce a new evaluation metric called the Scaling-Rotation-Translation score (SRTs). SRT is based on the idea that two disjoint 3D boxes can easily be transformed into each other using independent transformations: a translation, a rotation and a scaling. Hence, SRTs is composed of three corresponding terms: a scaling term based on the size ratios in the x, y and z directions, a rotation term determined by the difference of the yaw angles, and a translation term defined by the Euclidean distance between the two box centers. The translation term is normalized with respect to the sizes of the two objects, based on the lengths of their diagonals, from which two radii are computed. The full metric is then defined as a weighted combination of these penalty terms, where the weights can be used to prioritize individual properties (e.g. size or angle). For our baseline experiments we weighted the rotation term more strongly than translation and scaling, because our experiments have shown that translation and scaling are easier to learn. For the baseline, we trained Complex-YOLO [47] for a single class (cars). We repeated the training twice, first optimizing on 3D-IoU [16] and second on SRTs, using a fixed 50:50 split for training and validation. For comparison, we report 3D-IoU, orientation and runtime following [16] on the moderate difficulty; see Table 2. Runtime is the average time of all box comparisons for each input during training. Even though this comparison uses 3D-IoU, we achieve similar average precision (3D-IoU), with better angle orientation similarity (AOS) and a much faster computation time.
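To make the decomposition concrete, the sketch below computes a rough SRT-style similarity from the centre distance, yaw difference and size ratios; the linear penalty shapes and the weights are illustrative assumptions and do not reproduce the exact SRTs definition.

```python
import numpy as np

def srt_like_score(box_a, box_b, w_t=1.0, w_r=2.0, w_s=1.0):
    """Rough sketch of a decomposed similarity between two 3D boxes
    (cx, cy, cz, l, w, h, yaw). The penalty shapes and weights are assumptions
    for illustration, not the SRTs definition from the paper."""
    ca, cb = np.array(box_a[:3]), np.array(box_b[:3])
    sa, sb = np.array(box_a[3:6]), np.array(box_b[3:6])
    # Translation: centre distance normalised by the two half-diagonals (radii).
    radii = 0.5 * (np.linalg.norm(sa) + np.linalg.norm(sb))
    s_t = max(0.0, 1.0 - np.linalg.norm(ca - cb) / radii)
    # Rotation: normalised yaw difference, wrapped to [-pi, pi].
    d_yaw = abs((box_a[6] - box_b[6] + np.pi) % (2 * np.pi) - np.pi)
    s_r = 1.0 - d_yaw / np.pi
    # Scaling: how close the per-axis size ratios are to 1.
    s_s = max(0.0, 1.0 - np.mean(np.abs(1.0 - sa / sb)))
    return (w_t * s_t + w_r * s_r + w_s * s_s) / (w_t + w_r + w_s)
```

Unlike 3D IoU, this kind of decomposed score needs no volume intersection, which is what makes it so much cheaper to evaluate during training.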

4.5 Monocular Depth Estimation

Monocular depth estimation is an important task for detecting generic obstacles. We provide more than 100k images for each of the four cameras (400k in total) with ground truth provided by LiDAR. Figure 1 shows a colored example where blue to red indicates the distance for the front camera. As the depth obtained is sparse, we also provide a denser point cloud based on SLAM of static scenes, as shown in Figure 5. The ground truth 3D points are projected onto the camera images using our proposed model discussed in Section 2.1. We also apply occlusion correction to handle the difference in perspective between the LiDAR and the camera, similar to the method proposed in [31]. We run the semi-supervised approach from [31], using the model proposed by Eigen et al. [12] as the baseline, on our much larger dataset; see Table 2.
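For reference, a minimal sketch of the RMSE metric over pixels that have valid LiDAR ground truth (standard formulation):

```python
import numpy as np

def depth_rmse(pred, gt_sparse, valid):
    """RMSE in metres over pixels with LiDAR ground truth; `valid` is a boolean
    mask marking pixels where a projected LiDAR measurement exists."""
    err = pred[valid] - gt_sparse[valid]
    return float(np.sqrt(np.mean(err ** 2)))
```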

4.6 Motion Segmentation

In automotive scenes, motion is a strong cue due to the ego-motion of the cameras on the moving vehicle, and dynamic objects around the vehicle are the critical interacting agents. Additionally, it is helpful to detect generic objects based on motion cues rather than appearance cues, as there will always be rare objects such as kangaroos or construction trucks. This has been explored in [46, 54, 45, 21] for narrow-angle cameras. In our dataset, we provide motion mask annotation for moving classes such as vehicles, pedestrians and cyclists for over 10k images. We also provide the previous and next images to enable exploring multi-stream models like MODNet [46]. Our motion segmentation annotation was done using a semi-automated approach: velocity vectors for each segment were computed using LiDAR and the ego-motion provided by GNSS/IMU, and each segment was then classified as moving or static and manually verified. Motion segmentation is treated as a binary segmentation problem and IoU is used as the metric for comparison. Using MODNet as the baseline network, we achieve an IoU of 45.
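A minimal sketch of the moving/static decision under the assumption of a simple velocity threshold; the 0.5 m/s value is illustrative, not the one used for the WoodScape annotation.

```python
import numpy as np

def is_moving(segment_velocity, ego_velocity, thresh_mps=0.5):
    """Label a LiDAR segment as moving if its velocity, after subtracting the
    ego-motion, exceeds a threshold (both vectors in the same world frame)."""
    relative = np.asarray(segment_velocity) - np.asarray(ego_velocity)
    return bool(np.linalg.norm(relative) > thresh_mps)
```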

Figure 8: Soiling annotation
Figure 9: Visual SLAM baseline results (left) based on raw fisheye images (right)
Figure 10: Synthetic images modelling fisheye optics

4.7 Visual Odometry/SLAM

Visual Odometry (VO) is necessary for creating a map from the detected objects. We make use of our GNSS and IMU to provide annotation with centimetre-level accuracy. The ground truth contains all six degrees of freedom (up to scale), and the metric used is the percentage of frames within a tolerance level of translation and rotation error. Robustness could be added to visual odometry by performing a joint estimation from all four cameras; as far as the authors are aware, there is no published work on multi-camera VO. We provide 50 video sequences comprising over 100k frames with ground truth. The video sequences can also be used for Visual SLAM, where we focus on relocalization of a mapped trajectory and the metric is the same as for VO. We use a fisheye-adapted LSD-SLAM [13] as our baseline model, as illustrated in Figure 9, and accuracy figures are provided in Table 2.
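A minimal sketch of such a tolerance-based metric on relative frame-to-frame motions, with the tolerances from Table 2; the relative-pose formulation is an assumption made for illustration.

```python
import numpy as np

def vo_within_tolerance(gt_poses, est_poses, trans_tol=0.005, rot_tol_deg=0.1):
    """Fraction of frame-to-frame motions whose translation / rotation error stays
    within a tolerance (here 5 mm and 0.1 deg). Poses are 4x4 homogeneous matrices;
    errors are computed on relative motions between consecutive frames."""
    t_ok, r_ok, n = 0, 0, 0
    for i in range(1, len(gt_poses)):
        rel_gt = np.linalg.inv(gt_poses[i - 1]) @ gt_poses[i]
        rel_est = np.linalg.inv(est_poses[i - 1]) @ est_poses[i]
        err = np.linalg.inv(rel_gt) @ rel_est
        t_err = np.linalg.norm(err[:3, 3])
        r_err = np.degrees(np.arccos(np.clip((np.trace(err[:3, :3]) - 1) / 2, -1, 1)))
        t_ok += t_err < trans_tol
        r_ok += r_err < rot_tol_deg
        n += 1
    return t_ok / max(n, 1), r_ok / max(n, 1)
```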

4.8 Synthetic Data Domain Transfer

Synthetic data is crucial for autonomous driving for many reasons. Firstly, it provides a mechanism for rigorous corner-case testing of diverse scenarios and use cases. Secondly, there are restrictions on recording certain content, such as videos of children, which therefore has to be simulated. Finally, synthetic data is the only way to obtain dense depth and optical flow annotation. There are several popular synthetic datasets such as SYNTHIA [42] and CARLA [11]. We will provide a synthetic version of our fisheye surround-view dataset, modelling the optics and camera model as shown in Figure 10. The main goal is to explore domain transfer of tasks from the synthetic to the realistic domain. Although this is not a task by itself, it can enable new tasks like adverse weather detection.

4.9 End-to-end steering/braking

Bojarski et al. demonstrated end-to-end learning for steering [2], and recently this approach was applied to fisheye cameras [51]. Although it is not mature enough for deployment, it can be viewed as a redundant parallel model for safety. In the current approach, perception is designed independently, and it is probably a more complex intermediate problem to solve than what is needed for a small-action-space driving task. Thus we have added end-to-end steering and braking tasks to encourage modular end-to-end architectures and to explore perception optimized for the control task. The latter is analogous to the hand-eye co-ordination of human drivers, where perception is optimized for driving.

5 Conclusions

In this paper, we provide an extensive multi-camera fisheye dataset for autonomous driving with annotation for nine tasks, as well as additional sensor data. We hope that the release of the dataset encourages the development of native fisheye models instead of warping fisheye images and applying standard models. In the case of deep learning algorithms, it can help in understanding whether spatial distortion can be learned or whether it has to be explicitly modeled. In future work, we plan to explore and compare various methods of undistortion and the explicit incorporation of fisheye geometry in CNN models.

Acknowledgement

We would like to thank our colleagues including Nivedita, Mihai, Philippe, Jose and Pantelis who have supported the creation of the dataset. We would also like to thank our partner MightyAI for providing high-quality semantic segmentation annotation services.

References

  • [1] J. P. Barreto. Unifying image plane liftings for central catadioptric and dioptric cameras. Imaging Beyond the Pinhole Camera, pages 21–38, 2006.
  • [2] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
  • [3] W. Bond. A wide angle lens for cloud recording. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 44(263):999–1001, 1922.
  • [4] T. Borovicka, M. J. Jr., P. Kordik, and M. Jirina. Selecting representative data sets. In A. Karahoca, editor, Advances in Data Mining Knowledge Discovery and Applications, chapter 2. IntechOpen, Rijeka, 2012.
  • [5] D. Caruso, J. Engel, and D. Cremers. Large-scale direct SLAM for omnidirectional cameras. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 141–148. IEEE, 2015.
  • [6] S. Chennupati, G. Sistu, S. Yogamani, and S. Rawashdeh. Auxnet: Auxiliary tasks enhanced semantic segmentation for automated driving. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), pages 645–652, 2019.
  • [7] B. Coors, A. Paul Condurache, and A. Geiger. SphereNet: Learning spherical representations for detection and classification in omnidirectional images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 518–533, 2018.
  • [8] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3213–3223, 2016.
  • [9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.
  • [10] L. Deng, M. Yang, Y. Qian, C. Wang, and B. Wang. Cnn based semantic segmentation for urban traffic scenes using fisheye camera. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 231–236. IEEE, 2017.
  • [11] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, pages 1–16, 2017.
  • [12] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE international conference on computer vision, pages 2650–2658, 2015.
  • [13] J. Engel, T. Schöps, and D. Cremers. Lsd-slam: Large-scale direct monocular slam. In European conference on computer vision, pages 834–849. Springer, 2014.
  • [14] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
  • [15] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11):1231–1237, 2013.
  • [16] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
  • [17] X. Guo, Y. Yin, C. Dong, G. Yang, and G. Zhou. On the class imbalance problem. In Proceedings of the Fourth International Conference on Natural Computation (ICNC), pages 192–201, 2008.
  • [18] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
  • [19] M. Heimberger, J. Horgan, C. Hughes, J. McDonald, and S. Yogamani. Computer vision in automated parking systems: Design, implementation and challenges. Image and Vision Computing, 68:88–101, 2017.
  • [20] T. J. Herbert. Area projections of fisheye photographic lenses. Agricultural and Forest Meteorology, 39(2-3):215–223, 1987.
  • [21] J. Huang, W. Zou, Z. Zhu, and J. Zhu. An efficient optical flow based motion detection method for non-stationary scenes. arXiv preprint arXiv:1811.08290, 2018.
  • [22] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang. The ApolloScape dataset for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 954–960, 2018.
  • [23] C. Hughes, P. Denny, E. Jones, and M. Glavin. Accuracy of fish-eye lens models. Applied Optics, 49(17):3338–3347, 2010.
  • [24] N. Jankowski and M. Grochowski. Comparison of instances selection algorithms I: Algorithms survey. In Proceedings of the 7th International Conference on Artificial Intelligence and Soft Computing (ICAISC), pages 598–603, 2004.
  • [25] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. CoRR, abs/1812.04948, 2018.
  • [26] B. Khomutenko, G. Garcia, and P. Martinet. An enhanced unified camera model. IEEE Robotics and Automation Letters, 1(1):137–144, 2016.
  • [27] H. Kim, J. Jung, and J. Paik. Fisheye lens camera based surveillance system for wide field of view monitoring. Optik, 127(14):5636–5646, 2016.
  • [28] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization, 2014.
  • [29] I. Kokkinos. UberNet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5454–5463, 2017.
  • [30] Z. Kukelova, J. Heller, M. Bujnak, and T. Pajdla. Radial distortion homography. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 639–647, 2015.
  • [31] V. R. Kumar, S. Milz, C. Witt, M. Simon, K. Amende, J. Petzold, S. Yogamani, and T. Pech. Near-field depth estimation using monocular fisheye camera: A semi-supervised learning approach using sparse LiDAR data. In CVPR Workshop, 2018.
  • [32] Q. Li, A. Arnab, and P. H. Torr. Weakly-and semi-supervised panoptic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 102–118, 2018.
  • [33] H. Liu and H. Motoda. On issues of instance selection. Data Mining and Knowledge Discovery, 6(2):115–130, 2002.
  • [34] Y. Liu, J. Chen, and H. Chen. Less is more: Culling the training set to improve robustness of deep neural networks. In Proceedings of the 9th International Conference on Decision and Game Theory for Security (GameSec), pages 102–114, 2018.
  • [35] W. Maddern, G. Pascoe, C. Linegar, and P. Newman. 1 year, 1000 km: The Oxford RobotCar dataset. The International Journal of Robotics Research, 36(1):3–15, 2017.
  • [36] G. Neuhold, T. Ollmann, S. Rota Bulo, and P. Kontschieder. The Mapillary Vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4990–4999, 2017.
  • [37] J. A. Olvera-López, J. A. Carrasco-Ochoa, J. F. Martínez Trinidad, and J. Kittler. A review of instance selection methods. Artificial Intelligence Review, 34(2):133–143, 2010.
  • [38] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello. ENet: A deep neural network architecture for real-time semantic segmentation, 2016.
  • [39] R. Pless. Using many cameras as one. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003.
  • [40] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
  • [41] D. Rolnick, A. Veit, S. J. Belongie, and N. Shavit. Deep learning is robust to massive label noise. CoRR, abs/1705.10694, 2017.
  • [42] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez. The SYNTHIA Dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3234–3243, 2016.
  • [43] A. Sáez, L. M. Bergasa, E. Romeral, E. López, R. Barea, and R. Sanz. Cnn-based fisheye image real-time semantic segmentation. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1039–1044. IEEE, 2018.
  • [44] D. Schmalstieg and T. Hollerer. Augmented reality: principles and practice. Addison-Wesley Professional, 2016.
  • [45] M. Siam, M. Gamal, M. Abdel-Razek, S. Yogamani, and M. Jagersand. Rtseg: Real-time semantic segmentation comparative study. In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018.
  • [46] M. Siam, H. Mahgoub, M. Zahran, S. Yogamani, M. Jagersand, and A. El-Sallab. MODNet: Motion and appearance based moving object detection network for autonomous driving. In Proceedings of the 21st International Conference on Intelligent Transportation Systems (ITSC), pages 2859–2864, 2018.
  • [47] M. Simon, S. Milz, K. Amende, and H. Gross. Complex-yolo: Real-time 3d object detection on point clouds. CoRR, abs/1803.06199, 2018.
  • [48] G. Sistu, I. Leang, S. Chennupati, S. Milz, S. Yogamani, and S. Rawashdeh. NeurAll: Towards a unified model for visual perception in automated driving. arXiv preprint arXiv:1902.03589, 2019.
  • [49] Y.-C. Su and K. Grauman. Kernel transformer networks for compact spherical convolution. arXiv preprint arXiv:1812.03115, 2018.
  • [50] M. Teichmann, M. Weber, M. Zöllner, R. Cipolla, and R. Urtasun. MultiNet: Real-time joint semantic reasoning for autonomous driving. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), pages 1013–1020, June 2018.
  • [51] M. Toromanoff, E. Wirbel, F. Wilhelm, C. Vejarano, X. Perrotton, and F. Moutarde. End to end vehicle lateral control using a single fisheye camera. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3613–3619, 2018.
  • [52] M. Uřičář, D. Hurych, P. Křížek, and S. Yogamani. Challenges in designing datasets and validation for autonomous driving. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, pages 653–659. INSTICC, SciTePress, 2019.
  • [53] M. Uřičář, P. Křížek, D. Hurych, I. Sobh, S. Yogamani, and P. Denny. Yes, We GAN: Applying adversarial techniques for autonomous driving. CoRR, abs/1902.03442, 2019.
  • [54] J. Vertens, A. Valada, and W. Burgard. Smsnet: Semantic motion segmentation using deep convolutional neural networks. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017.
  • [55] R. W. Wood. Fish-eye views, and vision under water. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 12(68):159–162, 1906.
  • [56] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell. Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 2018.
  • [57] The nuScenes dataset. https://www.nuscenes.org, 2018.