The repository containing tools and information about the WoodScape dataset.
Fisheye cameras are commonly employed to obtain a large field of view in surveillance, augmented reality and, in particular, automotive applications. Despite their prevalence, there are few public datasets for detailed evaluation of computer vision algorithms on fisheye images. We release the first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, who invented the fisheye camera in 1906. WoodScape comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection and soiling detection. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images, and annotation for other tasks is provided for over 100,000 images. We would like to encourage the community to adapt computer vision models to fisheye cameras instead of relying on naive rectification.
Fisheye lenses provide a large field of view (FOV) using a highly non-linear mapping instead of the standard perspective projection. However, this comes at the cost of strong radial distortion. Fisheye cameras are so named because they relate to the view of the world that a fish has when observing the water surface from below, a phenomenon known as Snell's window. Robert Wood originally coined the term in 1906 and constructed a basic fisheye camera by taking a pin-hole camera and filling it with water; the water was later replaced with a hemispherical lens. To pay homage to the original inventor and coiner of the term "fisheye", we have named our dataset WoodScape.
Large FOV cameras are necessary for various computer vision application domains, including video surveillance and augmented reality, and have been of particular interest in autonomous driving. In automotive, rear-view fisheye cameras are commonly deployed in existing vehicles for dashboard viewing and reverse parking. While commercial autonomous driving systems typically make use of narrow-FOV forward-facing cameras at present, full surround perception is now being investigated for handling more complex use cases. In spite of this growing interest, there is relatively little literature and few datasets available. Examples of the few datasets that include fisheye imagery are: Visual SLAM ground truth for indoor scenes with omni-directional cameras; SphereNet, containing 1,200 labelled images of parked cars captured with omnidirectional cameras (not strictly fisheye); and, in automotive, the Oxford RobotCar dataset, a large-scale relocalization dataset with fisheye cameras.
WoodScape is a comprehensive dataset for sensing around a vehicle using the four fisheye cameras shown in Figure 2. It aims to complement the range of existing automotive datasets, in which only narrow-FOV image data is present: among those, KITTI was the first pioneering dataset with a variety of tasks, driving a lot of research in autonomous driving; Cityscapes provided the first comprehensive semantic segmentation dataset; Mapillary provided a significantly larger one; and Apolloscape and BDD100k are more recent datasets that push the annotation scale further. WoodScape is unique in that it provides fisheye image data along with a comprehensive range of annotation types. A comparative summary of these different datasets is provided in Table 1. The main contributions of WoodScape are as follows:
First fisheye dataset comprising over 10,000 images with instance-level semantic annotation.
Four-camera nine-task dataset designed to encourage unified multi-task and multi-camera models.
Introduction of a novel soiling detection task and release of the first dataset of its kind.
Proposal of an efficient metric for the 3D box detection task which improves training time by 95x.
The paper is organized as follows. Section 2 provides an overview of the fisheye camera model, undistortion methods and fisheye adaptation of vision algorithms. Section 3 discusses the details of the dataset, including goals, capture infrastructure and dataset design. Section 4 presents the list of supported tasks and baseline experiments. Finally, Section 5 summarizes and concludes the paper.
Fisheye cameras offer a distinct advantage for automotive applications. Given their extremely wide field of view, they can observe the full surroundings of a vehicle with a minimal number of sensors, with just four cameras typically required for full 360° coverage (Figure 2). This advantage comes with drawbacks: fisheye cameras exhibit significantly more complex projection geometry, and their images display severe distortion, as evidenced by all of the sample images used in this paper.
Typical camera datasets consist of narrow-FOV camera data, for which a simple pinhole projection model is commonly employed. In the case of fisheye camera images, it is imperative that the appropriate camera model is well understood, either to handle distortion within the algorithm or to warp the image prior to processing. This section is intended to highlight that the fisheye camera model requires specific attention. We provide a brief overview with references for further details, and discuss the merits of operating on the raw fisheye image versus linearizing it prior to processing.
Fisheye distortion is modelled by a radial mapping function r(θ), where r is the distance on the image from the centre of distortion and θ is the angle of the incident ray against the optical axis of the camera system, namely r = f(θ). Stereographic projection is the simplest model, using a mapping from a sphere to a plane. More recent projection models are the Unified Camera Model (UCM) [1, 5] and the Enhanced UCM (eUCM). A more detailed analysis of the accuracy of various projection models is available in the literature. These models are not a perfect fit for fisheye cameras, as they encode a specific geometry, and errors arising in the model are compensated by an added distortion-correction component.
In WoodScape, we provide a more generic fisheye intrinsic calibration that is independent of any specific projection model and does not require the added step of distortion correction. Our model is based on a fourth-order polynomial mapping incident angle to image radius in pixels (r(θ) = a₁θ + a₂θ² + a₃θ³ + a₄θ⁴). In our experience, higher orders provide no additional accuracy.
As a comparison, to give the reader an understanding of how different models behave, Figure 3 shows the mapping function for five different projection models: polynomial, rectilinear, stereographic, UCM and eUCM. The parameters of the fourth-order polynomial are taken from a calibration of our fisheye lens. We optimized the parameters of the other models to match this model in a range of 0° to 120°. The plot indicates that, for low incident angles, the difference to the original fourth-order polynomial is about four pixels for UCM and one pixel for eUCM. For larger incident angles, these models are less precise.
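The mapping functions compared above can be sketched in a few lines. The coefficients below are illustrative placeholders, not the dataset's actual calibration values:

```python
import numpy as np

def r_rectilinear(theta, f):
    # Pinhole/perspective mapping: r = f * tan(theta); only valid for theta < 90 deg.
    return f * np.tan(theta)

def r_stereographic(theta, f):
    # Stereographic mapping: r = 2f * tan(theta / 2); remains finite past 90 deg.
    return 2.0 * f * np.tan(theta / 2.0)

def r_polynomial(theta, a):
    # Generic 4th-order polynomial: r = a1*theta + a2*theta^2 + a3*theta^3 + a4*theta^4
    return a[0] * theta + a[1] * theta**2 + a[2] * theta**3 + a[3] * theta**4

# Illustrative coefficients only -- not the actual WoodScape calibration values.
a = [340.0, 10.0, -2.0, 0.1]
for deg in (10.0, 45.0, 80.0, 110.0):
    t = np.deg2rad(deg)
    print(deg, r_polynomial(t, a))
# Note how the rectilinear model diverges as theta approaches 90 deg, which is
# why a fisheye image with >180 deg FOV cannot be rectified to a single plane.
```

The polynomial stays finite for all incident angles, which is what makes it usable beyond the 90° limit of the rectilinear model.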
Standard computer vision models do not generalize easily to fisheye cameras because of the large non-linear distortion. For example, translation invariance is lost for a standard CNN. The naïve way to develop algorithms for fisheye cameras is to perform rectilinear correction so that standard models can be applied. The simplest linearization is to re-warp pixels to a rectilinear image, as shown in Figure 4 (a). However, this has two major issues. Firstly, the FOV is greater than 180°, hence there are rays incident from behind the camera and it is impossible to establish a complete mapping to a rectilinear viewport. This leads to a loss of FOV, seen in the missing yellow pillars in the corrected image. Secondly, resampling distortion is more pronounced near the periphery of the image, where a smaller region gets mapped to a larger region.
The missing FOV can be recovered using multiple linear viewports, as shown in Figure 4 (b). However, there are issues in the transition regions from one plane to another. This can be viewed as a piecewise-linear approximation of the fisheye lens manifold. Figure 4 (c) demonstrates a quasi-linear correction using a cylindrical viewport, which is linear in the vertical direction, so that straight vertical objects like pedestrians are preserved; however, there is a quadratic distortion along the horizontal axis. In many scenarios it provides a reasonable trade-off, but it still has limitations. In the case of learning algorithms, a parametric transform can be optimized for the target application's accuracy.
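The cylindrical quasi-linear correction described above can be sketched as follows: for each target pixel we build a viewing ray that is angular in the horizontal direction and linear in the vertical direction, then map it back onto the fisheye image with a polynomial model. The calibration values, image sizes and field-of-view angles below are assumptions for illustration:

```python
import numpy as np

def cylindrical_rays(width, height, hfov_deg, vfov_deg):
    # Azimuth is sampled linearly per column; the vertical coordinate on the
    # cylinder is linear, so straight vertical objects stay straight.
    phi = np.deg2rad(np.linspace(-hfov_deg / 2.0, hfov_deg / 2.0, width))
    half_v = np.tan(np.deg2rad(vfov_deg) / 2.0)
    y = np.linspace(-half_v, half_v, height)
    phi_g, y_g = np.meshgrid(phi, y)
    rays = np.stack([np.sin(phi_g), y_g, np.cos(phi_g)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def rays_to_fisheye(rays, poly, cx, cy):
    # Map unit rays to fisheye pixels via the polynomial model r(theta).
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    r = sum(c * theta ** (i + 1) for i, c in enumerate(poly))
    planar = np.linalg.norm(rays[..., :2], axis=-1)
    scale = np.where(planar > 1e-9, r / np.maximum(planar, 1e-9), 0.0)
    return cx + scale * rays[..., 0], cy + scale * rays[..., 1]

# Hypothetical calibration; real values come from the dataset's calibration files.
u, v = rays_to_fisheye(cylindrical_rays(640, 400, 180.0, 90.0),
                       poly=[330.0, 5.0, -1.0, 0.05], cx=640.0, cy=400.0)
print(u.shape, v.shape)
```

The resulting per-pixel source coordinates can be fed to a resampling routine such as OpenCV's remap to produce the corrected image.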
Because of the fundamental limitations of linearization, an alternative approach is to adapt the algorithm itself to incorporate the fisheye model. In the case of classical geometric algorithms, an analytical version of the non-linear projection can be incorporated; for example, Kukelova et al. extend homography estimation by incorporating a radial distortion model. In the case of deep learning algorithms, a possible solution is to train the CNN model to learn the distortion. However, the translation-invariance assumption of CNNs fundamentally breaks down due to the spatially variant distortion, and thus it is inefficient to let the network learn it implicitly. This has led to many adaptations of CNNs to handle spherical images. However, spherical models do not provide an accurate fit for fisheye lenses, and this remains an open problem.
Fisheye: One of the main goals of this dataset is to encourage the research community to develop vision algorithms natively on fisheye images, without undistortion. There are very few public fisheye datasets, and none of them provide semantic segmentation annotation. Fisheye is particularly beneficial in automotive low-speed manoeuvring scenarios such as parking, where accurate full-coverage near-field sensing can be achieved with just four cameras.
Multi-camera: Surround-view systems have at least four cameras rigidly connected to the body of the car. Pless did pioneering work in deriving a framework for modeling a network of cameras as one; this approach is useful for geometric vision algorithms like visual odometry. However, for semantic segmentation algorithms, there is no literature on joint modeling of rigidly connected cameras.
Multi-task: Autonomous driving comprises various vision tasks, and most work has focused on solving individual tasks independently. However, there is a recent trend [29, 50, 48, 6] to solve tasks using a single multi-task model to enable efficient reuse of encoder features and to provide regularization while learning multiple tasks. In these cases, only the encoder is shared and there is no synergy among decoders. Existing datasets are primarily designed to facilitate task-specific learning and do not provide simultaneous annotation for all tasks. We have designed our dataset so that simultaneous annotation is provided for the various tasks where possible.
Our diverse dataset originates from three distinct geographical locations: USA, Europe, and China. While the majority of data was obtained from saloon vehicles, there is a significant subset from a sports utility vehicle, ensuring a strong mix of sensor mechanical configurations. Driving scenarios are divided across highway, urban driving and parking use cases. Intrinsic and extrinsic calibrations are provided for all sensors, as well as timestamp files to allow synchronization of the data. Relevant vehicle mechanical data (e.g. wheel circumference, wheelbase) are included. High-quality data is ensured via quality checks at all stages of the data collection process, and annotation data undergoes rigorous quality assurance by highly skilled reviewers. The sensors recorded for this dataset are listed below:
4x 1 MPx RGB fisheye cameras (190° hFOV)
1x LiDAR rotating at 20Hz (Velodyne HDL-64E)
1x GNSS/IMU (NovAtel Propak6 & SPAN-IGM-A1)
1x GNSS Positioning with SPS (Garmin 18x)
Automotive grade Radar and Ultrasonics
Odometry signals from the vehicle bus
Our WoodScape dataset provides labels for several autonomous driving tasks, including semantic segmentation, monocular depth estimation, object detection (2D & 3D bounding boxes), visual odometry, Visual SLAM, motion segmentation, soiling detection and end-to-end driving (driving controls). In Table 1, we compare several properties of popular datasets against WoodScape. In addition to providing fisheye data, we provide data for many more tasks than is typical (nine in total), including completely novel tasks such as soiled-lens detection. Images are provided at 1 MPx resolution and videos are uncompressed at 30 fps, ranging in duration from 30 s to 120 s. The dataset also provides a set of synthetic data using accurate models of the real cameras, enabling investigation of additional tasks. The laser scanner point cloud provided in our dataset is accurately preprocessed using a commercial SLAM algorithm to provide a denser point cloud ground truth for tasks such as depth estimation and Visual SLAM, as shown in Figure 5. In terms of recognition tasks, we provide labels for forty classes; the distribution of the main classes is shown in Figure 6. Note that, for display purposes in this paper, we have merged some of the classes in Figure 6 (e.g. two_wheelers is a merge of bicycles and motorcycles).
The design of a dataset for machine learning is a very complex task. Unfortunately, due to the overwhelming success of deep learning, it has recently not received as much attention as we believe it still deserves. Deep learning was shown to be quite resistant to label noise [34], especially with regard to adversarial examples. Nevertheless, we believe that whenever a new dataset is released, significant effort should be spent not only on data acquisition but also on careful consistency checks and on splitting the database for training, model selection and testing. In this sub-section, we describe the efforts made in the design of WoodScape.
Sampling strategy: Let us first define some notation and naming conventions (we follow the definitions provided in the literature). A population P is the set of all existing feature vectors. A subset of the population collected during some process is called a sample set S. A representative set S* is significantly smaller than S, while capturing most of the information from S (compared to any different subset of the same size), and has low redundancy among the representatives it contains.
In an ideal world, we would like our training set to be equal to S*. This is extremely difficult to achieve in practice, and we strive to get as close as possible. There are several ways of accomplishing this. One such approach is the concept of the minimal consistent subset of a training set: given a training set T, we are interested in the smallest subset T' ⊆ T such that Acc(T') = Acc(T), where Acc denotes the selected accuracy measure (e.g. the Jaccard index). Note that computation of accuracy implies having the ground-truth labels. The purpose is to reduce the size of the training set by removing non-informative samples, which do not contribute to improving the learned model, and thereby ease the annotation effort.
There are two main groups of instance selection methods: wrappers and filters. Wrapper-based methods use a selection criterion based on the constructed classifier's accuracy. Filter-based methods, on the other hand, use a selection criterion based on an unrelated selection function. The concept of a minimal consistent subset is crucial for our setup, where data is obtained by recording image data from video cameras. Collecting frames at a frame rate of 30 fps, particularly at low speeds, ultimately leads to significant image overlap; therefore, an effective sampling strategy to distill the dataset is critical.
Data splitting and class balancing:
After collection of images, an instance selection algorithm is applied to remove redundancy. The dataset is split into three chunks: training, validation, and testing. For classical algorithms, all the data can be used for testing. As the names suggest, the training part serves for training purposes only; the validation part can either be joined with the training set (e.g. when the sought model does not require hyper-parameter selection) or be used for model selection; and the testing set is used for model evaluation purposes only. The dataset supports correct hypothesis evaluation, therefore multiple splits are provided. Depending on the particular task (see Section 4 for the full list), class imbalance may be an issue; therefore, task-specific splits are also provided. Full control of the splitting mechanism is provided, allowing each class to be represented equally within each split (i.e. stratified sampling).
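The stratified splitting described above can be sketched as follows; the 60/20/20 ratios are placeholders for illustration (the released split files define the actual splits):

```python
import numpy as np

def stratified_split(labels, ratios=(0.6, 0.2, 0.2), seed=0):
    """Split sample indices into train/val/test so that each class is
    represented in every chunk roughly according to `ratios`."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    splits = [[], [], []]
    for cls in np.unique(labels):
        # Shuffle this class's indices, then cut them according to the ratios.
        idx = rng.permutation(np.flatnonzero(labels == cls))
        n = len(idx)
        n_train = int(round(ratios[0] * n))
        n_val = int(round(ratios[1] * n))
        splits[0].extend(idx[:n_train])
        splits[1].extend(idx[n_train:n_train + n_val])
        splits[2].extend(idx[n_train + n_val:])
    return [np.sort(np.array(s, dtype=int)) for s in splits]

# Toy class distribution: 50 + 30 + 20 samples of three classes.
labels = [0] * 50 + [1] * 30 + [2] * 20
train, val, test = stratified_split(labels)
print(len(train), len(val), len(test))  # → 60 20 20
```

Because the cut is made per class, each class keeps the same proportion in every chunk, which is exactly what stratified sampling requires.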
GDPR challenges: The recent General Data Protection Regulation (GDPR) in Europe has given rise to challenges in making our data publicly available. More than one third of our dataset is recorded in Europe and is therefore GDPR-sensitive due to visible faces of pedestrians and license plates. There are several ways to deal with this issue. The typical approach is blurring of the sensitive regions in the image (i.e. human faces and license plates). However, such methods are unacceptable for machine learning and image processing algorithms in general: the blurred region potentially removes valuable information, which may be crucial for a specific algorithm to work with the desired precision. Another, more appropriate, approach is to replace these regions with automatically generated proposals, for example exchanging all faces for automatically generated ones (www.thispersondoesnotexist.com). Such anonymized data does not violate GDPR rules, as the original person is no longer identifiable. We aim to ensure that all instances of human faces and license plates are modified via this technique, with the goal of negligible impact on accuracy.
Due to limited space, we briefly describe the metrics and baseline experiments for each task; they are summarized in Table 2. The test dataset for each task consists of 30% of the respective annotated samples listed in Table 1. Code will be shared via GitHub, and sample video results are shared in the supplementary material.
Semantic segmentation networks have been successfully trained directly on fisheye images in [10, 43]. Due to the absence of fisheye datasets, these works used artificially warped Cityscapes images for training, with testing performed on fisheye images; however, artificial warping cannot increase the originally captured FOV. Our semantic segmentation dataset provides pixel-wise labels for 40 object categories; for comparison, the Cityscapes dataset provides 19 classes for evaluation. Figure 6 illustrates the distribution of the main classes. We use ENet to generate our baseline results, fine-tuning the model on our dataset with categorical cross-entropy loss and the Adam optimizer. We chose the Intersection over Union (IoU) metric to report the baseline results shown in Table 2. Figure 7 shows sample segmentation results on fisheye images from our test set. The four camera images are treated identically without any normalization; however, it would be interesting to explore customization of the model for each camera. The dataset also provides instance segmentation labels to enable training of panoptic segmentation models.
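The mean IoU metric used for this baseline can be computed from a pixel-level confusion matrix; a minimal numpy sketch:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union from a pixel-level confusion matrix."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    # cm[i, j] counts pixels with ground-truth class i predicted as class j.
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)
    inter = np.diag(cm).astype(float)
    union = cm.sum(0) + cm.sum(1) - np.diag(cm)
    valid = union > 0  # ignore classes absent from both prediction and GT
    return float(np.mean(inter[valid] / union[valid]))

# Tiny 2x3 example with three classes.
gt   = np.array([[0, 0, 1], [1, 1, 2]])
pred = np.array([[0, 1, 1], [1, 1, 2]])
print(mean_iou(pred, gt, 3))  # → 0.75
```

Per-class IoU (0.5, 0.75 and 1.0 here) is averaged without weighting by class frequency, so rare classes count as much as common ones.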
Our 2D object detection dataset is obtained by extracting bounding boxes from instance segmentation labels for different object categories, including pedestrians, vehicles, cyclists and motorcyclists. We use Faster R-CNN with a ResNet101 encoder, initialized with ImageNet pre-trained weights, and fine-tune the detection network by training on both the KITTI and our object detection datasets. Performance of 2D object detection is reported in terms of mean average precision (mAP) at a fixed IoU threshold between predicted and ground-truth bounding boxes. Our mAP score (Table 2) is significantly lower than the accuracies achieved on other datasets. This was expected, as bounding box detection is a difficult task on fisheye images: the orientation of objects in the periphery differs greatly from the central region. To quantify this further, we tested a pre-trained network on the person class; it achieved a much poorer mAP score than our dataset-trained model. Sample results of the fisheye-trained model are illustrated in Figure 7. We observe that it is necessary to incorporate the fisheye geometry explicitly, and this remains an open research problem.
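The true-positive criterion underlying mAP is a box-overlap test; a minimal sketch, assuming the conventional 0.5 threshold for illustration:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp the intersection extents at zero for non-overlapping boxes.
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

# A detection counts as a true positive when its IoU with a ground-truth
# box exceeds the chosen threshold (0.5 assumed here).
print(box_iou((0, 0, 10, 10), (2, 0, 12, 10)) > 0.5)  # → True
```

Average precision is then computed per class from the resulting precision-recall curve and averaged into mAP.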
The task of soiling detection was, to the best of our knowledge, first defined in the prior literature. Unlike the front camera, which sits behind the windshield, the surround-view cameras are usually directly exposed to adverse environmental conditions. One therefore cannot avoid situations where, for example, a splash of mud or other dirt hits the camera. Another, even more common, example is heavy rain, when water drops frequently hit the camera lens surface. As the functionality of visual perception degrades significantly, detection of soiled cameras is necessary for achieving higher levels of automated driving. As it is a novel task, we discuss it in more detail below.
We treat the camera soiling detection task as a mixed multilabel-categorical classification problem, i.e. we are interested in a classifier which jointly classifies a single image with a binary indicator array, where each 0 or 1 corresponds to a missing or present class respectively, and simultaneously assigns a categorical label. The classes to detect are opaque and transparent soiling. Typically, opaque soiling arises from mud and dust (Figure 10, right image), and transparent soiling arises from water and ice (Figure 10, left image). However, in practice it is common to see water producing "opaque" regions in the camera image. The categories are one-hot encoded interval ranges of the soiling severity; the interval ranges are beginning-exclusive and ending-inclusive, with the first category corresponding to a completely clean image. For example, a label vector may indicate an image containing the transparent class only, with severity in a particular range.
Annotation is performed by drawing polygons to separate soiled from unsoiled regions, so that the problem can also be modeled as a segmentation task if necessary. We evaluate the soiling classifier's performance via an example-based accuracy measure for each task separately, i.e. the average Jaccard index over the testing set: (1/N) Σᵢ |yᵢ ∩ ŷᵢ| / |yᵢ ∪ ŷᵢ|, where yᵢ denotes the label vector for the i-th testing sample, ŷᵢ denotes the classifier's prediction, and N denotes the cardinality of the testing set. We use a small baseline network (a ResNet10 encoder with a shallow decoder); precision results for the multilabel and severity classification are reported in Table 2.
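The example-based Jaccard accuracy above can be sketched directly for binary label vectors:

```python
import numpy as np

def example_based_jaccard(y_true, y_pred):
    """Average Jaccard index over samples: (1/N) * sum_i |y_i AND yhat_i| / |y_i OR yhat_i|.
    A sample with an empty union (all-clean image predicted clean) scores 1."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    inter = (y_true & y_pred).sum(axis=1)
    union = (y_true | y_pred).sum(axis=1)
    scores = np.where(union > 0, inter / np.maximum(union, 1), 1.0)
    return float(scores.mean())

# Label vectors: [opaque, transparent] soiling indicators per image.
y_true = [[1, 0], [1, 1], [0, 0]]
y_pred = [[1, 0], [0, 1], [0, 0]]
print(example_based_jaccard(y_true, y_pred))  # (1 + 0.5 + 1) / 3
```

The all-clean convention for empty unions is an assumption here; any fixed convention works as long as it is applied consistently across the test set.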
3D box annotation is provided for 10k frames with three classes, namely pedestrians, vehicles and cyclists. In general, 3D IoU is used to evaluate 3D bounding box predictions, but it has drawbacks, especially for rotated objects: two boxes can reach a good 3D IoU score while overlapping with opposite headings. Additionally, an exact calculation in 3D space is time-consuming. To avoid these problems, we introduce a new evaluation metric called the Scaling-Rotation-Translation score (SRTs). SRTs is based on the idea that two non-identical 3D boxes can be transformed into each other by three independent rigid transformations: translation, rotation and scaling. The score is therefore composed of three corresponding terms.
The scaling term is computed from the size ratios in the x, y and z directions; the rotation term is determined by the difference of the yaw angles; and the translation term is defined by the Euclidean distance between the two box centers, normalized with respect to the size of the two objects via radii derived from the lengths of their diagonals. Based on these penalty terms, the full metric is defined as a weighted combination of the three scores.
The weights can be used to prioritize individual properties (e.g. size, angle). For our baseline experiments, we chose weights that put more emphasis on the angle, because our experiments have shown that translation and scaling are easier to learn. For the baseline, we trained Complex-YOLO for a single class (cars). We repeated training twice, first optimized on 3D-IoU and second optimized on SRTs, using a fixed 50:50 split for training and validation. For comparison, we present 3D-IoU, orientation and runtime on moderate difficulty, see Table 2. Runtime is the average runtime of all box comparisons for each input during training. Even though this comparison uses 3D-IoU, we achieve similar average precision (3D-IoU), with better average orientation similarity (AOS) and much faster computation.
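As an illustration of the idea, the sketch below combines scaling, rotation and translation terms into a weighted score. The individual term formulas and the weights are assumptions chosen to illustrate the structure, not the exact definitions used in our evaluation:

```python
import numpy as np

def srt_score(box_a, box_b, c_s=0.3, c_r=0.4, c_t=0.3):
    """SRT-style score sketch. Each box is a dict with keys 'center' (x, y, z),
    'size' (l, w, h) and 'yaw'. Term definitions here are illustrative only."""
    # Scaling: mean of per-axis size ratios (smaller / larger), in (0, 1].
    sa, sb = np.asarray(box_a['size'], float), np.asarray(box_b['size'], float)
    s_s = float(np.mean(np.minimum(sa, sb) / np.maximum(sa, sb)))
    # Rotation: yaw difference wrapped to [0, pi], normalized to a score.
    d_yaw = abs((box_a['yaw'] - box_b['yaw'] + np.pi) % (2 * np.pi) - np.pi)
    s_r = 1.0 - d_yaw / np.pi
    # Translation: center distance relative to radii from the box diagonals.
    d = float(np.linalg.norm(np.asarray(box_a['center'], float)
                             - np.asarray(box_b['center'], float)))
    r_a, r_b = np.linalg.norm(sa) / 2.0, np.linalg.norm(sb) / 2.0
    s_t = max(0.0, 1.0 - d / (r_a + r_b))
    # Weighted combination; weights emphasizing rotation, as in our setup.
    return c_s * s_s + c_r * s_r + c_t * s_t

a = {'center': (0, 0, 0), 'size': (4, 2, 1.5), 'yaw': 0.0}
print(srt_score(a, a))  # identical boxes score 1.0
```

Unlike 3D IoU, every term is a closed-form expression, so no polyhedral intersection is needed, which is where the large speed-up over exact 3D IoU computation comes from.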
Monocular depth estimation is an important task for detecting generic obstacles. We provide over 100k images per camera (totaling 400k) with ground truth from LiDAR. Figure 1 shows a colored example for the front camera, where blue to red encodes distance. As the obtained depth is sparse, we also provide a denser point cloud based on SLAM-processed static scenes, as shown in Figure 5. The ground-truth 3D points are projected onto the camera images using our proposed model discussed in Section 2.1. We also apply occlusion correction to handle the difference in perspective between the LiDAR and the camera, similar to a previously proposed method. As a baseline on our much larger dataset, we run a semi-supervised approach using the model proposed by Eigen, see Table 2.
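The LiDAR-to-image projection step can be sketched with the polynomial model of Section 2.1. The coefficients, principal point and image size below are illustrative assumptions, not actual calibration values, and occlusion correction is omitted:

```python
import numpy as np

def project_points(points, poly, cx, cy, width, height):
    """Project LiDAR points (already in camera coordinates, z forward) onto the
    fisheye image with a polynomial model, returning pixel coords and depths."""
    pts = np.asarray(points, float)
    norms = np.linalg.norm(pts, axis=1)
    # Incident angle against the optical axis (z), then radius via polynomial.
    theta = np.arccos(np.clip(pts[:, 2] / np.maximum(norms, 1e-9), -1.0, 1.0))
    r = sum(c * theta ** (i + 1) for i, c in enumerate(poly))
    planar = np.linalg.norm(pts[:, :2], axis=1)
    scale = np.where(planar > 1e-9, r / np.maximum(planar, 1e-9), 0.0)
    u, v = cx + scale * pts[:, 0], cy + scale * pts[:, 1]
    # Keep points landing inside the image and in front of the camera.
    keep = (u >= 0) & (u < width) & (v >= 0) & (v < height) & (pts[:, 2] > 0)
    return u[keep], v[keep], norms[keep]

pts = np.array([[0.0, 0.0, 10.0], [3.0, 1.0, 8.0], [0.0, 0.0, -5.0]])
u, v, d = project_points(pts, poly=[330.0, 5.0, -1.0, 0.05], cx=640, cy=400,
                         width=1280, height=800)
print(len(d))  # → 2  (the point behind the camera is dropped)
```

Scattering the surviving depths at their (u, v) positions yields the sparse depth map used as ground truth.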
In automotive scenes, motion is a strong cue due to the ego-motion of the cameras on the moving vehicle, and the dynamic objects around the vehicle are the critical interacting agents. Additionally, it is helpful to detect generic objects based on motion cues rather than appearance cues, as there will always be rare objects such as kangaroos or construction trucks. This has been explored in [46, 54, 45, 21] for narrow-angle cameras. In our dataset, we provide motion mask annotation for moving classes such as vehicles, pedestrians and cyclists for over 10k images. We also provide the previous and next images for exploring multi-stream models like MODNet. Our motion segmentation annotation was done using a semi-automated approach: velocity vectors for each segment were computed using LiDAR and the ego-motion provided by GNSS/IMU, and each segment was then classified as moving or static and manually verified. Motion segmentation is treated as a binary segmentation problem, and IoU is used as the comparison metric. Using MODNet as the baseline network, we achieve an IoU of 45%.
Visual odometry (VO) is necessary for creating a map from the detected objects. We use our GNSS and IMU to provide annotation at centimetre-level accuracy. The ground truth contains all six degrees of freedom (up to scale), and the metric used is the percentage of frames within a tolerance level of translation and rotation error. Robustness could be added to visual odometry by performing a joint estimation from all four cameras; as far as the authors are aware, there is no published work on multi-camera VO. We provide 50 video sequences comprising over 100k frames with ground truth. The video sequences can also be used for Visual SLAM, where we focus on relocalization of a mapped trajectory; the metric is the same as for VO. We use a fisheye-adapted LSD-SLAM as our baseline model, as illustrated in Figure 10, and accuracy figures are provided in Table 2.
Synthetic data is crucial for autonomous driving for many reasons. Firstly, it provides a mechanism for rigorous corner-case testing over diverse scenarios and use cases. Secondly, there are restrictions, such as recording videos of children, and such scenes therefore have to be simulated. Finally, synthetic data is the only way to obtain dense depth and optical flow annotation. There are several popular synthetic datasets, such as SYNTHIA and CARLA. We will provide a synthetic version of our fisheye surround-view dataset, modelling the optics and camera model, as shown in Figure 10. The main goal is to explore domain transfer of tasks from the synthetic to the realistic domain. Although this is not a task by itself, it can enable new tasks like adverse weather detection.
End-to-end learning of driving controls directly from images is an emerging alternative to the modular perception-planning pipeline. Although this approach is not mature enough for deployment, it can be viewed as a redundant parallel model for safety. In the current modular approach, perception is designed independently, and it is probably a more complex intermediate problem than what is needed for a small-action-space driving task. Thus we have added end-to-end steering and braking tasks to encourage modular end-to-end architectures and to explore perception optimized for the control task. The latter is analogous to the hand-eye coordination of human drivers, where perception is optimized for driving.
In this paper, we provide an extensive multi-camera fisheye dataset for autonomous driving with annotation for nine tasks, as well as additional sensor data. We hope that the release of the dataset encourages the development of native fisheye models instead of warping fisheye images and applying standard models. In the case of deep learning algorithms, it can help in understanding whether spatial distortion can be learned or must be explicitly modeled. In future work, we plan to explore and compare various methods of undistortion and the explicit incorporation of fisheye geometry in CNN models.
We would like to thank our colleagues including Nivedita, Mihai, Philippe, Jose and Pantelis who have supported the creation of the dataset. We would also like to thank our partner MightyAI for providing high-quality semantic segmentation annotation services.