Near real-time map building with multi-class image set labelling and classification of road conditions using convolutional neural networks

01/27/2020
by   Sheela Ramanna, et al.
The University of Winnipeg

Weather is an important factor affecting transportation and road safety. In this paper, we leverage state-of-the-art convolutional neural networks in labelling images taken by street and highway cameras located across North America. Road camera snapshots were used in experiments with multiple deep learning frameworks to classify images by road condition. The training data for these experiments used images labelled as dry, wet, snow/ice, poor, and offline. The experiments tested different configurations of six convolutional neural networks (VGG-16, ResNet50, Xception, InceptionResNetV2, EfficientNet-B0 and EfficientNet-B4) to assess their suitability to this problem. The precision, accuracy, and recall were measured for each framework configuration. In addition, the training sets were varied both in overall size and by size of individual classes. The final training set included 47,000 images labelled using the five aforementioned classes. The EfficientNet-B4 framework was found to be most suitable to this problem, achieving a validation accuracy of 90.6% in half the execution time. Throughout this project, VGG-16 with transfer learning proved to be very useful for data acquisition and pseudo-labelling with limited hardware resources. The EfficientNet-B4 framework was then placed into a production environment, where images could be classified in real time on an ongoing basis. The classified images were then used to construct a map showing real-time road conditions at various camera locations across North America. The choice of these frameworks and our analysis take into account the unique requirements of real-time map building functions. A detailed analysis of the process of semi-automated dataset labelling using these frameworks is also presented in this paper.


1 Introduction

Adverse road conditions present a frequent hazard to motorists. In cold climates, snow, ice, and frost can produce slippery roads, while the reduced friction from wet roads is a hazard in both warm and cold climates. Data from 2010-2018 in the United States showed that on average 767,779 crashes per year (13% of the total) occur during adverse weather conditions (rain, snow, sleet, freezing rain, or hail). In addition, an average of 2,747 fatalities per year (9% of all fatalities) occurred during times of adverse weather conditions (https://cdan.dot.gov/query, retrieved on Jan. 4, 2020). Advances have been made in better monitoring roads during hazardous weather conditions. Road Weather Information Systems (RWIS) can provide real-time road weather information at point locations, which is often used to produce road weather forecasts (e.g., [1, 2]). This data is then transmitted to transportation operations centers and disseminated to the public through services such as the 511 network [3]. Many RWIS are also equipped with cameras, which give a real-time view of the road. While the information provided by RWIS and cameras is useful, there are still limitations. Since these systems are operated at the state/province or local level, there is no unified road information system. Therefore, motorists must consult different sources for road weather information in each jurisdiction where they travel. Due to the cost of such systems, not all jurisdictions have RWIS/cameras, and those that do often have a limited number. This can introduce large gaps in road weather information. These gaps are sometimes filled by manual observations from operators, or have no data at all. Since cameras are much more prevalent than RWIS, and less expensive, they may present an opportunity to improve road weather data where there is currently limited data.
In addition, since camera images are readily available and come in common formats, they can be sourced across all jurisdictions and combined into a unified system. However, combining all such cameras is a major task, since there are tens of thousands in North America alone. Furthermore, as noted by Carrillo et al. [4] it is challenging for operators to process vast amounts of road weather data in real time.

Early work involving road condition classification from weather data involved cameras mounted on vehicles, primarily used in vehicle navigation [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. Some of these methods use image processing techniques such as extracting regions of interest (ROI) from the images or road segmentation. Histogram features derived from the ROIs can then be used with classical machine learning methods such as Support Vector Machines (SVM) to label the weather/road conditions into various categories such as sunny, cloudy and rainy. A spatio-temporal approach to road condition estimation, which models wet road surface conditions by integrating over many frames, was explored in [15]. Weather recognition from general outdoor images was explored in [16, 17, 18, 19, 20], to name a few.

The success of deep convolutional neural networks (DCNN) [21, 22] in computer vision tasks [23, 24] and the generation of large weather datasets [25, 26, 27] led to their application in weather recognition problems [28, 29, 26, 30, 31, 27]. Automatic fog detection with DCNNs using the H2O platform (https://www.h2o.ai/) to predict the presence of dense fog from daytime camera images has been implemented with several sets of images collected by the Royal Netherlands Meteorological Institute (KNMI) (https://www.wmo.int/pages/prog/www/IMOP/documents/O4_1_Pagani_etal_ExtendedAbstract.pdf). A near real-time geographical map showing the predicted values of the cameras is also given, with promising results. In [32], 5,000 images from highway sections in Ontario, Canada, captured by smartphones were used to classify road surface conditions using a pre-trained VGG-16 model. For a five-class road surface classification, the best accuracy with the DCNN was 78.5%. In [33], two DCNNs were applied to differentiate six classes of road surface conditions (cobblestone, wet asphalt, snow, grass, dirt and asphalt) with an eventual goal of predicting the road friction coefficient. This study augmented its dataset with data from publicly available datasets for automated driving, which led to a classification accuracy of 92%. Classification of road surface condition with deep learning models was explored by Carrillo et al. [4], where six state-of-the-art DCNN models were pre-trained using ImageNet parameters to classify road surface condition images (about 16,800) from roadside cameras in Ontario, Canada.

The research goals explored in this paper are as follows: i) to leverage state-of-the-art DCNNs in labelling images taken by street and highway cameras located across Canada and the United States (see Figure 1), ii) to evaluate multiple DCNN models for classification of road conditions, and iii) to construct a real-time map of North America depicting road conditions.

Figure 1: Sample raw images from street and road cameras representing (a) Dry, (b) Offline, (c) Poor, (d) Snow and (e) Wet categories

The major contributions of this work are as follows: i) a methodology outlining steps for generating a multi-class dataset of images of road conditions in North America from road cameras, ii) a map building system, and iii) detailed analysis of the process of semi-automated dataset labelling using well-known deep learning frameworks. It is also noteworthy that the best classification accuracy result of 90.9% was achieved without performing any pre-processing on the images (such as noise removal, text/logo removal, histogram equalization, cropping) other than rescaling.

Our paper is organized as follows: In Section 2, we briefly review research related to weather classification. In Section 3, we give an overview of the map building application pipeline, along with the various convolutional network architectures used in this paper. In Section 4, we present a detailed discussion of the process of labelling raw images and generating training examples. In Section 5, we give classification results and illustrate the final map building exercise. Lastly, we give concluding remarks in Section 6.

2 Related Works

Research related to weather classification using deep learning is presented here. Related works are grouped into different categories that share similar methods.

  1. Vehicle navigation from in-vehicle cameras using hand-crafted features:
    In [5], raindrops formed on the windshield of vehicles were used to detect rainy weather, where eigendrops representing the principal components were extracted from the raindrop images. In [7, 8], various histogram features were extracted from the captured images. These features were used as inputs to well-known machine learning classifiers such as SVM or AdaBoost to classify the images into classes such as sunny, cloudy and rainy [8], or more granular categories such as clear weather, light rain or heavy rain [7]. Another important weather condition, besides rain, is fog. In [6], a single camera image was captured consisting of road and sky elements, with the intention of detecting day-time fog and computing the meteorological visibility distance for vehicle navigation. In a later paper [11], a night-fog detection system was developed using two methods that detect fog in the presence of road traffic lights or public lighting, using multipurpose cameras. In [9], a real-time fog detection system was developed using a combination of image processing techniques such as Sobel filtering (to detect blurry images), road/sky segmentation and visibility distance calculation. Another day-time fog detection method, using global features obtained by training Gabor filters at different scales and orientations on the power spectrum, was presented in [12]. An SVM classifier with an RBF kernel was used for classification. In [13], multiple features such as sky, shadow, rain streak, snowflakes, dark channel, contrast and saturation were extracted from an outdoor data set consisting of 20K images. These images were classified based on shared dictionaries of weather conditions and an SVM classifier.

  2. Weather recognition from outdoor images:
    In [17], a photometric stereo-based method was proposed to estimate weather conditions using multiple popular tourist site images from the internet. In [16], a physics-based model was developed to capture multiple scattering of light rays as they travel from a source to an observer, for various weather conditions including fog, haze, mist and rain. In [18], the weather recognition problem was viewed as a dynamic scene changing scenario where several transient attributes such as lighting, weather, and seasons were defined and used to annotate thousands of images with perceived scene properties. Support vector regression and logistic regression methods were used to train different non-linear predictors. In [19, 20], classifiers such as k-nearest neighbour and decision trees were used to detect the weather conditions in outdoor images using global features such as power spectral slope, edge gradient energy, contrast, saturation, and noise. In [14], the authors first segmented road surface images to obtain an ROI showing different weather conditions from bare dry to snow packed. This segmentation method relied on contextual information to define the vanishing point of the road and the horizon line. These images were then classified into five classes (dry, wet, snow, ice, packed) using a standard SVM classifier with an RBF kernel. This method resulted in an accuracy of 86% for binary classification (bare vs. snow or ice-covered).

  3. Weather recognition with features derived from CNNs:
    In [28], a CNN was trained using ImageNet to classify weather images. In [25], a collaborative learning approach using novel weather features to label a single outdoor image as either sunny or cloudy was proposed. A CNN was used to extract features, which were then fed to an SVM framework to obtain individual weather features. In addition, a data augmentation scheme was used to enrich the training data. In [26], a multi-class benchmark dataset containing six common weather categories (sunny, cloudy, rainy, snowy, haze, and thunder) was created. A region selection and concurrency model (RSCM) was proposed to detect visual concurrency on region pairs of weather categories. This model was tested using a deep-learning framework. In [29], features of extreme weather and recognition models were generated from a large-scale extreme weather dataset in which 16,635 extreme weather images with complex scenes were divided into four classes (sunny, rainstorm, blizzard, and fog). GoogLeNet, pre-trained on the ILSVRC-2012 dataset, was fine-tuned on their dataset. In [30], an open source rain-fog-snow (RFS) dataset of images was created, and a novel algorithm in which super-pixel delimiting masks serve as a form of data augmentation was proposed. A CNN was used to extract features from the augmented images, which were then used to train an SVM classifier. In [31], deep convolutional generative adversarial networks (DCGAN) were used to generate images to balance three benchmark (imbalanced) datasets of weather images. The CNN model was then applied directly to classify the images. In [27], weather recognition was treated as a multi-label classification task, where an image can be assigned more than one label according to the weather conditions. The authors also used a CNN to extract the most correlated visual weather features. A long short-term memory (LSTM) recurrent neural network (RNN) was used to model dependencies amongst weather classes.


3 Proposed Deep Learning Pipeline for Near Real-time Map Generation

In this section, we provide an overview of the pipeline for map generation using deep learning frameworks.

3.1 Application Pipeline

Our approach is to use CNNs to perform end-to-end classification, where raw images are fed directly into the CNNs to classify road conditions. The proposed pipeline for the near real-time road condition classifier consists of four modules: image acquisition, image classification, database submission and map generation. The process is summarized in Figure 2. In order to reduce the overall latency, these modules are implemented in such a way that they can run in an overlapping fashion: each stage processes inputs as they emerge from the previous stage.

Image Acquisition:

The first stage is the acquisition of the input images on which we perform the road condition classification. These images are snapshots taken by street and highway cameras located across Canada and the United States. They are downloaded over the internet by sending snapshot queries to public camera APIs. For this task, we rely on a pre-assembled catalogue containing a unique camera identifier, the snapshot URL and the geographic location for each camera of interest. As the images are downloaded periodically to the computer vision server, they are passed along to the classification module for further processing. The speed of this module is practically bound by the network bandwidth, and it can be executed in multiple threads.
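As a sketch, the catalogue-driven acquisition step might look as follows. The CSV layout and example URL below are assumptions for illustration; the real catalogue contains the camera identifier, snapshot URL and geographic location described above.

```python
import csv
import io
import urllib.request
from collections import namedtuple

# Hypothetical catalogue record layout (id, snapshot URL, latitude, longitude).
Camera = namedtuple("Camera", ["camera_id", "snapshot_url", "lat", "lon"])

def load_catalogue(csv_text):
    """Parse the pre-assembled camera catalogue from CSV text."""
    reader = csv.reader(io.StringIO(csv_text))
    return [Camera(cid, url, float(lat), float(lon))
            for cid, url, lat, lon in reader]

def fetch_snapshot(camera, timeout=10):
    """Download one snapshot over the internet; returns raw image bytes."""
    with urllib.request.urlopen(camera.snapshot_url, timeout=timeout) as resp:
        return resp.read()
```

In practice `fetch_snapshot` would be called periodically for every catalogue entry, from multiple threads, with the resulting bytes handed to the classification module.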

Image Classification:

This is the core module performing the machine learning tasks. It contains the deep learning classifier with pre-trained weights. This module monitors the set of incoming images and checks them for corruption and integrity. Valid images are resized to the expected input dimensions of the classifier and fed into it in batches. The output is written to a catalogue of label records on the local disk, in the form of camera identifiers, time stamps, inferred labels and geographic location tags.
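The integrity check and batching logic might be sketched as follows. The paper does not specify which checks are used, so the JPEG marker test below is an illustrative assumption:

```python
def looks_like_valid_jpeg(data: bytes) -> bool:
    """Cheap integrity check: a complete JPEG starts with the SOI marker
    (FF D8) and ends with the EOI marker (FF D9). A truncated download
    typically fails the trailing-marker test."""
    return len(data) > 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

def batched(items, batch_size):
    """Group incoming images into fixed-size batches for the classifier;
    the final batch may be smaller."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```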

Database Submission:

This module monitors the label records generated by the image classification module. As records come in, they are retrieved and sent to a remote database server for further processing. This module is decoupled from the image classification task to avoid any delays.

Map Building:

This module monitors the database for emerging records and fetches them to maintain an output map on which icons indicating the road conditions are superimposed at their respective geo-locations, for visual representation.
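The overlapping execution of the stages can be sketched with a queue-per-stage design, where each stage consumes the previous stage's output as it arrives. The stage internals (CNN inference, database client, map renderer) are stubbed out here; only the hand-off structure is illustrated.

```python
import queue
import threading

STOP = object()  # sentinel that shuts the pipeline down

def stage(fn, inbox, outbox):
    """Generic pipeline stage: consume items from inbox, apply fn,
    forward results to outbox, and propagate the STOP sentinel."""
    while True:
        item = inbox.get()
        if item is STOP:
            outbox.put(STOP)
            return
        outbox.put(fn(item))

def run_pipeline(images, classify, submit, draw):
    """Run classification, database submission and map drawing as
    overlapping threads, fed by the acquisition loop below."""
    q1, q2, q3, q4 = (queue.Queue() for _ in range(4))
    stages = [
        threading.Thread(target=stage, args=(classify, q1, q2)),
        threading.Thread(target=stage, args=(submit, q2, q3)),
        threading.Thread(target=stage, args=(draw, q3, q4)),
    ]
    for t in stages:
        t.start()
    for img in images:          # the acquisition stage feeds the pipeline
        q1.put(img)
    q1.put(STOP)
    results = []
    while True:
        item = q4.get()
        if item is STOP:
            break
        results.append(item)
    for t in stages:
        t.join()
    return results
```

Because each stage runs in its own thread and hands work off through FIFO queues, a batch can be classified while the next batch is still downloading, which is the latency-hiding behaviour described above.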

Figure 2: A summary of near real-time road condition classification process.

3.2 Operational Deep Learning Frameworks

In this work, we considered a variety of deep learning frameworks. The following sections give a brief overview of the various architectures used in this paper.

3.2.1 Visual Geometry Group - VGG

Developed by Simonyan and Zisserman [34], VGG was the runner-up at the ILSVRC 2014 (ImageNet Large Scale Visual Recognition Competition). It is one of the earliest networks which showed that using small convolutional filters with a deeply layered architecture can produce successful results. VGG has a deep feed-forward architecture with no residual connections, formed by linearly connected convolutional layers with max-pooling after every second or third layer, and two fully-connected layers at the end. The architecture is summarized in Table 1.

Layer Kernel Size / Stride Output Size
Input 224 × 224 × 3
conv-block1 3×3, 64 / stride = 1 (224 × 224 × 64)
maxpool1 2×2 / stride = 2 (112 × 112 × 64)
conv-block2 3×3, 128 / stride = 1 (112 × 112 × 128)
maxpool2 2×2 / stride = 2 (56 × 56 × 128)
conv-block3 3×3, 256 / stride = 1 (56 × 56 × 256)
maxpool3 2×2 / stride = 2 (28 × 28 × 256)
conv-block4 3×3, 512 / stride = 1 (28 × 28 × 512)
maxpool4 2×2 / stride = 2 (14 × 14 × 512)
conv-block5 3×3, 512 / stride = 1 (14 × 14 × 512)
maxpool5 2×2 / stride = 2 (7 × 7 × 512)
fc-relu (4096)
fc-softmax (5)
Table 1: VGG Architecture Layers used in this work

3.2.2 Residual Neural Network- ResNet

Developed by Kaiming He et al. [35], the ResNet architecture introduces a solution to the network depth-accuracy degradation problem. This is done by deploying shortcut connections between one or more layers of convolutional blocks that perform identity mapping, which are called residual connections. This allows the construction of a deeper network that is easier to optimize, compared to a counterpart deep network based on unreferenced mapping. ResNet won first place in the ILSVRC 2015 classification competition. For this work, a 178-layer deep version of ResNet50 is customized for our classification experiments. The last fully connected layer is removed and replaced by a drop-out layer followed by a fully connected layer. The ResNet architecture is given in Table 2.

Layer Kernel Size / Stride Output Size
Input 224 × 224 × 3
conv1 7×7, 64, stride 2 (112 × 112 × 64)
maxpool 3×3, stride 2 (56 × 56 × 64)
conv2_x [1×1, 64; 3×3, 64; 1×1, 256] × 3 (56 × 56 × 256)
conv3_x [1×1, 128; 3×3, 128; 1×1, 512] × 4 (28 × 28 × 512)
conv4_x [1×1, 256; 3×3, 256; 1×1, 1024] × 6 (14 × 14 × 1024)
conv5_x [1×1, 512; 3×3, 512; 1×1, 2048] × 3 (7 × 7 × 2048)
global_avg_pooling 2048
dropout(rate = 0.2) 2048
fc_softmax 5
Table 2: ResNet Architecture Layers used in this work

3.2.3 InceptionResNetV2

InceptionResNetV2 integrates residual connections into a deep Inception network [36]. The model achieved lower top-1 and top-5 error rates compared to batch-normalization Inception, Inception-v3, Inception-ResNet-v1 and Inception-v4. In our experiments: i) the input images were rescaled to the network's expected input size, ii) the top layer was removed and replaced by a dropout layer with a dropout rate of 0.4, and iii) a softmax fully-connected layer was added for the 5 classes. Details of the network are shown in Table 3.

Layer Kernel Size / Stride Output Size
Input 299 × 299 × 3
stem [conv3, 32/2 V] ()
[conv3, 32 V] ()
[conv3, 64] ()
[maxpool3x3, 2 V conv3 , 96/2 V] ()
[conv1, 64 conv3, 96 V
conv1, 64 conv7x1, 64 conv1x7, 64 conv3, 96 V] ()
[maxpool, 2 V conv3 , 192 V] ()
inceptionresnet(a) 5 5 [conv1, 32
conv1, 32 conv3, 32
conv1, 32 conv3, 48 conv3, 64 ]
[conv1, 384] + ReLU ()
reduction(a) [ maxpool3x3, 2 V conv3, 384 2 V
conv1, 256 conv3, l conv3, 384 2 V ] ()
inceptionresnet(b) 10 10 [conv1, 192
conv1, 128 conv1x7, 160 conv7x1, 192]
[conv1, 1154] + ReLU ()
reduction(b) [maxpool3x3, 2 V conv1, 256 conv3, 384 2 V
conv1, 256 conv3, 288 2 V
conv1, 256 conv3, 288 conv3, 320 2V] ()
inceptionresnet(c) 5 [conv1, 192
conv1, 192 conv1x3, 224 conv3x1, 256]
[conv1, 2048] + ReLU ()
avgpool 1792
dropout(rate = 0.4) 1792
fc_softmax 5
Table 3: InceptionResNetV2 Architecture Layers used in this work

3.2.4 Extreme Inception - Xception

The Xception network was introduced by Francois Chollet [37], where Inception modules were replaced by depthwise separable convolutions with residual connections. In the Xception architecture, the data goes through an entry flow, then a middle flow which is repeated eight times, and finally an exit flow. To adapt this network for our task: i) we removed the top layers and replaced them with a dropout layer, and ii) added a fully connected softmax layer for the 5 classes of road conditions. Table 4 shows the architecture of the Xception network used in this work.

Layer Kernel Size / Stride Output Size
Input 299 × 299 × 3
entry [conv3, 32/2, ReLU conv3, 64, ReLU]
[conv1, 2] [sepconv3, 128 ReLU, sepconv3, 128
maxpool3x3, 2]
[conv1, 2] [ReLU, sepconv3, 256 ReLU, sepconv3, 256
maxpool3x3, 2]
[conv1, 2] [ReLU, sepconv3, 728 ReLU, sepconv3, 728
maxpool3x3, 2] ()
middle [ReLU, sepconv3, 728 ReLU, sepconv3, 728
ReLU, sepconv3, 728] ()
exit [conv1, 2] [ReLU, sepconv3, 728 ReLU, sepconv3, 1024
maxpool3x3, 2]
[sepconv3, 1536, ReLU sepconv3, 2048, ReLU
avgpool] 2048
dropout(rate = 0.2) 2048
fc_softmax 5
Table 4: Xception Architecture layers used in this work

3.2.5 EfficientNet

EfficientNet, developed by Mingxing Tan and Quoc V. Le [38], introduced a compound scaling method that scales all three ConvNet dimensions (width, depth and resolution) to achieve better accuracy and efficiency. For this work, we used the baseline model EfficientNet-B0 and EfficientNet-B4. In order to apply transfer learning, we replaced the top layer with a dropout layer followed by a softmax fully-connected layer for the 5 classes. The network architecture is shown in Table 5.

Layer Kernel Size / Stride Output Size
Input 224 × 224 × 3
conv3 3×3 / stride 2 (112 × 112 × 32)
MBconv1 3×3 (112 × 112 × 16)
MBconv6 3×3 (56 × 56 × 24)
MBconv6 5×5 (28 × 28 × 40)
MBconv6 3×3 (14 × 14 × 80)
MBconv6 5×5 (14 × 14 × 112)
MBconv6 5×5 (7 × 7 × 192)
MBconv6 3×3 (7 × 7 × 320)
conv1 1×1 (7 × 7 × 1280)
pooling 1280
dropout 1280
fc_softmax 5
Table 5: EfficientNet Architecture layers used in this work

EfficientNet-B4 is a scaled-up version of the baseline network EfficientNet-B0, obtained using a user-specified compound coefficient φ and constants α, β, γ found by grid search. The network depth, width and resolution are scaled by α^φ, β^φ and γ^φ, respectively, under the constraint α · β² · γ² ≈ 2 (with α ≥ 1, β ≥ 1, γ ≥ 1).
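The compound scaling rule is simple enough to compute directly. The constants below are the values Tan and Le report for the EfficientNet-B0 grid search; the sketch just evaluates the multipliers and the FLOPS constraint:

```python
# Grid-search constants reported for EfficientNet-B0 (Tan and Le, 2019).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return the (depth, width, resolution) multipliers for a given
    compound coefficient phi: alpha**phi, beta**phi, gamma**phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# Constraint: alpha * beta^2 * gamma^2 ~= 2, so total FLOPS grow by ~2**phi.
assert abs(ALPHA * BETA ** 2 * GAMMA ** 2 - 2.0) < 0.1
```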

4 Methodology: Dataset Acquisition and Labelling

A major part of this project was data acquisition, labelling and experimentation. One of the main challenges in this work was to label millions of raw images of road and weather conditions, spanning a variety of scenery (urban, rural), sky condition (clear, overcast), illumination (day, night, twilight) and quality, to produce a reliable set of training images (shown in Figure 1). Another challenge was to take into account model complexity and memory usage, in addition to classification accuracy, during the various stages of the dataset labelling and classification process.

In this section, we discuss the extensive work done and explain how we proceeded in an incremental manner, from initial data collection to the final set of training examples. The overall process is summarized in Figure 3. These phases mirror practical problems faced by the team, which had access to a set of live camera feeds of real-time road images collected in March 2019. The cameras span many locations across Canada and the United States, depicting a wide range of road and weather conditions. The main objective was to prepare a reliable set of labelled images for training the deep learning frameworks. The following metrics were used in this work:

Precision = TP / (TP + FP), Recall = TP / (TP + FN), Accuracy = (TP + TN) / (TP + TN + FP + FN), F1-Score = 2 × (Precision × Recall) / (Precision + Recall), with the usual interpretation where TP stands for True Positives, FP stands for False Positives, TN stands for True Negatives and FN stands for False Negatives.
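These metrics translate directly into code; a minimal sketch:

```python
def metrics(tp, fp, tn, fn):
    """Compute the evaluation metrics used in this work from the raw
    confusion-matrix counts (true/false positives and negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1
```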

Figure 3: An overview of the data preparation and experimentation phases.

The first challenge was to determine a set of (tentative) output labels for the road conditions. Table 6 summarizes the possible classes of images, inferred visually. It is noteworthy that the vast majority of the images were dry, whereas the rest spanned a variety of adverse conditions. We decided to start simple and gradually increase the number of classes, beginning with a binary classification task with the following classes:

Dry:

This class represented the seemingly ideal dry conditions found in typical dry day or night images.

Non-dry:

This class represented non-ideal road conditions such as wet, snow, slush and ice.

Road Condition Id Title # Images Content (Inferred Visually)
1 Dry 13,429 Mostly dry, some wet, snow
2 Moist 288 Mostly damp or wet, some dry
3 Moist (treated) 93 Mostly damp or wet, some dry
4 Wet 1,290 Mostly wet, some slush, snow, dry
5 Wet (treated) 106 Mostly wet, some slush, snow, dry
6 Ice 641 Wet, some snow, slush, dry
7 Frost 1 Seemingly dry
8 Snow 305 Mostly snow, some dry, wet
15 Dew 201 Mostly wet, some rainy, some dry
18 Slush 0 N/A
16 Black ice warning 6 Seemingly dry
21 Unknown 3 Wet snowflakes on road
22 Unknown 1 Light snowflakes on road
23 Unknown 108 Mostly heavier snow or slush
24 Unknown 213 Seemingly assorted
Table 6: Labelling of images by road condition sensors located near the cameras.

4.1 Phase 1 - The Two-Class Problem

4.1.1 Introduction

In an attempt to find an alternative to manually labelling images, we tried using road condition observations from RWIS stations located near cameras. The RWIS data from departments of transportation (DOT) across North America are transmitted to the Meteorological Assimilation Data Ingest System (MADIS) (https://madis.ncep.noaa.gov/sfc_notes.shtml#note17). From MADIS, we retrieved RWIS road condition observations located within 10 km of a camera and used them to label the associated camera image. Many cameras were not close enough to an RWIS for this technique to work, but it nevertheless reduced manual effort in numerous cases.
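The 10 km camera-to-RWIS matching can be done with a great-circle (haversine) distance between the two coordinate pairs; a minimal sketch:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearby_rwis(camera, stations, max_km=10.0):
    """Return the RWIS stations within max_km of the camera; camera and
    each station are (lat, lon) pairs."""
    return [s for s in stations
            if haversine_km(camera[0], camera[1], s[0], s[1]) <= max_km]
```

An RWIS observation would then be used to label the camera image only when `nearby_rwis` returns at least one station.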

In this phase, the goal was to manually prepare a two-class balanced training set from 352,240 unlabelled images. In this process, roughly 10-20% of the samples were visually verifiable, so useful samples had to be cherry-picked. As can be seen from Table 6, many of the classes were under-sampled and therefore unusable on their own. Nonetheless, they could be accommodated together under the generic non-dry class. By cherry-picking from categories 1, 4, 5, 6, 7, 8, 16, 18, 21, 22, 23 and 24, we extracted 1785 assorted non-dry samples consisting of wet, snowy, slushy and icy images. We then matched this class with an equal number of dry samples; finding dry images was easy since they were abundant. Both classes span a variety of scenery (urban, rural), sky conditions (clear, overcast), illumination (day, night, twilight) and quality. We randomly split the data into training and validation sets for the two classes, as shown in Table 7.

Total Train Validation
Dry 1785 1585 200
Non-dry (Wet/Snow) 1785 1585 200
Overall 3570 3170 400
Table 7: First 2-class data set
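A random per-class split like the one in Table 7 can be sketched as follows. The 200-image hold-out matches the table; the fixed seed is an arbitrary choice for reproducibility:

```python
import random

def train_val_split(samples, val_per_class=200, seed=42):
    """Randomly split one class's sample list into (train, validation),
    holding out a fixed number of validation images per class."""
    rng = random.Random(seed)
    shuffled = samples[:]            # leave the caller's list untouched
    rng.shuffle(shuffled)
    return shuffled[val_per_class:], shuffled[:val_per_class]

# e.g. 1785 dry images -> 1585 train, 200 validation, as in Table 7
train, val = train_val_split([f"dry_{i}.jpg" for i in range(1785)])
```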

4.1.2 Two-Class Experiments on 3K Data Set

Once we put together our first labelled data set, we considered two DCNN architectures for our first round of experiments.

VGG-16:

This architecture was chosen for a number of reasons: i) it is demonstrably successful on a variety of image classification tasks, ii) it has a feed-forward architecture with no residual connections, which makes it a good baseline, iii) it has native support in Keras (https://keras.io/) and its model with weights is publicly available, and iv) it was computationally feasible. VGG has different flavours, but a popular one, which has 13 convolutional and pooling layers and 3 fully connected layers, is called VGG-16. We used its TensorFlow implementation with Keras. We took the original VGG-16 classifier with 1000 classes, discarded the final fully-connected layer and appended a new one with 2 neurons activated via the softmax function. We used the default ImageNet weights and set every layer but the final one non-trainable. Its second-to-last layer is fully connected with 4096 neurons, which means we ended up with 4096 × 2 (weights) + 2 (bias) = 8194 trainable parameters. All the previous 134,260,544 parameters were left frozen.
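This transfer-learning setup can be sketched in tf.keras. The sketch below passes weights=None to avoid downloading the ImageNet weights (the project used weights="imagenet"); "fc2" is the name Keras gives the 4096-neuron second-to-last layer of VGG-16:

```python
import tensorflow as tf

def build_vgg16_two_class(weights=None):
    """Recreate the setup described above: VGG-16 with its final 1000-way
    layer discarded and a new 2-neuron softmax layer appended, with every
    layer but the new one frozen."""
    base = tf.keras.applications.VGG16(weights=weights, include_top=True)
    fc2 = base.get_layer("fc2").output                 # 4096-neuron layer
    out = tf.keras.layers.Dense(2, activation="softmax",
                                name="road_condition")(fc2)
    model = tf.keras.Model(inputs=base.input, outputs=out)
    for layer in model.layers[:-1]:   # freeze everything but the new head
        layer.trainable = False
    return model

model = build_vgg16_two_class()
trainable = sum(w.shape.num_elements() for w in model.trainable_weights)
frozen = sum(w.shape.num_elements() for w in model.non_trainable_weights)
# 4096 x 2 + 2 = 8194 trainable; 134,260,544 frozen, as noted above.
```

The model would then be compiled and fit on the training images in the usual Keras fashion; the same head-replacement pattern applies to the ResNet-50 configuration described next.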

ResNet-50:

This architecture uses residual connections to tie non-adjacent layers, with the intent of coping with vanishing/exploding gradients during the training process. We started with ResNet-50, a fifty-layer-deep version of this architecture. As with VGG, we used Keras with TensorFlow. We took the original classifier and configured the end layers for our 2-class problem. The final layer had 2 neurons activated via the softmax function. Its second-to-last layer was fully connected with 2048 neurons, so there were 2048 × 2 (weights) + 2 (bias) = 4098 trainable parameters. All previous layers, with 23,587,712 parameters, were set to non-trainable.

For both VGG-16 and ResNet-50, we used a batch size of 10 and trained both networks for 20 epochs, in increments of five, using our 3170 training samples. We then tested the models on our 400 validation samples. The classification report and the confusion matrix are presented in Tables 8 and 9.

ResNet-50 Precision Recall F1-Score Support Accuracy
Dry 0.61 0.96 0.75 200
Non-dry 0.91 0.38 0.53 200
Training 88.7%
Validation 67.0%
VGG-16 Precision Recall F1-Score Support Accuracy
Dry 0.79 0.88 0.83 200
Non-dry 0.86 0.77 0.81 200
Training 94.0%
Validation 82.3%
Table 8: Classification reports for VGG-16 and ResNet50 after epoch 20.
ResNet-50 VGG-16
T \ P Dry Non-dry Dry Non-dry
Dry 193 7 176 24
Non-dry 125 75 47 153
Table 9: Confusion matrices showing true labels vs. predicted labels after epoch 20.

4.1.3 Analysis

Based on our experiments in Phase 1, it can be seen that the classifiers were able to differentiate between dry and non-dry images, showing promise for the upcoming multi-label classification tasks. VGG-16 also outperformed ResNet-50 in terms of overall accuracy and F1 scores. Perhaps unsurprisingly, both frameworks had less difficulty classifying the dry class, since it had higher intra-class similarity. In contrast, the non-dry class spanned a wider range of conditions, resulting in higher intra-class variation.

The main importance of these results is that they show we can employ modern DCNN frameworks along with transfer learning to achieve non-trivial results. It is also worth noting that these results were achieved without performing any pre-processing on the images (noise removal, text/logo removal, histogram equalization, cropping, etc.) other than rescaling. This suggests that modern architectures have the potential to adapt to our scene classification problem. Throughout this first round of data labelling and experimentation, we also identified a number of potential challenges for interpreting the image content. They include:

Resolution:

Images had varying resolutions and aspect ratios. The majority were 320 by 240, but they ranged from 160 by 90 to 2048 by 1536. Modern CNN architectures expect inputs of uniform size, so the images would need to be resized accordingly. This also means that the solution we develop would need to be scale-invariant.

Illumination:

Depending on the time of the day, some images yielded very dark scenes, making it practically impossible to judge the road condition.

Corruption:

We came across some corrupted images containing regions with pixelation and unnatural colours.

Occlusion:

Certain images had objects partly or fully blocking the view.

Superimposed Texts:

Most images contained superimposed logos and text over the camera view. These would have to be sampled adequately across the classes to prevent our model from using them as features.

Varying Angles:

The angle at which the cameras view the road varies greatly. Some cameras have top-down views, others are almost at eye-level.

Varying Distance:

The distance between the cameras and the roads also varied greatly.

Imbalanced Categories:

The vast majority of the images depicted the ideal dry condition. Conditions like snow, slush and ice were significantly less frequent.

Offline Cameras:

From time to time, cameras show a “stream offline” message rather than the actual video feed.
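The resolution challenge above has a direct pipeline implication: every snapshot, whatever its native size, must be mapped to the network's fixed input dimensions. A minimal nearest-neighbour rescale in numpy serves as an illustrative stand-in for the library resize a real pipeline would use (e.g. PIL or Keras image loaders); the function name is ours:

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Rescale an (H, W, C) image to (out_h, out_w, C) by nearest neighbour.
    Illustrative stand-in for a library resize function."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # output row i <- source row floor(i*h/out_h)
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# Camera snapshots ranged from 160x90 up to 2048x1536; all must be
# mapped to one input size (e.g. 224x224 for VGG-16).
small = np.zeros((90, 160, 3), dtype=np.uint8)
large = np.zeros((1536, 2048, 3), dtype=np.uint8)
assert resize_nearest(small, 224, 224).shape == (224, 224, 3)
assert resize_nearest(large, 224, 224).shape == (224, 224, 3)
```

Because upscaling and downscaling both land on the same grid, the classifier never sees the original aspect ratio, which is why scale invariance matters.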

Based on the promising results we achieved with VGG-16 on a simplified two-class configuration, we decided to use the VGG-16 architecture to expand the task to a multi-class (4-class) classification problem.

4.2 Phase 2 - The Four-Class Problem

4.2.1 Introduction

The natural way to convert this into a multi-class problem was to decompose the non-dry class into its constituents. However, the subclasses were very imbalanced in size and some of them had negligible useful content. The subclasses were therefore grouped into two streams of roughly equal size, “wet” and “snow”. Accordingly, we split the non-dry class of 1785 images into roughly equal wet and snow classes.

In addition, we observed another particularly interesting group of images: thousands of snapshots from offline cameras. These belonged to cameras for which the video stream was not available, so the camera served its own “stream is offline” image instead. Such images would also be present in the production environment, and it was important to capture and filter them. As a result, 677 assorted offline images were extracted, and the classification task in this phase used the following classes:

Dry:

This class represented the seemingly ideal dry condition, typical of dry day or night images.

Wet:

This class represented a spectrum of conditions from moist roads to puddles to soaking wet.

Snow:

This class represented harsh winter conditions including snow-covered, slush-covered and ice-covered roads.

Offline:

This class covered the static no-signal feed of the cameras which varied from camera to camera.

Table 10 shows the new data set composition. Note that, at this stage, we were also using different sample distributions per class than the mostly-dry ones used in Section 4.1. This is because we also wanted to observe the behaviour of the classifier over an unevenly distributed data set reflecting the composition of the 352K data set, which in turn reflected the underlying weather conditions across the continent. This was meant to mirror the conditions encountered in a real-time production environment.

Total Train Validation
Dry 1785 1696 89
Offline 677 644 33
Snow 905 860 45
Wet 880 837 43
Overall 4247 4035 210
Table 10: First 4-class data set

4.2.2 Four-Class Experiments on 4K Data Set

For this stage, we decided to repurpose the VGG-16 classifier from Section 4.1 since it showed more promise. We employed essentially the same hyper-parameters as in Section 4.1, except that we changed the final layer to four neurons for the four-class setup. We split the labelled data as 95% training and 5% validation, reserving a smaller portion for validation since we had more classes with fewer images each. After training the VGG-16 classifier for 5 epochs, the training set accuracy was 83.5% and the validation set accuracy was 77.1%. The classification report and the confusion matrix for the validation set are presented in Tables 11 and 12, respectively.

VGG-16 Precision Recall F1-Score Support Accuracy
Dry 0.81 0.78 0.79 89
Offline 0.97 1.00 0.99 33
Snow 0.83 0.78 0.80 45
Wet 0.51 0.58 0.54 43
Training 83.5%
Validation 77.1%
Table 11: Classification report for validation set after epoch 5.
T \ P Dry Offline Snow Wet
Dry 69 0 2 18
Offline 0 33 0 0
Snow 3 1 35 6
Wet 13 0 5 25
Table 12: Confusion matrix showing true labels vs predicted labels for validation set after epoch 5.

4.2.3 Analysis

This round of experimentation resulted in a mixed set of results. The overall accuracy decreased to 77%, although this was still promising since we had twice as many labels and the decomposed wet and snow classes had effectively half as many samples to work with. A detailed class-based analysis is presented below:

Dry:

Dry performed slightly worse than in the two-class experiment, mainly because the model had difficulty distinguishing between the dry and wet classes. These classes seemed to have relatively high inter-class similarity. More training samples were necessary.

Wet:

This class suffered more than the dry class, mainly because of its smaller size. We think that when these samples were grouped with the snow class, they were easier to tell apart from the dry images, because wet and snow scenes share many common features, such as reduced visibility and more frequent overcast skies. These features no longer appear to be emphasized as much by the model, since they are now associated with multiple classes.

Snow:

This class offered results similar to the two-class experiments, which is impressive considering that it had effectively half as many examples as in Section 4.1. This is most likely because the snow class had more distinctive features compared to the dry and wet classes, such as the markedly different colour and texture of the road surface.

Offline:

The classifier performed exceptionally well (with perfect recall) on the new offline category. We attribute this to its distinctive features, such as large text and unnatural colours, which are entirely different from a regular road scene.

Overall, the experiments showed that VGG-16 was still able to deliver decent performance on a four-class setup, even with a small subset of images and an uneven training data distribution. Validation results suggest that the highest degree of confusion was between the dry and wet categories, whereas the number of mistakes among the other pairs was smaller. Remarkably, the model was exceptionally good at classifying offline images. In the next phase, we explored ways to increase the size of our labelled data set.

4.3 Phase 3 - The Five-Class Problem

4.3.1 Pseudo-Labelling with VGG

In Phase 3, the goal was to leverage the VGG-16 classifier trained in the four-class experiments and use a semi-supervised learning method on the large 352K data set to “pseudo-label” each image with one of our four classes. This approach was used to help us cluster images of a similar nature, making the manual cherry-picking stage much easier. We used our 4-label model from Section 4.2 to classify the 352K images, which took under a day on an Intel i7 3630QM machine with 16GB system RAM and an NVIDIA GeForce 670MX GPU with 3GB video RAM. Table 13 shows the number of images suggested per class by VGG-16, along with the training and validation data that we provided.
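The pseudo-labelling step itself is simple once the model emits class probabilities: each image takes the argmax class, and high-confidence images can be shortlisted for the later cherry-picking stage. The sketch below illustrates this with toy softmax outputs; the class order, the threshold value, and the function name are our assumptions, not details from the paper:

```python
import numpy as np

CLASSES = ["dry", "wet", "snow", "offline"]

def pseudo_label(probs: np.ndarray, threshold: float = 0.9):
    """Assign each image its argmax class; flag high-confidence images
    for manual cherry-picking. The 0.9 threshold is illustrative."""
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return labels, confident

# Toy softmax outputs for four images (one row per image, rows sum to 1).
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],   # confidently dry
    [0.40, 0.35, 0.20, 0.05],   # ambiguous dry/wet -> not shortlisted
    [0.05, 0.03, 0.90, 0.02],   # confidently snow
    [0.01, 0.01, 0.01, 0.97],   # confidently offline
])
labels, confident = pseudo_label(probs)
print([CLASSES[i] for i in labels])   # ['dry', 'dry', 'snow', 'offline']
print(confident.tolist())             # [True, False, True, True]
```

Run over all 352K images, the `labels` column yields the per-class suggestion counts of Table 13, while the confidence flag narrows the pool a human has to inspect.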

Training & Validation Classification
# Images Train Valid Total VGG-16 Suggestions
Dry 1696 89 1785 142,413
Wet/Moist 837 43 880 99,089
Snow/Slush 860 45 905 70,200
Offline 644 33 677 40,538
Overall 4035 210 4247 352,240
Table 13: Number of images VGG-16 pseudo-labelled per class, as suggestions.

At this stage, we did not compute classification accuracy metrics over the 352K set, as the data was mostly unlabelled. However, on examining the thumbnails, we observed the same types of classification errors as in previous experiments on the validation set: dry and wet samples were identified incorrectly on dark and overcast imagery, dry winter samples were sometimes mistaken for snow, and results for the offline class were exceptionally good. An overview of the content extracted by means of semi-supervised learning is presented below:

Dry:

Consisted mostly of clear and well-lit scenes with dry and clean pavement. Day and summer-looking images were frequent, whereas winter and night images were sparse. There were occasional images with rain or wet/moist pavement, a small number of images with snowy backgrounds or (very sparsely) snowy roads, and a negligible number of offline images. Precision was seemingly good, but recall was not, since dirty, overcast, night and dark-pavement images leaked into the wet class and winter images leaked into the snow class.

Wet:

Included many rainy and wet surfaces. In addition, a significant portion of the images were overcast dry, fuzzy, foggy, blurry, dark or night scenery, or dirty or otherwise unintelligible scenes (not necessarily wet). Infrequent snow and wet-snow images were present, and there were a negligible number of offline images. Neither recall nor precision was good, but there were nevertheless sufficient wet scenes to harvest for our purposes.

Snow:

Recall was good, seemingly covering the majority of snowy road scenes across all categories. However, precision was poor. Many wet and dry images appeared under this class, although most of them were winter scenes with snowy backgrounds but clear roads. The model also confused a certain shade of road pavement and texture with snow. As with the wet category, we observed fuzzy, blurry, foggy, dark and night scenery, and dirty-lens or otherwise unintelligible scenes (not necessarily snowy). There was little or no offline imagery.

Offline:

This was by far the most accurate class. Recall was near-perfect and precision was good. It encapsulated almost all offline imagery, along with some almost-black pictures; the model even captured offline patterns it had not been trained on. We saw a very small percentage of other kinds of errors.

4.3.2 Manual Labelling

Training a 4-class VGG-16 network and running it on the entire 352K data set helped us cluster similar images and significantly narrow down the sets of potentially interesting images for each class. We were able to manually label approximately 9K dry, 5K wet and 4K snow samples. During this phase, we observed that a huge portion of poorly-lit night images ended up in either the snow or wet categories. These images (1108 in total) were very hard to label even for human classifiers, and hence led to a new category, “poor”. The total number of images resulting from this phase was approximately 20K. Table 14 shows the composition of our 5-class data set with 20K images. The description of the fifth class is as follows:

Poor:

This class contains images that a human was unable to classify, due to various factors including darkness, poor visibility, blurriness, or uncertainty about the category (e.g. wet vs. ice).

Total Train Validation
Dry 9620 7696 1924
Wet 5012 4009 1003
Snow 4028 3222 806
Offline 676 540 136
Poor/Dark 1108 886 222
Overall 20,444 16,353 4091
Table 14: First 5-Class Data Set.

4.3.3 Five-Class Experiments on 20K Data Set

For this round of experimentation, we decided to retrain the VGG-16 classifier for two reasons. Firstly, we had a more substantial number of training samples and wanted to allow more layers to be tuned. To increase the fitting capacity of the model, we planned to train all the fully connected layers along with the last few convolutional layers. However, this produced over a hundred million parameters to be tuned, which our hardware could not handle. We therefore needed to reduce the number of parameters by shrinking the fully connected layers: the standard form of VGG-16 contains 4096 neurons per hidden layer, which we reduced to 1024. Secondly, we realized we could further diversify our data set, and hopefully improve performance, with data augmentation. This technique introduces additional training samples by transforming and deforming the originals (randomly shifting, rotating, zooming, flipping, etc.) as shown in Figure 4. Although the new samples are correlated with the original images, they can counteract over-fitting in some cases, as the transformations and deformations challenge the training process. Table 15 summarizes the experimental setup.

Hyper-parameter Value
Number of classes : 5
Number of epochs : 10
Batch size : 16
Optimizer algorithm : Adam
Learning rate : 0.0001
Validation split : 0.2
Base architecture : VGG
Convolutional layers : Default VGG-16
Trainable conv. layers : Last 4 (Conv. Conv. Conv. Max-Pool)
Fully connected layers : ReLU-1024 Dropout-50% Softmax-5
Trainable FC layers : All
Total parameters : 48,474,693
Trainable parameters : 40,839,429
Non-trainable parameters : 7,635,264
Parameter initialization : Default VGG-16 (ImageNet)
Data Augmentation
Width shift range : 0.1
Height shift range : 0.1
Shear range : 0.01
Zoom range : [0.9, 1.0]
Horizontal flip : True
Vertical flip : False
Fill mode : Constant
Brightness range : [0.5, 1.5]
Table 15: Experimental configuration for VGG-16.
Figure 4: Augmented data samples
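The augmentation settings in Table 15 (random width/height shifts up to 10%, horizontal flips, brightness in [0.5, 1.5], constant fill) are applied on the fly during training. The sketch below is a minimal numpy stand-in for those settings, not the Keras ImageDataGenerator actually used, and omits the shear and zoom transforms for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Minimal stand-in for the Table 15 augmentation: random horizontal
    flip, brightness factor in [0.5, 1.5], and width/height shifts of up
    to 10% with constant (zero) fill. Shear/zoom omitted for brevity."""
    h, w = img.shape[:2]
    out = img.astype(np.float32)
    if rng.random() < 0.5:                               # horizontal flip
        out = out[:, ::-1]
    out = np.clip(out * rng.uniform(0.5, 1.5), 0, 255)   # brightness
    dy = int(rng.integers(-int(0.1 * h), int(0.1 * h) + 1))  # height shift
    dx = int(rng.integers(-int(0.1 * w), int(0.1 * w) + 1))  # width shift
    shifted = np.zeros_like(out)                         # constant fill
    shifted[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        out[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return shifted.astype(np.uint8)

# A random 320x240 "camera frame"; the augmented copy keeps its geometry.
img = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
assert augment(img).shape == img.shape
```

Each epoch thus sees a perturbed copy of every frame, which is what makes the correlated extra samples useful against over-fitting.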

We used an 80-20 split and trained this VGG-16 classifier initialized with ImageNet (http://www.image-net.org/) weights. The results are presented in Tables 16 and 17.

VGG-16 Precision Recall F1-Score Support Accuracy
Dry 0.88 0.90 0.93 1924
Offline 1.00 0.99 0.98 136
Poor 0.91 0.91 0.91 222
Snow 0.92 0.91 0.91 806
Wet 0.87 0.83 0.79 1003
Training 94.1%
Validation 89.0%
Table 16: Classification report for validation set after epoch 5.
T \ P Dry Offline Poor Snow Wet
Dry 1780 0 6 41 97
Offline 0 133 3 0 0
Poor 9 0 202 5 6
Snow 55 0 1 733 17
Wet 181 0 9 19 794
Table 17: Confusion matrix showing true labels vs predicted labels for validation set after epoch 5.

4.3.4 Analysis

Experiments in this phase yielded the best results yet. The F-1 scores for dry, wet, and snow all increased as we quadrupled our labelled data set and incorporated data augmentation. The new “poor" class also performed well, achieving an F-1 score of 91%. The most challenging task was still distinguishing dry and wet images, although it should be noted that we saw a big improvement in the F-1 score for the wet class from 54% to 79% with the additional samples.

Another point worth mentioning is that, at the end of this phase, we achieved the lowest deviation so far between the peak validation accuracy and the underlying training accuracy. This suggests that over-fitting had become less of a concern and that the model formed a better representation of the images, extending its performance on the training set to the validation set relatively well. It should be noted that this result was obtained with a cherry-picked, manually labelled 20K dataset. During this time, we also had the chance to acquire additional road camera samples beyond our 352K data set. These samples were collected using the same method described in Section 4.1, by randomly sampling cameras across North America at different times of the day, over a period of three months, in all kinds of climate and weather conditions. Combined with our earlier unlabelled images, this gave us a 1.5 million-image data set.

We randomly extracted 1000 images and let our VGG-16 model classify this set, then manually interpreted the results. Taking into account ambiguous and fuzzy cases that could belong to multiple labels, we marked each result as “acceptable” (most likely, or at least partially, correct) or “refused” (an absolute false positive). Table 18 shows these results: the overall classification accuracy was good (88%), and the dry, offline and poor classes performed well. However, the performance on the wet and snow categories was unacceptable, with too many false positives. Upon further examination, we realized most of them were poor images incorrectly assigned to these classes. We therefore decided to diversify our training set with more low-quality samples.

Verdict Dry Offline Poor Snow Wet Total
VGG-16 Acceptable: 616 65 136 9 62 888
20K Refused: 31 0 7 21 53 112
Table 18: Classification judgment for 1000 random images from combined (1.5M) data sets
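The verdicts in Table 18 can be turned into per-class acceptance rates, which is where the 88% overall figure and the weakness of the snow and wet classes come from. A short check (dictionary names are ours):

```python
# Acceptance rates implied by Table 18 (1000 random images from the
# combined 1.5M set, judged against the VGG-16/20K predictions).
accepted = {"dry": 616, "offline": 65, "poor": 136, "snow": 9, "wet": 62}
refused  = {"dry": 31,  "offline": 0,  "poor": 7,   "snow": 21, "wet": 53}

total_acc = sum(accepted.values())        # 888
total_ref = sum(refused.values())         # 112
overall = total_acc / (total_acc + total_ref)
print(f"overall acceptance: {overall:.1%}")   # the ~88% cited in the text

for cls in accepted:
    rate = accepted[cls] / (accepted[cls] + refused[cls])
    print(f"{cls:8s}: {rate:.0%}")
# dry, offline and poor land around 95-100%, while snow (30%) and
# wet (54%) are the unacceptable classes discussed above.
```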

4.4 Phase 4 - Scaling the Solution

4.4.1 Second Pseudo-Labelling

In this phase, we decided to perform another round of labelling using semi-supervised learning, as in Section 4.2. This time, we applied our five-label VGG-16 classifier trained in Section 4.3 (89% validation accuracy) to both the 352K data set and the 1.1M data set. Table 19 shows the number of images pseudo-labelled by class.

VGG-89% 352K Set 1.1M Set
Pseudo-Labels # Images % # Images %
Dry 216,614 61.50 748,165 67.16
Poor 49,707 14.11 143,016 12.84
Wet 43,479 12.34 114,700 10.30
Offline 26,827 7.62 91,163 8.18
Snow 15,613 4.43 16,876 1.52
Total 352,240 100.00 1,113,920 100.00
Table 19: VGG-16 pseudo-labelling results for the 352K and 1.1M data sets.

For the 352K data set, the number of dry samples increased whereas the others decreased (compared to the results in Table 13). This seemed to be a step in the right direction, since many of the poor samples had migrated to the poor class after it was introduced. Also, in Section 4.2 those classes were suffering from low precision, so having fewer samples appear in those categories was a good sign, suggesting fewer false positives and higher precision. The 1.1M data set showed a similar, though not identical, distribution: the dry class dominated even more, and snow samples were especially rare since those images were captured later, in spring.

4.4.2 Manual-Labelling

In our earlier process of generating labelled sets, we used the suggestions of the VGG-16 network to cherry-pick high-confidence samples and ignored the remaining majority, which resulted in a high validation accuracy. However, we were still facing many false positives and ambiguous samples. So in this phase, we decided to have all types of images represented in our model for training and validation.

Accordingly, we considered the 1.5 million images with their pseudo-labels and randomly extracted 4000 samples from each of the five classes, which we then labelled manually. This time, unlike in Section 4.2, we did not cherry-pick the results. We did not discard any image, but instead placed those we were unable to label with confidence in the “poor” class. The poor class therefore now represented all types of problematic images, including the challenging cases discussed in Section 4.1.

To ensure that the new randomly extracted images did not trivialize our original high-confidence cherry-picked training samples, we also performed another round of cherry-picking, although to a lesser degree. Overall, we supplemented our 20K data set with 20K randomized and 7K cherry-picked samples. Table 20 shows the final labelled data set.

Total Train Validation
Dry 16,065 14,458 1607
Offline 5225 4702 523
Poor 9259 8333 926
Snow 7565 6808 757
Wet 9228 8305 923
Overall 47,342 42,606 4736
Table 20: Final 47K labelled data set with 5 classes.

4.4.3 Five-Class Experiments on 47K Data Set

At this stage of the project, we upgraded our development machine to an Intel i7 9700K CPU, 32GB RAM and an NVIDIA GeForce RTX 2080 with 8GB video RAM. This enabled us to experiment with more advanced frameworks such as InceptionResNetV2 and EfficientNet-B4, in addition to VGG-16. InceptionResNetV2 is a very deep and sophisticated architecture with 55 million parameters spread across 572 layers, compared to the VGG-16 model with 48 million parameters over 23 layers (including non-convolutional layers). EfficientNet, recently introduced by Google AI, leverages scalability in multiple dimensions, unlike most other networks, and employs 18M parameters. For all three frameworks, we used our 47K data set with a 90-10 training/validation split and data augmentation to maximize the training samples. As we had multiplied the size of our data set, this ratio yielded a similar number of validation examples as in Section 4.3. We now give the results of the experiments with the three frameworks.

VGG-16:

It was trained with our previous hyper-parameters, except that we used 1280 neurons in the fully connected layers, resulting in 40 million trainable parameters. Table 21 shows the classification report and Table 22 the confusion matrix after epoch 7.

VGG-16 Precision Recall F1-Score Support Accuracy
Dry 0.87 0.88 0.87 1607
Offline 0.99 0.98 0.99 523
Poor 0.88 0.84 0.86 926
Snow 0.90 0.85 0.88 757
Wet 0.80 0.85 0.82 923
Training 88.9%
Validation 87.3%
Table 21: Classification report for validation set after epoch 7.
T \ P Dry Offline Poor Snow Wet
Dry 1419 0 53 22 113
Offline 1 511 11 0 0
Poor 67 3 779 31 46
Snow 36 0 35 646 40
Wet 117 0 8 17 781
Table 22: Confusion matrix for VGG-16 showing true labels vs predicted labels for validation set after epoch 7.
InceptionResNetV2:

This model was trained from the original ImageNet weights with all layers set as trainable, for a total of 54 million trainable parameters. There were no modifications to the original model except for the final dense (output) layer, and no dropout was applied. Details are presented in Tables 23 and 24.

IRNV2 Precision Recall F1-Score Support Accuracy
Dry 0.90 0.92 0.91 1607
Offline 0.99 0.99 0.99 523
Poor 0.88 0.87 0.88 926
Snow 0.92 0.90 0.91 757
Wet 0.88 0.88 0.88 923
Training 90.8%
Validation 90.7%
Table 23: InceptionResNetV2 Classification report for validation set.
T \ P Dry Offline Poor Snow Wet
Dry 1474 0 67 11 55
Offline 1 519 3 0 0
Poor 58 4 810 18 36
Snow 25 0 30 684 18
Wet 75 0 10 30 808
Table 24: Confusion matrix for InceptionResNetV2 showing true labels vs predicted labels for validation.
EfficientNet-B4:

This model was trained for 4 epochs, with Leaky ReLU used for activation. The results are shown in Tables 25 and 26.

EN-B4 Precision Recall F1-Score Support Accuracy
Dry 0.90 0.92 0.91 1607
Offline 0.98 0.99 0.99 523
Poor 0.91 0.84 0.87 926
Snow 0.94 0.92 0.93 757
Wet 0.86 0.92 0.89 923
Training 90.3%
Validation 90.9%
Table 25: EfficientNet-B4 classification report for validation set.
T \ P Dry Offline Poor Snow Wet
Dry 1471 0 56 7 73
Offline 0 520 3 0 0
Poor 75 8 776 23 44
Snow 29 1 17 693 17
Wet 54 0 5 18 846
Table 26: Confusion matrix for EfficientNet-B4 showing true labels vs predicted labels for validation.

4.4.4 Analysis

With 87.3% validation accuracy over 5 classes, VGG-16 achieved a slightly lower validation accuracy on the 47K set than on the previous cherry-picked 20K data set. This is noteworthy, since this set was much more representative of the unlabelled data: half of it consisted of unrestricted (non-cherry-picked) images and the other half of strictly cherry-picked images. Also note that the gap between training and validation accuracies was less than 2%, showing that we were not over-fitting the training data. InceptionResNetV2 achieved an even higher accuracy: within 4 epochs it reached 90.7%, beating even the cherry-picked VGG-16 experiments of Section 4.3. This model also did not suffer from over-fitting, as the training accuracy was only 0.1% better than the validation accuracy. EfficientNet-B4 performed slightly better still: after epoch 4 it achieved 90.9% validation accuracy, the highest overall. It achieved the highest F-1 scores for wet and snow, and tied with InceptionResNetV2 on the dry and poor classes.

To observe their performance on the unlabelled set, we again labelled the random extract of 1000 images from the 1.5 million data set described in Section 4.4. Table 27 shows the combined results. For the frameworks trained on the 47K data set, the number of false positives decreased across all classes, with significant improvements for the snow and wet classes compared to the VGG-16 classification on the cherry-picked 20K set. Once again, EfficientNet-B4 proved to be the most accurate framework: it achieved the fewest false positives on the dry, snow and wet classes, and the most true positives on the wet class.

Verdict Dry Offline Poor Snow Wet Total (1000)
VGG-16 Acceptable: 616 65 136 9 62 888
20K Refused: 31 0 7 21 53 112
VGG-16 Acceptable: 599 79 205 8 70 961
47K Refused: 17 0 0 1 21 39
IRN-V2 Acceptable: 587 76 247 10 54 974
47K Refused: 20 0 0 3 3 26
EN-B4 Acceptable: 608 78 217 9 66 978
47K Refused: 18 1 0 1 2 22
Table 27: Summary of classification evaluation for 1000 random images from the combined (1.5M) data set.

Training with more data, diversifying the poor class, and using a higher-end classifier with more trained layers all reflected very positively in the results. The number of evident false positives decreased significantly overall. In the next section, we perform a comparative analysis of the run-time performance of multiple frameworks.

5 Classification Results and Real-time Map Building

In this section, we give comparative results from six trained deep-learning models: VGG-16, ResNet50, Xception, InceptionResNetV2, EfficientNet-B0 and EfficientNet-B4. They were trained from scratch on the 90-10 split 47K data set to see how they would compare in terms of accuracy and execution time (training + validation). Table 28 shows the common configurations and hyper-parameters shared across the frameworks, and Table 29 shows the algorithm-specific configurations, which were shaped using heuristics or default/suggested values. For the first four algorithms, we used the official Keras implementations and applied transfer learning as usual; for EfficientNet, we used the implementation at https://github.com/qubvel/efficientnet.

Hyper-parameter : Value
Optimizer : Rectified Adam
Learning Rate : 0.0001
Loss Func. : Categorical Crossentropy
Activation Func. : Softmax
Global Pooling : Average
Batch Size : 16
Max Epochs : 12
0-1 Rescaling : Yes
Training Set : 42606
Validation Set : 4736
Target Classes : 5
Augmented Training : Yes
Table 28: Common configurations
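The loss and output activation shared by all six frameworks in Table 28 can be written out directly. The following is an illustrative numpy rendering of softmax and categorical cross-entropy (not the Keras internals; the logit values are made up), using the paper's 5-class output:

```python
import numpy as np

def softmax(z):
    """Softmax activation used as the final layer in all six models."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # shift for stability
    return e / e.sum(axis=-1, keepdims=True)

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """Loss from Table 28: -sum over classes of y_true * log(y_pred)."""
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=-1)

# One 5-class logit vector; suppose the true class is index 3 ("snow"
# under an assumed dry/offline/poor/snow/wet ordering).
logits = np.array([[1.0, 0.2, 0.1, 2.5, 0.3]])
y_true = np.array([[0.0, 0.0, 0.0, 1.0, 0.0]])
probs = softmax(logits)
loss = categorical_crossentropy(y_true, probs)

assert abs(probs.sum() - 1.0) < 1e-9      # probabilities over 5 classes
assert loss[0] == -np.log(probs[0, 3])    # one-hot target picks one term
```

With a one-hot target, the loss reduces to the negative log-probability of the true class, which the Rectified Adam optimizer then minimizes at learning rate 0.0001.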

The accuracies and execution times of these experiments can be seen in Figures 5 and 6, respectively. As far as accuracy is concerned, VGG-16 and ResNet50 lag behind Xception and InceptionResNetV2, while the most recent EfficientNet frameworks compete well. In this particular set of runs, the highest observed validation accuracy was 90.6%, after epoch 6 of EfficientNet-B4 (1200ms execution time), while the others take turns achieving the highest validation accuracy in other epochs. For execution times, EfficientNet-B0 is the winner (600ms); it is bested only by ResNet50 (400ms), which nonetheless has poorer validation accuracy (85.67%).

VGG-16 RN-50 XCep. IRN-V2 EN-B0 EN-B4
Input Dims (224, 224) (224, 224) (299, 299) (299, 299) (256, 256) (256, 256)
Global Pooling None Average Average Average Average Average
Top Layers H-D-H-D D D D D D
FC Neurons 4096 0 0 0 0 0
Dropout Rate 0.4 0.4 0.2 0.4 0.2 0.4
Total Layers 25 178 135 783 233 470
Total Prms. 134M 23M 20M 54M 4M 18M
Trainable Prms. 134M 23M 20M 54M 4M 18M
Nontrain. Prms. 0 53K 54K 60K 42K 125K

Table 29: Algorithm-specific configurations. H: Hidden. D: Dropout.
Figure 5: Training & Validation accuracies over 12 epochs for 6 algorithms

EfficientNet-B0 takes about half the time of its competitors, suggesting that it may be a good trade-off; its highest validation accuracy was 90.3%. VGG-16 with transfer learning proved very useful for data acquisition and pseudo-labelling with limited hardware resources throughout this project. However, more recent frameworks such as Xception, InceptionResNetV2 and EfficientNet performed better, and all are shown to be good candidates for our problem, provided that the hardware to handle them is available.

Figure 6: Execution times (training + validation + model saving) over 12 epochs for 6 algorithms. Top: Per epoch. Bottom: Cumulative
Figure 7: An example map of 782 classified camera images over Canada and the United States at 2100 UTC 11 January 2020. Each marker represents one classified image with the legend indicating the colour corresponding to each respective class

Once all images in the pipeline have been classified, the data is stored in CSV files and can also be uploaded to a PostgreSQL database if desired. The output data contain the image name, latitude, longitude, and class. A map-plotting program then takes the data and produces a map for the desired domain. An example output map for all of North America, with 782 images classified using the EfficientNet-B4 framework, is shown in Figure 7.
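The per-image records and the legend colouring could be assembled as sketched below. The CSV schema (image name, latitude, longitude, class) is from the text; the file handling, colour palette and coordinates are illustrative assumptions, and a real deployment would write to disk or PostgreSQL rather than an in-memory buffer:

```python
import csv
import io

# Colour legend for map markers; the actual palette is an assumption.
CLASS_COLOURS = {
    "dry": "green", "wet": "blue", "snow": "white",
    "poor": "gray", "offline": "black",
}

# (image name, latitude, longitude, predicted class) records, as would
# be produced by the classification pipeline for two example cameras.
records = [
    ("cam_0001.jpg", 49.90, -97.14, "snow"),
    ("cam_0002.jpg", 34.05, -118.24, "dry"),
]

buf = io.StringIO()                      # stands in for the output .csv file
writer = csv.writer(buf)
writer.writerow(["image", "lat", "lon", "class"])
for name, lat, lon, cls in records:
    writer.writerow([name, lat, lon, cls])

# The map plotter reads the rows back and places one coloured marker
# per camera location, as in Figure 7.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
markers = [(float(r[1]), float(r[2]), CLASS_COLOURS[r[3]]) for r in rows[1:]]
print(markers)   # [(49.9, -97.14, 'white'), (34.05, -118.24, 'green')]
```

Keeping the classifier output in this flat format decouples classification from rendering, so the same CSV can feed either the map plotter or the database upload.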

6 Conclusion

In this paper, we have presented a detailed account of the process of generating training examples from live camera feeds depicting a wide range of road and weather conditions in North America. Weather understanding is an important part of many real-world applications involving transportation and road safety. Our process leveraged deep convolutional neural networks in conjunction with manual labelling to produce reasonable-quality training examples. The proposed application pipeline includes a map-building component, which is one of the main outcomes of this research. We demonstrated that recent deep convolutional neural networks were able to produce good results, with a maximum accuracy of 90.9%, without any pre-processing of the images. The choice of these frameworks and our analysis take into account the unique requirements of real-time map-building functions: in addition to classification accuracy, model complexity and memory usage must be considered during the different stages of dataset labelling.

Our experiments show that as more training examples with more diverse content become available, the results tend to improve. With a sufficiently large training set, the performance of these frameworks on other benchmarks also seemed to carry over to this particular problem, with newer frameworks that score higher on other benchmarks also performing better here [39].

Future research directions include experimenting with other resource-demanding frameworks with good potential, such as NASNet-Large [40] and EfficientNet-B7 [38], on more advanced hardware. In addition, we will consider employing ensemble learning techniques [41] to exploit the best aspects of multiple frameworks. We also plan to explore incorporating road detection and segmentation [42, 43] as a preprocessing step to crop the images and focus on the road segments. This might be especially helpful for images with low road visibility, such as those from cameras placed far away from the roads.

References

  • [1] L.-P. Crevier, Y. Delage, METRo: A New Model for Road-Condition Forecasting in Canada, Journal of Applied Meteorology 40 (11) (2001) 2026–2037.
    URL https://doi.org/10.1175/1520-0450(2001)040<2026:MANMFR>2.0.CO;2
  • [2] B. H. Sass, A Numerical Forecasting System for the Prediction of Slippery Roads, Journal of Applied Meteorology 36 (6) (1997) 801–817.
    URL https://doi.org/10.1175/1520-0450(1997)036<0801:ANFSFT>2.0.CO;2
• [3] S. Drobot, A. R. S. Anderson, C. Burghardt, P. Pisano, U.S. Public Preferences for Weather and Road Condition Information, Bulletin of the American Meteorological Society 95 (6) (2014) 849–859.
    URL https://doi.org/10.1175/BAMS-D-12-00112.1
  • [4] J. Carrillo, M. Crowley, G. Pan, L. Fu, Comparison of Deep Learning models for Determining Road Surface Condition from Roadside Camera Images and Weather Data, in: Transportation Association of Canada and Intelligent Transportation Systems Canada Joint Conference, 2019, pp. 1–16.
  • [5] H. Kurihata, T. Takahashi, I. Ide, Y. Mekada, H. Murase, Y. Tamatsu, T. Miyahara, Rainy weather recognition from in-vehicle camera images for driver assistance, in: IEEE Proceedings, Intelligent Vehicles Symposium, 2005, pp. 205–210.
  • [6] N. Hautiere, J.-P. Tarel, J. Lavenant, D. Aubert, Automatic fog detection and estimation of visibility distance through use of an onboard camera, Mach. Vision Appl. 17 (1) (2006) 8–20.
  • [7] M. Roser, F. Moosmann, Classification of weather situations on single color images, in: IEEE Proceedings, Intelligent Vehicles Symposium, 2008, pp. 798–803.
  • [8] X. Yan, Y. Luo, X. Zheng, Weather recognition based on images captured by vision system in vehicle, in: Proceedings of the 6th International Symposium on Neural Networks: Advances in Neural Networks - Part III, 2009, pp. 390–398.
  • [9] S. Bronte, L. M. Bergasa, P. F. Alcantarilla, Fog detection system based on computer vision techniques, in: 2009 12th International IEEE Conference on Intelligent Transportation Systems, 2009, pp. 1–6.
  • [10] R. Omer, L. Fu, An automatic image recognition system for winter road surface condition classification, 13th International IEEE Conference on Intelligent Transportation Systems (2010) 1375–1379.
  • [11] N. H. R. Gallen, A. Cord, D. Aubert, Towards night fog detection through use of in-vehicle multipurpose cameras, in: Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 399–404.
  • [12] M. Pavlić, H. Belzner, G. Rigoll, S. Ilić, Image based fog detection in vehicles, in: Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), 2012, pp. 1132–1137.
  • [13] Z. Zhang, H. Ma, Multi-class weather classification on single images, in: 2015 IEEE International Conference on Image Processing (ICIP), 2015, pp. 4396–4400.
  • [14] E. J. Almazan, Y. Qian, J. H. Elder, Road segmentation for classification of road weather conditions, in: G. Hua, H. Jégou (Eds.), Computer Vision – ECCV 2016 Workshops, Springer International Publishing, Cham, 2016, pp. 96–108.
  • [15] M. Amthor, B. Hartmann, J. Denzler, Road condition estimation based on spatio-temporal reflection models, in: J. Gall, P. Gehler, B. Leibe (Eds.), Pattern Recognition, Springer International Publishing, Cham, 2015, pp. 3–15.
  • [16] S. G. Narasimhan, S. K. Nayar, Shedding light on the weather, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003, pp. I–I.
  • [17] L. Shen, Photometric stereo and weather estimation using internet images, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 1850–1857.
  • [18] P.-Y. Laffont, Z. Ren, X. Tao, C. Qian, J. Hays, Transient attributes for high-level understanding and editing of outdoor scenes, ACM Transactions on Graphics 33 (4) (2014) 149:1–149:11.
  • [19] H. Song, Y. Chen, Y. Gao, Weather condition recognition based on feature extraction and k-NN, in: F. Sun, D. Hu, H. Liu (Eds.), Foundations and Practical Applications of Cognitive Systems and Information Processing, Springer Berlin Heidelberg, Berlin, Heidelberg, 2014, pp. 199–210.
  • [20] Q. Li, Y. Kong, S. M. Xia, A method of weather recognition based on outdoor images, in: Computer Vision Theory and Applications (VISAPP), 2014 International Conference, 2014, pp. 510–516.
  • [21] J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks 61 (2015) 85–117.
  • [22] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436.
  • [23] A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems 25, 2012, pp. 1097–1105.
  • [24] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, L. Fei-Fei, ImageNet Large Scale Visual Recognition Challenge, International Journal of Computer Vision (IJCV) 115 (3) (2015) 211–252.
  • [25] C. Lu, D. Lin, J. Jia, C. Tang, Two-class weather classification, IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (12) (2017) 2510–2524.
  • [26] D. Lin, C. Lu, H. Huang, J. Jia, RSCM: Region selection and concurrency model for multi-class weather recognition, IEEE Transactions on Image Processing 26 (9) (2017) 4154–4167.
  • [27] B. Zhao, X. Li, X. Lu, Z. Wang, A CNN-RNN architecture for multi-label weather recognition (2019).
  • [28] M. Elhoseiny, S. Huang, A. M. Elgammal, Weather classification with deep convolutional neural networks, 2015 IEEE International Conference on Image Processing (ICIP) (2015) 3349–3353.
  • [29] Z. Zhu, L. Zhuo, P. Qu, K. Zhou, J. Zhang, Extreme weather recognition using convolutional neural networks, 2016 IEEE International Symposium on Multimedia (ISM) (2016) 621–625.
  • [30] J. C. Villarreal Guerra, Z. Khanam, S. Ehsan, R. Stolkin, K. McDonald-Maier, Weather classification: A new multi-class dataset, data augmentation approach and comprehensive evaluations of convolutional neural networks, in: 2018 NASA/ESA Conference on Adaptive Hardware and Systems (AHS), 2018, pp. 305–310.
  • [31] Z. Li, Y. Jin, Y. Li, Z. Lin, S. Wang, Imbalanced adversarial learning for weather image generation and classification, in: 2018 14th IEEE International Conference on Signal Processing (ICSP), 2018, pp. 1093–1097.
  • [32] G. Pan, L. Fu, R. Yu, M. Muresan, Winter road surface condition recognition using a pre-trained deep convolutional neural network, arXiv preprint abs/1812.06858.
    URL http://arxiv.org/abs/1812.06858
  • [33] M. Nolte, N. Kister, M. Maurer, Assessment of deep convolutional neural networks for road surface classification, arXiv preprint abs/1804.08872.
    URL https://arxiv.org/abs/1804.08872
  • [34] K. Simonyan, A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv preprint arXiv:1409.1556.
  • [35] K. He, X. Zhang, S. Ren, J. Sun, Deep Residual Learning for Image Recognition, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
  • [36] C. Szegedy, S. Ioffe, V. Vanhoucke, A. Alemi, Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, in: AAAI, Vol. 4, 2016, p. 12.
  • [37] F. Chollet, Xception: Deep learning with depthwise separable convolutions, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1800–1807.
  • [38] M. Tan, Q. V. Le, EfficientNet: Rethinking model scaling for convolutional neural networks, CoRR abs/1905.11946.
    URL http://arxiv.org/abs/1905.11946
  • [39] S. Bianco, R. Cadène, L. Celona, P. Napoletano, Benchmark analysis of representative deep neural network architectures, CoRR abs/1810.00736.
    URL http://arxiv.org/abs/1810.00736
  • [40] B. Zoph, V. Vasudevan, J. Shlens, Q. V. Le, Learning transferable architectures for scalable image recognition, CoRR abs/1707.07012.
    URL http://arxiv.org/abs/1707.07012
  • [41] R. Maclin, D. W. Opitz, Popular ensemble methods: An empirical study, CoRR abs/1106.0257.
    URL http://arxiv.org/abs/1106.0257
  • [42] Y. Lyu, X. Huang, Road segmentation using CNN with GRU, CoRR abs/1804.05164.
    URL http://arxiv.org/abs/1804.05164
  • [43] P. Chen, H. Hang, S. Chan, J. Lin, DSNet: An efficient CNN for road scene segmentation, CoRR abs/1904.05022.
    URL http://arxiv.org/abs/1904.05022