Deep Learning applied to Road Traffic Speed forecasting

10/02/2017, by Thomas Epelbaum, et al., Mediamobile

In this paper, we propose deep learning architectures (FNN, CNN and LSTM) to forecast a regression model for time-dependent data. These algorithms are designed to handle Floating Car Data (FCD) historic speeds to predict road traffic data. For this we aggregate the speeds into the network inputs in an innovative way. We compare the RMSE thus obtained with the results of a simpler physical model, and show that the latter achieves better RMSE accuracy. We also propose a new indicator, which evaluates the algorithms' improvement when compared to a benchmark prediction. We conclude by questioning the interest of using deep learning methods for this specific regression task.


1.1 Introduction

Traffic congestion is one of the major downsides of our ever-growing cities. The inconvenience for individuals stuck in traffic jams can sometimes be counted in hours per day, and weeks per year. In this context, a road traffic speed forecasting algorithm could have a highly beneficial impact: it could feed a Dynamic Routing System (DRS), and allow one to anticipate the formation and the resorption of congestion. This could lead to intelligent recommendations to drivers, and point towards public measures for shifted departures to and from work (e.g. monetary incentives to companies or individuals).

The speeds measured on the road network can be seen as a spatio-temporal time series. Although many models, whether deterministic or stochastic, have been created so far, these are either too time consuming to be used in real time, too demanding in terms of storage, or give too poor a result to prove valuable. Indeed, the speeds exhibit complex behaviors, including seasonality, time dependency, spatial dependency and drastic, quick changes of patterns, which makes them hard to forecast. Yet since the evolution of the speeds reflects a real behavior, these time series present strong correlations and causality links that must be found.

In this paper, we focus on the design and implementation of Neural Networks (NN) to handle the forecast of time series, applied to the case of road traffic. Using NN allows one to include a large number of explanatory variables inside the model to capture the complexity of the time series. For this we study three kinds of NN architectures – Feedforward (FNN), Convolutional (CNN) and Recurrent (RNN) – to deal with the road traffic speed forecasting task.

There has been a growing interest in deep learning in recent years, see for instance [22] and references therein. Among the different kinds of networks, CNN are mostly used to classify images as in [21] or [40], and RNN are mostly used for Natural Language Processing tasks [27], where one tries to guess the next word in a sentence given the previous ones. Yet for a regression task when dealing with time dependent data, few results exist in the statistical literature.

In this work, we propose a way of feeding the NN that enhances the causality due to the seasonality of the observations, which will be compared to the usual one. In addition we propose specific designs of the neural networks to forecast the data, each adapted to the type of NN used in this study. Moreover, to evaluate the quality of the forecast, in addition to the standard Root Mean Square Error (RMSE), we propose a new indicator, conceived to assess the quality of a model for time series forecasting and completing the information conveyed by the RMSE. Within this framework, we show that the Convolutional network severely underperforms the other NN variants, and we give some reasons why this may be. We also present a simple physical model, embedding the seasonality of the road traffic, that outperforms all the deep learning techniques while using orders of magnitude fewer parameters.

The paper is organized as follows. We present the speed data at our disposal in Section 1.2.1, and explain how we feed them to the NN in Section 1.2.2. The new criterion used to evaluate the quality of the different models is explained in Section 1.3. The models themselves (FNN, CNN, RNN-LSTM) are presented in Section 1.4. We finally present the setup of our different simulations and our numerical results in Section 1.5.

1.2 Data

1.2.1 Speeds from FCD measures

We deal here with speed forecasting on a road network. A road network is an oriented graph. On each oriented edge a speed is computed (via a technique called Floating Car Data, or FCD, see for instance [36]). A road network has to be described by several oriented edges, as the speed limit, the topology, the presence of a traffic light, etc. change for different zones of the network. On an edge of the road network, the speed measured at the discrete time t is denoted v(t). In our study, the speeds are available every 3 minutes, so that v(t+1) comes 3 minutes after v(t).

In practice, we will normalize the speeds. This procedure helps to identify similar speed patterns (formation of congestion or its resorption) without being affected by the absolute speed values. Otherwise, a classification algorithm would for instance fail to cluster together two edges which never experience any traffic jams if their speed limits are respectively 50 and 130 kph. To normalize the speeds, one needs to know the so-called free flow speed, as defined in [7]. The free flow speed on the Paris Ring Road is roughly . We will thus from now on deal with the series of normalized speed observations.

In this paper, we will study the external Paris ring road: 396 oriented edges in our network.

Figure 1.1: External Paris Ring Road.

The training set will be the data from the first 9 months of 2016, while the test set comes from the tenth month of 2016. Note that, to avoid predicting on trivial traffic states of constant speeds, we first removed the speed data from 11pm to 5am of each day of our 10-month sample.
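As a purely illustrative companion to this description, the NumPy/pandas sketch below shows one way such a preprocessing could be coded; the array layout, function names and exact date boundaries are our own assumptions, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd

# Assumed layout (not the paper's actual schema): `speeds` is an
# (n_edges, n_timesteps) array of FCD speeds sampled every 3 minutes,
# `free_flow` a length-n_edges vector of free flow speeds, and `index`
# a pandas DatetimeIndex of the corresponding timestamps.

def normalize_speeds(speeds: np.ndarray, free_flow: np.ndarray) -> np.ndarray:
    """Normalize each edge by its free flow speed, clipping to [0, 1]."""
    return np.clip(speeds / free_flow[:, None], 0.0, 1.0)

def keep_daytime(index: pd.DatetimeIndex) -> np.ndarray:
    """Boolean mask keeping 5am-11pm, since the 11pm-5am data are removed."""
    return (index.hour >= 5) & (index.hour < 23)

def train_test_masks(index: pd.DatetimeIndex):
    """Train on January-September 2016, test on October 2016."""
    train = (index >= "2016-01-01") & (index < "2016-10-01")
    test = (index >= "2016-10-01") & (index < "2016-11-01")
    return train, test
```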

1.2.2 Input speeds

In addition to the freedom of design in the FNN, CNN and RNN architectures, the way the past speeds are taken into account in the input of the networks plays a crucial role in the results that one can obtain. Shall we take nearby edges' past speed information, or can each edge predict its future speeds knowing only its past ones? How far into the past shall we go? Hence the creation of the learning set determines the behaviour of the model and should be studied with care.

We will consider two kinds of input for the different NN architectures that we implement: a full input (large quantity of data) and a reduced one (supposed to capture the seasonality of the road traffic).

Full input

The full input of the NN corresponds to data on the current day (where one is trying to predict future speeds) and the immediate past days. On these days we consider contiguous edges of our graph. Starting at an arbitrary edge and an arbitrary time, the full input of the network is then

(1.2.1)

with

where the subscript stands for full. The index represents the sample of our training set, and D is a constant representing a full day in 3-minute intervals (480). We are thus considering the minutes of speed data on the current day (just before the prediction time), and a time window of minutes before and minutes after that time on the past days. In practice we take

This corresponds to input variables with

The prediction task corresponding to this full input is

with the prediction horizon, which we fix in practice, hence the size of the output. The full input and the corresponding output are illustrated in Figure 1.2 for a given sample of the training set.

Figure 1.2: full input used for the different NN architectures (left). Prediction example (right)

Here, the output of the network (on the right part of Figure 1.2) consists of the subsequent 20 three-minute time intervals for the links of the current instance of the training set (times between 6:06 and 7:03 am). On the left part of Figure 1.2, the eight colored matrices correspond to the traffic on the current day (matrix in the foreground, times between 4:30 and 6:03 am) and the traffic on the seven previous days (matrices in the background, times between 5:18 and 6:51 am).
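For concreteness, a minimal NumPy sketch of how such a full-input sample could be assembled from a matrix of normalized speeds is given below; the number of contiguous edges, the number of past days and the window lengths are placeholder values, not the ones used in the paper.

```python
import numpy as np

D = 480  # one day expressed in 3-minute intervals

def full_input(V, edge0, t, n_edges=20, n_prev=7, k_before=31, k_minus=16, k_plus=16):
    """Assemble one full-input sample from V, an (edges, timesteps) array of
    normalized speeds. Returns (n_prev + 1) matrices: the current day just
    before t, plus a window around the same clock time on each past day,
    as in the left part of Figure 1.2. All window sizes are assumed values."""
    edges = slice(edge0, edge0 + n_edges)
    current_day = V[edges, t - k_before:t]
    past_days = [V[edges, t - d * D - k_minus:t - d * D + k_plus]
                 for d in range(1, n_prev + 1)]
    return [current_day] + past_days

def full_target(V, edge0, t, n_edges=20, horizon=20):
    """The subsequent `horizon` 3-minute intervals to be predicted
    (right part of Figure 1.2)."""
    return V[edge0:edge0 + n_edges, t:t + horizon]
```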

Reduced input

Starting back from our arbitrary edge and arbitrary time, the reduced input deals with a single edge. We thus drop the edge index in this part. The reduced input reads

(1.2.2)

where (the subscript stands for reduced)

The meanings of the remaining notations are unchanged (see Section 1.2.2). In practice we will take

However, the way in which the previous days are taken into account is changed as follows. Instead of taking high resolution but noisy 3-minute intervals, we will average over several such intervals, so that

In practice we will take

The reduced input will thus be of size

The corresponding prediction task is defined analogously; the values taken in practice fix the size of the output. The reduced input and its corresponding output are illustrated in Figure 1.3.

Figure 1.3: Reduced input example.

In Figure 1.3, the traffic is represented for 8 contiguous days on one link of the Paris road network. The first 7 days (x axis between 0 and 7) are considered to be past days, and are pictured averaged per quarter-hour (corresponding to our choice). The 8th day (x axis between 7 and 8) is depicted with 3-minute intervals. The starting time at which one tries to predict corresponds to the green part of the curve (see main plot and inset). The reduced input values are the blue part of the curve (see both main plot and inset), while the speeds to be predicted are shown in red.
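A minimal sketch of this reduced-input construction for a single edge is shown below; the quarter-hour averaging follows the text, while the window lengths are assumed values.

```python
import numpy as np

D = 480      # one day in 3-minute intervals
BIN = 5      # 5 x 3 minutes = one quarter-hour bin for the past days

def reduced_input(v, t, n_prev=7, k_before=31, k_minus=15, k_plus=15):
    """Reduced input for a single edge: v is a 1-D array of its normalized
    speeds. Past days are averaged per quarter-hour, the current day is kept
    at 3-minute resolution; the window lengths are assumed values."""
    current = v[t - k_before:t]
    past = []
    for d in range(1, n_prev + 1):
        window = v[t - d * D - k_minus:t - d * D + k_plus]
        past.append(window.reshape(-1, BIN).mean(axis=1))  # quarter-hour averages
    return np.concatenate(past + [current])
```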

1.3 Model evaluation

1.3.1 RMSE and Q-score

Generally, in the speed forecasting paradigm [5, 9, 28, 35, 37, 39], one evaluates the quality of a prediction algorithm by considering the Root Mean Square Error (RMSE)

where is the speed predicted by the algorithm under consideration, the ground truth speed, (19 in our study) covering all the time intervals where the prediction is taking place, and all the links of the road network considered. In our opinion, this indicator is not sharp enough to correctly assess the quality of a prediction algorithm. Indeed, for a constant time series, an RMSE close to 0 could fool people into thinking that the prediction algorithm is a good one, even when it is worse than simply predicting a constant. At the other extreme, for a widely and swiftly changing time series, a large RMSE does not necessarily imply that the algorithm is a poor one.

To palliate this apparent paradox, it is necessary to introduce a benchmark prediction. In our study we chose the real time propagation benchmark (RTPB), which takes the last observed speed as the prediction for any future horizon, and which is widely used in the industrial DRS described for instance in [8]. An algorithm with a low RMSE is not worth much interest if it constantly predicts worse than this simple benchmark, and an algorithm with a large RMSE but constantly beating the benchmark may be worth considering; it might just be that the road network under investigation is experiencing a large change of speed. We thus introduce

(1.3.1)

where

This Q-score (or Qbench in the following) quantifies the improvement (or deterioration) of the considered algorithm when compared with the RTPB prediction. A positive Q-score indicates an improvement (a Q-score equal to 1 would mean a perfect prediction), while a negative one indicates a deterioration. Without this indicator, any speed prediction on an almost constant speed road network could lead to a very low RMSE without nevertheless being good, and one might be tempted to choose these kinds of networks to beat "state of the art" RMSE. We hasten to add that this is not what was done in the papers that we reviewed [37, 9], but it is notwithstanding hard to evaluate a traffic prediction algorithm with real-life applications in mind if one solely considers the RMSE. Some studies [5, 25] use an alternative indicator: the Mean Absolute Percentage Error (MAPE).

Since it does not use any benchmark algorithm, the MAPE is however subject to the same drawbacks as the RMSE when used alone.
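The sketch below computes the RMSE, the MAPE and one plausible form of the Q-score of equation (1.3.1), namely the relative RMSE improvement over the RTPB; this definition is consistent with the description above (Q = 1 for a perfect prediction, Q > 0 when the benchmark is beaten), but the paper's exact formula may differ.

```python
import numpy as np

def rmse(pred, truth):
    """Root Mean Square Error over all predicted time intervals and links."""
    return np.sqrt(np.mean((pred - truth) ** 2))

def mape(pred, truth):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * np.mean(np.abs((pred - truth) / truth))

def q_score(pred, truth, last_observed):
    """Improvement over the real time propagation benchmark (RTPB), which
    keeps the last observed speed of each link constant over the horizon.
    Q = 1 for a perfect prediction, Q > 0 when the benchmark is beaten,
    Q < 0 otherwise. (One plausible definition; Eq. (1.3.1) may differ.)"""
    benchmark = np.broadcast_to(last_observed[:, None], truth.shape)
    return 1.0 - rmse(pred, truth) / rmse(benchmark, truth)
```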

Loss function

The loss function is a critical component of a NN architecture. For a classification task, the cross-entropy loss function is a standard choice. But in this study, as we are trying to make quantitative speed predictions that could feed a DRS, we will take a quadratic loss function

where is the ground truth speed and the output of the NN under consideration. See Section 1.2.2 for more details on the other notations.

Note that considering this particular loss function means that we are considering the speed forecasting problem as a regression task.

This corresponds to the loss error on one sample of the training set, and we still have to decide how to train the NN. We make the standard choice of mini-batch Stochastic Gradient Descent (SGD) as in [19]: at each epoch (one iteration of the training procedure), we pick samples of the training set and compute the mini-batch loss

(1.3.2)

To achieve better accuracy, we had to regularize the loss function. This is achieved using a combined L1 and L2 penalty, also known as the elastic net penalty, see for instance [41]; the details are given in Section 1.4.1.

In this study we picked the mini-batch size and the number of epochs accordingly. The number of epochs is indicative, as we performed early stopping in order to improve accuracy, as is often done by users of NN (see e.g. [2]), to overcome the issue of the convergence of the algorithm.
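A possible PyTorch rendering of this regularized mini-batch loss is sketched below; the penalty coefficients are placeholders, since the paper selects them by cross-validation.

```python
import torch

def minibatch_loss(model, x, y, lambda1=1e-4, lambda2=1e-4):
    """Quadratic loss over a mini-batch plus elastic-net (L1 + L2) penalties
    on the weight matrices; lambda1/lambda2 are placeholder values."""
    pred = model(x)
    mse = torch.mean((pred - y) ** 2)
    reg = 0.0
    for param in model.parameters():
        if param.dim() > 1:                 # penalize weight matrices only
            reg = reg + lambda1 * param.abs().sum() + lambda2 * (param ** 2).sum()
    return mse + reg
```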

1.4 The models

We present the different NN studied, starting with their common building blocks and ending with their specificities.

1.4.1 Common properties of the Networks

Output function

As we deal with normalized speeds, we constrain the output of our model to lie between zero and one. We hence picked an output function whose values lie in this range.

Batch Normalization

Batch normalization (BN) is the most popular regularization procedure and consists in jointly normalizing the mini-batch sets per data type at each input of a NN layer, except for the input of the network itself. This is because we want to keep track of what the data represent, and hence keep their mean and standard deviation untouched. It should be mentioned from the outset that BN is an empirical procedure, though it has been shown to drastically improve state of the art performance on classification tasks, for instance on CIFAR/MNIST in many challenges.

In the original paper [18], the authors argued that this step should be done after the weight averaging/convolution (WA/C) operation and before the non-linear activation (NLA). However, this choice stands on no theoretical grounds, and defeats the purpose of presenting a standardized input to each NN layer. In addition, the back-propagation rules are a bit more cumbersome to write with the WA/C-BN-NLA convention. We therefore opted for a WA/C-NLA-BN ordering for all the layers presented in this paper, except of course the pooling ones, where there is neither an NLA nor a BN.

For the technical details on our BN implementation, see Section 1.9 of the supplementary material.
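The adopted ordering can be summarized by the following PyTorch sketch of a fully connected block; the layer sizes and the Leaky-ReLU slope are placeholders.

```python
import torch.nn as nn

def fc_block(n_in, n_out, slope=0.01):
    """Fully connected block in the WA/C-NLA-BN order used in this paper:
    weight averaging without bias (the bias is handled by BN), the Leaky-ReLU
    non-linearity, then Batch Normalization."""
    return nn.Sequential(
        nn.Linear(n_in, n_out, bias=False),
        nn.LeakyReLU(slope),
        nn.BatchNorm1d(n_out),
    )
```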

Elastic net

In practice, using equation (1.3.2) for the loss function leads to poor results: the different NN that we considered never reach a positive Q-score. We thus had to regularize the loss function. This we did by using both L1 and L2 penalties, a procedure which turns our NN into so-called elastic nets. Denoting the weight matrix between two consecutive layers of our NN, this procedure amounts to adding the following terms to the loss function

with the sizes of the two layers involved. These new terms play a role in the update rules of the weight matrices, as shown in [6].

In our numerical simulations, we took the two penalty coefficients within a given range, the best values being picked by cross-validation.

No Dropout

In our study, it turned out that dropout – with the probability of retaining a unit varied for both the input and hidden layers – only slows down convergence without reducing the RMSE. We therefore chose to discard dropout from our architectures.

Adam Optimizer

For the mini-batch SGD used in backpropagation [24], we used the Adam optimizer proposed in [20], which keeps track of both the weight gradient and its square via two epoch-dependent vectors, updated for each weight matrix with two decay parameters set as advocated in the original paper [20]. We also observed that the final RMSE is poorly sensitive to their specific values. The different weight matrices are then updated accordingly.

This is the optimization technique used for all our NN, alongside a learning rate decay whose rate is determined by cross-validation, the learning rate being initialized in a small range of values. Without this decay, the NN perform poorly.
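For reference, a generic NumPy sketch of one Adam step and of a simple learning-rate decay is given below; the decay schedule is an assumption, as the paper only states that its rate is chosen by cross-validation.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a weight matrix w: m and v track the gradient and
    its square, with the default decay parameters of [20]."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)            # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

def decayed_lr(lr0, epoch, decay=1e-2):
    """A simple learning-rate decay; the paper's exact schedule is not given."""
    return lr0 / (1.0 + decay * epoch)
```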

Activation function

We used Leaky-ReLU activation functions in all our simulations (a slight improvement over the standard ReLU choice), see for instance [1]. We also tested the ELU, without success.

1.4.2 Specificities of each Network

Feedforward Neural Networks

For our FNN, we considered both one and three hidden layer architectures, with the WA-NLA-BN structure advocated in Section 1.4.1. We added no bias to the WA equation, as the latter is handled by the BN procedure; see the supplementary material for more details. We took a unique size for all the hidden layers, and varied it within a given range.

We did not consider ResNet architectures [16] in this study. This might be a natural next step, but given the present state of the art techniques for regression tasks with NN, we are pessimistic about any improvement this might achieve. Indeed, as shown in our results, our 1 hidden layer FNN outperforms its 3 layer cousin.
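A compact PyTorch sketch of the FNN-1/FNN-3 architectures described above follows; the input, hidden and output sizes are left as arguments, since the values used in the paper are not reproduced here.

```python
import torch.nn as nn

def fnn(n_in, n_out, n_hidden, n_layers=1):
    """FNN-1 (n_layers=1) or FNN-3 (n_layers=3): identical hidden layers in
    the WA-NLA-BN order, no bias, and a sigmoid output constrained to [0, 1]."""
    layers, size = [], n_in
    for _ in range(n_layers):
        layers += [nn.Linear(size, n_hidden, bias=False),
                   nn.LeakyReLU(0.01),
                   nn.BatchNorm1d(n_hidden)]
        size = n_hidden
    layers += [nn.Linear(size, n_out, bias=False), nn.Sigmoid()]
    return nn.Sequential(*layers)

# e.g. fnn(n_in, n_out, n_hidden, n_layers=1) for FNN-1, n_layers=3 for FNN-3
```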

Input of the FNN

: To feed our FNN, we simply concatenated either the full or the reduced input (see equations (1.2.1) and (1.2.2)) into a matrix, whose first index is the mini-batch index and whose second dimension is the size of the full or reduced input, respectively.

Weight initialization

: We used the standard prescription [11]

Convolutional Neural Networks

In our CNN study, we implemented a 16 weight layer VGG architecture [31] (ResNet could be considered in a future study). The input feature map is of size , while the images composing the input are of size for the current day and for the previous days. In practice this corresponds to eight images, and we therefore implemented the standard convolutions using receptive fields with strides and a padding on each image edge. We use pooling of receptive fields with strides. The size of the hidden feature maps and the size of the subsequent fully connected layers were varied within given ranges. More details on the architecture can be found in Section 2 of the supplementary material.
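The block structure of such a VGG-style network can be sketched as follows in PyTorch; the 3x3 receptive fields, unit stride, unit padding and 2x2 pooling are the standard VGG choices and are assumed here for illustration, the exact values being given in the supplementary material.

```python
import torch.nn as nn

def vgg_block(c_in, c_out, n_convs=2):
    """One VGG-style block: 3x3 convolutions (stride 1, padding 1) in the
    C-NLA-BN order, followed by 2x2 max pooling with stride 2."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(c_in if i == 0 else c_out, c_out,
                             kernel_size=3, stride=1, padding=1, bias=False),
                   nn.LeakyReLU(0.01),
                   nn.BatchNorm2d(c_out)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)
```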

Input of the CNN

: For the CNN, we only considered the full input introduced in Section 1.2.2. Figure 1.2 makes complete sense for CNNs, as its left part represents the different feature maps of the input. The latter is now a four dimensional tensor, with t the mini-batch index.

Weight initialization

: We used the same prescription as for the FNN.

1.4.3 Recurrent Neural Networks

For our RNN, we used the LSTM variant [38] with no peepholes in the input, forget and output gates. The latter are taken to be standard logistic functions, while the cell update as well as the hidden state update are taken to be tanh functions. We performed BN, as in the Feedforward/Convolutional case, before each input of a hidden layer. We took one hidden layer in the spatial direction and several in the temporal one.
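A rough PyTorch sketch of such an RNN-LSTM predictor is shown below; the hidden size and output horizon are placeholders, and torch's built-in LSTM (sigmoid gates, tanh updates, no peepholes) is used as a stand-in for the paper's own implementation.

```python
import torch
import torch.nn as nn

class SpeedLSTM(nn.Module):
    """Sketch of an LSTM speed predictor: one hidden LSTM layer followed by a
    sigmoid read-out constrained to [0, 1]; all sizes are placeholders."""
    def __init__(self, n_features, n_hidden, n_out):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, num_layers=1, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out, bias=False)

    def forward(self, x):                     # x: (batch, time steps, features)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h[:, -1]))   # predict from the last time step
```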

Input of the RNN-LSTM

: An additional subtlety arises for RNNs: the input depends on the temporal direction of the network. For the full input, we just took the successive temporal slices of the input (and similarly for the reduced input), implying that the input of a current temporal layer has to be fed with the output of the previous ones. We did just so in practice. We took the same sizes as for the FNNs.

Weight initialization

: We used a diagonal prescription inspired from [32], involving the unit matrix.

No clipping

: Clipping [14] did not improve our model's performance. We thus removed it.

1.5 Results

With all the building blocks in place, we report here in Figures 1.4 and 1.5 the compared performance of all the algorithms.

1.5.1 Full input

The RMSE of our different NN can be found for the full input in Figure 1.4. Here one can see several particular patterns.

Figure 1.4: RMSE for the different models studied.

Figure 1.5: Q-score for the different models studied.
Poor CNN performance

: The CNN severely underperforms the FNN and the RNN-LSTM. As the evolution of car speeds is supposed to be a spatio-temporal field, considering the daily spatio-temporal velocities as images that characterize the traffic seemed to be a good frame in which to use CNN, which have proved helpful in pattern recognition. Yet road traffic speed prediction is not a translationally invariant problem in the temporal direction, and no matter how the CNN weight matrices are tuned, the network assumes that the input image is translationally invariant. The CNN hence fails to capture the fact that the right part of the input images in Figure 1.2 plays a more important role than the left part, since it represents more recent traffic states. We extend our remark to any time series problem: the CNN seems ill equipped to handle prediction tasks properly when there is strong causality in the time direction. Note however that another study [37] reports otherwise on some specific Chinese road networks, while only feeding the CNN with past traffic states of the current day (feature map of size 1 for the input layer). This may be due to some particular conditions in Chinese traffic. Yet the performance of NN is better than that of standard methods based on density models [26].

Similar FNN/RNN-LSTM performance

: The two other kinds of networks that we considered performed similarly, with RMSE values in the same range.

A simpler model performs better

: We developed a simpler physical model (PM) embedding the seasonality of the traffic. This model outperforms all the NN, reaching state of the art RMSE values. Note that this work will be published in a future patent.

Q-score utility

: We can assess the quality of our prediction when compared to the RTPB. We observe that the FNN outperforms the RNN-LSTM, despite having an almost identical RMSE. The PM outperforms all NN, especially at early times.

Storage issues for real life applications

: The best FNN-3/VGG/LSTM models that we obtained have large numbers of weight parameters, not counting the BN ones. Our best PM has far fewer parameters in total. In addition, it has a high degree of generalizability, allowing the parameters to be shared across large road networks.

1.5.2 Reduced input

Figure 1.6: RMSE for the reduced input

We present in Figures 1.6 and 1.7 the results obtained – by two FNN architectures and the physical model – for the reduced input. From these figures we draw the following observations.

Figure 1.7: Qbench for the reduced inputs
The reduced input outperforms the Full one

: The full input, though containing a larger number of explanatory variables, leads to worse results in terms of both Q-score and RMSE. This might be due either to a poor initial condition choice – we hope not – or to the specificities of the regression problem.

The shallow FNN outperforms the deep one

: FNN-1 gives slightly better results than FNN-3.

The PM still outperforms the NN

: but not by much.

1.5.3 Importance of the Q-score

Figure 1.8: RMSE for different speed regimes

We have studied three speed regimes for the PM: the "constant" ("changing") regime is the 10% part of the training set where the speeds before and after the prediction time vary the least (most). The standard regime is the remaining 80%. Results are shown in Figures 1.8 and 1.9.

Figure 1.9: Q-score for different speed regimes
Constant speeds lead to better RMSE

: The less the speeds vary, the better the RMSE, as expected.

Changing speeds lead to better Q-score

: Having a low RMSE is not on its own a sign of a working model. The PM cannot beat the benchmark for "constant" speeds, while beating it by more than 50% after 50 minutes for the changing speeds. One should therefore jointly state the RMSE and the Q-score in time series tasks.

1.6 Conclusion

We have implemented popular deep learning architectures, adapting their designs to the specific regression problem of predicting future road speeds, a generic example of time series presenting strong causality in both time and space. We showed that the CNN underperforms the other networks, while we built a PM that outperforms all the NN architectures. We showed that feeding the NN with more data leads to worse results, as does adding more layers to the FNN. We finally designed a new indicator, the Q-score, to be used jointly with the RMSE in time series problems.

2.1 Feedforward Neural Networks

Feedforward networks (FNN) are an extension of the perceptron algorithm [29]. Their architecture is simple, but training them can be a daunting task. In the next few Sections, we introduce the minimum FNN mathematical survival kit, stating what – to the best of our knowledge – are empirical tricks and what stands on firmer mathematical ground.

2.1.1 Some notations and definitions

In the following, we call

  • the number of layers (not counting the input) in the Neural Network.

  • the number of training examples in the training set.

  • the number of training instances in a mini-batch (more on that later).

  • the number of neurons in the ’th layer.

  • the mini-batch training instance index.

  • the index of the layer under consideration in the FNN.

  • where the neurons of the ’th layer.

  • where the input variables.

  • where the output variables (to be predicted).

  • where the output of the FNN.

  • for , and the weight matrices.

  • A bias term can be included. In practice, we will see when discussing the batch-normalization procedure that we can omit it, and we choose to do so in all our definitions.

2.1.2 FNN architecture

Figure 2.1: Neural Network with layers ( hidden layers). For simplicity of notation, the index referencing the training set has not been indicated. Shallow architectures (considered in the core of the paper) use only one hidden layer. Deep learning amounts to taking several hidden layers, usually containing the same number of hidden neurons. This number should be in the ballpark of the average of the number of input and output variables.

A FNN is made of one input layer, one (shallow network) or more (deep network, hence the name deep learning) hidden layers, and one output layer. Each layer of the network (except the output one) is connected to the following layer. This connectivity is central to the FNN structure and has two main features in its simplest form: a weight averaging feature and an activation feature. We review these features extensively in the following subsections.

2.1.3 Weight averaging

Figure 2.2: Weight averaging procedure.

One of the two main components of a FNN is a weight averaging procedure, which amounts to averaging the previous layer with some weight matrix to obtain the next layer. This is illustrated in Figure 2.2.

Formally, the weight averaging procedure reads:

(2.1.1)

Here we have deliberately omitted a potential bias term, as it is handled in Section 2.1.9 where we talk about batch normalization. In practice, for all our numerical simulations, the weights are learned using the backpropagation procedure [24] with the Adam optimizer method for gradient descent [20].

2.1.4 Activation function

The hidden neuron of each layer is defined as

(2.1.2)

where is an activation function – the second main ingredient of a FNN – whose non-linearity allows the prediction of arbitrary output data. In practice, it is usually taken to be either a sigmoid, a tanh, a Rectified Linear Unit [15] (ReLU), or one of its variants: Leaky ReLU, ELU… [4]. The ReLU is defined as

(2.1.3)

Its derivative is

(2.1.4)
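These definitions translate directly into NumPy, as in the sketch below; the Leaky-ReLU slope is an assumed value.

```python
import numpy as np

def relu(x):
    """ReLU of Eq. (2.1.3): elementwise max(0, x)."""
    return np.maximum(0.0, x)

def relu_prime(x):
    """Derivative of Eq. (2.1.4): 1 where x > 0, 0 elsewhere."""
    return (x > 0).astype(float)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU variant used in the simulations (slope alpha assumed)."""
    return np.where(x > 0, x, alpha * x)
```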

The choice of the activation function usually follows empirical tests, though it has been formally shown [3, 30] that Neural Networks cannot converge if the activation function is too complicated. With these building blocks in place, let us consider the different layers of the network one at a time.

2.1.5 Input layer

As explained in the core of the paper, we consider a full and a reduced input. But for all purposes here, we deal with an input layer whose size depends on the input chosen.

2.1.6 Fully connected layer

The fully connected operation is just the conjunction of the weight averaging and the activation procedure. Namely, for

(2.1.5)

and for

(2.1.6)

In the case of the last layer, the activation function is replaced by an output function (see the next section).

2.1.7 Output layer

The output of the FNN reads

(2.1.7)

where is called the output function. In the case of the Euclidean loss function (see the next section), the output function is just the identity. Nevertheless, in our case we know that the normalized speeds cannot take values greater than 1 or smaller than 0; we therefore impose this constraint on the output of our FNN and take

(2.1.8)

2.1.8 Loss function

The loss function evaluates the error made by the FNN when it tries to estimate the data to be predicted. As explained in the core of the paper, for a regression problem this is generally the mean square error (MSE)

(2.1.9)

To obtain better results, one usually regularizes the loss function. In addition to Batch Normalization (explained in the next section), we also use L1 and L2 regularization. This amounts to adding the following terms to the loss function

(2.1.10)

These new terms play a role in the update rules of the weight matrices[6].

2.1.9 Batch Normalization

As explained in the core of the paper, Batch normalization (BN) consists in jointly normalizing the mini-batch sets per data type at each input of a NN layer, except for the input of the network itself. In our case, we thus consider, for each hidden layer,

(2.1.11)

Here, the two quantities involved are respectively the mean and the standard deviation of the considered input with respect to the batch index. To make sure that the Batch Normalization operation can also represent the identity transform, we standardly add two additional parameters to the model (learned by backpropagation [6])

(2.1.12)

The presence of the additive BN coefficient is what pushed us to get rid of the bias term: it is now naturally included in batchnorm. During training, one must compute a running sum of the mean and the variance, which then serves for the evaluation of the cross-validation and test sets. Denoting the current epoch,

(2.1.13)
(2.1.14)

and what is used at test time is

(2.1.15)

so that at test time

(2.1.16)
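A minimal NumPy sketch of the training-time and test-time BN passes is given below; the exponential-moving-average update of the running statistics is one common choice and is an assumption here.

```python
import numpy as np

def batchnorm_train(h, gamma, beta, run_mean, run_var, momentum=0.9, eps=1e-5):
    """Training-time BN on a mini-batch h of shape (batch, features),
    Eqs. (2.1.11)-(2.1.12): normalize per feature, rescale with gamma and
    shift with beta, and update the running statistics (momentum assumed)."""
    mu, var = h.mean(axis=0), h.var(axis=0)
    h_hat = (h - mu) / np.sqrt(var + eps)
    run_mean = momentum * run_mean + (1 - momentum) * mu
    run_var = momentum * run_var + (1 - momentum) * var
    return gamma * h_hat + beta, run_mean, run_var

def batchnorm_test(h, gamma, beta, run_mean, run_var, eps=1e-5):
    """Test-time BN, Eq. (2.1.16): use the accumulated statistics."""
    return gamma * (h - run_mean) / np.sqrt(run_var + eps) + beta
```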

In practice when using BN, and as advocated in the original paper [18], one can get rid, without loss of precision, of the formerly most popular regularization technique before the introduction of BN: Dropout [33]. We adopt this convention in the following, as our tests experienced a loss of precision with the joint use of BN and dropout. This might be due to the peculiarity of our problem, which is a regression and not a classification task. To the best of our knowledge, the literature on NN for regression tasks is pretty scarce. Going back to the FNN structure, this induces the following change to the weight averaging formula of Section 2.1.6: for the hidden layers,

(2.1.17)

2.1.10 Architecture considered in practice

Schematically denoting a hidden fully-connected unit as (with the WA/NLA/BN order advocated in the core of the paper)

Figure 2.3: A FNN fully connected layer

and the FNN output unit as

Figure 2.4: The FNN output layer

we consider in the core of our paper the following two FNN architectures. A one hidden layer FNN depicted in Figure 2.5

Figure 2.5: FNN with one hidden layer

and a three hidden layer FNN depicted in Figure 2.6

Figure 2.6: FNN with three hidden layers

In practice one could consider many other FNN architectures: ResNet [16], Highway Nets [34], DenseNets [17]… This could be the object of future studies.

2.2 Convolutional Neural Networks

A Convolutional Neural Network (CNN) is a kind of network architecture particularly adapted to image classification, be it of digits or of animal/car/… categories. In this Section we review the novelties involved when dealing with a CNN compared to the FNN introduced in Section 2.1, and do so for our regression task at hand. The most fundamental novelties are the two building blocks of a CNN: the convolution and pooling operations. Before presenting them, let us introduce some more notations specific to the CNN architecture.

2.2.1 CNN new specific notations and definitions

In addition or in replacement to the notations introduced in Section 2.1.1, we denote in the following

  • , the number of feature maps in the ’th layer.

  • and , respectively the width and the height of the ’th feature map.

  • where , and , the ’th layer components.

  • where , and , the input variables.

  • where , and , the output variables (to be predicted).

  • where , and , the output of the CNN.

  • for , , , and , the weight matrices.

  • and , respectively, the receptive field and the stride of the convolution operation. Unless stated otherwise, these are kept the same for all the convolutions.

  • and , respectively the receptive field and the stride of the pooling operation. Unless stated otherwise, these are kept the same for all the poolings.

2.2.2 CNN architecture

The CNN architecture involves convolution layers (see Section 2.2.4), pooling layers (see Section 2.2.5), as well as input, output and fully connected layers (similar to those of a FNN, see Section 2.1.6). Here is a possible CNN architecture: an input is convolved with a first weight matrix then pooled, convolved with a second weight matrix then pooled, and then a fully connected operation occurs before the output is computed.

Figure 2.7: A typical CNN architecture (in this case LeNet[23] inspired): convolution operations are followed by pooling operations, until the size of each feature map is reduced to one. Fully connected layers can then be introduced.

The fully connected layers, the output layers and the loss function are unchanged (see Sections 2.1.6, 2.1.7 and 2.1.8). The batch normalization procedure is also used, and can easily be adapted from what has been presented in Section 2.1.9. Let us now see the new and modified layers.

2.2.3 Input layers

As explained in the core of the paper, we only consider the full input for the CNN. But for all purposes here, we deal with an input layer whose dimensions correspond to that input.

2.2.4 Convolutional layers

The convolution operation that gives its name to the CNN is the fundamental building block of this type of network. It amounts to convolving a feature map of a hidden layer input with a weight matrix to give rise to an output feature map. The weight is really a four dimensional tensor: one dimension is the number of feature maps of the convolutional input layer, another is the number of feature maps of the convolutional output layer, and the two others give the width and the height of the receptive field. The receptive field allows one to convolve a subset instead of the whole input image. It aims at searching for similar patterns in the input image, no matter where the pattern is (translational invariance). We saw in the core of our paper the problem that this induces. The width and the height of the output image are determined by the receptive field as well as by the stride: the stride is simply the number of pixels by which one slides in the vertical and/or the horizontal direction before applying the convolution operation again. The central convolution formula for a given CNN layer (involving the corresponding weight matrix) is

(2.2.1)

Here the output indices run over the width and the height of the output feature map. This implies the following relation between the input size, the output size, the receptive field and the stride:

(2.2.2)

One then computes the hidden units via the ReLU activation function introduced in Section 2.1.4

(2.2.3)
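A naive, purely illustrative NumPy implementation of the convolution of equation (2.2.1) and of the size relation (2.2.2) is sketched below; explicit loops are kept for clarity, not efficiency.

```python
import numpy as np

def conv_output_size(n_in, field, stride, padding=0):
    """Size relation (2.2.2) between input and output width/height."""
    return (n_in + 2 * padding - field) // stride + 1

def conv2d_naive(x, w, stride=1):
    """Naive convolution of Eq. (2.2.1): x has shape (F_in, H, W), the weight
    tensor w has shape (F_out, F_in, R, R); no padding for simplicity."""
    f_out, f_in, r, _ = w.shape
    h_out = conv_output_size(x.shape[1], r, stride)
    w_out = conv_output_size(x.shape[2], r, stride)
    out = np.zeros((f_out, h_out, w_out))
    for f in range(f_out):
        for i in range(h_out):
            for j in range(w_out):
                patch = x[:, i * stride:i * stride + r, j * stride:j * stride + r]
                out[f, i, j] = np.sum(patch * w[f])
    return out
```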

2.2.5 Pooling layers

The pooling operation, less and less used in current state of the art CNNs [16], is fundamentally a dimension reduction step. It amounts to taking the maximum over sub-images of the input feature map (each sub-image being characterized by the pooling receptive field and stride), in order to obtain an output feature map of reduced width and height

(2.2.4)

The max pooling procedure (which we use here instead of the other possible choice, average pooling) reads, for any position of the pooling window,

(2.2.5)

The hidden unit is then just

(2.2.6)
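This max pooling operation can be sketched in NumPy as follows (padding is ignored for simplicity).

```python
import numpy as np

def maxpool2d(x, field=2, stride=2):
    """Max pooling of Eq. (2.2.5): x has shape (F, H, W); each feature map is
    reduced by taking the maximum over each pooling window."""
    f, h, w = x.shape
    h_out = (h - field) // stride + 1
    w_out = (w - field) // stride + 1
    out = np.zeros((f, h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            window = x[:, i * stride:i * stride + field, j * stride:j * stride + field]
            out[:, i, j] = window.max(axis=(1, 2))
    return out
```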

2.2.6 Architecture considered in practice

Schematically denoting a hidden Convolution unit as

Figure 2.8: The structure of a convolution layer

we consider in the remainder of this paper the following CNN architecture: the so-called VGG CNN [31], which was the 2014 state of the art convolutional network.

Figure 2.9: The structure of the VGG CNN.

All the other details of the model are in the core of the paper.

2.3 Recurrent Neural Networks - Long Short Term Memory

In this Section, we review the third kind of Neural Network architecture used in this paper: Recurrent Neural Networks. The specificity of this kind of network is that the time dependency of the data is built into the model. We briefly present the first Recurrent Neural Network (RNN) architecture, as well as the currently most popular one: the Long Short Term Memory (LSTM) Neural Network. We use the latter in this study.

2.3.1 RNN new specific notations and definitions

In contrast to the previously discussed neural networks, where we defined

(2.3.1)

we now have hidden layers that are indexed by both a "spatial" and a "temporal" index, and the general philosophy of the RNN is (the hidden state is now usually denoted by c, for cell state; this notation, trivial for the basic RNN architecture, will make more sense when we talk about LSTM networks)

(2.3.2)

Here, the notations are

  • , the number of layers (not counting the input) in the spatial direction.

  • , the number of layers (not counting the first one) in the temporal direction (so that a RNN with a single temporal layer corresponds to a standard FNN).

  • , where , and , the ’th layer components.

  • , where and , the input variables (more on that in Section 2.3.5).

  • , where , the output variables (to be predicted).

  • , where , the output of the RNN.

  • for , , , the weight matrices in the ”spatial” direction of the RNN.

  • for , , , the weight matrices in the ”temporal” direction of the RNN.

2.3.2 RNN architecture

An example of a RNN architecture with spatial and temporal layers is depicted in Figure 2.10

Figure 2.10: RNN architecture, with data propagating both in ”space” and in ”time”. In our example, the temporal dimension is of size while the spatial one is of size .

Note that the weight matrices do not vary along the temporal direction, so that this RNN only has the weight matrices indicated in Figure 2.10.

2.3.3 RNN hidden layer

The FNN formula of Section 2.1.6 is replaced in a RNN by the following (note also the change of activation function to a tanh, a standard choice in the RNN literature [12])

(2.3.3)
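In NumPy, one step of this basic recurrent update can be sketched as follows; the tanh activation follows the text, and the bias is omitted as in the rest of the paper.

```python
import numpy as np

def rnn_step(h_prev_layer, h_prev_time, W_space, W_time):
    """One step of the basic RNN hidden layer, Eq. (2.3.3): the new hidden
    state mixes the previous spatial layer and the previous time step through
    a tanh non-linearity."""
    return np.tanh(W_space @ h_prev_layer + W_time @ h_prev_time)
```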

As this network has fallen out of favor because of vanishing gradient issues in backpropagation[XX], we now turn our attention to the LSTM RNN architecture that we use in the core of the paper.

2.3.4 LSTM hidden layer

In a Long Short Term Memory Neural Network [10], the state of a given unit is not directly determined by its immediate spatial and temporal neighbours. Instead, a cell state is updated for each hidden unit, and the output of this unit is a probe of the cell state. Several gates are introduced in the process: the input gate determines whether we allow new information to enter the cell state; the output gate determines whether we set the output hidden value to zero or really probe the current cell state; finally, the forget gate determines whether we forget or not the past cell state. All these concepts are illustrated in Figure 2.11.

Figure 2.11: LSTM hidden unit details
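A minimal NumPy sketch of one LSTM step without peepholes, following the gate description above, is given below; the packaging of the weights into dictionaries is our own convention, not the paper's notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step without peepholes: sigmoid input/forget/output gates,
    tanh cell and hidden updates. W, U, b are dictionaries keyed by
    'i', 'f', 'o', 'g' (an assumed packaging)."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])   # input gate
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])   # forget gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])   # output gate
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])   # candidate cell update
    c = f * c_prev + i * g                               # new cell state
    h = o * np.tanh(c)                                   # new hidden state (probe of c)
    return h, c
```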

Considering all the variable values to be zero at the initial time, we get the following formulas for the input, forget and output gates: