We show that jointly predicting the future steering angle and vehicle speed can be improved through segmentation maps and data augmentation techniques. Motivated by the use of semantic segmentation models for self-driving vehicles [9, 7], we concatenate the segmentation maps with the images as the input, instead of using the maps as an additional learning objective. We augment the dataset by applying image transformations such as mirroring, brightness adjustment, and geometric transformations. We ensemble three neural network architectures to achieve our best performance. To appreciate the dataset and task, Figure 1 shows a sample of the training images and Figure 2 shows a sample of the test images.
2 Related Work
CNNs have been trained to predict the steering angle given only the front camera view. As humans have a wider perceptual field than the front camera, 360-degree-view datasets have been collected with additional geo-locations [5, 6]. Neural network models have been trained end-to-end on 360-degree views, which is especially useful for navigating cities and crossing intersections. As noted, map data improves steering angle prediction. Long-term dependencies can also be captured by processing the visual input and adding steering wheel trajectories into memory networks. A classical method is to extract hidden features using pre-trained CNNs and process them through a sequence model, namely an LSTM. Predicting segmentation masks can be added to the loss, which improves overall performance. Instead of using segmentation masks in the loss, in this work we concatenate the segmentation maps to the input.
We implemented a number of preprocessing steps that made our models faster to train and more robust to variation in the input images. In particular, we used downsampling and various forms of data augmentation to alter the raw image files in the original dataset before feeding them into our models. Furthermore, we describe the three model architectures that we found to give the lowest MSE on speed and angle, which were eventually combined to provide our best submission.
One of the key innovations we focus on is using segmentation masks as inputs in addition to the raw images. Our hypothesis is that the segmentation mask removes much of the unnecessary information in the image, giving the model a clearer view of the road as well as indicators of speed. The other idea we tried is to avoid pre-trained CNNs and instead train from scratch. Stacking the images into one input for a CNN trains faster than using a recurrent network. Other methods feed single images into a recurrent network, such as an LSTM. We found that the CNN with an LSTM could not be trained efficiently.
These models in combination with the data preprocessing and augmentation complemented each other to provide the second best overall performance in the competition.
Instead of using the full dataset at 10 frames per second, we down-sampled in time by 1:10 to one frame per second. We then down-sampled spatially by 1:12 in each dimension, from 1920×1080 to 160×90, for efficiency. The images are also normalized with the mean and standard deviation of the training set.
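The preprocessing above can be sketched as follows. This is a minimal NumPy version in which strided slicing stands in for whatever resampling/interpolation was actually used, and the mean/std values shown are the common ImageNet statistics, used here only as placeholders:

```python
import numpy as np

def preprocess(frames, mean, std):
    """Down-sample a clip in time and space, then normalize.

    frames: uint8 array of shape (T, 1080, 1920, 3) at 10 fps.
    mean, std: per-channel statistics computed on the training set.
    The 1:10 temporal and 1:12 spatial factors follow the text;
    strided slicing is a stand-in for proper interpolation.
    """
    clip = frames[::10]               # 10 fps -> 1 fps
    clip = clip[:, ::12, ::12, :]     # 1080x1920 -> 90x160
    clip = clip.astype(np.float32) / 255.0
    return (clip - mean) / std        # per-channel normalization

frames = np.zeros((10, 1080, 1920, 3), dtype=np.uint8)
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # placeholder stats
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
out = preprocess(frames, mean, std)
# out.shape == (1, 90, 160, 3)
```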
Using the NVIDIA semantic segmentation model pre-trained on the Cityscapes dataset, we created segmentation masks for each of the images in the dataset. The original pre-trained model contained 34 classes, but as we only wanted to focus on objects that might influence the steering angle, we kept only 19 classes, including road, car, parking, and wall.
The data underwent several types of data augmentation, randomly applied to 80% of the training data:
-  Randomly flip the images horizontally with probability 0.5 (the steering angle is multiplied by -1 to offset the horizontal flip)
-  Randomly change the brightness by a factor between 0.2 and 0.75 with probability 0.1
-  Randomly shift the image left/right/up/down, adjusting the angle for left/right shifts, with probability 0.25
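The three augmentations above can be sketched in NumPy as follows. The shift magnitude (up to 10 pixels) and the per-pixel angle correction are illustrative assumptions, and `np.roll` stands in for a proper image translation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, angle):
    """Apply the three random augmentations from the text to one image.

    img: float array of shape (H, W, 3) with values in [0, 1].
    angle: steering angle label for this frame.
    """
    # Horizontal flip with p=0.5; negate the angle to offset the flip.
    if rng.random() < 0.5:
        img = img[:, ::-1, :]
        angle = -angle
    # Brightness scaling by a factor in [0.2, 0.75] with p=0.1.
    if rng.random() < 0.1:
        img = np.clip(img * rng.uniform(0.2, 0.75), 0.0, 1.0)
    # Shift with p=0.25; adjust the angle for the horizontal component.
    if rng.random() < 0.25:
        dx = int(rng.integers(-10, 11))          # illustrative range
        dy = int(rng.integers(-10, 11))
        img = np.roll(img, (dy, dx), axis=(0, 1))
        angle = angle + 0.002 * dx               # hypothetical correction
    return img, angle

img, angle = augment(np.full((90, 160, 3), 0.5, dtype=np.float32), 1.0)
```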
The final model consisted of an ensemble of three models: A, B, and C. Model A, A_CNN_single_image, takes as input a single image and its corresponding segmentation mask. The input is passed through a DenseNet121 architecture, then into two fully connected towers, one predicting the speed and one the steering angle, as visualized in Figure 3. The towers contain three dense layers of size 200, 50 and 10. Between each pair of dense layers we apply batch normalization followed by a ReLU non-linearity. The final output is a real-valued number, the predicted speed or steering angle, denormalized using the mean and standard deviation from the training set.
The architecture of model B, B_CNN_stacked, is shown in Figure 4. It takes as input a full sequence of 10 images with their corresponding segmentation masks, which are concatenated together. Similar to model A, the input is passed through DenseNet121 and into the speed and steering angle regressor towers, which are the same as in model A.
Figure 5 illustrates model C, C_Bi_GRU. It takes as input a sequence of 10 images and their corresponding segmentation masks, similar to model B, except that they are not concatenated together. Each image in the input sequence is passed individually through a pre-trained ResNet34, a pre-trained DenseNet201, and model A_CNN_single_image. The former two models are pre-trained on ImageNet, not on a task that directly involves vehicle images. The resulting outputs are concatenated and passed through an intermediate layer containing two dense layers of size 512 and 128 (with dropout). The output is then passed into a bi-directional GRU cell. This occurs for every input image/mask pair in the sequence, and the output of the final GRU cell is concatenated with the input to the previous fully connected layer, which is also passed through a dense block. This is then fed into the two towers for speed and steering angle prediction, where the sizes of the layers are 256, 128 and 32.
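The temporal part of model C can be sketched as below. For brevity the sketch assumes per-frame features have already been extracted and concatenated by the three backbones (the 1024-dimensional feature size is illustrative), and it omits the skip concatenation of the final GRU output with the dense-layer input described above:

```python
import torch
import torch.nn as nn

class BiGRUHead(nn.Module):
    """Simplified sketch of C_Bi_GRU's head: per-frame features are
    reduced by dense layers of size 512 and 128 (with dropout), run
    through a bidirectional GRU, and the final GRU output feeds the
    two regressor towers of sizes 256, 128 and 32."""
    def __init__(self, feature_dim=1024, hidden=128):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Linear(feature_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 128), nn.ReLU(),
        )
        self.gru = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
        def tower(d):
            return nn.Sequential(
                nn.Linear(d, 256), nn.ReLU(),
                nn.Linear(256, 128), nn.ReLU(),
                nn.Linear(128, 32), nn.ReLU(),
                nn.Linear(32, 1))
        self.speed_tower = tower(2 * hidden)   # 2x for bidirectionality
        self.angle_tower = tower(2 * hidden)

    def forward(self, feats):          # feats: (B, T, feature_dim)
        h = self.reduce(feats)         # (B, T, 128)
        out, _ = self.gru(h)           # (B, T, 2 * hidden)
        last = out[:, -1]              # output of the final GRU step
        return self.speed_tower(last), self.angle_tower(last)

head = BiGRUHead()
with torch.no_grad():
    speed, angle = head(torch.randn(2, 10, 1024))
```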
We used a similar method to train all the models, with slight changes in learning rate and learning rate decay, which were tuned over multiple runs. The key insights were that our models did not require many epochs to reach their lowest validation MSE, and that a simple average of the results of two models outperforms each one individually.
Hyper-parameters of network training
In each training run of the model, we save only the best model for speed and the best model for steering angle, which are then separately stored and used for predicting values on the test set.
Model A used the Adam optimizer with an initial learning rate of 0.0003. We implemented learning rate decay to 0.0001, 0.00005, and 0.00003 after the first 5, 15, and 20 epochs, respectively. Overall the model was trained for 90 epochs, and we used a batch size of 13 for training, validation and testing. The loss criterion used for both speed and steering angle was MSE loss, and the overall model loss was defined as the sum of the speed loss and the steering angle loss.
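The stepped learning-rate schedule and combined loss can be written out as follows. Whether each change takes effect at the start or the end of the stated epoch is an assumption (0-indexed epochs are used here):

```python
def learning_rate(epoch):
    """Stepped schedule from the text: 0.0003 initially, then
    0.0001, 0.00005 and 0.00003 after epochs 5, 15 and 20."""
    if epoch < 5:
        return 0.0003
    if epoch < 15:
        return 0.0001
    if epoch < 20:
        return 0.00005
    return 0.00003

def total_loss(mse_speed, mse_angle):
    """Overall model loss: sum of the speed and angle MSE losses."""
    return mse_speed + mse_angle
```

In a PyTorch training loop this schedule would typically be applied by updating each parameter group's `lr` at the start of every epoch, or via a built-in multi-step scheduler.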
Model B was trained in an identical way to the previous model, but the lowest MSE loss was achieved after the 14th epoch.
Model C was trained using the Adam optimizer with an initial learning rate of 0.003, which was halved after epochs 20, 30 and 40. We use the same combined loss criterion as in models A and B, summing the MSE for speed and steering angle. The model was trained for a total of 50 epochs, but the best model was achieved after only the 10th epoch.
The final submission averaged the predictions of two of the models used. Doing this caused the MSE for speed to drop from 6.115 to 5.312, and the MSE for steering to drop from 925 to 901, putting us in second place in the competition.
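The ensembling step is a simple element-wise average of two models' predictions; which two models were averaged is stated in the text only as "two of the models used":

```python
import numpy as np

def ensemble(pred_a, pred_b):
    """Average two models' per-frame predictions element-wise."""
    return (np.asarray(pred_a) + np.asarray(pred_b)) / 2.0

# e.g. averaging speed predictions from two models on two frames
avg = ensemble([10.0, 12.0], [14.0, 10.0])
# avg -> array([12., 11.])
```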
Our best performing single model for speed was model C_Bi_GRU, with the lowest MSE on the test set at 6.115, while the best performing single model for steering angle was model B_CNN_stacked, with an MSE loss of 925.926. The results are summarized in Table 1. Model B performed best for steering as a result of the combination of data augmentation, which included horizontal flipping of images in a sequence, and the full view of the sequence at the input of the model.
Table 1: Test-set MSE for speed and steering angle per model (columns: Model, MSE Speed, MSE Angle).
In this work, we showed several improvements for predicting steering angle and speed given a sequence of images. First, concatenating segmentation masks with the input images adds the side information captured by the pre-trained segmentation network. Second, data augmentation techniques enhanced the performance. Third, stacking the image and segmentation mask sequences into a single input improves the steering predictions significantly. Finally, averaging multiple models yields our best result.
We would like to thank the Columbia University students of the Fall 2019 Deep Learning class for their participation in the challenge. Specifically, we would like to thank Xiren Zhou, Fei Zheng, Xiaoxi Zhao, Yiyang Zeng, Albert Song, Kevin Wong, and Jiali Sun for their participation.
-  Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
-  Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
-  Tharindu Fernando, Simon Denman, Sridha Sridharan, and Clinton Fookes. Going deeper: Autonomous steering with neural memory networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 214–221, 2017.
-  Eddie Forson. Teaching Cars To Drive Using Deep Learning — Steering Angle Prediction. 2017.
-  Simon Hecker, Dengxin Dai, and Luc Van Gool. End-to-end learning of driving models with surround-view cameras and route planners. In Proceedings of the European Conference on Computer Vision (ECCV), pages 435–453, 2018.
-  Simon Hecker, Dengxin Dai, and Luc Van Gool. Learning accurate, comfortable and human-like driving. arXiv preprint arXiv:1903.10995, 2019.
-  Yuenan Hou, Zheng Ma, Chunxiao Liu, and Chen Change Loy. Learning to steer by mimicking features from heterogeneous auxiliary networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8433–8440, 2019.
-  Antonia Lovjer, Minsu Yeom, Manik Goyal, and Iddo Drori. GitHub Repository for Winning the ICCV 2019 Learning to Drive Challenge. https://github.com/AntoniaLovjer/learntodrive, 2019.
-  Huazhe Xu, Yang Gao, Fisher Yu, and Trevor Darrell. End-to-end learning of driving models from large-scale video datasets. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2174–2182, 2017.
-  Yi Zhu, Karan Sapra, Fitsum A. Reda, Kevin J. Shih, Shawn Newsam, Andrew Tao, and Bryan Catanzaro. Improving semantic segmentation via video propagation and label relaxation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.