
Key Ingredients of Self-Driving Cars

06/07/2019
by Rui Fan, et al.

Over the past decade, many research articles have been published in the area of autonomous driving. However, most of them focus only on a specific technological area, such as visual environment perception, vehicle control, etc. Furthermore, due to the fast advances in self-driving car technology, such articles quickly become obsolete. In this paper, we give a brief but comprehensive overview of the key ingredients of autonomous cars (ACs), including driving automation levels, AC sensors, AC software, open source datasets, industry leaders, AC applications and existing challenges.


I Introduction

Over the past decade, with a number of autonomous system technology breakthroughs being witnessed around the world, the race to commercialize Autonomous Cars (ACs) has become fiercer than ever [1]. For example, in 2016, Waymo unveiled its autonomous taxi service in Arizona, which attracted widespread publicity [2]. Furthermore, Waymo has spent around nine years developing and improving its Automated Driving Systems (ADSs) using various advanced engineering technologies, e.g., machine learning and computer vision [2]. These cutting-edge technologies greatly assist its driver-less vehicles in understanding the world, making the right decisions, and taking the right actions at the right time [2].

Owing to the development of autonomous driving, many scientific articles have been published over the past decade, and their citation counts (source: https://www.webofknowledge.com) are increasing exponentially, as shown in Fig. 1. We can clearly see that the numbers of both publications and citations per year have been increasing steadily since 2010 and reached a new high last year. However, most autonomous driving overview articles focus only on a specific technological area, such as Advanced Driver Assistance Systems (ADAS) [3], vehicle control [4], visual environment perception [5], etc. Therefore, there is a strong motivation to provide readers with a comprehensive literature review on autonomous driving, including systems and algorithms, open source datasets, industry leaders, autonomous car applications and existing challenges.

II AC Systems

ADSs enable ACs to operate in a real-world environment without intervention by Human Drivers (HDs). Each ADS consists of two main components: hardware (car sensors and hardware controllers, i.e., throttle, brake, steering wheel, etc.) and software (functional groups).

The software has been modeled in several different architectures, such as Stanley (Grand Challenge) [6], Junior (Urban Challenge) [7], Boss (Urban Challenge) [8] and Tongji AC [9]. The Stanley [6] software architecture comprises four modules: sensor interface, perception, planning and control, and user interface. The Junior [7] software architecture has five parts: sensor interface, perception, navigation (planning and control), drive-by-wire interface (user interface and vehicle interface) and global services. Boss [8] uses a three-layer architecture: mission, behavioral and motion planning. Tongji's ADS [9] partitions the software architecture into perception, decision and planning, control, and chassis. In this paper, we divide the software architecture into five modules: perception, localization and mapping, prediction, planning and control, as shown in Fig. 2, which is very similar to Tongji's ADS software architecture [9]. The remainder of this section introduces the driving automation levels, and then presents the AC sensors and hardware controllers.
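To make the data flow among these five modules concrete, the following is a minimal Python sketch of how one processing step could be chained through them. The class name ADSPipeline and the method names (detect, update, predict, plan, track) are illustrative placeholders only; they do not correspond to any of the cited systems.

```python
# Minimal sketch of the five-module ADS software pipeline described above.
# All class and method names are illustrative placeholders, not a real ADS.

class ADSPipeline:
    def __init__(self, perception, localization, prediction, planner, controller):
        self.perception = perception      # raw sensor data -> environment model
        self.localization = localization  # sensor data + perception -> pose + map
        self.prediction = prediction      # traffic agents -> future trajectories
        self.planner = planner            # route/maneuver/trajectory planning
        self.controller = controller      # trajectory -> throttle/brake/steering

    def step(self, sensor_data):
        objects = self.perception.detect(sensor_data)
        pose, world_map = self.localization.update(sensor_data, objects)
        forecasts = self.prediction.predict(objects, world_map)
        trajectory = self.planner.plan(pose, world_map, forecasts)
        return self.controller.track(trajectory, pose)
```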

Fig. 1: Numbers of publications and citations in autonomous driving research over the past decade.

II-A Driving Automation Levels

According to the Society of Automotive Engineers (SAE International), driving automation can be categorized into six levels, as shown in Table I [10]. The HD is responsible for Driving Environment Monitoring (DEM) in level 0-2 ADSs, while this responsibility shifts to the system in level 3-5 ADSs. From level 4, the HD is no longer responsible for the Dynamic Driving Task Fallback (DDTF), and at level 5 the ADS never needs to request intervention from the HD. The state-of-the-art ADSs are mainly at levels 2 and 3, and a long time may still be needed to achieve higher automation levels [11].

Level Name Driver DEM DDTF
0 No automation HD HD HD
1 Driver assistance HD & system HD HD
2 Partial automation System HD HD
3 Conditional automation System System HD
4 High automation System System System
5 Full automation System System System
TABLE I: SAE Levels of Driving Automation
Fig. 2: Software architecture of our proposed ADS.

II-B AC Sensors

The sensors mounted on ACs are generally used to perceive the environment. Each sensor is chosen as a trade-off between sampling rate, Field of View (FoV), accuracy, range, cost and overall system complexity [12]. The most commonly used AC sensors are passive ones (e.g., cameras), active ones (e.g., Lidar, Radar and ultrasonic transceivers) and other sensor types, e.g., Global Positioning System (GPS), Inertial Measurement Unit (IMU) and wheel encoders [12].

Cameras capture 2D images by collecting the light reflected from objects in the 3D environment. The image quality is usually subject to the environmental conditions, e.g., weather and illumination. Computer vision and machine learning algorithms are generally used to extract useful information from the captured images/videos [13]. For example, images captured from different viewpoints, i.e., using a single movable camera or multiple synchronized cameras, can be used to acquire 3D world geometry information [14].
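As an illustration of how multi-view imaging yields 3D information, the sketch below applies the standard stereo relation depth = focal length x baseline / disparity. The focal length, baseline and disparity values are example numbers, not taken from any real camera rig.

```python
import numpy as np

# Recover depth from a synchronized stereo pair using
# depth = focal_length * baseline / disparity (all values are examples).
focal_px = 700.0        # focal length in pixels
baseline_m = 0.54       # distance between the two cameras, in metres

# per-pixel disparity (horizontal shift between left and right image), in pixels
disparity = np.array([[35.0, 28.0],
                      [14.0,  7.0]])

depth_m = focal_px * baseline_m / np.clip(disparity, 1e-6, None)
print(depth_m)   # larger disparity -> closer object
```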

Lidar illuminates a target with pulsed laser light and measures the distance to the target by analyzing the reflected pulses [15]. Due to its high 3D geometry accuracy, Lidar is generally used to create high-definition world maps [15]. Lidars are usually mounted on different parts of the AC, e.g., the roof, sides and front, for different purposes [16, 17]. Radars can measure the range and radial velocity of an object accurately, by transmitting an electromagnetic wave and analyzing the reflected one [18]. They are particularly good at detecting metallic objects, but can also detect non-metallic objects, such as pedestrians and trees, at short distances [12]. Radars have been established in the automotive industry for many years to enable ADAS features, such as autonomous emergency braking, adaptive cruise control, etc. [12]. In a similar way to Radar, ultrasonic transducers calculate the distance to an object by measuring the time between transmitting an ultrasonic signal and receiving its echo [19]. Ultrasonic transducers are commonly utilized for AC localization and navigation [20].
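All three active sensors share the same time-of-flight principle, which the short sketch below illustrates; the echo times are made-up example values.

```python
# Time-of-flight principle shared by Lidar, Radar and ultrasonic transducers:
# distance = propagation_speed * round_trip_time / 2.
SPEED_OF_LIGHT = 299_792_458.0   # m/s, for Lidar/Radar pulses
SPEED_OF_SOUND = 343.0           # m/s in air (~20 C), for ultrasonic pulses

def range_from_echo(round_trip_s: float, speed: float) -> float:
    """Distance to the reflecting object from a round-trip echo time."""
    return speed * round_trip_s / 2.0

print(range_from_echo(2.0e-7, SPEED_OF_LIGHT))   # ~30 m Lidar return
print(range_from_echo(0.01, SPEED_OF_SOUND))     # ~1.7 m ultrasonic return
```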

GPS, a satellite-based radio-navigation system owned by the US government, provides time and geolocation information for ACs. However, GPS signals are very weak and can easily be blocked by obstacles, such as buildings and mountains, resulting in GPS-denied regions, e.g., the so-called urban canyons [21]. Therefore, IMUs are commonly integrated with GPS devices to ensure AC localization in such places [22]. Wheel encoders are also widely used to determine the AC position, speed and direction by measuring electronic signals related to wheel motion [23].
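As a simple illustration of wheel-encoder-based dead reckoning, the sketch below integrates encoder ticks into a pose estimate using a differential-drive approximation. The wheel radius, tick resolution, track width and tick counts are example values, and a real car would use a more accurate (e.g., Ackermann) motion model.

```python
import math

# Dead reckoning from wheel encoder ticks (differential-drive approximation).
WHEEL_RADIUS = 0.3      # m (example)
TICKS_PER_REV = 2048    # encoder resolution (example)
TRACK_WIDTH = 1.6       # m, distance between left and right wheels (example)

def update_pose(x, y, heading, left_ticks, right_ticks):
    """Integrate one encoder reading into the vehicle pose (x, y, heading)."""
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_centre = (d_left + d_right) / 2.0
    d_heading = (d_right - d_left) / TRACK_WIDTH
    x += d_centre * math.cos(heading + d_heading / 2.0)
    y += d_centre * math.sin(heading + d_heading / 2.0)
    return x, y, heading + d_heading

pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, left_ticks=512, right_ticks=520)
print(pose)
```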

II-C Hardware Controllers

The AC hardware controllers include the steering torque motor, electronic brake booster, electronic throttle, gear shifter and parking brake. The vehicle states, such as wheel speed and steering angle, are sensed automatically and sent to the computer system via a Controller Area Network (CAN) bus. This enables either the HD or the ADS to control the throttle, brake and steering wheel [24].
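The sketch below illustrates the general idea of packing a control command into a fixed-size CAN frame payload; the arbitration ID, field layout and scaling factors are hypothetical, since production vehicles use proprietary CAN message definitions.

```python
import struct

# Hypothetical encoding of a control command into an 8-byte CAN frame payload.
THROTTLE_CMD_ID = 0x2A0   # hypothetical arbitration ID

def encode_command(throttle_pct: float, brake_pct: float, steer_deg: float) -> bytes:
    """Pack throttle/brake (0-100 %) and steering angle (deg) into 8 bytes."""
    return struct.pack(
        "<HHhxx",
        int(throttle_pct * 100),   # 0.01 % resolution
        int(brake_pct * 100),      # 0.01 % resolution
        int(steer_deg * 10),       # 0.1 deg resolution, signed
    )

payload = encode_command(throttle_pct=12.5, brake_pct=0.0, steer_deg=-3.2)
print(hex(THROTTLE_CMD_ID), payload.hex())
```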

III AC Software

III-A Perception

The perception module analyzes the raw sensor data and outputs an environment understanding to be used by the ACs [25]. This process is similar to human visual cognition. The perception module typically includes object (free space, lane, vehicle, pedestrian, road damage, etc.) detection and tracking, 3D world reconstruction (using structure from motion, stereo vision, etc.), among others [26, 27]. The state-of-the-art perception technologies can be broken into two categories: computer vision-based and machine learning-based ones [28]. The former generally addresses visual perception problems by formulating them with explicit projective geometry models and finding the best solution using optimization approaches. For example, in [29], the horizontal and vertical coordinates of multiple vanishing points were modeled using a parabola and a quartic polynomial, respectively; the lanes were then detected using these two polynomial functions. Machine learning-based technologies learn the best solution to a given perception problem by employing data-driven classification and/or regression models, such as Convolutional Neural Networks (CNNs) [30]. For instance, some deep CNNs, e.g., SegNet [31] and U-Net [32], have achieved impressive performance in semantic image segmentation and object classification. Such CNNs can also be easily adapted to other similar perception tasks using transfer learning (TL) [25]. Visual world perception can be complemented by other sensors, e.g., Lidars or Radars, for obstacle detection/localization and for 3D world modeling. Multi-sensor information fusion for world perception can produce superior world understanding results.
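As a toy illustration of the model-fitting style of computer vision-based perception, the sketch below fits a parabola to candidate lane-marking pixels, in the spirit of the polynomial lane models mentioned above; the pixel coordinates are synthetic example data, not the output of a real detector.

```python
import numpy as np

# Fit a parabola x = a*y^2 + b*y + c to candidate lane-marking pixels
# (synthetic example coordinates: image rows and columns).
lane_y = np.array([480, 440, 400, 360, 320, 280], dtype=float)   # image rows
lane_x = np.array([610, 582, 558, 538, 522, 510], dtype=float)   # image columns

coeffs = np.polyfit(lane_y, lane_x, deg=2)   # [a, b, c]
lane_model = np.poly1d(coeffs)

# Evaluate the fitted lane boundary at arbitrary image rows.
query_rows = np.array([470, 350, 300], dtype=float)
print(lane_model(query_rows))
```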

III-B Localization and Mapping

Using sensor data and perception output, the localization and mapping module can not only estimate the AC location, but also build and update a 3D world map [25]. This topic has been very popular since the concept of Simultaneous Localization and Mapping (SLAM) was introduced in 1986 [33]. The state-of-the-art SLAM systems are generally classified as filter-based [34] or optimization-based [35]. The filter-based SLAM systems are derived from Bayesian filtering [35]. They iteratively estimate the AC pose and update the 3D environmental map by incrementally integrating the sensor data. The most commonly used filters are the Extended Kalman Filter (EKF) [36], Unscented Kalman Filter (UKF) [37], Information Filter (IF) [38] and Particle Filter (PF) [39]. On the other hand, the optimization-based SLAM approaches first identify the problem constraints by finding correspondences between new observations and the map. Then, they compute and refine the previous AC poses and update the 3D environmental map. The optimization-based SLAM approaches can be divided into two main branches: Bundle Adjustment (BA) and graph SLAM [35]. The former jointly optimizes the 3D world map and the camera poses by minimizing an error function using optimization techniques, such as the Gauss-Newton method and Gradient Descent [40]. The latter models the localization problem as a graph representation problem and solves it by minimizing an error function with respect to the different vehicle poses [41].
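For intuition about the filter-based family, the following is a minimal predict/update cycle of a (linear) Kalman filter on a 2D position. A real EKF-SLAM system would additionally linearize nonlinear motion/observation models and keep landmark positions in the state vector; all noise covariances here are assumed example values.

```python
import numpy as np

# Minimal predict/update cycle behind filter-based localization.
F = np.eye(2)                 # motion model: position + control displacement
H = np.eye(2)                 # observation model: position observed directly
Q = np.eye(2) * 0.05          # motion noise covariance (assumed)
R = np.eye(2) * 0.20          # measurement noise covariance (assumed)

x = np.zeros(2)               # state estimate
P = np.eye(2)                 # state covariance

def predict(x, P, u):
    x = F @ x + u             # apply odometry/control displacement u
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = predict(x, P, u=np.array([1.0, 0.0]))
x, P = update(x, P, z=np.array([0.9, 0.1]))
print(x, np.diag(P))
```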

III-C Prediction

The prediction module analyzes the motion patterns of other traffic agents and predicts their future trajectories [42], which enables the AC to make appropriate navigation decisions. Current prediction approaches can be grouped into two main categories: model-based and data-driven [43]. The former computes the future motion by propagating the kinematic state (position, speed and acceleration) over time, based on the underlying physical system kinematics and dynamics [43]. For example, the Mercedes-Benz motion prediction component employs map information as a constraint to compute the next AC position [44]. A Kalman filter [45] works well for short-term predictions, but its performance degrades over long-term horizons, as it ignores the surrounding context, e.g., roads and traffic rules [46]. Furthermore, a pedestrian motion prediction model can be formed based on attractive and repulsive forces [47]. With the recent advances in Artificial Intelligence (AI) and High-Performance Computing (HPC), many data-driven techniques, e.g., Hidden Markov Models (HMMs) [48], Bayesian Networks (BNs) [49] and Gaussian Process (GP) regression, have been utilized to predict AC states. In recent years, researchers have also modeled the environmental context using Inverse Reinforcement Learning (IRL) [50]. For example, an inverse optimal control method was employed in [51] to predict pedestrian paths.
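To make the model-based category concrete, the sketch below propagates an agent's kinematic state under a constant-velocity assumption, the kind of short-horizon prediction that simple filters handle well; the state values and horizon are example numbers.

```python
import numpy as np

# Model-based (constant-velocity) short-term motion prediction:
# propagate an agent's kinematic state [x, y, vx, vy] forward in time.
def predict_trajectory(state, dt=0.1, horizon_s=3.0):
    x, y, vx, vy = state
    steps = int(horizon_s / dt)
    return np.array([[x + vx * dt * k, y + vy * dt * k]
                     for k in range(1, steps + 1)])

agent_state = (5.0, -2.0, 8.0, 0.5)      # position (m) and velocity (m/s), example
future_xy = predict_trajectory(agent_state)
print(future_xy[:3])                     # positions 0.1 s, 0.2 s and 0.3 s ahead
```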

III-D Planning

The planning module determines possible safe AC navigation routes based on perception, localization and mapping, as well as prediction information [52]. The planning tasks can mainly be classified into path, maneuver and trajectory planning [53]. A path is a list of geometric waypoints that the AC should follow in order to reach its destination without colliding with obstacles [54]. The most commonly used path planning techniques include Dijkstra [55], dynamic programming [56], A* [57], state lattices [58], etc. Maneuver planning is a high-level AC motion characterization process, because it also takes traffic rules and the states of other vehicles into consideration [54]. The trajectory is generally represented by a sequence of AC states. After finding the best path and maneuver, a trajectory satisfying the motion model and state constraints must be generated, as this ensures traffic safety and comfort.
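As an illustration of one of the path planners listed above, the sketch below runs A* on a toy occupancy grid; the grid and start/goal cells are example values only.

```python
import heapq

# Grid-based path planning with A* on a toy occupancy grid (0 = free, 1 = occupied).
def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                       # already expanded with a better cost
        came_from[cell] = parent
        if cell == goal:                   # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dr, cell[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                if g + 1 < g_cost.get(nb, float("inf")):
                    g_cost[nb] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nb), g + 1, nb, cell))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, start=(0, 0), goal=(2, 0)))
```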

III-E Control

The control module sends appropriate throttle, braking and steering torque commands, based on the planned trajectory and the estimated vehicle states [59]. It enables the AC to follow the planned trajectory as closely as possible. The controller parameters can be estimated by minimizing an error function (deviation) between the ideal and observed states. The most prevalently used approaches to minimize such an error function are Proportional-Integral-Derivative (PID) control [60], Linear-Quadratic Regulator (LQR) control [61] and Model Predictive Control (MPC) [62]. A PID controller is a control-loop feedback mechanism, which employs proportional, integral and derivative terms to minimize the error function [60]. An LQR controller minimizes the error function when the system dynamics are represented by a set of linear differential equations and the cost is described by a quadratic function [61]. MPC is an advanced process control technique which relies on a dynamic process model [62]. These three controllers have their own benefits and drawbacks, and an AC control module generally employs a mixture of them. For example, the Junior AC [63] employs MPC and PID for some low-level feedback control tasks, e.g., applying torque to achieve a desired wheel angle. Baidu Apollo employs a mixture of all three controllers: PID is used for feed-forward control, LQR is used for wheel angle control, and MPC is used to optimize the PID and LQR controller parameters [64].
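For concreteness, the following is a minimal discrete PID controller of the kind used for low-level tracking tasks (e.g., holding a target speed); the gains are arbitrary example values, not tuned for any real vehicle.

```python
# Minimal discrete PID controller for a low-level tracking task.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

speed_pid = PID(kp=0.8, ki=0.1, kd=0.05)     # example gains
target, current = 20.0, 15.0                 # m/s
throttle_cmd = speed_pid.step(target - current, dt=0.05)
print(throttle_cmd)
```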

IV Open Source Datasets

Over the past decade, many open source datasets have been published to support autonomous driving research. In this paper, we only enumerate the most cited ones. Cityscapes [65] is a large-scale dataset which can be used for both pixel-level and instance-level semantic image segmentation. ApolloScape [64] can be used for various AC perception tasks, such as scene parsing, car instance understanding, lane segmentation, self-localization, trajectory estimation, as well as object detection and tracking. Furthermore, KITTI [66] offers visual datasets for stereo and flow estimation, object detection and tracking, road segmentation, odometry estimation and semantic image segmentation. 6D-Vision [67] uses a stereo camera to perceive the 3D environment and offers datasets for stereo, optical flow and semantic image segmentation.

V Industry Leaders

Recently, investors have started to throw their money at the possible winners of the race to commercialize ACs [68]. Tesla's valuation has been soaring since 2016, leading underwriters to speculate that the company will spawn a self-driving fleet within a couple of years [68]. In addition, GM's shares have risen by 20 percent since their plan to build driver-less vehicles was reported in 2017 [68]. Waymo had tested its self-driving cars over a distance of eight million miles in the US by July 2018 [69]. Their Chrysler Pacifica mini-vans can now navigate highways in San Francisco at full speed [68]. GM and Waymo had the fewest accidents last year: GM had 22 collisions over 212 kilometers, while Waymo had only three collisions over more than 563 kilometers [69]. In addition to the industry giants, world-class universities have also accelerated the development of autonomous driving. These universities combine education with industrial production and scientific research, which allows them to contribute more effectively to enterprises, the economy and society.

VI AC Applications

Autonomous driving technology can be implemented in any type of vehicle, such as taxis, coaches, tour buses, delivery vans, etc. These vehicles can not only relieve humans from labor-intensive and tedious work, but also ensure their safety. For example, road quality assessment vehicles equipped with autonomous driving technology can repair detected road damage while navigating across the city [13, 70, 71, 72]. Furthermore, public transport will become more efficient and secure, as coaches and taxis will be able to communicate with each other intelligently.

VII Existing Challenges

Although autonomous driving technology has developed rapidly over the past decade, many challenges remain. For example, perception modules do not perform well in poor weather and/or illumination conditions, or in complex urban environments [73]. In addition, most perception methods are computationally intensive and cannot run in real time on embedded, resource-limited hardware. Furthermore, the use of current SLAM approaches is still limited in large-scale experiments, due to their long-term instability [35]. Another important issue is how to fuse AC sensor data to create a more accurate semantic 3D world model in a fast and cheap way. Moreover, "when can people truly accept autonomous driving and autonomous cars?" is still a good topic for discussion and poses serious ethical issues.

VIII Conclusion

This paper presents a brief but comprehensive overview of the key ingredients of autonomous cars. We introduced the six driving automation levels, and then detailed the sensors used by autonomous cars for data collection and the hardware controllers. Furthermore, we briefly discussed each software module of the ADS, i.e., perception, localization and mapping, prediction, planning, and control. The open source datasets, such as KITTI, ApolloScape and 6D-Vision, were then introduced. Finally, we discussed the current autonomous driving industry leaders, the possible applications of autonomous driving and the existing challenges in this research area.

References

  • [1] J. A. Brink, R. L. Arenson, T. M. Grist, J. S. Lewin, and D. Enzmann, “Bits and bytes: the future of radiology lies in informatics and information technology,” European radiology, vol. 27, no. 9, pp. 3647–3651, 2017.
  • [2] “Waymo safety report: On the road to fully self-driving,” https://storage.googleapis.com/sdc-prod/v1/safety-report/SafetyReport2018.pdf, accessed: 2019-06-07.
  • [3] R. Okuda, Y. Kajiwara, and K. Terashima, “A survey of technical trend of adas and autonomous driving,” in Proceedings of Technical Program - 2014 International Symposium on VLSI Technology, Systems and Application (VLSI-TSA), April 2014, pp. 1–4.
  • [4] D. González, J. Pérez, V. Milanés, and F. Nashashibi, “A review of motion planning techniques for automated vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1135–1145, April 2016.
  • [5] A. Mukhtar, L. Xia, and T. B. Tang, “Vehicle detection techniques for collision avoidance systems: A review,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 5, pp. 2318–2338, Oct 2015.
  • [6] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann et al., “Stanley: The robot that won the darpa grand challenge,” Journal of field Robotics, vol. 23, no. 9, pp. 661–692, 2006.
  • [7] M. Montemerlo, J. Becker, S. Bhat, H. Dahlkamp, D. Dolgov, S. Ettinger, D. Haehnel, T. Hilden, G. Hoffmann, B. Huhnke et al., “Junior: The stanford entry in the urban challenge,” Journal of field Robotics, vol. 25, no. 9, pp. 569–597, 2008.
  • [8] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, M. Clark, J. Dolan, D. Duggins, T. Galatali, C. Geyer et al., “Autonomous driving in urban environments: Boss and the urban challenge,” Journal of Field Robotics, vol. 25, no. 8, pp. 425–466, 2008.
  • [9] W. Zong, C. Zhang, Z. Wang, J. Zhu, and Q. Chen, “Architecture design and implementation of an autonomous vehicle,” IEEE Access, vol. 6, pp. 21956–21970, 2018.
  • [10] SAE international, “Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles,” SAE International, (J3016), 2016.
  • [11] J. Hecht, “Lidar for self-driving cars,” Optics and Photonics News, vol. 29, no. 1, pp. 26–33, Jan. 2018.
  • [12] Felix, “Sensor set design patterns for autonomous vehicles.” [Online]. Available: https://autonomous-driving.org/2019/01/25/positioning-sensors-for-autonomous-vehicles/
  • [13] R. Fan, “Real-time computer stereo vision for automotive applications,” Ph.D. dissertation, University of Bristol, 2018.
  • [14] R. Fan, X. Ai, and N. Dahnoun, “Road surface 3d reconstruction based on dense subpixel disparity map estimation,” IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 3025–3035, 2018.
  • [15] “Lidar–light detection and ranging–is a remote sensing method used to examine the surface of the earth,” NOAA, 2013.
  • [16] J. Jiao, Q. Liao, Y. Zhu, T. Liu, Y. Yu, R. Fan, L. Wang, and M. Liu, “A novel dual-lidar calibration algorithm using planar surfaces,” arXiv preprint arXiv:1904.12116, 2019.
  • [17] J. Jiao, Y. Yu, Q. Liao, H. Ye, and M. Liu, “Automatic calibration of multiple 3d lidars in urban environments,” arXiv preprint arXiv:1905.04912, 2019.
  • [18] T. Bureau, “Radar definition,” Public Works and Government Services Canada, 2013.
  • [19] W. J. Westerveld, “Silicon photonic micro-ring resonators to sense strain and ultrasound,” 2014.
  • [20] Y. Liu, R. Fan, B. Yu, M. J. Bocus, M. Liu, H. Ni, J. Fan, and S. Mao, “Mobile robot localisation and navigation using lego nxt and ultrasonic sensor,” in Proc. IEEE Int. Conf. Robotics and Biomimetics (ROBIO), Dec. 2018, pp. 1088–1093.
  • [21] N. Samama, Global positioning: Technologies and performance.   John Wiley & Sons, 2008, vol. 7.
  • [22] D. B. Cox Jr, “Integration of gps with inertial navigation systems,” Navigation, vol. 25, no. 2, pp. 236–245, 1978.
  • [23] S. Trahey, “Choosing a code wheel: A detailed look at how encoders work,” Small Form Factors, 2008.
  • [24] V. Bhandari, Design of machine elements.   Tata McGraw-Hill Education, 2010.
  • [25] M. Maurer, J. C. Gerdes, B. Lenz, H. Winner et al., “Autonomous driving,” Berlin, Germany: Springer Berlin Heidelberg, vol. 10, pp. 978–3, 2016.
  • [26] R. Fan, V. Prokhorov, and N. Dahnoun, “Faster-than-real-time linear lane detection implementation using soc dsp tms320c6678,” in 2016 IEEE International Conference on Imaging Systems and Techniques (IST).   IEEE, 2016, pp. 306–311.
  • [27] R. Fan and N. Dahnoun, “Real-time implementation of stereo vision based on optimised normalised cross-correlation and propagated search range on a gpu,” in 2017 IEEE International Conference on Imaging Systems and Techniques (IST).   IEEE, 2017, pp. 1–6.
  • [28] B. Apolloni, A. Ghosh, F. Alpaslan, and S. Patnaik, Machine learning and robot perception.   Springer Science & Business Media, 2005, vol. 7.
  • [29] U. Ozgunalp, R. Fan, X. Ai, and N. Dahnoun, “Multiple lane detection algorithm based on novel dense vanishing point estimation,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 3, pp. 621–632, 2017.
  • [30] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “Deepdriving: Learning affordance for direct perception in autonomous driving,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2722–2730.
  • [31] V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
  • [32] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention.   Springer, 2015, pp. 234–241.
  • [33] R. C. Smith and P. Cheeseman, “On the representation and estimation of spatial uncertainty,” The international journal of Robotics Research, vol. 5, no. 4, pp. 56–68, 1986.
  • [34] H. Ye, Y. Chen, and M. Liu, “Tightly coupled 3d lidar inertial odometry and mapping,” in 2019 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2019.
  • [35] G. Bresson, Z. Alsayed, L. Yu, and S. Glaser, “Simultaneous localization and mapping: A survey of current trends in autonomous driving,” IEEE Transactions on Intelligent Vehicles, vol. 2, no. 3, pp. 194–220, 2017.
  • [36] R. E. Kalman, “A new approach to linear filtering and prediction problems,” J. Basic Eng., vol. 82, p. 35, 1960.
  • [37] S. J. Julier and J. K. Uhlmann, “New extension of the kalman filter to nonlinear systems,” 1997.
  • [38] P. S. Maybeck and G. M. Siouris, “Stochastic models, estimation, and control, volume i,” vol. 10, pp. 282–282, 1980.
  • [39] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte Carlo localization for mobile robots,” in Proc. IEEE Int. Conf. Robotics and Automation (Cat. No.99CH36288C), vol. 2, May 1999, pp. 1322–1328 vol.2.
  • [40] S. S. Rao, Engineering optimization: theory and practice.   John Wiley & Sons, 2009.
  • [41] S. Thrun and J. J. Leonard, “Simultaneous localization and mapping,” Springer Handbook of Robotics, pp. 871–889, 2008.
  • [42] Y. Ma, X. Zhu, S. Zhang, R. Yang, W. Wang, and D. Manocha, “Trafficpredict: Trajectory prediction for heterogeneous traffic-agents,” arXiv preprint arXiv:1811.02146, 2018.
  • [43] S. Lefèvre, D. Vasquez, and C. Laugier, “A survey on motion prediction and risk assessment for intelligent vehicles,” ROBOMECH journal, vol. 1, no. 1, p. 1, 2014.
  • [44] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al., “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316, 2016.
  • [45] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Journal of basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
  • [46] N. Djuric, V. Radosavljevic, H. Cui, T. Nguyen, F.-C. Chou, T.-H. Lin, and J. Schneider, “Motion prediction of traffic actors for autonomous driving using deep convolutional networks,” arXiv preprint arXiv:1808.05819, 2018.
  • [47] D. Helbing and P. Molnar, “Social force model for pedestrian dynamics,” Physical review E, vol. 51, no. 5, p. 4282, 1995.
  • [48] T. Streubel and K. H. Hoffmann, “Prediction of driver intended path at intersections,” in 2014 IEEE Intelligent Vehicles Symposium Proceedings.   IEEE, 2014, pp. 134–139.
  • [49] M. Schreier, V. Willert, and J. Adamy, “An integrated approach to maneuver-based trajectory prediction and criticality assessment in arbitrary road environments,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 10, pp. 2751–2766, 2016.
  • [50] A. Y. Ng and S. J. Russell, “Algorithms for inverse reinforcement learning.” in Icml, vol. 1, 2000, p. 2.
  • [51] K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert, “Activity forecasting,” in European Conference on Computer Vision.   Springer, 2012, pp. 201–214.
  • [52] C. Katrakazas, M. Quddus, W.-H. Chen, and L. Deka, “Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions,” Transportation Research Part C: Emerging Technologies, vol. 60, pp. 416–442, 2015.
  • [53] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli, “A survey of motion planning and control techniques for self-driving urban vehicles,” IEEE Transactions on intelligent vehicles, vol. 1, no. 1, pp. 33–55, 2016.
  • [54] D. González, J. Pérez, V. Milanés, and F. Nashashibi, “A review of motion planning techniques for automated vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1135–1145, 2016.
  • [55] T. H. Cormen, “Section 24.3: Dijkstra’s algorithm,” Introduction to Algorithms, pp. 595–601, 2001.
  • [56] J. Jiao, R. Fan, H. Ma, and M. Liu, “Using dp towards a shortest path problem-related application,” arXiv preprint arXiv:1903.02765, 2019.
  • [57] D. Delling, P. Sanders, D. Schultes, and D. Wagner, “Engineering route planning algorithms,” pp. 117–139, 2009.
  • [58] A. González-Sieira, M. Mucientes, and A. Bugarín, “A state lattice approach for motion planning under control and sensor uncertainty,” in ROBOT2013: First Iberian Robotics Conference.   Springer, Jan. 2014. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-03653-3_19
  • [59] D. Gruyer, S. Demmel, V. Magnier, and R. Belaroussi, “Multi-hypotheses tracking using the dempster–shafer theory, application to ambiguous road context,” Information Fusion, vol. 29, pp. 40–56, 2016.
  • [60] M. Araki, “Pid control,” Control Systems, Robotics and Automation: System Analysis and Control: Classical Approaches II, pp. 58–79, 2009.
  • [61] G. C. Goodwin, S. F. Graebe, M. E. Salgado et al., Control system design.   Upper Saddle River, NJ: Prentice Hall, 2001.
  • [62] C. E. Garcia, D. M. Prett, and M. Morari, “Model predictive control: theory and practice—a survey,” Automatica, vol. 25, no. 3, pp. 335–348, 1989.
  • [63] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt et al., “Towards fully autonomous driving: Systems and algorithms,” in 2011 IEEE Intelligent Vehicles Symposium (IV).   IEEE, 2011, pp. 163–168.
  • [64] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang, “The apolloscape dataset for autonomous driving,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), Jun. 2018, pp. 1067–10676.
  • [65] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.
  • [66] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
  • [67] H. Badino and T. Kanade, “A head-wearable short-baseline stereo system for the simultaneous estimation of structure and motion,” 2011.
  • [68] D. Welch and E. Behrmann, “Who’s winning the self-driving car race?” https://www.bloomberg.com/news/features/2018-05-07/who-s-winning-the-self-driving-car-race, accessed: 2019-04-21.
  • [69] C. Coberly, “Waymo’s self-driving car fleet has racked up 8 million miles in total driving distance on public roads,” https://www.techspot.com/news/75608-waymo-self-driving-car-fleet-racks-up-8.html, accessed: 2019-04-21.
  • [70] R. Fan, Y. Liu, X. Yang, M. J. Bocus, N. Dahnoun, and S. Tancock, “Real-time stereo vision for road surface 3-d reconstruction,” in 2018 IEEE International Conference on Imaging Systems and Techniques (IST).   IEEE, 2018, pp. 1–6.
  • [71] R. Fan, M. J. Bocus, and N. Dahnoun, “A novel disparity transformation algorithm for road segmentation,” Information Processing Letters, vol. 140, pp. 18–24, 2018.
  • [72] R. Fan, M. J. Bocus, Y. Zhu, J. Jiao, L. Wang, F. Ma, S. Cheng, and M. Liu, “Road crack detection using deep convolutional neural network and adaptive thresholding,” arXiv preprint arXiv:1904.08582, 2019.
  • [73] J. Van Brummelen, M. O’Brien, D. Gruyer, and H. Najjaran, “Autonomous vehicle perception: The technology of today and tomorrow,” Transportation research part C: emerging technologies, vol. 89, pp. 384–406, 2018.