Reimagining an autonomous vehicle

08/12/2021 · by Jeffrey Hawke, et al.

The self-driving challenge in 2021 is this century's technological equivalent of the space race, and is now entering its second major decade of development. Solving the technology will create social change that parallels the invention of the automobile itself. Today's autonomous driving technology is laudable, though rooted in decisions made a decade ago. We argue that a rethink is required, reconsidering the autonomous vehicle (AV) problem in the light of the body of knowledge gained since the DARPA challenges that seeded the industry. What does AV2.0 look like? We present an alternative vision: a recipe for driving with machine learning, and grand challenges for research in driving.


1 Introduction

The modern autonomous driving industry draws its roots from the DARPA Urban Challenge in 2007 [9]. Since then, we have seen rapid growth of research in both academic and industrial settings. The majority of this has been in industry, as many technical problems are beyond the resources of academic laboratories. The challenge has been harder than anticipated, with a widely held view that the industry has been humbled by this experience. Nevertheless, the technology continues to advance and it is likely that we are beyond Gartner’s trough of disillusionment [5], supported by increasing public milestones and further investment.

A key question this provokes is simply: are autonomous vehicles (AVs) solved? All indications point to ‘no’. AV technology has not easily scaled thus far, hence there must be some gap. A challenge in assessing this question is that the industry is tight-lipped, due to the competition and the economic opportunity on offer. Our assessment in this paper is through our own experience developing autonomous driving systems, with additional insight from industry publicity, research, and conversations.

We assess the primary factors for this lack of delivery to be: 1) technical scalability, 2) safety-critical engineering effort, 3) unit economics (profitability of an AV based on utilisation and costs), 4) regulation. Of these, we argue that the primary factor is technical scalability. By this, we mean the ability of the decision-making software systems to generalize to new situations quickly with sufficient performance for deployment. First, let us discuss the other factors.

Making a safety-critical decision-making system of any type is no small engineering feat, but it is one that may be solved with time and investment. Similarly, we do not consider regulation to be the sole barrier. Regulators worldwide have shown a willingness to support autonomous driving [11, 4, 30], and we consider this simply a matter of time, pending technological performance. Finally, we do not believe that unit economics are a dominant factor. While retrofit AVs are costly [42], these costs have reduced with maturity and volume such that they could be addressed if sufficient utilisation and scale were technically possible. This scale problem is one that venture capital models are very familiar with: invest until scale is achieved, bring costs down with scale, then extract profit. Given the availability of capital, we can discount this as the primary factor.

This leaves us with technical scalability as the underlying problem, which suggests that we have yet to really solve autonomous driving. Why might this be? We propose that we have been solving the problem in the wrong direction, and there is ample opportunity for research to further the field.

The technology that brought us from the DARPA era to today can be described as solving specialized general intelligence by combining components of even narrower intelligence. We refer to this as AV1.0. This was necessary given the constraints and goals of 2007, but arguably limits generalization to complex situations due to brittle decision-making. The alternative lies in AV2.0: a complete rethink of what it means to architect a decision-making system for driving with machine learning.

2 AV1.0: The limitations of a classical autonomous driver

AVs today are designed around the same deliberative robotics architecture, which is arguably an expansion of the sense-plan-act paradigm as in [45]. For a detailed assessment, we refer the reader to surveys such as [49, 3]. We argue that the key gap is decision-making intelligence. To assess this, let us consider the problem addressed by each classical AV component, and the limitations it places on decision-making.

  1. Sensing. Problem: Can we observe sufficient information about the environment to make the correct driving decision? Limitation: Sensor development has focused on increasing range and fidelity, arguably due to the industry shift towards autonomous trucking business models on highways. Improved fidelity is certainly helpful, but this is not a limitation for non-highway applications as there is sufficient raw information present to make a decision.

  2. Scene Representation: building a hand-crafted representation of the world.

    1. Localization & mapping. Problem: Where is my robot in a known map? This simplifies real-time decisions by transferring part of the representation offline with clean, curated data [6]. Limitation: 1) maintenance complexity as the world changes, and 2) its necessity is unclear: sufficiently robust downstream decision-making may not need it.

    2. Perception. Problem: Can we extract sufficient context from the raw data for decision making? Limitation: Modern perception algorithms are extremely good, and a supervised approach is largely sufficient given data, time, and resources. We can perceive almost anything desired. Rather, the limitation is the hand-crafted representation itself. Do we have the information necessary for the decision? Furthermore, reducing sensor data to symbolic data may not generalize well, for example, failing to correctly interpret a pedestrian walking a bicycle given a hand-crafted taxonomy.

    3. Behavior prediction. Problem: Many decisions require an estimate of future state, which is complex due to the dynamic scene. Limitation: Prediction is sensitive to upstream error and has a dependency with planning. This means that an isolated prediction system will always have some representational error.

  3. Planning: decision-making, given a world representation.

    1. Behavior planning. Problem: How should an autonomous vehicle achieve its goal (e.g., route), given the current representation? Limitation: 1) it is extremely difficult to determine whether the input representation is necessary and sufficient for all decisions; 2) it is difficult to separate this from behavioral prediction, as decisions made by the planner will influence other actors and thus the prediction; 3) behavioral planners amount to highly engineered expert systems [43], which are well known for being brittle. Despite this immense human effort, we have reached the same conclusion as thirty years earlier: symbolic expert systems are inherently limiting [7].

    2. Motion planning. Problem: generate a metric trajectory for a short horizon, given local constraints [36]. Limitation: This works well, but cannot overcome limitations of upstream decisions or hand-crafted constraints.

  4. Control. Problem: How does a vehicle execute some trajectory? Limitation: Vehicle dynamics and control are relatively well understood, with remaining challenges in low-friction regimes.
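For concreteness, the modular pipeline above can be caricatured as a dataflow in which each hand-crafted interface narrows what is passed downstream. This is a toy sketch; all types, stages, and behaviors here are illustrative and not drawn from any production stack:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Toy caricature of the AV1.0 pipeline: each hand-crafted interface
# narrows the information passed to the next module.

@dataclass
class DetectedObject:
    kind: str                     # entry from a fixed taxonomy, e.g. "pedestrian"
    position: Tuple[float, float]
    velocity: Tuple[float, float]

@dataclass
class Trajectory:
    waypoints: List[Tuple[float, float]]  # metric points for the controller

def perceive(sensor_frame) -> List[DetectedObject]:
    # Supervised detector: anything outside the taxonomy is lost here.
    return [DetectedObject("pedestrian", (3.0, 1.5), (0.2, 0.0))]

def predict(objects: List[DetectedObject]) -> List[DetectedObject]:
    # Constant-velocity rollout: ignores how our own plan changes behavior.
    return [DetectedObject(o.kind,
                           (o.position[0] + o.velocity[0],
                            o.position[1] + o.velocity[1]),
                           o.velocity)
            for o in objects]

def plan(objects: List[DetectedObject], goal) -> Trajectory:
    # Rule-based behavior planner: brittle outside its enumerated cases.
    if any(o.kind == "pedestrian" for o in objects):
        return Trajectory(waypoints=[])  # rule fires: stop
    return Trajectory(waypoints=[goal])

def drive(sensor_frame, goal) -> Trajectory:
    return plan(predict(perceive(sensor_frame)), goal)

# A pedestrian anywhere in the scene empties the trajectory (stop),
# however benign the actual situation is.
print(drive(sensor_frame=None, goal=(10.0, 0.0)).waypoints)  # -> []
```

The brittleness described above lives in the interfaces: `plan` can only act on whatever the hand-crafted `DetectedObject` taxonomy can express.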

With the exception of behavior prediction and planning, we consider the majority of these to be sufficiently mature for driving, based on the success of their respective benchmarks. Incremental gains in perception performance do not, by themselves, enable a better judgement of how the car should drive given the extra information. While further gains may be had, we do not believe any of these areas will offer a step change to unlock scalable driving.

What evidence exists for behavior prediction and planning as the limiting factor? We refer to the increasing focus of the AV industry on research associated with motion prediction. Since 2018, we have seen a flurry of published research and datasets. Examples include Nutonomy’s NuScenes dataset [10], Waymo Open Motion Dataset [46], the Lyft Prediction Dataset [23], and Argo’s Argoverse [15]. A substantial focus of these is prediction, evidenced by [13, 24, 21, 51].

Given this landscape, there are two possible conclusions. Either solving behavior prediction & planning as defined by these boundaries will enable self-driving, or we need a rethink of this decomposition to achieve an autonomous future. We think the former is unlikely to be sufficient alone.

3 AV2.0: The solution of a data-driven learned driver

Figure 1: How should we reimagine an autonomous vehicle, given the progress the scientific and engineering academy has made since 2007? We propose that the classical deliberative architecture needs a rethink. We reflect on the long-held ‘sense-plan-act’ paradigm (substantially simpler than the robot architectures used in modern autonomous vehicles), and pose the joint sensing and planning problem as one that may be solved by data.

3.1 Solving driving with data

With the challenges of the classical approach in mind, how should we best solve driving? It is hard to combine hand-crafted abstraction layers in a complex decision-making system without brittleness.

We have seen good progress in similar fields by posing a complex problem as one that is able to be modeled end-to-end from data. Examples include natural language processing with GPT-3 [8], and games with MuZero [33] and AlphaStar [47]. In these problems, the task was sufficiently complex that hand-crafted abstraction layers and features were unable to adequately model the problem. Driving is similarly complex, hence we claim that it requires a similar solution.

The solution we pursue is a holistic learned driver, where the driving policy may be thought of as learning to estimate the motion the vehicle should conduct given some conditioning goals. This is different to simply applying increasing amounts of learning to the components of a classical architecture, where the hand-crafted interfaces limit the effectiveness of data [38].

At a high level, the route to a learned driver may be expressed very simply as the following:

  1. Frame the driving problem as one that may be solved by data.

  2. Build a source of data, for sampling and curation of data at sufficient scale and diversity.

  3. Build a data engine, which is able to train and iterate effectively on this shifting corpus.

  4. Build an experimental environment to explore the problem, in simulation and reality.

  5. Iteratively improve driving, through changes in modeling, data, and problem formulation.
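The five steps above can be sketched as a single iteration loop. This is a toy sketch; the function names, metrics, and "difficulty" scores are all hypothetical:

```python
import random

# Toy sketch of steps 2-5: curate data, train, evaluate in an experimental
# environment, and fold the failures back into the corpus.

def curate(corpus, n):
    """Step 2: sample a batch from the growing corpus."""
    return random.sample(corpus, min(n, len(corpus)))

def train(policy, batch):
    """Step 3: one iteration of the data engine; here it just tallies data."""
    policy["examples_seen"] += len(batch)
    return policy

def evaluate(policy, scenarios):
    """Step 4: score driving in simulation; return the failing scenarios."""
    # Hypothetical criterion: a scenario passes once enough data is seen.
    return [s for s in scenarios if s["difficulty"] > policy["examples_seen"]]

corpus = [{"scene": i} for i in range(100)]
scenarios = [{"difficulty": d} for d in (10, 50, 400)]
policy = {"examples_seen": 0}

# Step 5: iterate; failures discovered in evaluation drive further curation.
for _ in range(5):
    policy = train(policy, curate(corpus, 64))
    failures = evaluate(policy, scenarios)
    corpus += [{"scene": f["difficulty"]} for f in failures]

print(len(failures), policy["examples_seen"])  # -> 1 320
```

The essential property is the closed loop: evaluation output becomes curation input, so the corpus shifts toward the scenarios the current policy fails on.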

The key shift is reframing the driving problem as one that may be solved by data. This means removing the abstraction layers used in the classical architecture and bundling much of it into a neural network, as outlined in Figure 1. This is frequently depicted as the 'end-to-end' approach to driving [40]. However, this does not mean we believe the driving problem is simply solved by attaching a neural network directly to vehicle actuators. For instance, classical control methods together with learned representation and abstraction layers are still very effective at trajectory following.
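To illustrate that last point, a learned policy can output waypoints while a classical geometric controller handles tracking. Here is a minimal pure-pursuit-style sketch; the pose convention and wheelbase value are illustrative:

```python
import math

# Classical geometric trajectory following on top of learned waypoints.
# The waypoint would come from a learned policy; the controller is classical.

def pure_pursuit_steer(pose, waypoint, wheelbase=2.7):
    """Steering angle (rad) that arcs the vehicle toward a waypoint."""
    x, y, yaw = pose
    # Express the target in the vehicle frame.
    dx, dy = waypoint[0] - x, waypoint[1] - y
    lx = math.cos(-yaw) * dx - math.sin(-yaw) * dy
    ly = math.sin(-yaw) * dx + math.cos(-yaw) * dy
    ld = math.hypot(lx, ly)  # lookahead distance
    # Pure-pursuit law: steer = atan(2 * L * y_local / ld^2).
    return math.atan2(2.0 * wheelbase * ly, ld ** 2)

# A waypoint dead ahead needs no steering; one to the left steers left.
print(round(pure_pursuit_steer((0.0, 0.0, 0.0), (10.0, 0.0)), 3))  # -> 0.0
print(pure_pursuit_steer((0.0, 0.0, 0.0), (10.0, 5.0)) > 0)        # -> True
```

The learned and classical pieces divide cleanly: the policy decides where to go, and a well-understood controller decides how to actuate.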

3.2 Grand challenges for learned driving

With this move away from modularity, we arrive at a deep learning model that encapsulates much of the function of a classical robotics architecture. We suggest posing driving as a model-based policy learning problem such as [19]: one where we learn both a predictive model of the world (conditioned on ego-vehicle actions) and a model-based policy. We may also retain many of the key inductive biases for driving [12, 52, 20] as part of this framework. However, this means more than learning a representation independently of control [39]. To build this next-generation autonomy architecture, we suggest grand challenges for driving research.
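A minimal rendition of this framing, learning an action-conditioned world model and then choosing actions by rolling it forward in imagination, might look like the following 1-D toy. This is a sketch of the pattern only, not the architecture of [19]; the dynamics, target gap, and action set are invented for illustration:

```python
# 1-D toy of the model-based pattern: fit an action-conditioned world model,
# then pick actions by rolling the model forward in imagination.

def true_dynamics(gap, action):
    # Ground truth (unknown to the agent): the gap to a lead car shrinks
    # with our speed-up action and grows as the lead car advances.
    return gap - action + 1.0

# 1) Fit a linear world model gap' = a*gap + b*action + c from experience.
data = [(g, u, true_dynamics(g, u))
        for g in range(1, 20) for u in (-1.0, 0.0, 1.0)]
a, b, c = 1.0, -1.0, 1.0  # for this linear toy the exact fit is recoverable
assert all(abs(a * g + b * u + c - g_next) < 1e-9 for g, u, g_next in data)
model = lambda gap, action: a * gap + b * action + c

# 2) Plan in imagination: choose the action whose imagined rollout keeps
#    the gap closest to a 5 m target. No real-world interaction needed.
def policy(gap, horizon=3, target=5.0):
    best_action, best_cost = None, float("inf")
    for u in (-1.0, 0.0, 1.0):
        g = gap
        for _ in range(horizon):
            g = model(g, u)  # imagined state, never executed on the road
        cost = abs(g - target)
        if cost < best_cost:
            best_action, best_cost = u, cost
    return best_action

print(policy(2.0), policy(10.0))  # -> 0.0 1.0
```

The point of the pattern is that prediction is conditioned on the ego action, so planning and prediction are solved jointly rather than by an isolated forecaster.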

Work on many of these challenges exists in current research across robotics and machine learning. However, many of these wider research threads have yet to focus on autonomous driving.

  1. Vehicle adaptability: deployment with different sensors, vehicle platforms, and use cases. A learned driver needs to be agnostic to sensor types, sensor rigs, and robot dynamics. It should adapt the collective experience of all driving to its vehicle, e.g., [18, 29]. This enables using a policy or adaptation modules across heterogeneous fleets and cultures.

  2. Modeling real-world complexity: multi-agent, dynamic environments where future state is conditioned on current action [25]. To model the true scene dynamics, a learned driver needs to model prediction and planning jointly, where prediction is conditioned on action [19]. This enables learning from imagined states which increases sample efficiency and policy robustness.

  3. Learning from accessible off-policy data: train and evaluate from scalable data sources. Currently, much research is focused on online and on-policy learning, which is unlikely to have impact beyond toy reasoning problems. We need to learn in a way that is feasible [31], as on-policy scaling is extremely slow and costly [48]. A particular need is effective off-policy evaluation to compare driving decisions, e.g., [14].

  4. Safety under uncertainty: know when and what we don’t know. In addition to common learning failures [2], a learned driver needs to identify when possible decisions are uncertain [26, 25, 34]. This enables robust performance and a safety case, by revoking control to simpler but robust systems to safely halt the vehicle when no clear decision exists.

  5. Interpretability of failures: disentangle the causal factors in decisions. A learned driver needs to be able to identify causal factors [44] in decision-making where the encoding is sufficiently disentangled [22]. E.g., failing to stop for a traffic light by incorrectly associating it to our vehicle is distinct from failures to see it due to weather. This interpretability enables development and will enable verification [17].

  6. Generalization to new situations: a generalizable policy requires complete but lean generalizable representations. A learned driver needs to generalize to a new distribution every time it goes on the road [50], requiring repurposable driving knowledge [37]. For example, our driver needs to reason about a multitude of different bicycle types: road bikes, cargo bikes, and commuter e-bicycles share many similarities (e.g., use of dedicated cycle lanes), but have large dissimilarities in visual appearance and speed. Additionally, a learning-based AV stack is transferable to a new geographic environment with relative ease, if the representation generalizes. This is a revolution: no classical AV stack can do this.

  7. Driving reward: optimization criteria for society’s changing driving needs. A learned driver may start with supervision, but it may be difficult to go beyond human performance. Additionally, driving requirements are unlikely to be static, thus we must adapt to changing driving regulations as AVs become the dominant road user. For example, the following distance may be decreased to enable increased vehicle density on a largely automated road. We won’t have the opportunity to regather expert supervisory data, hence we need a learning signal to shape driving behavior accordingly. This is largely unexplored [32, 27].
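Challenge 4's gating pattern, handing control to a simple fallback when the learned policy has no clear decision, can be sketched as follows. Ensemble disagreement is one common uncertainty proxy, used here purely as an illustration; the threshold and command format are invented:

```python
import statistics

# Sketch of challenge 4: gate a learned policy behind an uncertainty check
# and hand control to a simple, interpretable fallback when it disagrees.

def ensemble_policy(observation, members):
    """Each member proposes a steer command; the spread serves as uncertainty."""
    proposals = [m(observation) for m in members]
    return statistics.mean(proposals), statistics.pstdev(proposals)

def safe_stop(observation):
    # Minimal fallback: command a controlled halt, no learned component.
    return {"steer": 0.0, "brake": 1.0}

def drive(observation, members, threshold=0.1):
    steer, uncertainty = ensemble_policy(observation, members)
    if uncertainty > threshold:
        return safe_stop(observation)      # no clear decision: halt safely
    return {"steer": steer, "brake": 0.0}  # confident: act on the policy

# Agreeing members drive; disagreeing members trigger the fallback.
agree = [lambda o: 0.20, lambda o: 0.21, lambda o: 0.19]
disagree = [lambda o: -0.5, lambda o: 0.0, lambda o: 0.5]
print(drive(None, agree)["brake"], drive(None, disagree)["brake"])  # -> 0.0 1.0
```

This is the architectural shape the safety case relies on: the learned component provides generality, while the revocation path stays simple enough to verify.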

In our opinion, truly solving learned autonomous driving requires solving each of these challenges. There are wider questions of using deep learning for decision-making which apply more broadly than driving, including ethics. We consider these beyond the scope of this paper.
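As a concrete instance of the off-policy evaluation called for in challenge 3, importance sampling re-weights logged experience to estimate a new policy's value without deploying it. The following is a toy bandit sketch; ordinary importance sampling is one classical estimator, not specific to the methods of [14]:

```python
# Toy bandit sketch of off-policy evaluation: estimate a target policy's
# value from data logged under a different behavior policy.

def ordinary_is_estimate(logged, target_prob, behavior_prob):
    """Mean reward reweighted by pi_target(a|s) / pi_behavior(a|s)."""
    total = 0.0
    for state, action, reward in logged:
        weight = target_prob(state, action) / behavior_prob(state, action)
        total += weight * reward
    return total / len(logged)

# One state, two actions. Behavior logged actions uniformly at random;
# the target policy we want to evaluate always picks action 1.
logged = [(0, 0, 0.0), (0, 1, 1.0), (0, 0, 0.0), (0, 1, 1.0)]
behavior = lambda s, a: 0.5
target = lambda s, a: 1.0 if a == 1 else 0.0

# The estimate recovers the target policy's true value without deploying it.
print(ordinary_is_estimate(logged, target, behavior))  # -> 1.0
```

For driving, the appeal is exactly this: driving decisions can be compared on accessible logged fleet data rather than by slow and costly on-policy rollouts.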

4 Learned driving is not uniquely challenged by safety-critical requirements

Do safety-critical requirements mean that learned driving is impossible? This medium is insufficient for a comprehensive analysis, but in short, no. There is no inherent safety reason why we should not pursue a data-driven driver. This hinges on a number of key theses.

  1. Safety assurance of an AV depends on (1) the design of an architecture which includes, but is not limited to, the neural net, plus (2) an engineering effort in verification and validation. Core safety tools should apply with some thoughtful adaptation [28].

  2. Within the broader architecture of the AV, redundant safety may be achieved with interpretable methods designed to identify and resolve specific failure modes. These methods cannot offer the generalized decision-making that a neural net can, but they are able to ensure that a vehicle will not cause harm in a very specific way [41].

  3. Scalable safety arguments will use a large non-stationary corpus of empirical evidence. This is not unique to learned driving, and there is precedent in other domains such as medicine [1]. We anticipate that AVs may benefit from similar methods to quickly revalidate the learned driver as part of a wider system; however, this remains an open question.

5 What’s required to address these grand challenges?

To see significant progress in the twenty-first century’s space race, we suggest the following.

For the academic community, we encourage exploring the full embodied intelligence problem space [16] beyond just modeling, including data curriculum, sensor configuration, and robot geometry. Within modeling research, we encourage even more focus on off-policy learning and evaluation, to make better use of available data with increased research impact.

For the industry research community, we encourage sharing rare-event data and data curricula for the full driving problem as benchmarks. Today, most datasets describe sub-problems under the assumption that the set is sufficient for solving the whole [10, 46, 23, 15], which we consider insufficient for driving research. We also encourage continued open collaboration with academia, sharing progress and the insights gathered from real-world testing at scale.

Beyond this, we see a need for increased availability of holistic simulators. Tools such as CARLA [35] are still nascent, and research benchmarking standards require further effort to mature.

In solving the driving problem together, we have the potential to unlock both great societal value and also to discover and create embodied intelligence in the open world. We encourage research efforts on these grand challenges, to tackle one of the most complex problems of technical scalability.

References

  • [1] Food and Drug Administration (FDA) (2021-04-01) The drug development process. Note: Accessed: 2021-06-03 External Links: Link Cited by: item 3.
  • [2] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané (2016) Concrete problems in AI safety. arXiv preprint arXiv:1606.06565. Cited by: item 4.
  • [3] C. Badue, R. Guidolini, R. V. Carneiro, P. Azevedo, V. B. Cardoso, A. Forechi, L. Jesus, R. Berriel, T. M. Paixão, F. Mutz, L. de Paula Veronese, T. Oliveira-Santos, and A. F. De Souza (2021) Self-driving cars: a survey. Expert Systems with Applications 165, pp. 113816. External Links: ISSN 0957-4174, Document, Link Cited by: §2.
  • [4] Bloomberg (2021-05-21) Germany Takes Step Toward Autonomous Driving on Public Roads. Note: Accessed: 2021-06-09 External Links: Link Cited by: §1.
  • [5] M. Blosch and J. Fenn (2018) Understanding Gartner’s Hype Cycles. Note: Accessed: 2021-05-19 External Links: Link Cited by: §1.
  • [6] G. Bresson, Z. Alsayed, L. Yu, and S. Glaser (2017) Simultaneous localization and mapping: a survey of current trends in autonomous driving. IEEE Transactions on Intelligent Vehicles 2 (3), pp. 194–220. External Links: Document Cited by: item 2a.
  • [7] R. A. Brooks (1990) Elephants don’t play chess. Robotics and autonomous systems 6 (1-2), pp. 3–15. Cited by: item 3a.
  • [8] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei (2020) Language models are few-shot learners. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33, pp. 1877–1901. External Links: Link Cited by: §3.1.
  • [9] M. Buehler, K. Iagnemma, and S. Singh (2009) The darpa urban challenge: autonomous vehicles in city traffic. 1st edition, Springer Publishing Company, Incorporated. External Links: ISBN 3642039901 Cited by: §1.
  • [10] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom (2019) NuScenes: a multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027. Cited by: §2, §5.
  • [11] California Department of Motor Vehicles (2021) Autonomous Vehicles. Note: Accessed: 2021-06-09 External Links: Link Cited by: §1.
  • [12] S. Casas, A. Sadat, and R. Urtasun (2021) MP3: A Unified Model to Map, Perceive, Predict and Plan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14403–14412. Cited by: §3.2.
  • [13] Y. Chai, B. Sapp, M. Bansal, and D. Anguelov (2020-30 Oct–01 Nov) MultiPath: multiple probabilistic anchor trajectory hypotheses for behavior prediction. In Proceedings of the Conference on Robot Learning, Proceedings of Machine Learning Research, Vol. 100, , pp. 86–99. External Links: Link Cited by: §2.
  • [14] Y. Chandak, S. Niekum, B. C. da Silva, E. Learned-Miller, E. Brunskill, and P. S. Thomas (2021) Universal Off-Policy Evaluation. arXiv preprint arXiv:2104.12820. Cited by: item 3.
  • [15] M. Chang, J. W. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, and J. Hays (2019) Argoverse: 3d tracking and forecasting with rich maps. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §5.
  • [16] P. Corke, F. Dayoub, D. Hall, J. Skinner, and N. Sünderhauf (2020) What can robotics research learn from computer vision research?. arXiv preprint arXiv:2001.02366. Cited by: §5.
  • [17] F. Doshi-Velez and B. Kim (2017) Towards A Rigorous Science of Interpretable Machine Learning. arXiv. External Links: Link Cited by: item 5.
  • [18] A. Ghadirzadeh, X. Chen, P. Poklukar, C. Finn, M. Björkman, and D. Kragic (2021) Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms. External Links: 2103.03697 Cited by: item 1.
  • [19] D. Hafner, T. Lillicrap, M. Norouzi, and J. Ba (2020) Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193. Cited by: item 2, §3.2.
  • [20] J. Hawke, R. Shen, C. Gurau, S. Sharma, D. Reda, N. Nikolov, P. Mazur, S. Micklethwaite, N. Griffiths, A. Shah, et al. (2020) Urban driving with conditional imitation learning. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 251–257. Cited by: §3.2.
  • [21] N. Hendy, C. Sloan, F. Tian, P. Duan, N. Charchut, Y. Xie, C. Wang, and J. Philbin (2020) FISHING net: future inference of semantic heatmaps in grids. arXiv preprint arXiv:2006.09917. Cited by: §2.
  • [22] I. Higgins, D. Amos, D. Pfau, S. Racaniere, L. Matthey, D. Rezende, and A. Lerchner (2018) Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230. Cited by: item 5.
  • [23] J. Houston, G. Zuidhof, L. Bergamini, Y. Ye, A. Jain, S. Omari, V. Iglovikov, and P. Ondruska (2020) One thousand and one hours: self-driving motion prediction dataset. Note: https://level5.lyft.com/dataset/ Cited by: §2, §5.
  • [24] A. Hu, Z. Murez, N. Mohan, S. Dudas, J. Hawke, V. Badrinarayanan, R. Cipolla, and A. Kendall (2021) FIERY: Future Instance Prediction in Bird’s-Eye View from Surround Monocular Cameras. arXiv preprint arXiv:2104.10490. Cited by: §2.
  • [25] C. Hubmann, M. Becker, D. Althoff, D. Lenz, and C. Stiller (2017) Decision making for autonomous driving considering interaction and uncertain prediction of surrounding vehicles. In 2017 IEEE Intelligent Vehicles Symposium (IV), Vol. , pp. 1671–1678. External Links: Document Cited by: item 2, item 4.
  • [26] A. Jain, K. Khetarpal, and D. Precup (2021) Safe option-critic: learning safety in the option-critic architecture. The Knowledge Engineering Review 36, pp. e4. External Links: Document Cited by: item 4.
  • [27] A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J. Allen, V. Lam, A. Bewley, and A. Shah (2019) Learning to drive in a day. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8248–8254. Cited by: item 7.
  • [28] P. Koopman and M. Wagner (2016) Challenges in autonomous vehicle testing and validation. SAE International Journal of Transportation Safety 4 (1), pp. 15–24. Cited by: item 1.
  • [29] A. Kumar, Z. Fu, D. Pathak, and J. Malik (2021) RMA: Rapid Motor Adaptation for Legged Robots. External Links: 2107.04034 Cited by: item 1.
  • [30] Law Commission (2021) Automated Vehicles. Note: Accessed: 2021-06-09 External Links: Link Cited by: §1.
  • [31] S. Levine, A. Kumar, G. Tucker, and J. Fu (2020) Offline reinforcement learning: tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643. Cited by: item 3.
  • [32] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2016) Continuous control with deep reinforcement learning.. In International Conference on Learning Representations, Cited by: item 7.
  • [33] J. Schrittwieser, I. Antonoglou, T. Hubert, et al. (2020) Mastering atari, go, chess and shogi by planning with a learned model. Nature 588 (7839), pp. 604–609. External Links: Document, ISBN 1476-4687, Link Cited by: §3.1.
  • [34] R. McAllister, Y. Gal, A. Kendall, M. Van Der Wilk, A. Shah, R. Cipolla, and A. Weller (2017) Concrete problems for autonomous vehicle safety: Advantages of Bayesian deep learning. Cited by: item 4.
  • [35] B. Osiński, P. Miłoś, A. Jakubowski, P. Zięcina, M. Martyniak, C. Galias, A. Breuer, S. Homoceanu, and H. Michalewski (2020) CARLA real traffic scenarios–novel training ground and benchmark for autonomous driving. arXiv preprint arXiv:2012.11329. Cited by: §5.
  • [36] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli (2016) A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles. IEEE Transactions on Intelligent Vehicles 1 (1), pp. 33–55. External Links: Document Cited by: item 3b.
  • [37] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell (2017) Curiosity-driven exploration by self-supervised prediction. In ICML, Cited by: item 6.
  • [38] F. Pereira, P. Norvig, and A. Halevy (2009-03) The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24 (02), pp. 8–12. External Links: ISSN 1941-1294, Document Cited by: §3.1.
  • [39] J. Phillips, J. Martinez, I. A. Bârsan, S. Casas, A. Sadat, and R. Urtasun (2021) Deep Multi-Task Learning for Joint Localization, Perception, and Prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4679–4689. Cited by: §3.2.
  • [40] D. A. Pomerleau (1989) Alvinn: an autonomous land vehicle in a neural network. In Advances in neural information processing systems, pp. 305–313. Cited by: §3.1.
  • [41] J. Reason (2000) Human error: models and management. BMJ 320 (7237), pp. 768–770. External Links: Document, ISSN 0959-8138, Link, https://www.bmj.com/content/320/7237/768.full.pdf Cited by: item 2.
  • [42] Reuters (2017-12-04) Self-driving costs could drop 90 percent by 2025, delphi ceo says. Note: Accessed: 2021-06-02 External Links: Link Cited by: §1.
  • [43] S. Russell and P. Norvig (2009) Artificial Intelligence: A Modern Approach. 3rd edition, Prentice Hall Press, USA. External Links: ISBN 0136042597 Cited by: item 3a.
  • [44] B. Schölkopf, F. Locatello, S. Bauer, N. R. Ke, N. Kalchbrenner, A. Goyal, and Y. Bengio (2021) Toward Causal Representation Learning. Proceedings of the IEEE 109 (5), pp. 612–634. External Links: Document Cited by: item 5.
  • [45] M. Siegel (2003) The sense-think-act paradigm revisited. In 1st International Workshop on Robotic Sensing, 2003. ROSE’ 03., Vol. , pp. 5 pp.–. External Links: Document Cited by: §2.
  • [46] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, et al. (2020) Scalability in perception for autonomous driving: waymo open dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2446–2454. Cited by: §2, §5.
  • [47] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, C. Gulcehre, Z. Wang, T. Pfaff, Y. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps, and D. Silver (2019) Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature 575 (7782), pp. 350–354. External Links: Document, ISBN 1476-4687, Link Cited by: §3.1.
  • [48] Waymo LLC (2020) Waymo Public Road Safety Performance Data. Note: Accessed: 2021-06-06 External Links: Link Cited by: item 3.
  • [49] E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda (2020) A survey of autonomous driving: common practices and emerging technologies. IEEE Access 8 (), pp. 58443–58469. External Links: Document Cited by: §2.
  • [50] A. Zhang, C. Lyle, S. Sodhani, A. Filos, M. Kwiatkowska, J. Pineau, Y. Gal, and D. Precup (2020-13–18 Jul) Invariant causal prediction for block MDPs. In Proceedings of the 37th International Conference on Machine Learning, H. D. III and A. Singh (Eds.), Proceedings of Machine Learning Research, Vol. 119, pp. 11214–11224. External Links: Link Cited by: item 6.
  • [51] H. Zhao, J. Gao, T. Lan, C. Sun, B. Sapp, B. Varadarajan, Y. Shen, Y. Shen, Y. Chai, C. Schmid, et al. (2020) Tnt: target-driven trajectory prediction. arXiv preprint arXiv:2008.08294. Cited by: §2.
  • [52] B. Zhou, P. Krähenbühl, and V. Koltun (2019) Does computer vision matter for action?. Science Robotics 4 (30). External Links: Document, Link, https://robotics.sciencemag.org/content/4/30/eaaw6661.full.pdf Cited by: §3.2.