1 Introduction
Endowing a robotic car with the ability to form long-term driving strategies, referred to as a "Driving Policy", is key for enabling fully autonomous driving. The process of sensing, i.e., of forming an environmental model consisting of the location of all moving and stationary objects, the position and type of path delimiters (such as curbs, barriers, and so forth), all drivable paths with their semantic meaning, and all traffic signs and traffic lights around the car, is well defined. While sensing is well understood, the definition of Driving Policy, its underlying assumptions, and its functional breakdown are less understood. The extent of the challenge of forming driving strategies that mimic human drivers is underscored by the flurry of media reports on the simplistic driving policies exhibited by the current autonomous test vehicles of various practitioners (e.g. Naughton (2015)). In order to support autonomous capabilities, a robotically driven vehicle should adopt human driving negotiation skills when overtaking, giving way, merging, taking left and right turns and while pushing ahead in unstructured urban roadways. Since there are many possible scenarios, manually tackling all possible cases will likely yield an overly simplistic policy. Moreover, one must balance robustness to unexpected behavior of other drivers and pedestrians against the need not to be so defensive that normal traffic flow is disrupted.
These challenges naturally suggest using machine learning approaches. Traditionally, machine learning approaches for planning strategies are studied under the framework of Reinforcement Learning (RL) — see Bertsekas (1995); Kaelbling et al. (1996); Sutton and Barto (1998); Szepesvári (2010) for a general overview and Kober et al. (2013)
for a comprehensive review of reinforcement learning in robotics. Using machine learning, and specifically RL, raises two concerns which we address in this paper. The first is about ensuring functional safety of the Driving Policy — something that machine learning has difficulty with, given that performance is optimized at the level of an expectation over many instances. Namely, given the very low probability of an accident, the only way to guarantee safety is by scaling up the variance of the parameters to be estimated and the sample complexity of the learning problem — to a degree which becomes unwieldy to solve. Second, the Markov Decision Process model often used in robotics is problematic in our case because of the unpredictable behavior of the other agents in this inherently multi-agent scenario.
Before explaining our approach for tackling these difficulties, we briefly describe the key idea behind most common reinforcement learning algorithms. Typically, RL is performed in a sequence of consecutive rounds. At round $t$, the agent (a.k.a. planner) observes a state, $s_t \in S$, which represents the sensing state of the system, i.e., the environmental model as mentioned above. It then should decide on an action $a_t \in A$. After performing the action, the agent receives an immediate reward, $r_t \in \mathbb{R}$, and is moved to a new state, $s_{t+1}$. The goal of the planner is to maximize the cumulative reward (maybe up to a time horizon or a discounted sum of future rewards). To do so, the planner relies on a policy, $\pi : S \to A$, which maps a state into an action.
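The round-based interaction just described can be sketched as a simple rollout loop. Here `env` and `policy` are hypothetical placeholders (not part of the paper's system): `env.step` returns the next sensing state and an immediate reward, and `policy` maps a state to an action.

```python
def run_episode(env, policy, horizon):
    """Roll out one episode and return the cumulative reward sum of r_t."""
    state = env.reset()                   # initial sensing state s_1
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(state)            # a_t = pi(s_t)
        state, reward = env.step(action)  # environment returns s_{t+1}, r_t
        total_reward += reward
    return total_reward
```

The cumulative reward here is an undiscounted sum; a discounted variant would accumulate `gamma**t * reward` instead.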
Most of the RL algorithms rely in some way or another on the mathematically elegant model of a Markov Decision Process (MDP), pioneered by the work of Bellman (1956, 1971). The Markovian assumption is that the distribution of $s_{t+1}$ is fully determined given $s_t$ and $a_t$. This yields a closed form expression for the cumulative reward of a given policy in terms of the stationary distribution over states of the MDP. The stationary distribution of a policy can be expressed as a solution to a linear programming problem. This yields two families of algorithms: optimizing with respect to the primal problem, which is called policy search, and optimizing with respect to the dual problem, whose variables are called the value function, $V^\pi$. The value function determines the expected cumulative reward if we start the MDP from the initial state $s$, and from there on pick actions according to $\pi$. A related quantity is the state-action value function, $Q^\pi(s, a)$, which determines the cumulative reward if we start from state $s$, immediately pick action $a$, and from there on pick actions according to $\pi$. The $Q$ function gives rise to a crisp characterization of the optimal policy (using the so-called Bellman's equation), and in particular it shows that the optimal policy is a deterministic function from $S$ to $A$ (in fact, it is the greedy policy with respect to the optimal $Q$ function). In a sense, the key advantage of the MDP model is that it allows us to couple all the future into the present using the $Q$ function. That is, given that we are now in state $s$, the value of $Q^\pi(s, a)$ tells us the effect of performing action $a$ at the moment on the entire future. Therefore, the $Q$ function gives us a local measure of the quality of an action $a$, thus making the RL problem more similar to supervised learning. Most reinforcement learning algorithms approximate the $V$ function or the $Q$ function in one way or another. Value iteration algorithms, e.g. the $Q$ learning algorithm Watkins and Dayan (1992), rely on the fact that the $V$ and $Q$ functions of the optimal policy are fixed points of some operators derived from Bellman's equation. Actor-critic policy iteration algorithms aim to learn a policy in an iterative way, where at iteration $t$, the "critic" estimates $Q^{\pi_t}$ and, based on this, the "actor" improves the policy.
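As a concrete toy instance of the value iteration family, the tabular Q-learning update of Watkins and Dayan moves $Q(s, a)$ toward a one-step bootstrapped target. The table sizes and step-size values below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = r + gamma * Q[s_next].max()   # bootstrapped estimate of future reward
    Q[s, a] += alpha * (td_target - Q[s, a])  # move Q(s, a) toward the target
    return Q
```

Note that the update only makes sense when the transition to `s_next` depends solely on `(s, a)` — exactly the Markov assumption discussed next.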
Despite the mathematical elegance of MDPs and the convenience of switching to the $Q$ function representation, this approach has several limitations. First, as noted in Kober et al. (2013), usually in robotics we may only be able to find some approximate notion of a Markovian behaving state. Furthermore, the transition of states depends not only on the agent's action, but also on the actions of other players in the environment. For example, in the context of autonomous driving, while the dynamics of the autonomous vehicle is clearly Markovian, the next state depends on the behavior of the other road users (vehicles, pedestrians, cyclists), which is not necessarily Markovian. One possible solution to this problem is to use partially observed MDPs White III (1991), in which we still assume that there is a Markovian state, but we only get to see an observation that is distributed according to the hidden state. A more direct approach considers game theoretic generalizations of MDPs, for example the Stochastic Games framework. Indeed, some of the algorithms for MDPs were generalized to multi-agent games, for example the minimax-Q learning Littman (1994) or the Nash-Q learning Hu and Wellman (2003). Other approaches to Stochastic Games include explicit modeling of the other players, which goes back to Brown's fictitious play Brown (1951), and vanishing regret learning algorithms Hart and Mas-Colell (2000); Cesa-Bianchi and Lugosi (2006). See also Uther and Veloso (1997); Thrun (1995); Kearns and Singh (2002); Brafman and Tennenholtz (2003). As noted in Shoham et al. (2007), learning in a multi-agent setting is inherently more complex than in the single agent setting. Taken together, in the context of autonomous driving, given the unpredictable behavior of other road users, the MDP framework and its extensions are problematic at the very least and could yield impractical RL algorithms.
When it comes to categories of RL algorithms and how they handle the Markov assumption, we can divide them into four groups:

Algorithms that estimate the Value function $V$ or the $Q$ function – these are clearly defined solely in the context of an MDP.

Policy-based learning methods where, for example, the gradient of the policy is estimated using the likelihood ratio trick (cf. Aleksandrov et al. (1968); Sutton et al. (1999a); Peters and Schaal (2008)), and thereby the learning of $\pi_\theta$ is an iterative process where at each iteration the agent interacts with the environment while acting based on the current policy estimate. Policy gradient methods are derived using the Markov assumption, but we will see later that this is not necessarily required.

Algorithms that learn the dynamics of the process, namely, the function that takes $(s_t, a_t)$ and yields a distribution over the next state $s_{t+1}$. These are known as Model-based methods, and they clearly rely on the Markov assumption.

Behavior cloning (Imitation) methods. The Imitation approach simply requires a training set of examples of the form $(s_t, a_t)$, where $a_t$ is the action of the human driver (cf. Bojarski et al. (2016)). One can then use supervised learning to learn a policy $\pi$ such that $\pi(s_t) \approx a_t$. Clearly, no Markov assumption is involved in the process. The problem with Imitation is that different human drivers, and even the same human, are not deterministic in their policy choices. Hence, learning a function for which $\|\pi(s_t) - a_t\|$ is very small is often infeasible. And once we have small errors, they might accumulate over time and yield large errors.
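A minimal sketch of the Imitation approach, with a linear least-squares policy standing in for the deep network actually used for driving (the data shape and the linear model are simplifying assumptions for illustration):

```python
import numpy as np

def fit_imitation_policy(states, actions):
    """Supervised learning on (s_t, a_t) pairs recorded from a human driver.

    states: (n, d) array of state vectors; actions: (n,) array of scalar actions.
    Returns a weight vector w such that states @ w approximates the actions.
    """
    w, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return w
```

The cloned policy is then `pi(s) = s @ w`. With real driving data the residual $\|\pi(s_t) - a_t\|$ stays bounded away from zero, since human action choices are not a deterministic function of the state.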
Our first observation (detailed in sec. 2) is that Policy Gradient does not really require the Markov assumption, and furthermore, that some methods for reducing the variance of the gradient estimator (cf. Schulman et al. (2015)) do not require Markov assumptions either. Taken together, the RL algorithm can be initialized through Imitation and then updated using an iterative Policy Gradient approach without the Markov assumption.
The second contribution of the paper is a method for guaranteeing functional safety of the Driving Policy outcome. Given the very small probability $p$ of an accident, the corresponding reward of a trajectory leading to an accident should be much smaller than $-1/p$, thus generating a very high variance of the gradient estimator (see Lemma 3). Regardless of the means of reducing variance, as detailed in sec. 2, the variance of the gradient depends not only on the behavior of the reward but also on the horizon (time steps) required for making decisions. Our proposal for functional safety is twofold. First, we decompose the Policy function into a composition of a Policy for Desires (which is to be learned) and trajectory planning with hard constraints (which is not learned). The goal of Desires is to enable comfortable driving, while the hard constraints guarantee the safety of driving (detailed in sec. 4). Second, following the options mechanism of Sutton et al. (1999b), we employ a hierarchical temporal abstraction we call an "Option Graph" with a gating mechanism that significantly reduces the effective horizon and thereby reduces the variance of the gradient estimation even further (detailed in sec. 5). The Option Graph plays a similar role to "structured prediction" in supervised learning (e.g. Taskar et al. (2005)), thereby reducing sample complexity, while also playing a similar role to the LSTM Hochreiter and Schmidhuber (1997) gating mechanisms used in supervised deep networks. The use of options for skill reuse has also recently been studied in Tessler et al. (2016), where hierarchical deep Q networks for skill reuse were proposed. Finally, in sec. 6, we demonstrate the application of our algorithm on a double merge maneuver which is notoriously difficult to execute using conventional motion and path planning approaches.
Safe reinforcement learning was also studied recently in Ammar et al. (2015). Their approach involves first optimizing the expected reward (using policy gradient) and then applying a (Bregman) projection of the solution onto a set of linear constraints. This approach is different from ours. In particular, it assumes that the hard constraints on safety can be expressed as linear constraints on the parameter vector $\theta$. In our case, $\theta$ comprises the weights of a deep network, and the hard constraints involve a highly non-linear dependency on $\theta$. Therefore, convex-based approaches are not applicable to our problem.

2 Reinforcement Learning without Markovian Assumption
We begin by setting up the RL notation, geared towards deriving a Policy Gradient method with variance reduction while not making any Markov assumptions. We follow the REINFORCE Williams (1992) likelihood ratio trick and make a very modest contribution — more an observation than a novel derivation — that Markov assumptions on the environment are not required. Let $S$ be our state space, which contains the "environmental model" around the vehicle generated from interpreting sensory information, as well as any additional useful information such as the kinematics of moving objects from previous frames. We use the term "state space" in order not to introduce new terminology, but we actually mean a state vector in an agnostic sense, without the Markov assumptions — simply a collection of information around the vehicle generated at a particular time stamp. Let $A$ denote the action space, which at this point we keep abstract; later, in sec. 5, we will introduce a specific discrete action space for selecting "desires" tailored to the domain of autonomous driving. The hypothesis class of parametric stochastic policies is denoted by $\{\pi_\theta : \theta \in \Theta\}$, where for all $s \in S$, $\pi_\theta(\cdot \mid s)$ is a probability distribution over $A$, and we assume that $\pi_\theta(a \mid s)$ is differentiable w.r.t. $\theta$. Note that we have chosen this class of policies as part of an architectural design choice, i.e., that the (distribution over the) action $a_t$ at time $t$ is determined by the agnostic state $s_t$, and in particular, given the differentiability w.r.t. $\theta$, the policy is implemented by a deep layered network. In other words, we are not claiming that the optimal policy is necessarily contained in the hypothesis class, but that "good enough" policies can be modeled using a deep network whose input layer consists of $s_t$. The theory below does not depend on the nature of the hypothesis class, and other design choices can be substituted — for example, $\pi_\theta(a_t \mid s_1, \ldots, s_t)$ would correspond to the class of recurrent neural networks (RNNs).
Let $\bar{s} = ((s_1, a_1), \ldots, (s_T, a_T))$ define a sequence (trajectory) of state-action pairs over a time period $T$ sufficient for long-term planning, and let $\bar{s}_{t_1:t_2}$ denote the sub-trajectory from time stamp $t_1$ to time stamp $t_2$. Let $P_\theta(\bar{s})$ be the probability of the trajectory $\bar{s}$ when actions are chosen according to the policy $\pi_\theta$ and there are no other assumptions on the environment. The total reward associated with the trajectory is denoted by $R(\bar{s})$, which can be any function of $\bar{s}$. For example, $R(\bar{s})$ can be a function of the immediate rewards, $r_1, \ldots, r_T$, such as $\sum_{t=1}^T r_t$, or the discounted reward $\sum_{t=1}^T \gamma^t r_t$ for $\gamma \in (0, 1)$. But any reward function of $\bar{s}$ can be used, and therefore we keep it abstract. Finally, the learning problem is:

$\max_{\theta} \; \mathbb{E}_{\bar{s} \sim P_\theta}[R(\bar{s})] .$
The policy gradient theorem below follows the standard likelihood ratio trick (e.g., Aleksandrov et al. (1968); Glynn (1987)) and the formula is well known; in the proof (which follows that of Peters and Schaal (2008)), we make the observation that Markov assumptions on the environment are not required for the validity of the policy gradient estimator:
Theorem 1
Denote

$\hat{\nabla}_\theta = R(\bar{s}) \sum_{t=1}^T \nabla_\theta \log \pi_\theta(a_t \mid s_t) .$  (1)

Then, $\mathbb{E}_{\bar{s} \sim P_\theta}[\hat{\nabla}_\theta] = \nabla_\theta \, \mathbb{E}_{\bar{s} \sim P_\theta}[R(\bar{s})]$.
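The estimator of eqn. 1 can be checked numerically on a deliberately tiny, hypothetical problem — a single-step "trajectory" with two actions and a softmax policy — where the exact gradient of the expected reward is available in closed form:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_estimate(theta, n_samples, rng):
    """Average of R(s) * grad_theta log pi_theta(a|s) over sampled trajectories."""
    probs = softmax(theta)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        a = rng.choice(2, p=probs)
        r = 1.0 if a == 0 else 0.0          # reward depends only on the action
        grad_log_pi = np.eye(2)[a] - probs  # gradient of log softmax at action a
        grad += r * grad_log_pi
    return grad / n_samples
```

For $\theta = (0, 0)$ the expected reward equals $\pi_\theta(0) = 1/2$ and its exact gradient is $(1/4, -1/4)$; the Monte Carlo average converges to it without any Markov assumption on how the reward is produced.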
The policy gradient theorem shows that it is possible to obtain an unbiased estimate of the gradient of the expected total reward Williams (1992); Sutton et al. (1999a); Greensmith et al. (2004), thereby enabling the use of noisy gradient estimates in a stochastic gradient ascent/descent (SGD) algorithm for training a deep network representing the policy $\pi_\theta$. Unfortunately, the variance of the gradient estimator scales unfavorably with the time horizon $T$; moreover, due to the very low probability of critical "corner" cases, such as the probability $p$ of an accident, the reward on such trajectories must satisfy $r \gg 1/p$ (where $-r$ is the reward of an accident trajectory), and in turn the variance of the random variable $R(\bar{s})$ grows with $p r^2$, i.e., much larger than $r$ (see sec. 4 and Lemma 3). High variance of the gradient has a detrimental effect on the convergence rate of SGD Moulines and Bach (2011); Shalev-Shwartz and Zhang (2013); Johnson and Zhang (2013); Shalev-Shwartz (2016); Needell et al. (2014), and given the nature of our problem domain, with extremely low-probability corner cases, the effect of an extremely high variance could bring about bad policy solutions. We approach the variance problem along three thrusts. First, we use baseline subtraction methods (which go back to Williams (1992)) for variance reduction. Second, we deal with the variance due to "corner" cases by decomposing the policy into a learnable part and a non-learnable part, where the latter induces hard constraints on functional safety. Last, we introduce a temporal abstraction method with a gating mechanism we call an "option graph" to ameliorate the effect of the time horizon on the variance. In Section 3 we focus on baseline subtraction, derive the optimal baseline (following Peters and Schaal (2008)) and generalize the recent results of Schulman et al. (2015) to a non-Markovian setting. In the next section we deal with variance due to "corner" cases.
3 Variance Reduction
Consider again the policy gradient estimate introduced in eqn. 1. The baseline subtraction method reduces the variance of $\hat{\nabla}_\theta$ by subtracting a scalar $b_t$ from $R(\bar{s})$:

$\hat{\nabla}_\theta = \sum_{t=1}^T \big( R(\bar{s}) - b_t \big) \nabla_\theta \log \pi_\theta(a_t \mid s_t) .$

The lemma below describes the conditions on $b_t$ for the baseline subtraction to work:
Lemma 1
For every $t$ and $\theta$, let $b_t$ be a scalar that does not depend on $a_t$, but may depend on $\bar{s}_{1:t-1}$ and on $\theta$. Then,

$\mathbb{E}_{\bar{s} \sim P_\theta} \Big[ \sum_{t=1}^T \big( R(\bar{s}) - b_t \big) \nabla_\theta \log \pi_\theta(a_t \mid s_t) \Big] = \nabla_\theta \, \mathbb{E}_{\bar{s} \sim P_\theta}[R(\bar{s})] .$
The optimal baseline, the one that reduces the variance the most, can be derived following Peters and Schaal (2008). Denote $g_t = \nabla_\theta \log \pi_\theta(a_t \mid s_t)$ and $\bar{g} = \sum_{t=1}^T g_t$. Since the baseline does not change the expectation of the estimator, minimizing its variance amounts to minimizing

$\mathbb{E}_{\bar{s} \sim P_\theta} \Big[ \big\| \sum_{t=1}^T \big( R(\bar{s}) - b_t \big) g_t \big\|^2 \Big] .$

Taking the derivative w.r.t. $b_t$ and comparing to zero we obtain the following equation for the optimal baseline:

$\sum_{t'=1}^T \mathbb{E}\big[ g_t^\top g_{t'} \big] \, b_{t'} = \mathbb{E}\big[ R(\bar{s}) \, g_t^\top \bar{g} \big] .$

This can be written as $A b = v$, where $A$ is a $T \times T$ matrix with $A_{t,t'} = \mathbb{E}[g_t^\top g_{t'}]$ and $v$ is a $T$-dimensional vector with $v_t = \mathbb{E}[R(\bar{s}) \, g_t^\top \bar{g}]$. We can estimate $A$ and $v$ from a mini-batch of episodes and then set $b$ to be $A^{-1} v$. A more efficient approach is to think about the problem of finding the baseline as an online linear regression problem and have a separate process that updates $b$ in an online manner Shalev-Shwartz (2011). Many policy gradient variants Schulman et al. (2015) replace $R(\bar{s})$ with the $Q$-function, which assumes the Markovian setting. The following lemma gives a non-Markovian analogue of the $Q$ function.
Lemma 2
Define

$Q_t(\bar{s}_{1:t}) = \mathbb{E}_{\bar{s} \sim P_\theta} \big[ R(\bar{s}) \,\big|\, \bar{s}_{1:t} \big] .$  (2)

Then, for every $t$, replacing $R(\bar{s})$ with $Q_t(\bar{s}_{1:t})$ in the $t$'th summand of the gradient estimator does not change its expectation.
Observe that the following analogue of the value function for the non-Markovian setting,

$V_t(\bar{s}_{1:t-1}) = \mathbb{E}_{\bar{s} \sim P_\theta} \big[ R(\bar{s}) \,\big|\, \bar{s}_{1:t-1} \big] ,$  (3)

satisfies the conditions of Lemma 1. Therefore, we can also replace $R(\bar{s})$ with an analogue of the so-called Advantage function, $A_t = Q_t(\bar{s}_{1:t}) - V_t(\bar{s}_{1:t-1})$. The advantage function, and generalizations of it, are often used in actor-critic policy gradient implementations (see for example Schulman et al. (2015)). In the non-Markovian setting considered in this paper, the Advantage function is more complicated to estimate, and therefore, in our experiments, we use estimators that involve the term $R(\bar{s}) - b_t$, where $b_t$ is estimated using online linear regression.
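The effect of baseline subtraction can be seen on a one-step toy example (the rewards and the two-action softmax policy below are hypothetical choices for illustration): shifting the reward by a constant leaves the mean of the per-sample gradient term unchanged but can shrink its variance dramatically.

```python
import numpy as np

def grad_samples(n, baseline, rng):
    """Samples of (R - b) * d/dtheta_0 log pi_theta(a) for a 2-action softmax
    policy at theta = 0, where the reward carries a large common offset."""
    probs = np.array([0.5, 0.5])
    out = np.empty(n)
    for i in range(n):
        a = rng.choice(2, p=probs)
        r = 10.0 if a == 0 else 8.0                     # large shared offset
        grad_log_pi = (1.0 if a == 0 else 0.0) - probs[0]
        out[i] = (r - baseline) * grad_log_pi
    return out
```

With `baseline=0.0` the samples swing between +5 and −4 around a mean of 0.5; with `baseline=9.0` (the mean reward) every sample equals 0.5 exactly, so the estimator keeps its expectation with far less variance.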
4 Safe Reinforcement Learning
In the previous section we have shown how to optimize the reinforcement learning objective by stochastic policy gradient ascent. Recall that we have defined the objective to be $\mathbb{E}_{\bar{s} \sim P_\theta}[R(\bar{s})]$, that is, the expected reward. Objectives that involve an expectation are common in machine learning. We now argue that this objective poses a functional safety problem.
Consider a reward function for which $R(\bar{s}) = -r$ for trajectories that represent a rare "corner" event which we would like to avoid, such as an accident, and $R(\bar{s}) \in [-1, 1]$ for the rest of the trajectories. For concreteness, suppose that our goal is to learn to perform an overtake maneuver. Normally, in an accident-free trajectory, $R(\bar{s})$ would reward successful, smooth overtakes and penalize staying in lane without completing the overtake — hence the range $[-1, 1]$. If a sequence $\bar{s}$ represents an accident, we would like the reward $-r$ to provide a sufficiently high penalty to discourage such occurrences. The question is, what should be the value of $r$ to ensure accident-free driving?
Observe that the effect of an accident on $\mathbb{E}[R(\bar{s})]$ is the additive term $-p r$, where $p$ is the probability mass of trajectories with an accident event. If this term is negligible, then the learner might prefer a policy that performs an accident (or, in general, adopts a reckless driving policy) in order to fulfill the overtake maneuver successfully more often than would a more defensive policy, at the expense of leaving some overtake maneuvers incomplete. In other words, if we want to make sure that the probability of accidents is at most $p$, then we must set $r \gg 1/p$. Since we would like $p$ to be extremely small, we obtain that $r$ must be extremely large. Recall that in policy gradient we estimate the gradient of $\mathbb{E}[R(\bar{s})]$. The following lemma shows that the variance of the random variable $R(\bar{s})$ grows with $p r^2$, which is larger than $r$ for $r > 1/p$. Hence, even estimating the objective is difficult, let alone its gradient.
Lemma 3
Let $\pi_\theta$ be a policy and let $p, r$ be scalars such that with probability $p$ we have $R(\bar{s}) = -r$ and with probability $1 - p$ we have $R(\bar{s}) \in [-1, 1]$. Then,

$\mathrm{Var}[R(\bar{s})] \;\ge\; p r^2 - \big( p r + (1 - p) \big)^2 \;\approx\; p r^2 ,$

where the last approximation holds for the case $r \ge 1/p$.
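The variance blow-up is easy to verify exactly for the two-outcome reward distribution; the values of $p$ and $r$ below are hypothetical, with the benign reward fixed at 0 for simplicity:

```python
def reward_variance(p, r, benign_reward=0.0):
    """Exact variance of R when R = -r w.p. p and R = benign_reward w.p. 1 - p."""
    mean = p * (-r) + (1 - p) * benign_reward
    second_moment = p * r ** 2 + (1 - p) * benign_reward ** 2
    return second_moment - mean ** 2
```

For example, with `p = 1e-4` and `r = 1/p = 1e4`, the variance is $p r^2 - (p r)^2 = 10^4 - 1 = 9999 \approx p r^2$, i.e., on the order of $r$ itself, so a Monte Carlo estimate of the objective (let alone of its gradient) requires enormous sample sizes.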
The above discussion shows that an objective of the form $\mathbb{E}[R(\bar{s})]$ cannot ensure functional safety without causing a serious variance problem. The baseline subtraction method for variance reduction would not offer a sufficient remedy, because we would merely shift the problem from the very high variance of $R(\bar{s})$ to an equally high variance of the baseline constants, whose estimation would suffer the same numerical instabilities. Moreover, if the probability of an accident is $p$ then on average we should sample at least $1/p$ sequences before observing an accident event. This immediately implies a lower bound of $1/p$ samples of sequences for any learning algorithm that aims at optimizing $\mathbb{E}[R(\bar{s})]$. We therefore face a fundamental problem whose solution must be found in a new architectural design and formalism of the system rather than through numerical conditioning tricks.
Our approach is based on the notion that hard constraints should be injected outside of the learning framework. In other words, we decompose the policy function into a learnable part and a non-learnable part. Formally, we structure the policy function as a composition $\pi_\theta = \pi^{(T)} \circ \pi^{(D)}_\theta$, where $\pi^{(D)}_\theta$ maps the (agnostic) state space into a set of Desires, while $\pi^{(T)}$ maps the Desires into a trajectory (which determines how the car should move in a short range). The function $\pi^{(D)}_\theta$ is responsible for the comfort of driving and for making strategic decisions such as which other cars should be overtaken or given way, what the desired position of the host car within its lane is, and so forth. The mapping from state to Desires is a policy that is learned from experience by maximizing an expected reward. The Desires produced by $\pi^{(D)}_\theta$ are translated into a cost function over driving trajectories. The function $\pi^{(T)}$, which is not learned, is implemented by finding a trajectory that minimizes the aforementioned cost subject to hard constraints on functional safety. This decomposition allows us to always ensure functional safety while at the same time enjoying comfortable driving most of the time.
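Structurally, the decomposition amounts to a plain function composition. In the sketch below, `learned_desires` and `plan_trajectory` are hypothetical placeholders for the learned network and the non-learned constrained planner:

```python
def make_policy(learned_desires, plan_trajectory):
    """Compose pi: state -> trajectory from a learned Desires map and a planner.

    learned_desires: state -> desires (learned; responsible for comfort/strategy)
    plan_trajectory: (state, desires) -> trajectory (not learned; must enforce
                     the hard functional-safety constraints)
    """
    def policy(state):
        desires = learned_desires(state)
        return plan_trajectory(state, desires)
    return policy
```

Only `learned_desires` is touched by policy gradient updates; safety never depends on the learned weights, because `plan_trajectory` enforces the hard constraints by construction.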
To illustrate the idea, let us consider a challenging driving scenario, which we call the double merge scenario (see Figure 1 for an illustration). In a double merge, vehicles approach the merge area from both the left and right sides and, from each side, a vehicle can decide whether or not to merge into the other side. Successfully executing a double merge in busy traffic requires significant negotiation skills and experience, and it is notoriously difficult to execute with a heuristic or brute-force approach that enumerates all possible trajectories that could be taken by all agents in the scene.
We begin by defining the set of Desires appropriate for the double merge maneuver. Let $\mathcal{D}$ be the Cartesian product of the following sets:

$\mathcal{D} = [0, v_{\max}] \times L \times \{g, t, o\}^n ,$

where $[0, v_{\max}]$ is the range of the desired target speed of the host vehicle, $L$ is the set of desired lateral positions in lane units, where whole numbers designate a lane center and fractional numbers designate lane boundaries, and $\{g, t, o\}$ are classification labels assigned to each of the $n$ other vehicles. Each of the other vehicles is assigned 'g' if the host vehicle is to "give way" to it, 't' to "take way", or 'o' to maintain an offset distance to it.
Next we describe how to translate a set of Desires, $D \in \mathcal{D}$, into a cost function over driving trajectories. A driving trajectory is represented by $(x_1, y_1), \ldots, (x_k, y_k)$, where $(x_i, y_i)$ is the (lateral, longitudinal) location of the car (in egocentric units) at time $\tau \cdot i$. In our experiments, we fix the sampling interval $\tau$ and the number of trajectory points $k$. The cost assigned to a trajectory is a weighted sum of individual costs assigned to the desired speed, the desired lateral position, and the label assigned to each of the other vehicles. Each of the individual costs is described below.
Given a desired speed $v \in [0, v_{\max}]$, the cost of a trajectory associated with speed is

$\sum_{i=2}^{k} \big( v - \| (x_i, y_i) - (x_{i-1}, y_{i-1}) \| / \tau \big)^2 .$

Given a desired lateral position $l \in L$, the cost associated with the desired lateral position is $\sum_{i=1}^{k} \mathrm{dist}(x_i, y_i, l)$, where $\mathrm{dist}(x, y, l)$ is the distance from the point $(x, y)$ to the lane position $l$. As to the cost due to other vehicles, for any other vehicle let $(x'_1, y'_1), \ldots, (x'_k, y'_k)$ be its predicted trajectory in the host vehicle's egocentric units, and let $i$ be the earliest point for which there exists $j$ such that the distance between $(x_i, y_i)$ and $(x'_j, y'_j)$ is small (if there is no such point we let $i = \infty$). If the car is classified as "give-way" we would like that $\tau i > \tau j + \epsilon$, for a safety margin of $\epsilon$ seconds, meaning that we will arrive at the trajectory intersection point at least $\epsilon$ seconds after the other vehicle arrives at that point. A possible formula for translating the above constraint into a cost is $[\tau (j - i) + \epsilon]_+$. Likewise, if the car is classified as "take-way" we would like that $\tau j > \tau i + \epsilon$, which is translated to the cost $[\tau (i - j) + \epsilon]_+$. Finally, if the car is classified as "offset" we would like $i$ to be $\infty$ (meaning, the trajectories will not intersect). This can be translated to a cost by penalizing the distance between the trajectories. By assigning a weight to each of the aforementioned costs we obtain a single objective function for the trajectory planner, $\pi^{(T)}$. Naturally, we can also add to the objective a cost that encourages smooth driving. More importantly, we add hard constraints that ensure functional safety of the trajectory. For example, we do not allow $(x_i, y_i)$ to be off the roadway, and we do not allow $(x_i, y_i)$ to be close to $(x'_j, y'_j)$ for any trajectory point of any other vehicle if $|i - j|$ is small.
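As an illustration, the first two cost terms can be written down directly. This is a simplified sketch in which the lane position `l` is assumed to be a constant lateral coordinate, so the distance to the lane reduces to `|x - l|` (an assumption made here for brevity; real lane geometry is curved):

```python
import numpy as np

def speed_cost(traj, v, tau):
    """Sum over i of (v - ||(x_i, y_i) - (x_{i-1}, y_{i-1})|| / tau)^2.

    traj: (k, 2) array of (lateral, longitudinal) points sampled every tau sec.
    """
    step_speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1) / tau
    return float(np.sum((v - step_speeds) ** 2))

def lateral_cost(traj, l):
    """Sum over i of the lateral distance |x_i - l| to the desired lane position."""
    return float(np.sum(np.abs(traj[:, 0] - l)))
```

The planner would minimize a weighted sum of these terms (plus the give-way/take-way/offset costs) subject to the hard safety constraints, which are deliberately not part of the cost.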
To summarize, we decompose the policy $\pi_\theta$ into a mapping from the agnostic state to a set of Desires and a mapping from the Desires to an actual trajectory. The latter mapping is not learned; it is implemented by solving an optimization problem whose cost depends on the Desires and whose hard constraints guarantee the functional safety of the policy. It remains to explain how we learn the mapping from the agnostic state to the Desires, which is the topic of the next section.
5 Temporal Abstraction
In the previous section we injected prior knowledge in order to break down the problem in a way that ensures functional safety. We saw that through RL alone, a system complying with functional safety would suffer a very high and unwieldy variance of the reward, and that this can be fixed by splitting the problem formulation into a mapping from the (agnostic) state space to Desires, learned using policy gradient iterations, followed by a mapping from Desires to the actual trajectory, which does not involve learning. It is necessary, however, to inject even more prior knowledge into the problem and decompose the decision making into semantically meaningful components, and this for two reasons. First, the size of $\mathcal{D}$ might be quite large, and even continuous (in the double-merge scenario described in the previous section, $\mathcal{D} = [0, v_{\max}] \times L \times \{g, t, o\}^n$). Second, the gradient estimator involves the term $R(\bar{s}) \sum_{t=1}^T \nabla_\theta \log \pi_\theta(a_t \mid s_t)$, and, as mentioned above, the variance grows with the time horizon $T$ Peters and Schaal (2008). In our case, the value of $T$ is roughly 250,¹ which is high enough to create significant variance.

¹ Suppose we work at 10 Hz, the merge area is 100 meters, we start the preparation for the merge 300 meters before it, and we drive at 16 meters per second (about 60 km per hour). In this case, the value of $T$ for an episode is roughly 250.
Our approach follows the options framework of Sutton et al. (1999c). An options graph represents a hierarchical set of decisions organized as a Directed Acyclic Graph (DAG). There is a special node called the "root" of the graph; the root node is the only node that has no incoming edges. The decision process traverses the graph, starting from the root node, until it reaches a "leaf" node, namely, a node that has no outgoing edges. Each internal node should implement a policy function that picks a child among its available children. There is a predefined mapping from the set of traversals over the options graph to the set of Desires, $\mathcal{D}$. In other words, a traversal on the options graph is automatically translated into a desire in $\mathcal{D}$. Given a node $v$ in the graph, we denote by $\theta_v$ the parameter vector that specifies the policy of picking a child of $v$. Let $\theta$ be the concatenation of all the $\theta_v$; then $\pi^{(D)}_\theta$ is defined by traversing from the root of the graph to a leaf, while at each node $v$ using the policy defined by $\theta_v$ to pick a child node.
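A traversal of such a graph can be sketched in a few lines. The graph encoding and the `pick_child` callable (standing in for the per-node policies parameterized by $\theta_v$) are hypothetical:

```python
def traverse(graph, pick_child, root="root"):
    """Walk from the root to a leaf, letting each node's policy pick a child.

    graph: dict mapping each node to its list of children ([] for a leaf).
    pick_child: (node, children) -> chosen child, i.e. the node's policy.
    Returns the traversal path, which is then translated into a desire in D.
    """
    node = root
    path = [node]
    while graph[node]:
        node = pick_child(node, graph[node])
        path.append(node)
    return path
```

In training, `pick_child` would sample from a softmax parameterized by $\theta_v$, and the log-probabilities accumulated along the path would enter the policy gradient estimator.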
A possible options graph for the double merge scenario is depicted in Figure 2. The root node first decides whether we are within the merging area or whether we are approaching it and need to prepare for it. In both cases, we need to decide whether to change lane (to the left or right side) or to stay in lane. If we have decided to change lane, we need to decide whether we can go on and perform the lane change maneuver (the "go" node). If it is not possible, we can try to "push" our way (by aiming at being on the lane mark) or "stay" in the same lane. This determines the desired lateral position in a natural way — for example, if we change from one lane to an adjacent one, "go" sets the desired lateral position to the center of the target lane, "stay" sets it to the center of the current lane, and "push" sets it to the boundary between them. Next, we should decide whether to keep the "same" speed, "accelerate" or "decelerate". Finally, we enter a "chain like" structure that goes over all the vehicles and sets their semantic meaning to a value in $\{g, t, o\}$. This sets the Desires for the semantic meaning of vehicles in an obvious way. Note that we share the parameters of all the nodes in this chain (similarly to Recurrent Neural Networks).
An immediate benefit of the options graph is the interpretability of the results. Another immediate benefit is that we rely on the decomposable structure of the set $\mathcal{D}$, and therefore the policy at each node should choose between only a small number of possibilities. Finally, the structure allows us to reduce the variance of the policy gradient estimator. We next elaborate on this last point.
As mentioned previously, the length of an episode in the double merge scenario is roughly $T = 250$ steps. This number comes from the fact that, on one hand, we would like to have enough time to see the consequences of our actions (e.g., if we decided to change lane as a preparation for the merge, we will see the benefit only after a successful completion of the merge), while on the other hand, due to the dynamics of driving, we must make decisions at a fast enough frequency (10 Hz in our case). The options graph enables us to decrease the effective value of $T$ in two complementary ways. First, given higher level decisions, we can define a reward for lower level decisions while taking into account much shorter episodes. For example, when we have already picked the "lane change" and "go" nodes, we can learn the policy for assigning semantic meaning to vehicles by looking at episodes of 2-3 seconds (meaning that $T$ becomes 20-30 instead of 250). Second, for high level decisions (such as whether to change lane or to stay in the same lane), we do not need to make decisions every 0.1 seconds. Instead, we can either make decisions at a lower frequency (e.g., every second), or implement an "option termination" function, and then the gradient is calculated only after every termination of the option. In both cases, the effective value of $T$ is again an order of magnitude smaller than its original value. All in all, the estimator at every node depends on a value of $T$ which is an order of magnitude smaller than the original 250 steps, which immediately translates to a smaller variance Mann et al. (2015).
To summarize, we introduced the options graph as a way to break down the problem into semantically meaningful components, where the Desires are defined through a traversal over a DAG. At each step along the way, the learner maps the state space to a small subset of Desires, thereby effectively decreasing the time horizon to much smaller sequences while at the same time reducing the output space of the learning problem. The aggregated effect is a reduction in both the variance and the sample complexity of the learning problem.
6 Experimental Demonstration
The purpose of this section is to give a sense of how a challenging negotiation scenario is handled by our framework. The experiment involves proprietary software modules (to produce the sensing state, the simulation, and the trajectory planner) and data (for the learning-by-imitation part). It should therefore be regarded as a demonstration rather than a reproducible experiment. We leave to future work the task of conducting a reproducible experiment, with a comparison to other approaches.
We experimented with the double-merge scenario described in Section 4 (see again Figure 1). This is a challenging negotiation task: cars from both sides have a strong incentive to merge, and failure to merge in time leads to ending up on the wrong side of the intersection. In addition, the reward associated with a trajectory needs to account not only for the success or failure of the merge operation but also for the smoothness of the trajectory control and the comfort level of all other vehicles in the scene. In other words, the goal of the RL learner is not only to succeed in the merge maneuver but also to accomplish it smoothly and without disrupting the driving patterns of other vehicles.
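A trajectory-level reward of this kind could be sketched as a weighted combination of the three terms described above; the weights and the helper quantities here are illustrative assumptions, not the values used in the experiment:

```python
# Hedged sketch of a double-merge trajectory reward. The weights 0.1 and
# 0.5 and the input quantities are hypothetical, for illustration only.
def trajectory_reward(merged_in_time, accelerations, other_braking):
    # Success or failure of the merge dominates the reward.
    success_term = 1.0 if merged_in_time else -1.0
    # Penalize jerky control of the host vehicle (smoothness).
    smoothness_penalty = 0.1 * sum(a * a for a in accelerations)
    # Penalize braking we forced on other vehicles (their comfort).
    comfort_penalty = 0.5 * sum(b * b for b in other_braking)
    return success_term - smoothness_penalty - comfort_penalty

r = trajectory_reward(True, [0.2, -0.1, 0.0], [0.0, 0.3])
```

A successful but uncomfortable merge thus scores below a successful smooth one, matching the stated goal of the RL learner.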
We relied on the following sensing information. The static part of the environment is represented as the geometry of the lanes and the free space (all in egocentric units). Each agent also observes the location, velocity, and heading of every other car that is within 100 meters of it. Finally, a fixed distance before the merging area, the agent receives the side it should be on after the merge ('left' or 'right'). For the trajectory planner we used an optimization algorithm based on dynamic programming. We used the option graph described in Figure 2. Recall that we should define a policy function for every node of our option graph. We initialized the policies at all nodes using imitation learning. Each policy function, associated with a node of the option graph, is represented by a neural network with three fully connected hidden layers. Note that data collected from a human driver only contains the final maneuver; we do not observe a traversal on the option graph. For some of the nodes, we can infer labels from the data in a relatively straightforward manner. For example, the classification of vehicles as "give-way", "take-way", and "offset" can be inferred from the future position of the host vehicle relative to the other vehicles. For the remaining nodes we used implicit supervision: the option graph induces a probability over future trajectories, and we train it by maximizing the (log) probability of the trajectory chosen by the human driver. Fortunately, deep learning is quite good at dealing with hidden variables, and the imitation process succeeded in learning a reasonable initialization point for the policy. See the supplementary videos [1].
For the policy gradient updates we used a simulator (initialized using imitation learning) with a self-play enhancement. Namely, we partitioned the set of agents into two sets. The first set was used as reference players while the second was used for the policy gradient learning process. When the learning process converged, the roles were switched, and the newly learned agents served as the reference players. This alternating process of switching the roles of the two sets continued for a number of rounds. See [1] for the resulting videos.
Acknowledgements
We thank Moritz Werling, Daniel Althoff, and Andreas Lawitz for helpful discussions.
References
 [1] Supplementary video files. https://www.dropbox.com/s/136nbndtdyehtgi/doubleMerge.m4v?dl=0.
 Aleksandrov et al. [1968] VM Aleksandrov, VI Sysoev, and VV Shemeneva. Stochastic optimization of systems. Izv. Akad. Nauk SSSR, Tekh. Kibernetika, pages 14–19, 1968.
 Ammar et al. [2015] Haitham Bou Ammar, Rasul Tutunov, and Eric Eaton. Safe policy search for lifelong reinforcement learning with sublinear regret. The Journal of Machine Learning Research (JMLR), 2015.
 Bellman [1956] Richard Bellman. Dynamic programming and Lagrange multipliers. Proceedings of the National Academy of Sciences of the United States of America, 42(10):767, 1956.
 Bellman [1971] Richard Bellman. Introduction to the mathematical theory of control processes, volume 2. IMA, 1971.
 Bertsekas [1995] Dimitri P Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific Belmont, MA, 1995.
 Bojarski et al. [2016] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
 Brafman and Tennenholtz [2003] Ronen I Brafman and Moshe Tennenholtz. R-max: a general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213–231, 2003.
 Brown [1951] George W Brown. Iterative solution of games by fictitious play. Activity analysis of production and allocation, 13(1):374–376, 1951.
 Cesa-Bianchi and Lugosi [2006] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
 Glynn [1987] Peter W Glynn. Likelihood ratio gradient estimation: an overview. In Proceedings of the 19th conference on Winter simulation, pages 366–375. ACM, 1987.
 Greensmith et al. [2004] Evan Greensmith, Peter L Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471–1530, 2004.
 Hart and Mas-Colell [2000] S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5), 2000.
 Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
 Hu and Wellman [2003] Junling Hu and Michael P Wellman. Nash Q-learning for general-sum stochastic games. The Journal of Machine Learning Research, 4:1039–1069, 2003.

 Johnson and Zhang [2013] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
 Kaelbling et al. [1996] Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, pages 237–285, 1996.
 Kearns and Singh [2002] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
 Kober et al. [2013] Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, page 0278364913495721, 2013.
 Littman [1994] Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the eleventh international conference on machine learning, volume 157, pages 157–163, 1994.
 Mann et al. [2015] Timothy A. Mann, Doina Precup, and Shie Mannor. Approximate value iteration with temporally extended actions. Journal of Artificial Intelligence Research, 2015.

 Moulines and Bach [2011] Eric Moulines and Francis R Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems, pages 451–459, 2011.
 Naughton [2015] Keith Naughton. Human drivers are bumping into driverless cars and exposing a key flaw. http://www.autonews.com/article/20151218/OEM11/151219874/humandriversarebumpingintodriverlesscarsandexposingakeyflaw, 2015.
 Needell et al. [2014] Deanna Needell, Rachel Ward, and Nati Srebro. Stochastic gradient descent, weighted sampling, and the randomized kaczmarz algorithm. In Advances in Neural Information Processing Systems, pages 1017–1025, 2014.
 Peters and Schaal [2008] Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural networks, 21(4):682–697, 2008.
 Schulman et al. [2015] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
 Shalev-Shwartz [2011] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
 Shalev-Shwartz [2016] Shai Shalev-Shwartz. SDCA without duality, regularization, and individual convexity. ICML, 2016.
 Shalev-Shwartz and Zhang [2013] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567–599, 2013.
 Shoham et al. [2007] Yoav Shoham, Rob Powers, and Trond Grenager. If multi-agent learning is the answer, what is the question? Artificial Intelligence, 171(7):365–377, 2007.
 Sutton and Barto [1998] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
 Sutton et al. [1999a] Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057–1063, 1999a.
 Sutton et al. [1999b] Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999b.
 Szepesvári [2010] Csaba Szepesvári. Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 4(1):1–103, 2010. URL http://www.ualberta.ca/~szepesva/RLBook.html.
 Taskar et al. [2005] Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd international conference on Machine learning, pages 896–903. ACM, 2005.
 Tessler et al. [2016] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in Minecraft. arXiv preprint arXiv:1604.07255, 2016.
 Thrun [1995] S. Thrun. Learning to play the game of chess. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems (NIPS) 7, Cambridge, MA, 1995. MIT Press.
 Uther and Veloso [1997] William Uther and Manuela Veloso. Adversarial reinforcement learning. Technical report, Carnegie Mellon University, 1997. Unpublished.
 Watkins and Dayan [1992] Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992.
 White III [1991] Chelsea C White III. A survey of solution techniques for the partially observed Markov decision process. Annals of Operations Research, 32(1):215–230, 1991.
 Williams [1992] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
Appendix A Proofs
A.1 Proof of Theorem 1
The policy $\pi_\theta$ induces a probability distribution over sequences as follows: given a sequence $\bar{s} = (s_1, a_1, \ldots, s_T, a_T)$, we have
$$P_\theta(\bar{s}) = \prod_{t=1}^{T} P(s_t \mid s_1, a_1, \ldots, s_{t-1}, a_{t-1}) \, \pi_\theta(a_t \mid s_t).$$
Note that in deriving the above expression we make no assumptions on $P$. This stands in contrast to Markov Decision Processes, in which it is assumed that $s_t$ is independent of the past given $s_{t-1}$ and $a_{t-1}$. The only assumption we make is that the (random) choice of $a_t$ is solely based on $s_t$, which comes from our architectural design choice of the hypothesis space of policy functions. The remainder of the proof employs the standard likelihood ratio trick (e.g., Aleksandrov et al. [1968], Glynn [1987]) with the observation that, since $P$ does not depend on the parameters $\theta$, it gets eliminated in the policy gradient. This is detailed below for the sake of completeness:
$$\begin{aligned}
\nabla_\theta \mathop{\mathbb{E}}_{\bar{s} \sim P_\theta}[R(\bar{s})]
&= \nabla_\theta \sum_{\bar{s}} P_\theta(\bar{s}) \, R(\bar{s}) && \text{(definition of expectation)} \\
&= \sum_{\bar{s}} \nabla_\theta P_\theta(\bar{s}) \, R(\bar{s}) && \text{(linearity of derivation)} \\
&= \sum_{\bar{s}} P_\theta(\bar{s}) \, \nabla_\theta \log P_\theta(\bar{s}) \, R(\bar{s}) && \text{(derivative of the log)} \\
&= \sum_{\bar{s}} P_\theta(\bar{s}) \left( \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right) R(\bar{s}) && \text{(linearity of derivative; the $P$ terms do not depend on $\theta$)} \\
&= \mathop{\mathbb{E}}_{\bar{s} \sim P_\theta}\left[ R(\bar{s}) \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right].
\end{aligned}$$
This concludes our proof.
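The likelihood ratio identity used above, $\nabla_\theta \mathbb{E}[R] = \mathbb{E}[R \, \nabla_\theta \log P_\theta]$, can be checked numerically on a toy distribution (a softmax over three outcomes standing in for $P_\theta$ over trajectories; the parameter and reward values below are arbitrary):

```python
import math

theta = [0.3, -1.2, 0.7]   # arbitrary softmax parameters
R = [1.0, 0.0, 2.5]        # arbitrary per-outcome rewards

def probs(th):
    z = [math.exp(t) for t in th]
    s = sum(z)
    return [v / s for v in z]

p = probs(theta)

# Left side: central finite differences of E[R] with respect to theta.
eps = 1e-6
lhs = []
for i in range(3):
    th_plus, th_minus = list(theta), list(theta)
    th_plus[i] += eps
    th_minus[i] -= eps
    e_plus = sum(r * q for r, q in zip(R, probs(th_plus)))
    e_minus = sum(r * q for r, q in zip(R, probs(th_minus)))
    lhs.append((e_plus - e_minus) / (2 * eps))

# Right side: E[R * grad log p], computed exactly. For a softmax,
# d log p_k / d theta_i = (1 if i == k else 0) - p_i.
rhs = [sum(R[k] * p[k] * ((1.0 if i == k else 0.0) - p[i]) for k in range(3))
       for i in range(3)]

print(all(abs(a - b) < 1e-5 for a, b in zip(lhs, rhs)))  # prints True
```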
A.2 Proof of Lemma 1
By Lemma 4, the term in the parentheses is , which concludes our proof.
A.3 Proof of Lemma 2
A.4 Proof of Lemma 3
We have
and .
The claim follows since .
A.5 Technical Lemmas
Lemma 4
Suppose that is a function such that for every and we have . Then,
Proof