Active Altruism Learning and Information Sufficiency for Autonomous Driving

10/09/2021
by Jack Geary, et al.

Safe interaction between vehicles requires the ability to choose actions that reveal the preferences of the other vehicles. Since exploratory actions often do not directly contribute to their objective, an interactive vehicle must also be able to identify when it is appropriate to perform them. In this work we demonstrate how Active Learning methods can be used to incentivise an autonomous vehicle (AV) to choose actions that reveal information about the altruistic inclinations of another vehicle. We identify a property, Information Sufficiency, that a reward function should have in order to keep exploration from unnecessarily interfering with the pursuit of an objective. We empirically demonstrate that reward functions that do not have Information Sufficiency are prone to inadequate exploration, which can result in sub-optimal behaviour. We propose a reward definition that has Information Sufficiency, and show that it facilitates an AV choosing exploratory actions to estimate altruistic tendency, whilst also compensating for the possibility of conflicting beliefs between vehicles.


1. Introduction

Figure 1. Lane change decision-making scenario. (a) The Row player (R, Purple) wants to merge into the Column player's (C, Orange) lane. Both players would prefer to be ahead of the other after the manoeuvre is completed. R can attempt to merge ahead, merge behind, or perform an exploratory action. C can choose to give way (Behind), or they can stay ahead of R (Ahead). (b) Reward matrix associated with the scenario.

Interaction arises in driving scenarios where vehicles must coordinate their behaviours in order to complete their objectives. Scenarios involving crossing an unsignalised intersection or merging into an occupied lane can require interaction between vehicles in order to be safely executed. In these scenarios the optimal behaviour of each vehicle can depend on the behaviours the other vehicles intend to execute. Therefore it is necessary that an autonomous vehicle (AV) in this setting be able to accurately estimate the preferences of the other vehicles and anticipate their intentions.

Preferences and intents are typically communicated through behaviour; drivers have access to explicit communication methods, such as turn signals, to communicate their intent. However, it is not uncommon for communication to be entirely implicit, with drivers conveying their preferences via their actions. For example, a driver may choose to accelerate in order to stop another vehicle from merging ahead of them. The ability to reliably interpret implicit signals can be integral to the execution of certain driving manoeuvres; highway drivers have been shown to be able to estimate when a vehicle intends to perform a lane change, without relying on an explicit communication signal Driggs-Campbell2016. This is an ability that is lacking in current autonomous driving systems. Therefore, for AVs to be able to interact effectively with human-controlled vehicles they must be able to infer the preferences and intents of other drivers without relying on explicit communication channels.

When decision-making is informed by inference, an AV's choice of action can be driven by the desire to improve the accuracy of the inferred value, for example by choosing conservative actions in order to gather more observations. This can result in the AV behaving too defensively, or failing to infer anything at all. To address this, an AV needs to be able to identify when it has sufficient information to make a decision and when further inference is necessary, and to choose actions that will provide evidence to support the inference when required.

In geary2020; geary2021 the authors model interactive driving scenarios as Stackelberg Games between the planning vehicle and the other vehicles on the road von1934. Each player has an associated parameter, α, that indicates the player's interest in the other players achieving their objectives. This formulation relates to the Game Theoretic notion of altruism andreoni1993, as well as the Social Value Orientation (SVO)-based models proposed in Schwarting2019, which have garnered recent interest toghi2021a; toghi2021b. They show how this formulation can be used to efficiently compute interactive policies that account for each player's preferences. However, in their experiments the value of α for each player was presumed to be known a priori. This is an unrealistic requirement in the open world, and any AV using such a model would need to be able to infer this parameter value. Previous works have explored methods for performing inference on similar parameters (e.g., Schwarting2019). However, these approaches are often "passive", in that they wait and see what a vehicle does, and use this observational data to inform the inference. In practice, since these parameters affect how agents interact, their value can often only be accurately inferred through interaction: the planning vehicle must choose an action and observe how the other vehicles respond. Relying on "passive" inference approaches may fail to infer the value, since the data gathered may lack any interaction and, therefore, be incomplete. These approaches can also result in overly passive behaviour by the AV.

In this work we demonstrate how Active Learning and Information Gathering methods can be incorporated into the model proposed in geary2021 in order to infer the value of α. Using these approaches the planning vehicle's reward function is augmented to incentivise actions that would reveal information about α, even if they do not directly pursue the planning vehicle's objective. We also demonstrate that our proposed method requires fewer assumptions on the reward matrix values than other methods, and that it can operate even when the players have conflicting beliefs over the roles of leader and follower geary2021.

Typical Active Learning methods can result in an AV choosing actions in order to gain information that does not contribute to its objective. As such, it is important that any method utilising Active Learning approaches is also able to determine when there is nothing more to be gained from further exploration. To this end we define Information Sufficiency, a property that a reward function should have in order to be suitable for use in Active Learning. We demonstrate that Active Learning approaches motivated by information gain do not have Information Sufficiency, and show that this can result in inadequate exploration and sub-optimal behaviour. We propose a novel alternative to information gain that does have the Information Sufficiency property, and we compare the performance of the two models.

The key contributions of this work are as follows:

  • Incorporating Active Learning methods in order to infer the value of α in a Stackelberg Game-based model for interactive decision-making.

  • Identifying Information Sufficiency, a property a reward function should have in order to avoid unnecessary exploration. We also propose a novel reward function that has this property and demonstrate its effectiveness in a lane merge scenario.

  • Demonstrating how Conflict-awareness geary2021 can be incorporated into the decision-making, thereby reducing the assumptions required on the reward function.

2. Active Altruism Learning for Stackelberg Games

In this section we will introduce the Stackelberg Game formulation, as well as the incorporation of Altruism as proposed in geary2021. We subsequently detail how Active Learning methods can be used to infer an unknown value for the altruism coefficient, α, in this setting.

2.1. Altruism for Stackelberg Games

A Stackelberg (Leader-Follower) Game, von1934, is a game formulation between two players in which one player assumes the role of Leader, and the other the Follower. The Leader chooses the action that results in the best outcome for them, and the Follower chooses their best action, conditioned on the Leader's expected action choice. The equilibrium for such games is unique and can be efficiently computed, making the formulation well suited to applications such as autonomous driving.
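As a concrete illustration, the leader-follower equilibrium of a bimatrix game can be computed by enumerating the leader's actions and the follower's best responses. The sketch below is a minimal Python illustration of that procedure, with ties broken in the leader's favour as described above; the matrices and function names are illustrative, not taken from the paper.

```python
import numpy as np

def stackelberg_equilibrium(R_row, R_col):
    """Compute the leader-follower equilibrium of a bimatrix game.
    The row player leads; the column player best-responds, breaking
    ties in favour of the leader."""
    best = None
    for a_r in range(R_row.shape[0]):
        # Follower's best response(s) to the leader committing to a_r.
        responses = np.flatnonzero(R_col[a_r] == np.max(R_col[a_r]))
        # Ties are broken in the leader's favour.
        a_c = max(responses, key=lambda j: R_row[a_r, j])
        if best is None or R_row[a_r, a_c] > best[0]:
            best = (R_row[a_r, a_c], a_r, a_c)
    return best[1], best[2]

# Toy 2x2 game: the leader avoids the risky first row, because the
# follower's best response to it is costly for the leader.
R_row = np.array([[3.0, -5.0], [1.0, 1.0]])
R_col = np.array([[0.0, 2.0], [1.0, 1.0]])
```

Here `stackelberg_equilibrium(R_row, R_col)` selects the second row, since the follower would answer the first row with its second column, leaving the leader with the penalty of -5.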

In geary2021 the authors address the autonomous driving planning problem with a hierarchical model; a decision-making model is used to determine what type of trajectory to execute, and a planning model is used to determine how it should be executed. Decision-making is formulated as a Bimatrix Stackelberg Game (e.g. Figure 1(b)). In this context each player has a discrete set of actions which are known, and each action combination has associated with it a pair of rewards which would be received by the row and column players, respectively, if that combination of actions were realised. It is assumed that the row player, R, is the leader, and that the column player, C, will break ties in favour of the leader.

The altruistic tendency of a player, i, is identified by a coefficient, α_i ∈ [0, 1], which reweights the reward received by the player according to:

R̂_i = (1 − α_i) R_i + α_i R_{−i}    (1)

where −i indexes the player that is not i. Values closer to 0 indicate that the player is more motivated by achieving their own reward and is incentivised to behave selfishly, whereas values closer to 1 indicate the player is incentivised by the other player's success, and is motivated to behave more altruistically. In this work we assume that the leader, R, is the planning (ego) agent, and that the value α_R is known. For simplicity, we write α = α_C.
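One way to realise the reweighting described above is as a convex combination of the two players' rewards, so that a coefficient of 0 leaves the player fully selfish and 1 makes them value only the other player's reward. The form below is an assumption consistent with that description, not necessarily the paper's exact definition:

```python
import numpy as np

def altruistic_reward(R_own, R_other, alpha):
    """Reweight a player's reward matrix by altruism coefficient alpha:
    alpha = 0 leaves the player fully selfish, alpha = 1 makes them
    value only the other player's reward."""
    return (1.0 - alpha) * np.asarray(R_own) + alpha * np.asarray(R_other)

# With alpha = 0.5 the player weighs both rewards equally.
R_own = np.array([[2.0, 0.0]])
R_other = np.array([[0.0, 2.0]])
```

Passing the reweighted follower matrix into a Stackelberg solver then yields equilibria that shift as the assumed altruism changes, which is what makes the coefficient inferable from behaviour.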

2.2. Parameter Inference for Stackelberg Games

In geary2021 the authors presumed the value of α was known. In practice this is unlikely to be the case. Instead, the leader would maintain a belief over α, b(α), and their decision-making objective would be to choose the action that maximises the expected reward,

a_R* = argmax_{a_R} E_{α ∼ b(α)} [u_R(a_R, α)]    (2)

where u_R(a_R, α) returns the reward that R would receive for choosing action a_R when the altruism coefficient is α, under the assumption that C (the follower in the Stackelberg game) will respond rationally. In this setting, if R observes the equilibrium (a_R, a_C) it can conclude that:

α ∈ {α : R_C(a_R, a_C, α) ≥ R_C(a_R, a_C', α) ∀ a_C' ∈ A_C}    (3)

where A_C is the set of actions available to C, and R_C returns the reward to C for the specified action combination and value of α. More generally, every pair of possible action combinations divides the domain of α about the point of intersection of the corresponding reward functions.

Initially R has no information about the value of α, so b⁰(α) = U(0, 1), the uniform distribution over the range [0, 1]. Each iteration, t, of decision-making defines a range in the domain, Î^t, in which the "true" value for α lies. By determining the intersection of this range with the existing belief over the range of possible values, I^t, we can generate an updated range, I^{t+1} = I^t ∩ Î^t, within which the "true" value exists. We can then use this information to update the belief over α:

b^{t+1}(α) ∝ b^t(α) 𝟙[α ∈ I^{t+1}]    (4)

We refer to this type of parameter inference as Passive, since the conclusions drawn about α are incidental to decision-making that would occur even if no inference were taking place. While this method can converge on reasonable ranges bounding α, it is also possible for it to fail entirely; for instance, if at iteration t the leader chooses an uninformative action, such that Î^t ⊇ I^t, then the bounds do not change. As a consequence, the same action will be chosen in every subsequent round of decision-making, and no further progress will be made in inferring α. In order to motivate progress on the inference task, Active methods should be used.
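The passive update can be sketched as interval intersection: the belief is a range over α, and each observed equilibrium either narrows it or, when the implied range contains the current belief, leaves it unchanged. A minimal Python sketch, with an illustrative interval representation:

```python
def update_interval(belief, observed):
    """Intersect the current belief interval over alpha with the
    range implied by an observed equilibrium (I = I intersect I_hat).
    An uninformative observation leaves the belief unchanged."""
    lo, hi = belief
    new_lo, new_hi = max(lo, observed[0]), min(hi, observed[1])
    if new_lo > new_hi:
        raise ValueError("observation inconsistent with current belief")
    return (new_lo, new_hi)

belief = (0.0, 1.0)
belief = update_interval(belief, (0.4, 1.0))  # informative: alpha > 0.4
belief = update_interval(belief, (0.0, 1.0))  # uninformative: no change
```

The second update illustrates the failure mode described above: once the chosen action is uninformative, the belief, and hence the chosen action, never changes again.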

2.3. Active Information Gathering in Stackelberg Games

In Active Learning our goal is to incorporate parameter inference into action selection in order to improve the inference quality. One such approach is to identify a function, φ, with which to augment the reward, in order to motivate choosing actions that might provide more information about the parameter of interest, α; sadigh2016. Compared to the unaugmented reward, the augmented reward motivates choosing more informative actions, which reduces the possibility of scenarios where the agent learns nothing about α.

In sadigh2018; Furlanis2019 the authors use information gain, the expected reduction in the entropy H of the belief, to augment the expected reward:

ũ_R(a_R) = E[u_R(a_R)] + λ (H(b^t) − E[H(b^{t+1}) | a_R])    (5)

λ determines the degree to which R is motivated by the information gathering task; if λ is low, then the agent will not be incentivised to choose informative actions. But if λ is too high, then R could disregard their objective, and choose actions that are maximally informative, but also suboptimal or dangerous (sadigh2018 discuss this in greater detail). Determining how best to address this trade-off between exploring informative actions, and exploiting rewarding actions, is a topic of recent and ongoing research berger2014; Brooks2019.

In sadigh2018; Furlanis2019, these methods are applied at the planning level of the trajectory generation process. In this work we use a hierarchical trajectory generation process, and incorporate the active learning at the decision-making level instead. Unlike in the planning-level application, in our work we presume that the rewards associated with each action are fixed and known. As a result we can use an alternative approach for information gathering.

In practice, it is not necessarily best to gather as much information as possible, as the entropy-based motivation suggests. Instead the goal is to gather as much useful information as possible; after a certain point learning ceases to be beneficial, since the resulting information does not further the planning agent's primary objective. This is not tied to the amount of information that is unknown, but to how consequential the unknown information might be. With this motivation in mind, we propose an alternative definition for φ; Expected Reward Gain:

φ(a_R) = E[ max_{a'} E_{b^{t+1}}[u_R(a')] | a_R ] − max_{a'} E_{b^t}[u_R(a')]    (6)

E_b[u_R(a)] measures the expected reward to R for a given distribution b over the value of α, so this definition of φ measures how much R stands to gain from exploring, as opposed to directly pursuing their objective. This is related to Advantage in Reinforcement Learning, baird1994, where the difference between the estimated expected reward from a given state s, V(s), and the expected reward from a particular action a, Q(s, a), motivates the action selection. As opposed to information gain, where the scale of φ is independent of the magnitude of the rewards, the scale of the expected reward gain is set by the magnitudes of the rewards themselves. As a result we would expect a reduced dependence on the magnitude of λ. Expected reward gain would also be expected to have a low magnitude if there is little to be gained from further exploration, even if there is still high uncertainty over α.
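Both augmentation terms can be computed from a discrete belief over candidate α values, given the leader's expected reward for each action under each α and the follower response each α would produce. The sketch below is one way of realising these ideas in Python; the toy rewards and response labels are invented for the example, not taken from the paper:

```python
import math

def entropy(b):
    return -sum(p * math.log(p) for p in b if p > 0)

def expected_information_gain(b, responses, a):
    """H(b) minus the expected posterior entropy after observing the
    follower's response to leader action a."""
    gain = entropy(b)
    for lab in set(responses[a]):
        mass = sum(p for k, p in enumerate(b) if responses[a][k] == lab)
        post = [p / mass for k, p in enumerate(b) if responses[a][k] == lab]
        gain -= mass * entropy(post)
    return gain

def expected_reward_gain(b, U, responses, a):
    """Expected best achievable reward under the posterior, minus the
    best achievable expected reward under the prior."""
    value = lambda belief: max(
        sum(p * U[ap][k] for k, p in enumerate(belief)) for ap in range(len(U)))
    gain = 0.0
    for lab in set(responses[a]):
        mass = sum(p for k, p in enumerate(b) if responses[a][k] == lab)
        post = [p / mass if responses[a][k] == lab else 0.0
                for k, p in enumerate(b)]
        gain += mass * value(post)
    return gain - value(b)

# Two equally likely hypotheses about alpha; three leader actions:
# a risky merge-ahead, a safe merge-behind, and a cheap probe. The
# probe and the risky action both reveal the follower's type.
b = [0.5, 0.5]
U = [[3.0, -5.0],   # merge ahead: good if altruistic, bad otherwise
     [1.0, 1.0],    # merge behind: safe, fixed reward
     [0.5, -0.5]]   # probe ("nudge")
responses = [["give", "block"], ["ahead", "ahead"], ["give", "block"]]
```

With these numbers the probe has positive expected reward gain (exploring may unlock the reward of 3), while the safe action's gain is zero because its outcome teaches the leader nothing.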

2.4. Active Altruism Learning for Stackelberg Games

Figure 2. Reward matrix for the Information Gathering example.

To demonstrate the different behaviours that arise from different definitions of φ we examine a simple example (Figure 2); action a1 has the highest magnitude cost (-5), but also the highest reward (3) for R. Action a2 is the "safe" action since, for any value of α, R gets a fixed reward. However, this action is also the least informative, as C's response provides no information about their altruistic inclination. Action a3 is the exploratory ("nudge"¹) action; it does not have the highest expected reward, but it is informative (the intersection point of the two reward functions lies within the domain of α) without assuming excessive risk.

¹ In this work we will sometimes refer to the exploratory action as a "nudge". This arises from sadigh2018, where the authors observed that one manifestation of exploratory behaviour in lane merges was the planning vehicle "nudging" into the other lane before performing the manoeuvre, as a way of testing the other vehicle's response.

Figure 3. Expected reward, based on the values in Figure 2, according to different definitions of φ, computed from the initial uniform belief; (a) Passive update; (b) Information Gain; (c) Expected Reward Gain. The green and red bars identify the actions with the highest and lowest values respectively.

Figure 3 depicts the expected values for each action under different definitions of φ, starting from a uniform belief over α. We observe from Figure 3(a) that, with no other information, the "safe" action a2 produces the highest immediate return. However, this action receives a fixed reward regardless of the value of α, so it provides no information. This is reflected in the fact that the magnitude of its bar is fixed across the sub-figures of Figure 3.

Figure 3(b) depicts the expected returns when the reward is motivated by information gain; we observe that the values for both actions a1 and a3 have increased, but the value of a2 is unchanged. This is because both of those actions result in intersections within the domain of possible values of α, which does not occur for a2. In spite of this, the magnitude of the increase is not sufficient, and the "safe" action is still preferred, so R is not motivated to explore.

Figure 3(c) presents the expected returns when expected reward gain is used to augment the returns. In this case the augmented reward captures the tangible benefit associated with each action, not just the information learnt; even though a1 carries a lot of risk, it also contributes a lot to the expected reward since, as a consequence of choosing it, R will either i) receive the high reward and know it can consistently obtain it, or ii) learn that the lower-reward outcome is the more likely one. Thus a1's expected return now exceeds a2's, for which there is no associated benefit. Similarly a3 received a spike in value, although not as significant as a1's. But, as there was less risk associated with a3 initially, it comes out as the preferred action. Therefore our method results in a "nudge" behaviour selection, as observed in sadigh2018.

The results used in Figure 3 were generated using a particular value of λ; alternative values could be used to demonstrate different results. This highlights the significant relationship between the effectiveness of this approach and the values of λ and φ; in the information gain definition, the expected reward is expressed in reward units, which are scaled in relation to one another, while the entropy term is expressed in nats. In the expected reward gain definition, both the expected reward and φ are defined using commonly scaled terms. The common units help ensure that φ captures R's exploration motivation, and that λ does not have to compensate for differences in scale between units. We elaborate on the consequences of this for the overall behaviour in the next section.

3. Information Sufficiency

Figure 4. Reward matrix for the Information Sufficiency example.

Consider the reward matrix presented in Figure 4. Intersection points can be calculated for both a1 and a2 (for clarity, in both cases this means that if C chooses the cooperative response, this indicates α is greater than the intersection value). The values of φ based on the Information Gain and Expected Reward Gain definitions are given in Figure 5(b).

We first consider the case where the belief over α is uniform. In this case, action a1 has the potential to determine that α lies in the upper part of the domain, which would be a greater gain in information than could result from choosing a2. This is reflected in a1 having the higher associated Information Gain value. In terms of expected reward gain the situation is similar: a1 has a high likelihood of revealing the extreme positive reward, while the most likely outcome from a2 is a more modest reward, so a1 also receives the higher Expected Reward Gain value.

Figure 5. Results from Experiment 1. (a) Stacked bar chart depicting the Information Gain and Expected Reward Gain calculated for each action in two cases: the uniform prior belief (Blue) and the narrowed belief (Orange). (b) Values of φ associated with actions from the reward matrix given in Figure 4, based on the given definition of φ.

We next repeat this calculation, but with the narrowed belief that would arise if R had chosen a1 and observed C respond cooperatively. After this observation, there is no informational benefit to choosing a1, since it is already known that α is within the range associated with the action. In terms of expected reward gain, there is also little reason to choose a1, since R already knows it can receive the optimal reward by choosing it. Therefore this action receives a significantly reduced value for φ, as compared to the previous case. Since the knowledge R has is sufficient for it to receive its reward, φ no longer provides significant motivation to explore. On the other hand, a2 can still provide a lot of information about the value of α, since it is possible to determine where within the remaining range α lies, so its information gain value increases compared to the previous case. Even though this extra information is of no value to R, the information gain value cannot capture this, and so R is still incentivised to explore. Figure 5(a) presents a stacked bar chart reflecting this graphically.

We refer to this diminishing of the motivation to explore, as the resulting information ceases to be beneficial, as Information Sufficiency; any reward function that motivates exploration should incorporate this sufficiency in order to limit unnecessary exploration. Metrics such as Information Gain motivate pursuing certainty for certainty's sake, which does not always align with the decision-maker's motivation: to receive as high a reward as possible. Expected Reward Gain aligns with the decision-maker's motivations, as it only assigns high values when there is the possibility of achieving a higher reward.
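The distinction can be made concrete with toy numbers (invented for illustration): suppose an earlier observation has left two equally likely hypotheses about α, under both of which the leader's best action is already known to yield the same reward. Distinguishing them has zero expected reward gain, yet the belief still carries a full bit of entropy:

```python
import math

# Two remaining hypotheses about alpha, equally likely.
b = [0.5, 0.5]
U_best = [5.0, 5.0]    # best action's reward under each hypothesis (known)
U_probe = [2.0, 1.0]   # a probing action's reward under each hypothesis

prior_value = max(sum(p * u for p, u in zip(b, U_best)),
                  sum(p * u for p, u in zip(b, U_probe)))
# Even if probing fully revealed the hypothesis, the best achievable
# reward would still be 5 in either case.
posterior_value = sum(p * max(ub, up)
                      for p, ub, up in zip(b, U_best, U_probe))
reward_gain = posterior_value - prior_value   # 0: information sufficiency
info_gain = -sum(p * math.log(p) for p in b)  # log 2 > 0: entropy remains
```

An entropy-driven agent would still be rewarded for probing here; a reward-gain-driven agent correctly is not.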

4. Active Altruism Learning for Autonomous Driving

In the previous section we demonstrated how Active and Online learning methods could be used to motivate an interactive agent to choose actions that would reveal information about another agent’s altruistic tendencies, thereby allowing it to pursue a preferred equilibrium. This observation required the assumption that a player’s actions could be observed perfectly, and that actions were executed instantaneously. In this section we will demonstrate how, by relaxing these assumptions, the proposed decision-making system can be used in autonomous driving settings to motivate the execution of information gathering trajectories.

4.1. Experimental Setup

The setting for this experiment is the lane change setting introduced in Figure 1. For demonstrative purposes we use an alternative game matrix to motivate decision-making (Figure 6).

Figure 6. Reward matrix associated with the Conflict-free lane change scenario.

At the end of the interaction both agents would prefer to be ahead of the other agent. R achieves this outcome only if they attempt to merge ahead and C facilitates it. If C chooses to remain ahead (Ahead) in their lane, then they will stay ahead regardless of what R does. Thus C always receives the optimal reward for choosing to stay ahead.

While the optimal reward for R occurs if they attempt to merge ahead, this action is also associated with the highest penalty: if R attempts to merge ahead and C does not facilitate it, (merge ahead, Ahead), then the player executing the manoeuvre is being reckless, and is punished accordingly. R's merge behind manoeuvre is "safe", since C will always choose to stay ahead in this case ((merge behind, Ahead) strictly dominates (merge behind, Behind)). Therefore this is an uninformative action. Finally, R's "exploratory action" is designed to return information about C's preferences, without actually pursuing the objective. An example of such an action would be turning on an indicator and observing how C responds, or nudging into the lane. This returns a positive reward if C's response indicates that R can merge ahead, and a negative reward if the response indicates otherwise.

As in the previous section, α_R is known by both players. C has an unknown altruism coefficient, α. R, instigating the manoeuvre, is also known to be the leader in the game.

As in geary2021, planning is performed by trajectory optimisation; each cell of the game matrix corresponds to a pair of weight vectors, one per player, and the cost function player i optimises over is defined as c_i = Σ_k w_{i,k} f_k, where the sum runs over the K cost function features. In this experiment the features, f_k, are defined as:

  • f1: squared lateral distance from the left lane centre.

  • f2: squared lateral distance from the right lane centre.

  • f3: squared difference between velocity and the speed limit.

  • f4: squared difference between heading and lane heading.

  • f5: safety ellipse centred on the position of the other vehicle, with major and minor axes derived from the vehicles' length and width.

  • f6: agent i receives a positive reward for being ahead of agent j, and a negative reward for being behind.

where agent i's state comprises the (x, y)-coordinates, linear velocity, and heading respectively, and the remaining feature parameters are values that were chosen empirically. Each lane is of a fixed width.

Following the approach used in geary2021, we perform Model Predictive Control (MPC) to generate the trajectories. However, unlike in those experiments, we use the trajectory generation method proposed in sadigh2016; sadigh2018, which generates the trajectory using bi-level optimisation:

x_R* = argmin_{x_R} c_R(x_R, x_C*(x_R))  s.t.  x_C*(x_R) = argmin_{x_C} c_C(x_R, x_C)    (7)

and the Kinematic Bicycle dynamics model, kong2015, is applied to define the transition dynamics. This model treats R as the leader in planning as well as in decision-making, as it is assumed that C generates their trajectory optimally in response to R's actions. MPC is performed with a fixed timestep and a fixed lookahead horizon. Trajectories are recomputed every timestep, and the experiment ends after a fixed number of timesteps. The weight values were set empirically, based on the behaviour specified by the corresponding action combination.

Unlike in the previous section, where R had perfect observability of C's actions, in this experiment R does not know the true action chosen by C. C, as the follower, can observe R's action and responds optimally. To account for this uncertainty we use Bayes' Rule to update the belief over α:

b^{t+1}(α ∈ I_j) ∝ P(x̂_C | α ∈ I_j) b^t(α ∈ I_j)    (8)

where {I_j} is defined as the set of all disjoint ranges which result from partitioning the domain of α using the intersections of the reward function values (as detailed in Section 2.2), a_R is R's chosen action, and x̂_C is C's observed trajectory. Conditioning on α ∈ I_j determines a_C*, the optimal game action for C to choose in that case, so the only unknown quantity in the equation is P(x̂_C | a_C*), which we define using the softmax function:

P(x̂_C | a_C*) ∝ exp(−c_C(x̂_C; w_C(a_R, a_C*)))    (9)

where w_C(a_R, a_C*) is the weight vector associated with the corresponding cell of the game matrix. As in the previous section, we start each experiment with a uniform belief over the value of α. In the next section we present the results when this approach is applied to the lane merge scenario.
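The Bayes-with-softmax update described above can be sketched as follows: the belief is a probability per disjoint α-range, each range implies an optimal follower action, and the observed trajectory's cost under each candidate action's cost function gives a softmax likelihood. The names and the rationality temperature β below are illustrative assumptions:

```python
import math

def softmax_likelihood(costs, a, beta=1.0):
    """P(observed trajectory | follower action a), from the trajectory's
    cost under each candidate action's cost function."""
    z = sum(math.exp(-beta * c) for c in costs)
    return math.exp(-beta * costs[a]) / z

def bayes_update(belief, optimal_actions, obs_costs, beta=1.0):
    """belief[j]: prior P(alpha in I_j) over disjoint ranges I_j.
    optimal_actions[j]: follower's optimal action if alpha in I_j.
    obs_costs[a]: cost of the observed trajectory under action a."""
    post = [p * softmax_likelihood(obs_costs, optimal_actions[j], beta)
            for j, p in enumerate(belief)]
    total = sum(post)
    return [p / total for p in post]

# Two ranges (selfish / altruistic); the observed trajectory is much
# cheaper under the "give way" action, so belief shifts to range 1.
belief = bayes_update([0.5, 0.5], optimal_actions=[0, 1],
                      obs_costs=[2.0, 0.0])
```

With these toy costs the posterior mass on the altruistic range rises above 0.85, reflecting that the observed trajectory is far more consistent with giving way.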

4.2. Experiment Results

From Figure 6 we can determine how the reward matrix partitions the range of possible values for α; each of R's informative actions has an intersection value, and from these we can identify a threshold such that, if α lies below it, the (merge ahead, Behind) equilibrium is infeasible and R should choose to merge behind. Otherwise R should choose to merge ahead. In this section we run the experiment detailed previously for a low value of α and for a high value of α². In Sections 2.4 and 3 we compared the performance of different definitions of the information gathering reward, φ. In this section we compare and evaluate these definitions by observing the resulting behaviour in each of the experiment cases.

² The magnitude of α within each sub-domain does not affect the outcome. Values are chosen for clarity.

4.2.1. Low α:

Figure 7. Results for the information gathering trajectory generation experiment for low α; (a) Vehicle trajectories; R is given by the dotted line, C by the unbroken line. (b) Relative longitudinal position of R with respect to C. (Left) Baseline, (Middle) Information Gain, (Right) Expected Reward Gain.

At the lower end of the spectrum of possible altruism values, C is considered to be relatively selfish, and will, per the reward matrix in Figure 6, never choose to allow R to merge ahead. As a result, if R accurately determines the true value of α, they will choose to merge behind. The results of this experiment are presented in Figure 7.

From Figure 7 (Left) we observe that, in the baseline case where R is motivated only by the immediate expected reward of the action, the agent immediately chooses the "safe" action of merging behind. This provides no information about the value of α. As we will see in the next section, the fact that this baseline ultimately chooses the preferable action is purely incidental.

Figure 7 (Middle) demonstrates the behaviour when R is motivated by information gain. While information gain can motivate exploration of informative actions, it is unclear how to determine the value of the scaling parameter, λ, other than on an ad-hoc, case-specific basis. In these experiments λ was fixed empirically. From the results we observe that this was sufficient to motivate choosing the exploratory action, during which time R observes whether C is going to move forward, or give way. This provides information about α. Once there is sufficient evidence that C will not give way, the agent chooses to merge behind. This does not explore the possibility that α is high enough to permit merging ahead, even though this would achieve a higher reward. Therefore, while this method does some exploration, it does not explore all of the possible options.

The final set of results, in Figure 7 (Right), depict the case when R is motivated by expected reward gain. We observe that the agent explores initially, as in the previous case. Once the agent is sufficiently convinced that C will not readily give way, unlike in the information gain case, it attempts the merge ahead action, since this is the only way to determine whether α is high enough to achieve the higher reward. Once it has been determined that this is not the case, the agent returns to the exploratory action, before converging on the merge behind manoeuvre.

With this result we have demonstrated that, while terms such as information gain can motivate exploratory behaviour to a certain degree, they do not reliably explore all the possible outcomes. Of the tested models, our proposed expected reward gain motivation was the only one to choose actions so as to conclusively rule out the possibility of achieving the optimal reward outcome. Next we will demonstrate how a thorough exploration of the possibilities can aid in achieving an optimal outcome.

4.2.2. High α:

Figure 8. Results for the information gathering trajectory generation experiment for high α; (Left) Vehicle trajectories; R = dotted line, C = unbroken line. (Middle) Relative longitudinal position of R with respect to C. (Right) Belief in the value of α. (a) Baseline, (b) Information Gain, (c) Expected Reward Gain.

Since α is high, we observe from Figure 6 that, if α were known, R could achieve their optimal reward by choosing to merge ahead. However, as we see in Figure 8, this outcome is only realised when expected reward gain is used. The baseline case is as in the previous experiment, with R immediately defaulting to the "safe" course of action and merging behind.

In the case when information gain is used to motivate the exploration, initially chooses the exploratory action (), which provides some information about the value of ; we observe from Figure 8(b) (Middle) that does give way in response to the exploration action (on the plot, is briefly ahead of , indicating that slowed down to give way). Paradoxically, however, as a result of this observation, chooses to merge behind . This can be explained as follows: initially, , , and . Therefore the explore action is only slightly preferable to simply merging behind, while the riskier attempt to merge ahead is significantly less desirable. After performing the exploration action and gaining information about , the value of goes down, but not commensurately with the increase in , so the overall reward, , falls beneath the value of , and merging behind becomes preferable, at which point no further information is gained. Since never gains enough confidence that to overcome the risk of being wrong, is never considered as an action.
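The belief revision described above can be illustrated with a simple Bayesian update. The discrete support for the altruism coefficient and the observation likelihoods below are assumptions made for this sketch, not the paper's model: the only claim is that observing a give-way shifts probability mass toward higher altruism.

```python
# Illustrative Bayesian update of a belief over the other vehicle's
# altruism coefficient after observing whether it gives way.
alphas = [0.0, 0.5, 1.0]               # assumed candidate coefficients
belief = {a: 1.0 / 3 for a in alphas}  # uniform prior

def likelihood_gives_way(alpha):
    # Assumed observation model: more altruistic drivers are more
    # likely to give way to an exploratory nudge.
    return 0.1 + 0.8 * alpha

def update(belief, gave_way):
    posterior = {}
    for a, p in belief.items():
        l = likelihood_gives_way(a) if gave_way else 1.0 - likelihood_gives_way(a)
        posterior[a] = p * l
    z = sum(posterior.values())
    return {a: p / z for a, p in posterior.items()}

belief = update(belief, gave_way=True)
# Mass shifts toward higher altruism after observing a give-way.
print(belief[1.0] > belief[0.0])
```

Under this assumed model, a single give-way observation is informative but not conclusive, which is why the choice of exploration bonus determines whether the agent keeps probing.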

In the case when is motivated by expected reward gain, spends longer performing the exploration action. Once it has gained sufficient certainty that it determines that it can safely merge ahead of ’s vehicle, achieving the optimal equilibrium.

5. Conflict-aware Active Altruism Learning

In the context of autonomous driving, the bimatrix component of the hierarchical model approximates the common awareness that drivers have about a driving situation. Therefore, each driver must be able to independently construct equivalent game matrices. The previous experiments relied on two assumptions that contradict this requirement: hand-crafted game matrix values were used, and the planning agent was presumed to be the leader in the Stackelberg Game. In the open world, if the vehicles are unable to communicate directly, there would be no way to satisfy these requirements. In this section we propose a method for relaxing these assumptions, so that active altruism learning can be effectively implemented without relying on known roles or hand-crafted matrix values.

In previous sections the magnitudes of the game matrix values were defined based on the outcome expected by the context, and the behaviour being explored in the experiment. In general, a standard method for defining these values must be established in order to ensure that vehicles can produce equivalent game matrices. Shalev-Schwartz2017 proposes a comprehensive set of criteria for assigning culpability (“responsibility”) in a driving accident resulting from behavioural error. This includes metrics for evaluating responses in merging scenarios. Based on these criteria we propose a trinary reward function grounded in accident responsibility:

(10)

for . This reward captures the common awareness drivers have about the motivations of other drivers. The degree to which each driver wants a particular outcome to occur is captured by their coefficient. We use these rules to construct Figure (b) for the lane merge scenario. For example, when (,Behind) is executed, ends up ahead, receiving a reward of , whereas is no worse off than before, so gets a reward of . In the case of (,Ahead), both cars would be to blame for the resulting accident, so both receive . Since these rules do not depend on any communication between vehicles, they can be computed individually by each player.
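The worked examples above can be collected into a small sketch of the trinary reward and the bimatrix it induces. The action names and the joint-outcome table are assumptions made for illustration; only the reward values {+1, 0, -1} and the blame assignments follow the text.

```python
def trinary_reward(outcome):
    """Map a qualitative joint outcome to a reward in {-1, 0, 1}:
    +1 for an improved position, 0 for no change, -1 for shared
    culpability in an accident."""
    return {"improved": 1, "unchanged": 0, "culpable": -1}[outcome]

# Assumed joint outcomes for the lane-merge bimatrix:
# keys are (merging vehicle's action, other vehicle's action);
# values are the qualitative outcomes for each player respectively.
outcomes = {
    ("merge_ahead", "give_way"):  ("improved", "unchanged"),
    ("merge_ahead", "hold"):      ("culpable", "culpable"),  # both blamed
    ("merge_behind", "give_way"): ("unchanged", "improved"),
    ("merge_behind", "hold"):     ("unchanged", "improved"),
}

# Each player can construct this matrix independently, with no
# communication, since the culpability rules are common knowledge.
game_matrix = {joint: tuple(trinary_reward(o) for o in pair)
               for joint, pair in outcomes.items()}
print(game_matrix[("merge_ahead", "hold")])  # (-1, -1)
```

The design point is the last comment: because the mapping depends only on commonly observable outcomes, equivalent matrices can be built on both sides without negotiation.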

In our experiments it was assumed that the Leader-Follower roles had been established. In practice, without a means of communication, this would not be the case (e.g., if both cars were controlled by our proposed model, both would presume to be the leader). This can result in a breakdown in the ability to coordinate, which geary2021 call “Conflict”. For example, in Figure (b), if believes they are the leader they will choose , expecting to choose Behind. But, if believes they are the leader they would choose Ahead. Therefore, Conflict should be accounted for in such models.
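This failure mode can be reproduced in a few lines. The payoff values and action names below are illustrative, not taken from the paper's matrices; the sketch only shows the structural problem: when each player of a bimatrix game computes the Stackelberg leader's choice for itself, the resulting joint play can suit neither.

```python
# Payoff bimatrix: payoffs[(row_action, col_action)] = (row_reward, col_reward)
payoffs = {
    ("ahead", "give_way"):  (2, 0),
    ("ahead", "hold"):      (-1, -1),
    ("behind", "give_way"): (0, 1),
    ("behind", "hold"):     (0, 2),
}

def leader_choice(payoffs, row_is_leader):
    """The leader picks the action maximising its own reward, assuming
    the follower best-responds to whatever the leader commits to."""
    idx = 0 if row_is_leader else 1
    actions = {a for a, _ in payoffs} if row_is_leader else {b for _, b in payoffs}
    best, best_val = None, float("-inf")
    for a in actions:
        # follower's best response to leader action a
        responses = [(joint, r) for joint, r in payoffs.items()
                     if joint[idx] == a]
        follow = max(responses, key=lambda jr: jr[1][1 - idx])
        if follow[1][idx] > best_val:
            best, best_val = a, follow[1][idx]
    return best

row = leader_choice(payoffs, row_is_leader=True)   # row presumes leadership
col = leader_choice(payoffs, row_is_leader=False)  # column presumes leadership
# Both leaders' plans combine into the accident outcome: Conflict.
print(row, col)  # ahead hold
```

Each player's plan is individually rational under its presumed role, yet the joint play lands on the mutually worst cell, which is exactly the coordination breakdown the text describes.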

We can account for Conflict by considering regions of the domain of where Conflict would result in a coordination breakdown; for , this occurs for . In this region should be prepared for the possibility that will behave as if they were the leader, and choose their preferred action. To account for this awareness we define:

(11)

where is the probability based on ’s belief at time , and is the action would choose if they were leader.
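A hedged sketch of this conflict-aware reward: the reward obtained under the presumed roles is mixed with the reward that results if the other vehicle also acts as leader, weighted by the estimated probability of Conflict. The probability values and rewards below are invented for the example.

```python
def conflict_aware_reward(reward_as_leader, reward_under_conflict, p_conflict):
    """Expected reward for a joint action when the other agent acts as
    leader with probability p_conflict (Equation 11, sketched)."""
    return (1.0 - p_conflict) * reward_as_leader + p_conflict * reward_under_conflict

# Merging ahead pays 2 if the other gives way, but -1 if the other
# also presumes leadership and holds its position.
r_low_conflict  = conflict_aware_reward(2.0, -1.0, p_conflict=0.1)  # ~1.7
r_high_conflict = conflict_aware_reward(2.0, -1.0, p_conflict=0.8)  # ~-0.4
print(r_low_conflict > 0 > r_high_conflict)
```

Under this weighting, the same manoeuvre is attractive when the belief assigns little mass to Conflict and is discarded when it assigns a lot, matching the behaviour reported in Figure 10 below.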

Figure 9. Conflict-unaware trajectory execution; = dotted line. = unbroken line.

Figure 9 depicts the trajectories produced from the standard definition of , using the matrix values in Figure (b), and the expected reward gain motivation to perform decision-making without accounting for Conflict; always chooses , expecting to oblige. (For technical reasons, in these experiments ’s executed trajectory presumes is the leader; their behaviour is not relevant to our discussion.) If had assumed they were the leader, then they would have chosen Ahead, and each vehicle would believe the other was behaving sub-optimally. This belief could result in an accident, or a stalemate as each vehicle believes the other is in the wrong. Without communication between the vehicles, there is no clear way to resolve this outcome, so it is preferable that it be avoided entirely.

Figure 10. Conflict-aware trajectory execution; = dotted line. = unbroken line.

Figure 10 presents the trajectories produced under the same settings when Conflict is accounted for by using (Equation 11) in decision-making in place of . With this awareness, in both cases initially chooses , to test the value of . In the conflicted case, Figure 10(Left), gives way, owing to the risk of an accident arising from Conflict. In the conflict-free case knows that Conflict does not change the outcome, and chooses , confident that their intent will not be misinterpreted and will be facilitated.

These results demonstrate that, using our proposed standard for defining game matrices and our method for accounting for Conflict, an AV can effectively utilise active learning methods to reliably and efficiently infer the value of to a degree sufficient for navigating the lane merge scenario.

6. Conclusion

In this work we have proposed a method for incorporating Active Learning methods into a hierarchical trajectory generation model for autonomous driving applications. We defined “Information Sufficiency”, and demonstrated that reward functions that lack this property are prone to inadequate exploration and sub-optimal behaviour. We proposed a novel reward function, Expected Reward Gain, that has Information Sufficiency, and showed that, in part due to this property, it is better suited to motivating useful exploration. We proposed a standard for constructing game matrices based on accident culpability, as well as a method for accounting for uncertainty in the leader and follower roles of the resulting game. Finally, we demonstrated how the proposed standard can be applied to the lane merge problem to motivate appropriate and effective inference of the parameter value.

References