Transfer Learning versus Multi-agent Learning regarding Distributed Decision-Making in Highway Traffic

10/19/2018 · Mark Schutera, et al. · KIT, University of Freiburg

Transportation and traffic are currently undergoing a rapid increase in terms of both scale and complexity. At the same time, an increasing share of traffic participants is being transformed into agents driven or supported by artificial intelligence, resulting in mixed-intelligence traffic. This work explores the implications of distributed decision-making in mixed-intelligence traffic. The investigations are carried out on the basis of an online-simulated highway scenario, namely the MIT DeepTraffic simulation. In a first step, traffic agents are trained by means of a deep reinforcement learning approach embedded in an elitist evolutionary algorithm for hyperparameter search. The resulting architectures and training parameters are then utilized either to train a single autonomous traffic agent and transfer the learned weights onto a multi-agent scenario, or to conduct multi-agent learning directly. Both learning strategies are evaluated on different ratios of mixed-intelligence traffic. The strategies are assessed according to the average speed of all agents driven by artificial intelligence. Traffic patterns that provoke a reduction in traffic flow are analyzed with respect to the different strategies.


1 Introduction

The level of automation in traffic and transportation is increasing rapidly, especially in highway scenarios, where complexity is reduced in comparison to urban street scenarios. Traffic congestion is annoying, stressful, and time-consuming. Progress in the area of autonomous driving thus offers the opportunity to improve this condition, enhance traffic flow, and yield corresponding benefits such as reduced energy consumption [Winner et al. 2015]. At the same time, autonomous systems are distributed by a number of different manufacturers and suppliers. This leads to the challenge of interaction between different autonomous systems and human-operated vehicles. It therefore seems plausible that increased automation in traffic may compromise the average flow of mixed-intelligence traffic. As highway traffic can be described as a multi-agent system with independent agents cooperating and competing to achieve an objective, the key to high-performance highway traffic flow might lie within multi-agent learning, and thus within the understanding and exploration of distributed decision-making and its strategies. Transfer learning is used with increasing frequency within deep learning and might prove able to adapt artificial neural networks to bordering tasks [Prodanova et al. 2018].

Within the automotive industry, the pros and cons of each such strategy are still subject to ongoing discussion. This work contributes to this discussion by investigating the performance of transfer learning, as opposed to multi-agent learning, regarding distributed decision-making in highway traffic. For the experiments, agents are trained with different learning strategies and deployed in the DeepTraffic micro-traffic simulation, which was introduced along with the MIT 6.S094: Deep Learning for Self-Driving Cars course [Fridman et al. 2018]. The aim of this study is to examine the impact on mixed-intelligence traffic in the form it is expected to take with the adoption of Level 5 autonomous driving. To this end, the following steps are taken:

  • Traffic agents are trained within a micro-traffic simulation, through deep reinforcement learning.

  • An evolutionary algorithm is designed to embed the traffic agents’ learning procedure.

  • A single traffic agent’s model is applied to multiple agents (transfer learning strategy).

  • Multiple traffic agents are jointly trained (multi-agent learning strategy).

  • The two learning strategies are evaluated by means of speed and traffic flow patterns.

2 Micro-Traffic Simulation Environment

In the DeepTraffic challenge (https://selfdrivingcars.mit.edu/deeptraffic/), the task is to train a car agent with the goal of achieving the highest average speed over a period of time. In order to succeed, the agent has to choose the optimal action a_t at each step in time t, given the state s_t. Possible actions are: accelerate, decelerate, goLeft (lane change to the left), goRight (lane change to the right), and noAction (remain in the current lane at the same speed). The agent's observed state s_t at time step t is defined as a slice of grid cells surrounding the agent. The size of the slice is adjustable via three different parameters: lanesSide, representing the width of the slice; patchesAhead, denoting the length of the slice in the forward direction; and patchesBehind, representing the length of the slice in the backward direction. Depending on the parameter temporal_window, the state can be transformed into a sequence of the most recent observations; if temporal_window is set to 0, the state consists of the current observation only. Cell values denote the maximum speed the agent can achieve when it is inside the cell. The maximum speed in an empty cell is set to 80 mph. A cell occupied by a car maintains the speed of that car.
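To make the state and action interface concrete, the following is a minimal sketch, in Python rather than the simulation's JavaScript, of how the discrete action set and the observation slice could be represented. The names, the grid layout, and the assumption that "ahead" corresponds to smaller row indices are illustrative, not the DeepTraffic implementation.

```python
from enum import Enum
import numpy as np

class Action(Enum):
    NO_ACTION = 0   # keep lane and speed
    ACCELERATE = 1
    DECELERATE = 2
    GO_LEFT = 3     # lane change to the left
    GO_RIGHT = 4    # lane change to the right

def observation_slice(grid, ego_lane, ego_row,
                      lanes_side=3, patches_ahead=30, patches_behind=13):
    """Cut the rectangular patch of speed cells around the ego vehicle.

    `grid` is assumed to be a 2D array [rows, lanes] holding the maximum
    achievable speed per cell (80 mph for empty cells, the occupying car's
    speed otherwise); smaller row indices are assumed to lie ahead.
    """
    rows, lanes = grid.shape
    lane_lo = max(ego_lane - lanes_side, 0)
    lane_hi = min(ego_lane + lanes_side + 1, lanes)
    row_lo = max(ego_row - patches_ahead, 0)
    row_hi = min(ego_row + patches_behind + 1, rows)
    return grid[row_lo:row_hi, lane_lo:lane_hi].flatten()
```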

The environment contains a fixed total number of cars, of which a subset may be controlled intelligently; the remaining cars choose their actions randomly. Central to the intelligent control is a neural network: it receives the observed state s_t as input and returns an action a_t, therefore functioning as the agent's behavior. The implemented algorithm is a JavaScript implementation of the well-known DQN algorithm [Mnih et al. 2013]. Please refer to Section 3.1 for more information.

The environment allows for the adjustment of a whole set of hyperparameters in order to improve the agents' performance. Table 1 lists the most important hyperparameters, which have proven to have a significant influence on the agents' performance [Fridman et al. 2018]. These hyperparameters, as well as the network architecture itself, can be adjusted directly within the browser. To automate the configuration, training, and validation process for the experiments, a Python-based helper robot using the Selenium package (http://selenium-python.readthedocs.io/) was implemented.
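The helper robot's source is not included in the paper; the following is a minimal sketch of how such a Selenium-based automation could look, assuming the hyperparameters are exposed as JavaScript variables in the page and that training and evaluation can be triggered from the page context. The element of interest, the variable names, and the entry points startTraining/evaluateAverageSpeed are assumptions for illustration only.

```python
from selenium import webdriver

# Hypothetical hyperparameter configuration to be written into the
# DeepTraffic page (names follow Table 1).
CONFIG = {
    "lanesSide": 3,
    "patchesAhead": 30,
    "patchesBehind": 13,
    "temporal_window": 0,
}

driver = webdriver.Chrome()
driver.get("https://selfdrivingcars.mit.edu/deeptraffic/")

# Inject the configuration by executing JavaScript in the page context.
for name, value in CONFIG.items():
    driver.execute_script(f"{name} = {value};")

driver.execute_script("startTraining();")                         # assumed hook
score = driver.execute_script("return evaluateAverageSpeed();")   # assumed hook
print(f"average speed: {score} mph")
driver.quit()
```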

3 Training Advanced Traffic Agents

3.1 Deep Reinforcement Learning and the Deep Q-Network (DQN)

Deep reinforcement learning (DRL) is the combination of two general-purpose frameworks: reinforcement learning (RL) for decision-making, and deep learning (DL) for representation learning [Silver2016].

In the RL framework, an agent's task is to learn actions within an initially unknown environment. The learning follows a trial-and-error strategy based on rewards and punishments. The agent's goal is to select actions that maximize the cumulative future reward over a period of time. In the DL framework, an algorithm learns the representation of raw input that is required to achieve a given objective. The combined DRL approach enables agents to engage in more human-like learning, whereby they construct and acquire their knowledge directly from raw inputs, such as vision, without any hand-engineered features or domain heuristics. This new generation of algorithms has recently achieved human-like results in mastering complex tasks with very large state spaces and with no prior knowledge [Mnih et al. 2013, Mnih et al. 2015, Silver et al. 2017].

The simulation environment implements, by default, the DQN algorithm introduced in [Mnih et al. 2013, Mnih et al. 2015] for training the advanced traffic agents. As a variant of the popular Q-learning algorithm [Watkins and Dayan 1992], DQN uses a neural network to approximate the optimal state-action value function (i.e., the Q-function). To make this work, DQN utilizes four core concepts: experience replay [Lin 1993], a fixed target network, reward clipping, and frame skipping [Mnih et al. 2015].
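As an illustration of two of these concepts, the sketch below shows a minimal experience replay memory and a periodically synchronized target network in Python; the DeepTraffic environment implements these mechanisms in JavaScript, so this is a conceptual sketch only, and the synchronization interval is an illustrative value.

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity buffer of (s, a, r, s_next) transitions (experience replay)."""
    def __init__(self, capacity=5000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling breaks the correlation between
        # consecutive transitions.
        return random.sample(self.buffer, batch_size)

def sync_target_network(q_params, target_params, step, sync_every=1000):
    """Copy the Q-network parameters (given as a dict) into the target network
    every `sync_every` steps and keep them fixed otherwise."""
    if step % sync_every == 0:
        target_params.update(q_params)
    return target_params
```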

The resulting approximate state-action value function is parametrized as $Q(s, a; \theta_i)$, in which $\theta_i$ are the parameters (i.e., weights) of the Q-network at iteration $i$ [Mnih et al. 2015]. To train the Q-network at iteration $i$, one has to minimize the following loss function:

$$L_i(\theta_i) = \mathbb{E}_{(s,a,r,s') \sim U(D)} \left[ \left( y_i - Q(s, a; \theta_i) \right)^2 \right] \qquad (1)$$

in which $(s,a,r,s') \sim U(D)$ represents samples of experiences drawn uniformly at random from the experience replay memory $D$ (experience replay), $y_i = r + \gamma \max_{a'} Q(s', a'; \theta_i^-)$ is the target for iteration $i$, $\gamma$ is the discount factor determining the agent's horizon, $\theta_i$ are the parameters of the Q-network at iteration $i$, and $\theta_i^-$ are the network parameters used to compute the target at iteration $i$, which are updated every $C$ steps and held fixed otherwise (fixed target network) [Mnih et al. 2015]. Algorithm 1 outlines the full pseudo-code algorithm.

Initialize replay memory D to capacity N
Initialize action-value function Q with random weights θ
for episode = 1, M do
     Observe initial state s_1
     for t = 1, T do
         With probability ε select a random action a_t
         otherwise select a_t = argmax_a Q(s_t, a; θ)
         Execute action a_t
         Observe reward r_t and state s_{t+1}
         Store experience (s_t, a_t, r_t, s_{t+1}) in D
         Sample random minibatch of transitions (s_j, a_j, r_j, s_{j+1}) from D
         Set y_j = r_j for terminal s_{j+1}, otherwise y_j = r_j + γ max_{a'} Q(s_{j+1}, a'; θ^-)
         Train the Q-network using (y_j − Q(s_j, a_j; θ))² as loss
     end for
end for
Algorithm 1: Deep Q-learning with Experience Replay
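A minimal sketch of the target and loss computation used in Equation (1) and in the "Set"/"Train" steps of Algorithm 1, written with NumPy for clarity; the agents in this work are trained by the simulation's JavaScript DQN, so the network forward passes `q_net` and `target_net` are stand-ins here.

```python
import numpy as np

def dqn_loss(batch, q_net, target_net, gamma=0.9):
    """Mean squared TD error over a minibatch of (s, a, r, s_next) transitions.

    `q_net(states)` and `target_net(states)` are assumed to return arrays of
    shape [batch, num_actions] containing the estimated Q-values.
    """
    states = np.stack([t[0] for t in batch])
    actions = np.array([t[1] for t in batch])
    rewards = np.array([t[2] for t in batch])
    next_states = np.stack([t[3] for t in batch])

    # y = r + gamma * max_a' Q(s', a'; theta^-), computed with the frozen
    # target network (terminal handling omitted for brevity).
    targets = rewards + gamma * target_net(next_states).max(axis=1)

    # Q(s, a; theta) for the actions that were actually taken.
    q_values = q_net(states)[np.arange(len(batch)), actions]

    return np.mean((targets - q_values) ** 2)
```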

3.2 Extended Hyperparameter Search

Within deep reinforcement learning, a structured approach is needed to determine suitable hyperparameter configurations, both for the neural network's architecture and for the training process. The following approach fulfills this requirement over multiple search iterations. The micro-traffic simulation has already been used to conduct a large-scale, crowd-sourced hyperparameter search [Fridman et al. 2018]. In a first step, the proposals drawn from this hyperparameter search are utilized to define the intervals of the hyperparameters (see Tab. 1). Building on these hyperparameter bounds, a random search over multiple configurations is performed, as proposed by [Bergstra and Bengio 2012, Goodfellow et al. 2016].

Hyperparameter            Configuration
lanesSide                 3
patchesAhead              30
patchesBehind             13
trainIterations           100k
temporal window           0
num neurons               21
learning rate             0.00017
momentum                  0.57
batch size                53
l2 decay                  0.01
experience size           5000
start learn threshold     500
gamma                     0.9
learning steps total      54129
learning steps burnin     1083
epsilon min               0.86
epsilon test time         0.22
number of layers          7
Table 1: Itemization of the hyperparameters of the search space. The configuration column lists the optimal hyperparameter configuration found with the presented hyperparameter search.
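A minimal sketch of the random search over the hyperparameter intervals described above; the interval bounds below are placeholders, since only the final configuration is reproduced in Table 1, and `evaluate` stands in for training and validating an agent in the simulation.

```python
import random

# Placeholder search intervals (lower, upper); the actual bounds follow the
# crowd-sourced search in [Fridman et al. 2018] and are not reproduced here.
SEARCH_SPACE = {
    "lanesSide": (1, 5),
    "patchesAhead": (1, 50),
    "patchesBehind": (1, 20),
    "num_neurons": (10, 100),
    "learning_rate": (1e-5, 1e-2),
}

def sample_configuration(space):
    """Draw one configuration uniformly at random from the intervals."""
    config = {}
    for name, (lo, hi) in space.items():
        if isinstance(lo, int) and isinstance(hi, int):
            config[name] = random.randint(lo, hi)
        else:
            config[name] = random.uniform(lo, hi)
    return config

def random_search(space, evaluate, n_trials=20):
    """Evaluate `n_trials` random configurations and return them sorted by
    average speed (best first); `evaluate(config)` returns the speed in mph."""
    configs = [sample_configuration(space) for _ in range(n_trials)]
    scored = [(cfg, evaluate(cfg)) for cfg in configs]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```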

Subsequently, the five best-performing networks generated by the random search were utilized to initialize an elitist evolutionary algorithm. The hyperparameter search for artificial neural networks is inhibited by the comparatively long training time for each hyperparameter configuration. Therefore, an elitist, fast-converging evolutionary algorithm was deployed to automate the process further. The whole hyperparameter search process reduces the effects of bad agent configurations, rendering the effects of the transfer learning and multi-agent approaches more visible and reproducible. In the future, we would also like to exploit the hyperparameter tuning capabilities of evolutionary algorithms to create highly optimized agents [Salimans et al. 2017, Such et al. 2017, Conti et al. 2017].
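The following is a minimal sketch of such an elitist evolutionary loop over hyperparameter configurations, assuming the population is seeded with the top random-search results; the concrete crossover and mutation operators, their rates, and the population size used in this work are not reproduced here, so the values below are illustrative.

```python
import random

def mutate(config, space, rate=0.1):
    """Resample each hyperparameter from its (lower, upper) interval with
    probability `rate` (illustrative value)."""
    mutated = dict(config)
    for name, (lo, hi) in space.items():
        if random.random() < rate:
            mutated[name] = (random.randint(lo, hi) if isinstance(lo, int)
                             else random.uniform(lo, hi))
    return mutated

def crossover(parent_a, parent_b):
    """Uniform crossover: each hyperparameter is inherited from either parent."""
    return {k: random.choice([parent_a[k], parent_b[k]]) for k in parent_a}

def elitist_evolution(seed_configs, space, evaluate, generations=6):
    """Small elitist population: the best configuration always survives, and
    the remaining slots are filled with mutated offspring of the top parents.
    `evaluate(config)` is assumed to return the achieved average speed (mph)."""
    population = [(cfg, evaluate(cfg)) for cfg in seed_configs]
    for _ in range(generations):
        population.sort(key=lambda item: item[1], reverse=True)
        elite = population[0]
        parents = [cfg for cfg, _ in population[:3]]
        offspring = []
        for _ in range(len(population) - 1):
            parent_a, parent_b = random.sample(parents, 2)
            child = mutate(crossover(parent_a, parent_b), space)
            offspring.append((child, evaluate(child)))
        population = [elite] + offspring  # elitism: keep the best parent
    return max(population, key=lambda item: item[1])
```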

4 Learning Strategies for Systems Based on Distributed Decision-Making

4.1 Transfer Learning

Throughout the transfer learning strategy, a first core neural network ANN is trained. The network is trained with a single agent deployed within the micro-traffic simulation. The training is iterated while training and evaluating different hyperparameter configurations (for the hyperparameter configuration see Tab. 1). Subsequently, the learned model is repurposed for a multi-agent system: the decision-making process is distributed over multiple, independent agents. The transfer learning approach presented here is based on parameter sharing among multiple agents, while the agents maintain their ability to carry out self-determined actions. To that end, the previously learned weights of the core network ANN are transferred onto a second, third, and further agents, as described by [Olivas et al. 2009] and displayed in Fig. 1.

Figure 1: Two screenshots from the micro-traffic simulation. The highlighted cells depict the catchment areas of the safety system, which automatically slows down the car to prevent collisions. The vehicles carrying the logo represent trainable agents, while those without a logo are not trainable and exhibit random behavior. The left figure shows the training process of the core network ANN, whereas the right figure illustrates the pretrained core network being deployed among multiple agents.
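A minimal sketch of the weight transfer, assuming the trained network parameters can be exported from and imported back into the simulation; the function names in the usage comments are illustrative, not the simulation's API.

```python
import copy

def transfer_weights(core_agent_params, num_agents):
    """Replicate the single-agent core network parameters onto `num_agents`
    independent agents (parameter sharing without further joint training)."""
    return [copy.deepcopy(core_agent_params) for _ in range(num_agents)]

# Usage sketch: `load_trained_core()` and `deploy_agents()` stand in for the
# simulation's export/import hooks and are assumptions.
# core_params = load_trained_core("core_network.json")
# agents = transfer_weights(core_params, num_agents=11)
# deploy_agents(agents)
```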

4.2 Multi-agent Learning

Within the multi-agent learning strategy, the agents are trained simultaneously without being aware of each other. More precisely, they have to interact with each other without the possibility of communicating among themselves, which makes joint planning impossible. The resulting network ANN is trained with the joint objective of maximizing the average speed of all agents, but as in the transfer learning scenario, actions are taken individually and in a greedy way. The neural network's parameters are distributed and shared across all agents (see Fig. 2). In contrast to the transfer learning approach, the multi-agent strategy enables the agents to learn to interact directly with other agents in order to increase the reward [Tuyls and Weiss 2012].

Figure 2: The network ANN is shared across multiple agents and trained with respect to a reward function that depends on the outcome of all agents' actions and their influence on the average speed.
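A conceptual sketch of such a shared-network training loop: all agents act greedily with the same parameters, and every agent's transitions feed the same replay memory, so the shared network is updated with experience from all agents. The environment interface (`env.step`, per-agent observations and rewards) is an assumption for illustration.

```python
def multi_agent_episode(env, shared_q_net, replay_memory, num_agents, epsilon=0.05):
    """One episode of simultaneous, communication-free multi-agent training.

    `env` is assumed to expose per-agent observations and to accept one action
    per trainable agent; `shared_q_net.act(obs, epsilon)` is assumed to return
    an epsilon-greedy action and `train_step` to perform one DQN update.
    """
    observations = env.reset()
    done = False
    while not done:
        # Each agent picks its own (greedy) action from the shared network.
        actions = [shared_q_net.act(obs, epsilon) for obs in observations]
        next_observations, rewards, done = env.step(actions)

        # Every agent's transition is stored in the same replay memory, so the
        # shared parameters are trained on the experience of all agents.
        for i in range(num_agents):
            replay_memory.push((observations[i], actions[i],
                                rewards[i], next_observations[i]))
        shared_q_net.train_step(replay_memory)
        observations = next_observations
```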

4.3 Traffic Pattern Annotation

In order to summarize traffic flow and make it analyzable, an annotation for traffic patterns is introduced. As traffic flow can be defined as the absence of traffic congestion, the proposed traffic pattern annotation is based on analyzing congestion patterns (see Fig. 3).

Figure 3: Two screenshots from the micro-traffic simulation. The highlighted cells depict the catchment areas of the safety system, which automatically slows down the car to prevent collisions. The left figure shows the vehicle in a state of free passage, whereas in the right figure, the left lane and area directly in front of the vehicle are blocked.

The congestion pattern (see Fig. 3) is cast into a feature vector annotation cp. The annotation cp comprises a boolean stating whether the car is blocked to the front and sides, or whether one of the lanes – left lane, front lane, or right lane – is passable with respect to the safety regulations within the safety catchment area. Furthermore, the feature vector annotation takes into account the speed at which the agent drove into the congestion, as well as the loss in speed, or deceleration, the agent's vehicle experiences within half a second of simulated time after encountering the congestion. A further feature reflects whether the agent was compromised by another intelligent agent and thus assesses the amount of cooperation during evaluation. Finally, the number of congestions throughout the evaluation runs is taken into account.
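A minimal sketch of the congestion-pattern annotation as a data structure; the field names are illustrative stand-ins, since the paper's feature symbols are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class CongestionPattern:
    """Feature vector for one congestion incident (illustrative field names)."""
    fully_blocked: bool            # front and both neighboring lanes blocked
    entry_speed_mph: float         # speed when driving into the congestion
    deceleration_mph: float        # speed lost within 0.5 s of simulated time
    caused_by_trained_agent: bool  # blocked by another intelligent agent

def summarize(patterns):
    """Aggregate the congestion incidents of an evaluation run."""
    return {
        "num_congestions": len(patterns),
        "num_fully_blocked": sum(p.fully_blocked for p in patterns),
        "num_agent_involved": sum(p.caused_by_trained_agent for p in patterns),
    }
```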

5 Experiments

The first experiments focus on the hyperparameter search described in Section 3.2. The configuration of the elitist evolutionary algorithm is as follows: a small population size and a directed population initialization by means of the random search, keeping the best parent during the transition into the next generation. The crossover and mutation rates are set such that the approach significantly favors exploitation over exploration of the hyperparameter space. Hence, the approach converges in a short time while exhibiting the disadvantage of reduced exploration of the hyperparameter space.

In order to compare the transfer learning strategy to the multi-agent strategy (see Fig. 5), the neural network architecture and training parameters (see Tab. 1) discovered by the hyperparameter search are utilized further. Each strategy is applied to different numbers of trainable agents, ranging from a single agent up to eleven agents. Each arrangement is evaluated several times to account for expected deviations due to differing evaluation data. However, this is found to pose only a minor issue, as the gap between the minimal and maximal validation performance remains small in all arrangements.
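A brief sketch of this evaluation protocol under the stated assumptions: each strategy and agent-count arrangement is evaluated repeatedly and the mean average speed and spread are recorded. `evaluate_arrangement` stands in for a full simulation run (e.g., via the Selenium helper), and the number of repetitions is illustrative.

```python
import statistics

def evaluate_strategy(evaluate_arrangement, strategy,
                      agent_counts=range(1, 12), repetitions=3):
    """Return mean and spread of the average speed (mph) per agent count.

    `evaluate_arrangement(strategy, n)` is assumed to run one full evaluation
    with `n` trainable agents and return the achieved average speed.
    """
    results = {}
    for n in agent_counts:
        speeds = [evaluate_arrangement(strategy, n) for _ in range(repetitions)]
        results[n] = {
            "mean_mph": statistics.mean(speeds),
            "spread_mph": max(speeds) - min(speeds),
        }
    return results
```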

5.1 Results

In the quest to find a high-performance hyperparameter configuration, the random search starts by evaluating a set of configurations in the micro-traffic simulation (see the first search iteration in Fig. 4). The average speed is used as an indicator for traffic flow. The best five configurations are selected to initialize the evolutionary algorithm, which narrows the set of configurations with respect to their maximum, minimum, and mean average speed. The evolutionary algorithm is deployed over six generations (see the subsequent search iterations in Fig. 4). As discussed in Section 3.2, the evolutionary algorithm is elitist, with a focus on exploitation while still enabling a limited exploration. The influence of exploration can be observed in individual search iterations, while the stronger exploitation becomes evident in later iterations, where the range of values decreases again. After completion of the evolutionary algorithm, the performance of the configurations has increased compared to the initial random search, without any user interaction apart from choosing educated upper and lower bounds for the hyperparameter search space.

Figure 4: Development of the configurations' performance in [mph] over the search iterations. The first iteration depicts the random search approach, while the following iteration shows the sampling of the top five configurations from the random search. The remaining iterations represent the generations of the evolutionary algorithm. The upper bound of the graph is the maximum performance, the lower bound is the minimum performance, and the black line represents the mean performance in the respective search iteration.
Figure 5: Evaluation of the key performance index [mph] for the various numbers of agents deployed in the micro-traffic simulation. The transfer learning strategy is shown in dark gray, while the multi-agent learning strategy is shown in light gray.

Both strategies experience a drop in performance when applied to multi-agent scenarios (see Fig. 5). For the initial addition of supplementary agents, the performance downturn is likely due to the fact that the network architecture and training hyperparameters were optimized for single-agent performance and then face a different scenario during the multi-agent evaluation. Notwithstanding, an overall increase in performance associated with an increase in the number of agents can be recognized. The regression curves show positive slopes, in mph per agent added, for both the transfer learning strategy and the multi-agent training strategy (compare with Fig. 5), with the multi-agent strategy having the edge over the transfer learning strategy: it is able to profit from training in a multi-agent scenario. By contrast, agents in the transfer learning strategy never had the opportunity to learn how to react to and interact with other trained agents.
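A brief sketch of how such regression slopes can be computed from the evaluation results, using NumPy's least-squares polynomial fit; the speed values below are placeholders, not the paper's results.

```python
import numpy as np

# Number of trainable agents and the corresponding mean average speeds (mph);
# the speed values are placeholders for illustration only.
agents = np.array([1, 3, 5, 7, 9, 11])
mean_speed_mph = np.array([72.1, 71.0, 71.4, 71.9, 72.3, 72.8])

# Slope of the linear regression in mph per agent added.
slope, intercept = np.polyfit(agents, mean_speed_mph, deg=1)
print(f"regression slope: {slope:.3f} mph per additional agent")
```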

Figure 6: Number of congestions (incidences) for the various numbers of agents deployed in the micro-traffic simulation. The transfer learning strategy is shown at the top and the multi-agent learning strategy at the bottom. The gray area is the absolute number of congestions detected during evaluation. The light gray line depicts the number of congestions with participation of another trained agent. The dark gray line depicts the number of congestions in which the ego-vehicle is fully blocked on all lanes.

Further insight is gained by analyzing the traffic congestion feature vectors (see Section 4.3 and Fig. 6). What strikes one the most is the counterintuitive finding that the number of congestions (gray area) increases with the number of trained agents deployed in the micro-traffic simulation, even as the average evaluation speed increases. Simultaneously, the number of congestions in which the car is held in full enclosure (dark gray line) remains roughly constant, fluctuating around a fixed level of incidences.

Figure 7: Number of congestions divided into bins of deceleration magnitude [mph]. Light gray bars are associated with transfer learning with a single agent, whereas dark gray bars are associated with transfer learning with eleven agents.

This is only seemingly contradictory, as the largest part of the increased number of congestion incidences can be attributed to low decelerations (see Fig. 7). In terms of traffic flow, this means that the trained agents are able to anticipate and withdraw from potentially congestive positions in advance, or else dissolve a formation conducive to congestion. Thus, the trained agents are able to accelerate again shortly after driving into an area of congestion, which leads to better performance.

6 Conclusion

The influence of transfer learning and multi-agent learning in the presence of multiple trainable agents has been investigated with respect to distributed decision-making in order to increase simulated highway traffic flow. Both strategies were implemented and evaluated in the micro-traffic simulation environment. Since the micro-traffic simulation natively supports only multi-agent learning, the strategy comparison conducted here, together with the deployment of the transfer learning strategy and the evaluation tooling, allows for extended testing and evaluation.

It was demonstrated that transfer learning strategies are applicable within the utilized micro-traffic simulation. A beneficial effect of such strategies, correlating with the number of trainable agents deployed in mixed-intelligence traffic, has been shown. It was found that the transfer learning strategy and the multi-agent strategy reached approximately the same level of performance while also displaying similar characteristics. Concentrating on traffic patterns, it became evident that the number of congestions an agent experiences is not necessarily contingent on the average speed. More important are the magnitude of deceleration required of the agent and the time needed to withdraw from a congested situation. The micro-traffic scenario is a vast simplification of real traffic. Nevertheless, our findings suggest that multi-agent learning has an edge with respect to performance in scenarios with more intelligent agents involved. This leads to the assumption that, with a growing number of intelligent agents taking to the roads, multi-agent learning strategies will be inevitable.

Further comparisons between the investigated strategies might reveal explicit distinctions. To this end, investigating ratios with a higher share of trainable agents is advisable. Moreover, the multi-agent strategy should benefit from a network architecture and training design that is tailored to the multi-agent scenario (as opposed to the single-agent scenario). Increasing the number of training iterations and deepening the hyperparameter search is recommended.

Acknowledgments

We thank Dr. Jochen Abhau and Dr. Stefan Elser from Research and Development, as well as the whole Data Science Team at ZF Friedrichshafen AG, for supporting this research. Thank you for all the assistance and comments that greatly improved this work. We would also like to express our gratitude to Prof. Dr. Ralf Mikut from the Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, who provided insight and expertise that greatly enhanced this and other research.

References