Conditional Affordance Learning for Driving in Urban Environments

06/18/2018 · by Axel Sauer, et al.

Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, which build an extensive model of the environment, and imitation learning approaches, which map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68% on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights, speed signs and smooth car-following, resulting in a significant reduction of traffic accidents.


1 Introduction

An autonomous vehicle is a cognitive system and hence follows the concept of a rational agent: in order to operate safely it must accurately observe its environment, make robust decisions and perform actions based on these decisions [1]. Autonomous navigation can be described as a mapping function from sensory input to control output. To implement this function, three major approaches have been proposed: modular pipelines, imitation learning, and direct perception.

Most autonomous vehicles are based on modular pipelines (MP) [4, 5]

. An MP splits the autonomous driving problem into smaller, easier subproblems: perception, path planning, and control. The approach often relies on various sensors in order to produce a consistent representation of the surrounding environment. A driving decision is then made based on this representation. While MPs are relatively interpretable due to their modularity, they rely on complex intermediate representations which are manually chosen (e.g., optical flow), often hard to estimate with sufficient accuracy and might not necessarily be the optimal choice for solving the sensorimotor control task. Furthermore, MPs require large amounts of annotated data which can be costly to obtain, e.g., pixel-wise semantic segmentation for training neural networks or high-definition maps for localization.

The second approach, imitation learning (IL) [18], maps directly from raw input to control output and is learned from data in an end-to-end fashion, skipping the intermediate step of building an explicit environment model. In contrast to MPs, data for training IL-based navigation systems can be collected with relative ease, i.e., by driving around and recording the video footage together with the driven trajectory. However, IL approaches are faced with the problem of learning a very complex mapping (from raw input to control) in a single step. Thus, a model with high capacity as well as a very large amount of training data is required for handling the large variety of real world situations which may be encountered at test time. Besides, IL lacks transparency as it is hard to comprehend the internal decision-making process in a neural network. This raises security concerns: a driving system that does not act transparently might not be trusted or confidently used by humans.

The third approach, direct perception (DP) [2], aims to combine the advantages of MP and IL. Instead of predicting a detailed representation of the environment, the goal in DP is to predict a low-dimensional intermediate representation of the environment which is then used in a conventional control algorithm to maneuver the vehicle. Thus, DP neither requires the network to learn the complex sensorimotor control problem end-to-end, nor does it assume the availability of datasets with pixel-level or box-level labels that are significantly more time-consuming to obtain than image-level labels. Chen et al. [2] demonstrated good results when applying this approach to highway driving using an open-source simulator. However, highway driving is a rather easy task compared to driving in rural or urban areas. When considering navigation in urban areas, several difficulties are added: the agent must obey traffic rules (speed limits, red lights, etc.), take into account possible obstacles on the road (e.g., pedestrians crossing the street), and handle junctions with more than one possible direction.

At the core of the direct perception approach lies the choice of intermediate representation to be predicted. Ideally, this representation should be of low dimensionality while comprising all the necessary information for making a driving decision. One choice of such representation are affordances, attributes of the environment which limit the space of allowed actions. As an example, consider the distance to the vehicle ahead which limits the ability of the ego-vehicle to speed up. Affordances can also be conditioned on high-level driving decisions as provided, e.g., by a conventional navigation system. Consider a vehicle at an intersection: the distance to the centerline changes with the desired turn we would like the vehicle to take (and thus the lane we would like it to follow).
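As an illustration, such a low-dimensional affordance set can be represented as a small fixed-size record. The following sketch is purely illustrative: field names and example values are our own, not the paper's notation.

```python
from dataclasses import dataclass

# Hypothetical affordance vector; fields follow the affordance set
# described in the paper, example values are made up.
@dataclass
class Affordances:
    hazard_stop: bool          # discrete, unconditional
    red_light: bool            # discrete, unconditional
    speed_sign: float          # discrete, unconditional, in km/h
    vehicle_distance: float    # continuous, unconditional, in m
    relative_angle: float      # continuous, conditional on directional input, in rad
    center_distance: float     # continuous, conditional on directional input, in m

# A single prediction step would produce one such record per frame.
a = Affordances(hazard_stop=False, red_light=False, speed_sign=30.0,
                vehicle_distance=50.0, relative_angle=0.02, center_distance=-0.1)
```

A conventional controller can then consume this record directly, which is the core idea of direct perception.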

Figure 1: Conditional Affordance Learning (CAL) for Autonomous Urban Driving. The input video and the high-level directional input are fed into a neural network which predicts a set of affordances. These affordances are used by a controller to calculate the control commands.

The goal of this work is to generalize the direct perception approach to the urban setting. We develop novel intermediate representations in the form of affordances suitable for urban navigation as well as conditional models where the decision is not based on the image input alone, but also on high-level navigation commands as provided by a consumer grade navigation system (e.g., “turn left at the next intersection”). Moreover, our system is designed to drive more safely and comfortably, alleviating the common problem of jerky driving behavior in existing approaches for autonomous driving [3]. We coin the developed approach “Conditional Affordance Learning” (CAL) as the predicted affordances and the model output are conditioned on high-level navigation commands. Figure 1 shows an overview of our approach. The main contributions of our work are summarized as follows:


  • We develop novel intermediate representations which are low-dimensional, yet provide the necessary information to allow for driving in urban environments.

  • We deploy a conditional navigation model which allows for taking high-level directional commands prior to intersections. We further implement a control algorithm which allows for a smooth ride while obeying traffic rules.

  • We provide an in-depth evaluation of different network architectures and parameterizations. Our network is able to predict all intermediate representations in a single forward pass.

  • We demonstrate the benefits of video-based representation learning for navigation. Prior works either operate frame-by-frame or do not evaluate in an online setting.

The code and our trained models can be found at https://github.com/xl-sr/CAL.

2 Related Work

Modular pipelines are the most popular approach to autonomous driving and are employed by most research projects [4, 5]. To build the environmental model, the perception stack needs to detect all aspects of the traffic scene that are likely to be relevant for the driving decision. These detection tasks are usually trained and solved separately [6], most recently by exploiting deep neural networks, e.g., for object detection [7, 8], image segmentation [9, 10], or motion estimation [13, 14]. This information can be aggregated into a model of the environment [15, 16] and a planning module generates an obstacle-free trajectory which is executed by the control module [17].

The first implementation of the imitation learning (IL) approach in the real world, ALVINN [18], dates back to the 80s. More recently, Muller et al. [19] employ a similar system to navigate an off-road truck in a variety of terrains, weather, and lighting conditions while avoiding obstacles. Using more computational power and a modern CNN architecture, contemporary approaches such as the one by Bojarski et al. [20] demonstrate impressive performance on real-world tasks like highway following and driving in flat courses. Codevilla et al. [21] propose a conditional imitation learning formulation that is able to traverse intersections based on high-level navigational commands.

The idea of direct perception is to predict several affordance indicators describing the driving scene. The concept of affordances was originally proposed by Gibson [22] in the field of psychology and has been applied to the autonomous driving task by Chen et al. [2], demonstrating strong performance in a racing simulation. Al-Qizwini et al. [23] improve the original approach of Chen et al. [2] by analyzing different CNNs for the mapping from image to indicators. They find GoogLeNet [24] and VGG16 [25] to perform best on this task.

Due to the difficulties of training and testing in real environments, driving simulators are often exploited in related work. Two popular driving simulators for research are the open source car racing game TORCS [2, 23, 26] and the commercial game Grand Theft Auto V (GTA V) [27, 28]. However, TORCS is neither photorealistic nor complex enough, lacking important scene elements such as intersections, pedestrians and oncoming traffic. GTA V can be considered photorealistic, but is closed source and hence limited in its ability to customize and control the environment. In this work, we use the recently released open-source simulator CARLA [29] which provides a trade-off between realism and flexibility, thus addressing some of the problems of previous simulators.

3 Conditional Affordance Learning

Our autonomous driving setup consists of an agent which interacts with the environment as illustrated in Figure 2. A high-level command for the maneuver to be performed (e.g., “go straight”, “turn left”, “turn right”) is provided by a planner, mimicking one of the most common driving scenarios: a human driver (the agent) following the commands of a navigation system (the planner). CARLA uses an A* topological planner to generate these commands. Based on the high-level command and the observation from the environment, the agent calculates the throttle, brake, and steering values. These values are sent back to the environment model, which provides the next observation and high-level command, and the process continues until the goal is reached.
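The interaction described above forms a simple closed loop, sketched below with a stubbed environment and agent (all names and return values are hypothetical placeholders):

```python
# Minimal closed-loop sketch of the agent-environment interaction.
def environment_step(control):
    """Stub for CARLA: applies the control and returns the next
    observation and high-level planner command."""
    observation = "camera_image"   # front-facing camera frame
    command = "go straight"        # high-level directional command
    return observation, command

def agent(observation, command):
    """Stub for the CAL agent: maps (observation, command)
    to (throttle, brake, steering)."""
    return 0.5, 0.0, 0.0

control = (0.0, 0.0, 0.0)
for _ in range(3):  # in practice, loop until the goal is reached
    observation, command = environment_step(control)
    control = agent(observation, command)
```

The real agent replaces the stub with the perception stack and controller described in the following subsections.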

In this work, we focus on vision-based navigation: we only consider observations from a single front-facing camera. Cameras have the advantage of being ubiquitous and cheap. In addition, they provide information that laser scanners or radar sensors cannot, e.g., the color of a traffic light or the type of a speed sign; they are thus the only sensor considered here that allows solving the task on its own. Note that a monocular setup is comparable to how humans solve the driving task: the stereo baseline of the human visual system is insufficient to perceive accurate depth at distances of 50 meters and beyond based on this cue alone.

In the following we will discuss the intermediate representations required for autonomous navigation in urban environments and the submodules of the CAL agent for perception and control.

Figure 2: Overview of our System. The CAL agent (top) receives the current camera image and a directional command (“straight”, “left”, “right”) from CARLA [29]. The feature extractor converts the image into a feature map. The agent stores the last N feature maps in memory, where N is the length of the input sequence required by the perception stack. This sequence of feature maps, together with the directional commands from the planner, is exploited by the task blocks to predict affordances. Different tasks utilize different temporal receptive fields and temporal dilation factors. The control commands calculated by the controller are sent back to CARLA, which updates the environment and provides the next observation and directional command.

3.1 Affordances

A good intermediate representation should be low-dimensional, yet expressive enough to meet the necessary requirements for autonomous navigation: (i) the agent should be able to drive from A to B as fast as possible while obeying traffic rules; (ii) infractions against traffic rules should be avoided at all times. In this work, we consider six types of infractions: driving on the wrong lane, driving on the sidewalk, running a red light, colliding with other vehicles, hitting pedestrians and hitting static objects. Moreover, the speed limit must be obeyed at all times; (iii) the agent should be able to provide a pleasant driving experience. The car should stay in the center of the road and take turns smoothly. In addition, it should be able to follow a leading car at a safe distance.

Based on these requirements we propose the affordance set shown in Figure 3 (left). The agent should stop for red traffic lights and change its speed according to the current speed sign. For this purpose we define an observation area in the local coordinates of the agent, as shown in Figure 3 (right). If a traffic light or a speed sign lies within its observation area, the corresponding affordance indicator switches to True or to the speed limit value, respectively. The agent keeps an internal memory of the speed limit: once a speed sign is passed, the current speed limit is updated and the agent adjusts its speed accordingly. If an obstacle is located within the observation area in front of the car, the agent should perform a hazard stop, i.e., come to a complete stop as quickly as possible. In CARLA, possible road obstacles are cars and pedestrians. For driving in the center of the road, the position and the orientation of the car on the road must be known. The position is defined by the distance to centerline d, the lateral distance of the center of the vehicle’s front axle to the closest point on the centerline. The relative angle ψ describes the orientation of the car with respect to the road tangent. To achieve steady vehicle following, the distance ℓ to the vehicle ahead must be known. If a car is located in the corresponding observation area, we measure ℓ as the shortest distance between the bounding boxes of both cars; otherwise, ℓ is set to a fixed default value.
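The observation-area logic above can be sketched as a simple membership test in the vehicle's local frame. The rectangular area extents and the default distance below are hypothetical placeholders, not the paper's values.

```python
def in_observation_area(obj_xy, area):
    """Check whether a point in the vehicle's local frame (x forward,
    y lateral) lies inside a rectangular observation area.
    `area` = (x_min, x_max, y_min, y_max); values are illustrative."""
    x, y = obj_xy
    x_min, x_max, y_min, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical area for detecting a leading vehicle up to 50 m ahead
A_vehicle = (0.0, 50.0, -2.0, 2.0)

def vehicle_distance(lead_car_xy, default=50.0):
    """Affordance sketch: distance to the car ahead if it is inside the
    observation area, otherwise a fixed default (no relevant vehicle).
    The longitudinal gap is used as a simple proxy for the
    bounding-box distance described in the text."""
    if in_observation_area(lead_car_xy, A_vehicle):
        return lead_car_xy[0]
    return default
```

Analogous areas would trigger the red light, speed sign, and hazard stop indicators.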

Type        Conditional   Affordance                      Acronym   Range of values
discrete    No            Hazard stop                     -         {True, False}
discrete    No            Red traffic light               -         {True, False}
discrete    No            Speed sign [km/h]               -         -
continuous  No            Distance to vehicle [m]         ℓ         -
continuous  Yes           Relative angle [rad]            ψ         -
continuous  Yes           Distance to centerline [m]      d         -
Figure 3: Affordances. Left: We categorize affordances according to their type (discrete/continuous) and whether they are conditional (dependent on directional input) or unconditional. Right: Illustration of the affordances (red) and observation areas used by our model.

3.2 Perception

We formulate perception as a multi-task learning (MTL) problem: using a single neural network we predict all affordances in a single forward pass. This yields a computationally efficient solution – for a forward pass, the system (Network + Controller) takes 50 ms on average on a NVIDIA GTX 1080 Ti GPU. This is well within real-time requirements for autonomous driving systems, where latency should reach levels below 100 ms [30]. Furthermore, MTL has been shown to learn internal representations which are able to improve the network’s generalization abilities [31]. Xu et al. [32] demonstrate this effect in the context of autonomous driving. In our network, all convolutional layers are shared between tasks. The output of the last convolutional layer is used as input for the task-specific branches which we will call “task blocks” in the following.

Neural network architecture. To extract features from raw input images, we use the convolutional layers of a VGG16 network [25] pretrained on ImageNet [33]. The simple structure of the network makes it an appealing choice for a feature extractor with real-time requirements. The feature extractor is applied to every image in the input sequence to extract a sequence of feature maps. The task blocks are shallow networks consisting of a task-specific layer, followed by a batch normalization layer [34] and a dropout layer. For the task-specific layers we experiment with dense layers, LSTMs [35], GRUs [36], and temporal convolutional layers. Each task block has its own temporal receptive field and temporal dilation factor (e.g., using only every other image). The output nodes of the conditional task blocks are divided into three equally sized groups. The directional command is used to switch between these groups of nodes. Only the active group is used to predict the target label, and during training the gradients are only backpropagated through this group. We observed that training specialized submodules for each directional command leads to better performance compared to using the directional command as an additional input to the task networks. This observation agrees with Codevilla et al. [21]. However, note that in contrast to their Conditional Imitation Learning approach, we use specialized task networks to predict conditional affordances rather than conditional steering commands. Conditioning in the network has several advantages over conditioning in the controller; see the supplementary material for a discussion.
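The output-group switching of a conditional task block can be sketched as follows; the command names, group sizes, and mask-based gradient gating are illustrative, not the paper's implementation.

```python
import numpy as np

# The conditional task block has one output group per directional
# command; only the active group produces the prediction and receives
# gradients during training.
COMMANDS = ["straight", "left", "right"]

def conditional_output(block_output, command, group_size):
    """Select the active group: block_output is a flat vector of
    len(COMMANDS) * group_size values."""
    i = COMMANDS.index(command)
    return block_output[i * group_size:(i + 1) * group_size]

def gradient_mask(command, group_size):
    """1 for the active group, 0 elsewhere: multiplying the gradient by
    this mask backpropagates only through the active group."""
    mask = np.zeros(len(COMMANDS) * group_size)
    i = COMMANDS.index(command)
    mask[i * group_size:(i + 1) * group_size] = 1.0
    return mask
```

In a deep learning framework the same effect is typically obtained by indexing into the output tensor before computing the loss.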

Training. As our approach decouples affordance prediction and control, we are able to train our perception stack independently from the chosen control algorithm. To train the affordance prediction network, we recorded a dataset with the CARLA simulator [29]. CARLA provides two towns: Town 1 for training and Town 2 for testing. For collecting training data in Town 1, we navigate through the city using our controller (described in Section 1.2 of the supplementary material) based on the ground truth affordances. During data collection, the directional input at intersections is chosen at random. All other traffic participants and all traffic lights are controlled by the in-game AI. The full dataset contains approximately 240 thousand images and the corresponding ground truth labels for each of the six affordances. Five percent of the dataset is used for validation. The network is trained on mini-batches of size 32. For optimization, we use Adam [37] with a base learning rate of 5e-5. For classification, we use the class-weighted categorical cross-entropy where the weights are chosen inversely proportional to the class label occurrence in the training set. For regression, we use the mean absolute error. All six affordance loss functions are added to yield the loss function for the whole network:

L = Σᵢ₌₁⁶ Lᵢ    (1)
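A minimal sketch of this loss composition on toy inputs follows; the weight normalization is one plausible choice and not necessarily the paper's exact scheme.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Weights inversely proportional to class occurrence in the
    training set, normalized so the mean weight is 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    w = 1.0 / np.maximum(counts, 1.0)
    return w / w.sum() * n_classes

def weighted_ce(probs, label, weights):
    """Class-weighted categorical cross-entropy for one sample."""
    return -weights[label] * np.log(probs[label])

def mae(pred, target):
    """Mean absolute error for the continuous affordances."""
    return np.mean(np.abs(pred - target))

# Total loss = sum of the six per-affordance losses, e.g.
# L = L_hazard + L_red_light + L_speed_sign + L_veh_dist + L_angle + L_center
```

The classification losses use the class weights; the regression losses use the MAE; all six are simply summed per Eq. (1).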

Data augmentation is crucial to achieve generalization. Following [29], we use the following augmentations: changes in color, contrast, and brightness as well as Gaussian blur and salt-and-pepper noise. Common techniques like horizontal and vertical flipping are not used, because in the driving context horizontal flipping would correspond to switching from right- to left-hand traffic, and vertical flipping to driving upside down. In addition, we employ camera pose randomization. It is well known that for imitation learning, utilizing a single fixed camera pose leads to unstable behavior [20, 38] due to control errors and unexpected observations reinforcing each other. Thus, we use a three-camera setup that records the same episode from three laterally offset viewpoints. In addition, each camera is randomly rotated around its vertical axis by a small angle. This setup is only used for collecting the dataset; during testing, we use the input of a single centered camera.
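A rough sketch of such photometric augmentations is shown below; the parameter ranges, noise fraction, and image size are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Apply illustrative photometric augmentations: contrast and
    brightness jitter plus salt-and-pepper noise. Flipping is omitted
    on purpose, for the reasons given in the text."""
    img = img.astype(float)
    img = img * rng.uniform(0.8, 1.2)          # contrast jitter
    img = img + rng.uniform(-20.0, 20.0)       # brightness jitter
    noise = rng.random(img.shape) < 0.01       # ~1% of pixels
    img[noise] = rng.choice([0.0, 255.0], size=int(noise.sum()))
    return np.clip(img, 0.0, 255.0)

img = rng.uniform(0.0, 255.0, size=(88, 200, 3))   # illustrative input size
```

Gaussian blur would typically be added via an image library; it is left out here to keep the sketch dependency-free.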

During 10% of the training episodes, we additionally set the distance to the preceding vehicle to zero to provoke rear-end collisions with other cars, which otherwise would not occur in our training set. This way, the network learns to correctly predict these situations as hazard stops.

Hyperparameter search. The task block hyperparameters comprise the type of layer (Dense, LSTM, GRU, Temporal Convolution), the number of nodes within these layers, the dropout ratio, the sequence length, and the dilation value. The architecture of our network is optimized in a two-step procedure. First, we randomly choose a layer type and sample values for the remaining hyperparameters from predefined value ranges (see supplementary material for the respective ranges). We initialize the network with the same set of parameters for each task block and train it until the error on the validation set converges. Second, the overall best performing task blocks are combined into a new model. The combined model is then trained jointly until the validation error converges. The hyperparameters of the best performing task blocks are shown in the supplementary material.

3.3 Controller

The control algorithm is split into longitudinal control (throttle, brake) and lateral control (steering) since it is possible to decouple these two tasks [39].

Longitudinal control. The longitudinal controller is subdivided into several states; see the supplementary material for an illustration. The states are (in ascending importance): cruising, following, over_limit, red_light, and hazard_stop. All states are mutually exclusive. For cruising, the target speed v*(t) is the current speed limit. Given the current speed v(t), we minimize the error term v*(t) − v(t) using a PID controller [43] for the throttle value; the brake is set to zero. The value of v*(t) also depends on the current road type: if the agent receives a directional input to turn left or right, we reduce v*(t) to enable smooth and safe navigation through turns. The state switches to following if another car is less than a fixed distance ahead. We chose a static threshold because we found that a dynamic threshold, combined with the uncertainty of the network predictions, leads to undesired state switching between following and cruising. We denote the distance to the front car as ℓ(t). We use the optimal car-following model of Chen et al. [2] and minimize the error term

e(t) = v(t) − v*(t) · (1 − exp(−c · ℓ(t) / v*(t)))    (2)

with a second PID controller. If the car is driving faster than the current speed limit, the state switches to over_limit; for a detected red light it switches to red_light; for a detected obstacle it switches to hazard_stop. For over_limit, red_light, and hazard_stop, the throttle and brake values depend on the vehicle speed, the prediction probability and the urgency to stop. The details as well as a description of the PID tuning procedure are provided in the supplementary material.
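The state selection and the cruising PID can be sketched as follows; the gains, thresholds, and state-selection details are illustrative, not the tuned values from the paper.

```python
# Longitudinal control sketch: mutually exclusive states in descending
# importance, plus a textbook PID for the cruising speed error.
STATES = ["cruising", "following", "over_limit", "red_light", "hazard_stop"]

def select_state(hazard_stop, red_light, speed, speed_limit, veh_dist,
                 follow_dist=35.0, over=15.0):
    """Pick the single active state; thresholds are placeholders."""
    if hazard_stop:
        return "hazard_stop"
    if red_light:
        return "red_light"
    if speed > speed_limit + over:
        return "over_limit"
    if veh_dist < follow_dist:
        return "following"
    return "cruising"

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i, self.prev = 0.0, 0.0

    def step(self, error, dt=0.1):
        self.i += error * dt
        d = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.i + self.kd * d

throttle_pid = PID(kp=0.25, ki=0.05, kd=0.0)
# cruising: drive the speed error v*(t) - v(t) to zero
throttle = throttle_pid.step(30.0 - 25.0)
```

The following state would feed the car-following error of Eq. (2) into a second PID instance instead.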

Lateral control. For lateral control we use the Stanley Controller (SC) [4]. The SC uses two error metrics: the distance to centerline d(t) and the relative angle ψ(t). The control law to calculate the steering angle δSC(t) at the current vehicle speed v(t) is given by

δSC(t) = ψ(t) + arctan( k · d(t) / v(t) )    (3)

where k is a gain parameter. To calculate the final steering output δ(t), we add a damping term which reduces swaying on straight roads. δ(t) is calculated using the current output of the control law δSC(t), the steering angle at the previous time step δ(t−1), and a positive damping constant D:

δ(t) = δSC(t) − D · (δSC(t) − δ(t−1))    (4)
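A minimal sketch of this lateral controller follows; the gain k, damping D, and the small epsilon guarding against division by zero are illustrative values.

```python
import math

def stanley(psi, d, v, k=1.0, eps=1e-6):
    """Stanley control law: psi is the relative angle [rad], d the
    distance to centerline [m], v the vehicle speed [m/s]."""
    return psi + math.atan2(k * d, v + eps)

def damped_steering(delta_sc, delta_prev, D=0.25):
    """Damping term per Eq. (4): low-pass the raw command to reduce
    swaying on straight roads."""
    return delta_sc - D * (delta_sc - delta_prev)

delta_sc = stanley(psi=0.0, d=0.0, v=8.0)  # on the centerline, aligned
delta = damped_steering(delta_sc, delta_prev=0.0)
```

With zero heading and centerline error the steering output is zero, as expected; any lateral offset increases the command, scaled down at higher speeds.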

4 Results

We use the driving benchmark and evaluation protocol of [29] as the basis of our evaluation. The benchmark is composed of four driving tasks of increasing difficulty: driving straight, driving through a single turn, navigating through the town taking several turns, and navigating through the town with additional dynamic obstacles (cars, pedestrians). The agent is initialized at a pre-defined point in the town and has to reach a goal within a task-specific time limit, which equals the time needed to reach the goal when driving along the optimal path at a fixed reference speed.

The simulation engine provides two different towns (Town 1, Town 2); only Town 1 has been seen during training. The evaluation distinguishes between weather conditions that were seen during training and those that were not. For each combination of task, town, and weather condition, several episodes are evaluated. We compare our Conditional Affordance Learning (CAL) approach to Conditional Imitation Learning (CIL) [21] as well as the Modular Pipeline (MP) and Reinforcement Learning (RL) approaches presented in [29]. As stated in the original paper, the RL baseline cannot be claimed state-of-the-art as it still has room for improvement regarding its implementation. Note that the baselines ignore all red lights and target a fixed cruising speed. In contrast, our agent stops for red lights and recognizes speed signs. To provide a fair comparison to prior art, we limit our algorithm to the same cruising speed as the baselines for the quantitative evaluation. We use the same model for all tasks and weather conditions without parameter fine-tuning.

4.1 Goal-Directed Navigation

Table 1 shows our quantitative comparison with respect to the state-of-the-art on CARLA on the task of goal-directed navigation. Our agent outperforms the baselines in most of the tasks and under most conditions. As expected, performance decreases for all methods with increasing task difficulty. CAL particularly excels in generalizing to the new town, a scenario in which most baselines did not perform well. For the new town under training weather conditions, the performance of the CAL agent is up to 68% better than that of the best baseline.

CAL outperforms MP on almost all of the tasks, except for the tasks in the training town under new weather conditions where MP performs slightly better. In almost all tasks and conditions, CAL shows better performance than CIL. Prediction of the relative angle especially profits from the temporal structure of the CAL network, as shown in the supplementary material. Also, the additional parameters of the controller (e.g., the damping constant) improve driving performance. CAL is outperformed by CIL on only one task: driving straight under the training weather conditions in the test town. Our analysis revealed that this happens due to a systematic failure of the CAL perception stack in one specific weather condition where several trees cast shadows onto the street, affecting the prediction of the distance to centerline such that the agent occasionally drifts off the street.

Training conditions New weather New town
New town and
new weather
Task MP CIL RL CAL MP CIL RL CAL MP CIL RL CAL MP CIL RL CAL
Straight 98 95 89 100 100 98 86 100 92 97 74 93 50 80 68 94
One turn 82 89 34 97 95 90 16 96 61 59 12 82 50 48 20 72
Navigation 80 86 14 92 94 84 2 90 24 40 3 70 47 44 6 68
Nav. dynamic 77 83 7 83 89 82 2 82 24 38 2 64 44 42 4 64
Table 1: Quantitative Evaluation on Goal-Directed Navigation Tasks. We show the percentage of successfully completed episodes per task and compare CAL to a Modular Pipeline (MP) [29], Conditional Imitation Learning (CIL) [21] and Reinforcement Learning (RL) [29].

4.2 Infraction Analysis

Table 2 shows the average distances driven between two infractions during the hardest task of the benchmark: navigation with dynamic obstacles. Under test conditions, our approach is able to outperform the baselines in almost every measure, indicating the strong generalization ability of our approach in comparison to the baselines. CAL performs particularly well in avoiding collisions with other vehicles and pedestrians. In the new environment, our agent performs more than ten times better than the best baseline (RL). During testing, pedestrians cross the street more frequently than in the training scenario. The fact that our agent is still able to perform well demonstrates that the hazard stop network was able to predict this affordance robustly despite its sparsity.

Occasionally, the CAL agent drives on the sidewalk. This infraction occurs primarily during right turns. Since right turns in CARLA are generally sharper than left turns, the relative motion of pixels between consecutive frames is larger during a right turn. This may pose a bigger challenge to our perception network, which operates on sequential input. Driving on the sidewalk sometimes also results in crashes with static objects, e.g., traffic lights. Note that the infraction count for static collisions is generally higher than that for driving on the sidewalk, as the agent sometimes slides along a railing located next to the road, continuously increasing the infraction count.

Training conditions New weather New town
New town and
new weather
Task MP CIL RL CAL MP CIL RL CAL MP CIL RL CAL MP IL RL CAL
Opposite lane 10.2 33.4 0.18 6.7 16.1 57.3 0.09 >60 0.45 1.12 0.23 2.21 0.40 0.78 0.21 2.24
Sidewalk 18.3 12.9 0.75 6.1 24.2 >57 0.72 6.0 0.46 0.76 0.43 0.88 0.43 0.81 0.48 1.34
Collision: Static 10.0 5.38 0.42 2.5 16.1 4.05 0.24 6.0 0.44 0.40 0.23 0.36 0.45 0.28 0.25 0.31
Collision: Car 16.4 3.26 0.58 12.1 20.2 1.86 0.85 >60 0.51 0.59 0.41 2.04 0.47 0.44 0.37 1.68
Collision: Pedestrian 18.9 6.35 17.8 30.3 20.4 11.2 20.6 >60 1.40 1.88 2.55 26.49 1.46 1.41 2.00 6.72
Table 2: Infraction Analysis. We show the average distance (in km) driven between two infractions. ‘>’ indicates that no infraction of this type occurred over the whole course of testing; the number denotes the total distance driven. Note that the total driven distance depends on the number and length of the episodes that an agent finishes.

4.3 Attention Analysis

In order to gain more insight into the decision-making process of our perception stack, we explore gradient-weighted class activation maps (Grad-CAMs) [41] which expose the implicit attention of the CNN with respect to the input image. Figure 4 shows the results for the red light and hazard stop affordances. We encode the prediction probability for a particular class with different colors (green means low probability, red means high probability). The attention given to a particular image region is encoded in the intensity channel: the stronger the attention, the more intense the color. Regions with very low attention are shown transparent for clarity. For imitation learning, this attention analysis would be much less interpretable since predicting a single control output, e.g., throttle, entails the detection of speed limits, red lights, cars and pedestrians, all in one signal.

Figure 4 (left) shows the agent driving through Town 1 with the attention map for the red light affordance overlaid. Note that the network focuses its attention on the road outlines and structures on the roadside where traffic lights are expected to be located. In Figure 4 (middle), the shadow of a pedestrian is visible at the bottom left of the image. Interestingly, the network anticipates an impending hazard stop and focuses its attention on this region despite the fact that the pedestrian itself is not visible in the image. Finally, in Figure 4 (right) we illustrate a hazard stop situation. The network focuses all its attention on the pedestrian’s lower body. A possible explanation is that the network is able to estimate the pedestrian’s walking direction by observing their legs and feet. Note that no pixel-wise supervision has been provided to the network during training; instead, this behavior has been learned from binary image-level labels alone.

Figure 4: Attention Analysis. Left: The network focuses at the periphery for recognizing traffic lights. Middle: The network is anticipating a hazard stop by observing the shadow of a pedestrian. Right: The network focuses its attention on the pedestrian to correctly predict a hazard stop.

5 Conclusion

We have proposed Conditional Affordance Learning which combines the advantages of both modular pipelines and end-to-end approaches for navigating complex urban environments using high-level directional commands as input. We tested our approach extensively in simulation and demonstrated significant performance gains with respect to the state-of-the-art on CARLA. In the future, we plan to extend our work in several directions. Training on more diverse environments will likely improve the agent’s performance in all domains. An extended study of network architectures can improve the quality of the predictions and therefore improve the driving capabilities of the agent. We also plan to extend the sensor setup, e.g., using a stereo camera to improve distance prediction. More sophisticated control algorithms like model predictive control may further improve the performance, in particular when training the parameters of the controller jointly with the perception stack. Finally, inspired by recent work [42], we plan to transfer our results from simulation to the real world.

Supplementary Material:
Conditional Affordance Learning
for Driving in Urban Environments


Axel Sauer1,2  Nikolay Savinov1  Andreas Geiger1,3
1 Computer Vision and Geometry Group, ETH Zürich
2 Chair of Robotics Science and System Intelligence, Technical University of Munich
3 Autonomous Vision Group, MPI for Intelligent Systems and University of Tübingen

1 Implementation Details

In this section, we list the value ranges and results of our hyperparameter search. We also provide additional details about our longitudinal control algorithm and the PID tuning procedure.

1.1 Hyperparameter Search

Table 3 shows each hyperparameter and its range of values for the random search described in the main paper. We initialize the network with randomly sampled parameters from the respective ranges. Table 4 shows the parameters of the best performing task blocks after optimization.

Table 3: Hyperparameter Search. Value ranges for each hyperparameter: the layer type (Dense, GRU, LSTM, or temporal convolution), the number of nodes, the dropout amount, the sequence length, and the dilation value.

Task              Layer type         Nodes  Dropout  Seq. length  Dilation
Red light         GRU                185    0.27     14           2
Hazard stop       Temp. convolution  160    0.68     6            1
Speed sign        Dense              160    0.55     1            1
Vehicle distance  GRU                160    0.38     11           1
Relative angle    Temp. convolution  100    0.44     10           1
Center distance   Temp. convolution  100    0.44     10           1
Table 4: Best task block parameters. We optimized the layer type, the number of nodes, the dropout ratio, the sequence length, and the dilation value.
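The random search over task-block hyperparameters can be sketched as follows. Note that the numeric value ranges in `SEARCH_SPACE` are hypothetical placeholders (the actual ranges are given in Table 3 of the main paper); only the set of layer types is taken from the text above.

```python
import random

# Hypothetical search space mirroring the hyperparameters of Table 3.
# The numeric ranges below are placeholders, not the paper's values.
SEARCH_SPACE = {
    "layer_type": ["dense", "gru", "lstm", "temporal_conv"],
    "num_nodes": (50, 200),      # assumed range
    "dropout": (0.0, 0.75),      # assumed range
    "sequence_length": (1, 15),  # assumed range
    "dilation": (1, 4),          # assumed range
}

def sample_task_block(rng=random):
    """Draw one random task-block configuration from the search space."""
    return {
        "layer_type": rng.choice(SEARCH_SPACE["layer_type"]),
        "num_nodes": rng.randint(*SEARCH_SPACE["num_nodes"]),
        "dropout": rng.uniform(*SEARCH_SPACE["dropout"]),
        "sequence_length": rng.randint(*SEARCH_SPACE["sequence_length"]),
        "dilation": rng.randint(*SEARCH_SPACE["dilation"]),
    }
```

Each sampled configuration would then be trained and evaluated, keeping the best-performing block per task as in Table 4.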

1.2 Controller

Longitudinal Control. The states of the longitudinal controller are illustrated in Figure 5.

Figure 5: States of Longitudinal Controller. The states are ordered in descending importance from top to bottom, as indicated by the color intensity. All states are mutually exclusive.

The throttle and brake values for the states over_limit, red_light, and hazard_stop are as follows:

  • over_limit: This state is activated if the agent is driving more than 15 km/h faster than the speed limit. This situation typically occurs when entering a low-speed zone from a high-speed zone. To decelerate quickly, we set the throttle value to zero and calculate the brake value depending on the current speed v(t) and the speed limit v*(t):

    brake(t) = 0.3 · v(t) / v*(t)    (5)

    As an example, driving at full speed (90 km/h) into a 30 km/h zone yields brake(t) = 0.3 · 90/30 = 0.9, hence applying the brakes almost fully.

  • red_light: If the prediction probability for the class red_light, i.e., the actual softmax output, is higher than a threshold, the controller switches to the state red_light. The throttle is set to zero and the brake is applied with:

    brake(t) = c · v(t) / v*(t)    (6)

    We empirically found that this threshold reduces false positives while still allowing the agent to reliably stop in front of red lights. Note that we use a smaller multiplier c < 0.3 compared to the over_limit state, as red lights typically occur in 30 km/h zones. We take the current speed v(t) into account to gradually slow down the car in front of red lights.

  • hazard_stop: This state is activated when an obstacle in front of the agent is detected, i.e., when the predicted probability for the class hazard_stop exceeds a threshold, which we determined empirically. Note that this threshold is lower than that of the red_light state, since preventing crashes with road hazards is more critical for successful goal-directed navigation. When a hazard stop has been detected, the throttle is set to zero and the brake is set to one.
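The braking states above can be sketched as a simple priority-ordered decision function. This is only an illustration: the thresholds `tau_hazard` and `tau_red` and the red-light multiplier `red_mult` are placeholder values (the paper's exact numbers are not given in this excerpt), while the over_limit multiplier 0.3 is consistent with the worked example (brake 0.9 when driving 90 km/h in a 30 km/h zone).

```python
def longitudinal_braking(speed, speed_limit, p_red_light, p_hazard_stop,
                         tau_hazard=0.7, tau_red=0.9, red_mult=0.2):
    """Throttle/brake decision for the braking states of the longitudinal
    controller. Returns (throttle, brake), or None when no braking state
    is active and the PID-based states take over. States are checked from
    most to least critical (an assumed ordering)."""
    if p_hazard_stop > tau_hazard:
        # hazard_stop: throttle zero, full brake
        return (0.0, 1.0)
    if p_red_light > tau_red:
        # red_light: gradual braking proportional to current speed
        return (0.0, min(1.0, red_mult * speed / speed_limit))
    if speed > speed_limit + 15.0:
        # over_limit: brake = 0.3 * v(t) / v*(t)
        return (0.0, min(1.0, 0.3 * speed / speed_limit))
    return None
```

For the 90 km/h example in a 30 km/h zone, the function returns a brake value of 0.9 with the throttle set to zero.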

PID Controller Tuning. The PID controllers used in the remaining states follow the standard PID control scheme [43]. The overall control function is given as follows:

    u(t) = K_p e(t) + K_i ∫₀ᵗ e(τ) dτ + K_d de(t)/dt    (7)

where K_p, K_i and K_d are the proportional, integral and derivative coefficients. To tune the coefficients of the two PID controllers, we implemented a visualization tool for the speed, the distance to the centerline, and other important signals. With this direct visual feedback, it is possible to use standard PID tuning methods. In this work, we leverage the method of Ziegler-Nichols [46]. First, all coefficients are set to zero. Then, the proportional gain is increased until the output of the loop starts to oscillate. Given this “ultimate gain” K_u and the oscillation period T_u, we set the coefficients to

    K_p = 0.6 K_u    (8)
    K_i = 1.2 K_u / T_u    (9)
    K_d = 0.075 K_u T_u    (10)

Using these values as a starting point, we empirically fine-tune the coefficients for optimal performance, with the goal of enabling fast but smooth reactions to disturbances.
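A minimal sketch of a discrete PID controller together with the classic Ziegler-Nichols tuning rule (K_p = 0.6 K_u, K_i = 1.2 K_u / T_u, K_d = 0.075 K_u T_u); the class and function names are ours, not from the paper's implementation:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # Accumulate the integral term and approximate the derivative
        # with a backward difference over one time step.
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


def ziegler_nichols(ku, tu):
    """Classic Ziegler-Nichols PID gains from the ultimate gain Ku
    and the oscillation period Tu."""
    return 0.6 * ku, 1.2 * ku / tu, 0.075 * ku * tu
```

As described above, the Ziegler-Nichols gains serve only as a starting point and are fine-tuned manually afterwards.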

1.3 Conditioning in the Network vs. Conditioning in the Controller

Conditioning in the neural network is advantageous compared to directly conditioning the controllers. Conditioning the controllers would require the prediction of all affordances including the ones which are not relevant for the current navigation task at every point in time.

Consider an intersection where the goal is to go straight. Even though the only relevant signal in terms of distance to the centerline is the distance to the centerline of the straight lane, the distance to the centerlines of the other lanes (left turning, right turning) must also be predicted in order to let the controller select the relevant signal based on the directional command. However, predicting the irrelevant affordances is very difficult as the required image features may not be present (when going straight, the turning lanes leave the field of view). Furthermore, the task is also highly ambiguous (once the intersection has been traversed via the straight lane, the distances to the centerline of the left and right turning lanes are not well defined). Thus, requiring the network to predict all affordances in every situation would introduce noise during training and hence lower the performance of the system. Besides, it would also increase runtime as more signals must be predicted at any point in time.

2 Ground Truth Acquisition

The API of CARLA supplies measurements about the agent (speed, acceleration, location, orientation) and about other objects in the scene (cars, pedestrians, traffic lights, and speed limit signs). These measurements include the current status of the traffic light (green, orange, red) and the type of the speed sign (30, 60, 90). Location and orientation are defined in a world coordinate system. As these measurements do not directly express the affordances we want to learn, we implemented a procedure to convert them into the desired ground truth.

2.1 Observation Area

We define a local coordinate system at the center of the front axle of the car, with the x-axis corresponding to the car’s longitudinal axis, the y-axis to the car’s lateral axis, and the z-axis corresponding to the up vector. The agent’s orientation and position are supplied in the world coordinate system. Using this information, we convert the positions of all other objects into the local coordinate system. Next, we define the observation areas as rectangles in the x-y plane of the local system, see Figure 3 of the main paper for an illustration. If an object falls into an observation area, it is considered “detected”.

Area                   Vertices (x, y)
red_light, speed_sign  (7.4, -0.8)  (7.4, -5.8)  (14.0, -0.8)  (14.0, -5.8)
hazard_stop            (0.0, 2.0)   (0.0, -2.0)  (8.2, 2.0)    (8.2, -2.0)
distance_to_vehicle    (0.0, 1.6)   (0.0, -1.6)  (50.0, 1.6)   (50.0, -1.6)
Table 5: Observation Areas. The observation areas are rectangular boxes. This table lists the coordinates of the vertices of each observation area, provided in the local coordinate system in meters.

The length, width, position, and orientation of the observation areas are chosen based on the respective affordances. Thus, the observation area for red_light and for speed_sign is located on the right side of the agent, as the respective signals are located on the right sidewalk. The observation area for hazard_stop is directly in front of the car and very short, in order to only initiate a hazard stop if an accident could not be avoided otherwise. The observation area for distance_to_vehicle is in front of the car and has a length of 50 m. If another vehicle is located within this area, the distance of the closest vehicle to the agent is measured. If there is no car in front, the default value for distance_to_vehicle (50 m) is used. Table 5 lists the coordinates for each observation area.
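The ground-truth procedure (world-to-local transform followed by a point-in-rectangle test) can be sketched as follows. The area coordinates are taken from Table 5; the function names and the yaw/rotation convention are our assumptions, not CARLA's actual API:

```python
import math

def world_to_local(obj_xy, agent_xy, agent_yaw):
    """Transform a world-frame (x, y) position into the agent's local
    frame (origin at the front axle, x pointing forward).
    agent_yaw is assumed to be in radians."""
    dx = obj_xy[0] - agent_xy[0]
    dy = obj_xy[1] - agent_xy[1]
    c, s = math.cos(-agent_yaw), math.sin(-agent_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

# Axis-aligned observation areas in the local frame, from Table 5,
# stored as (x_min, x_max, y_min, y_max) in meters.
OBSERVATION_AREAS = {
    "red_light":           (7.4, 14.0, -5.8, -0.8),
    "speed_sign":          (7.4, 14.0, -5.8, -0.8),
    "hazard_stop":         (0.0,  8.2, -2.0,  2.0),
    "distance_to_vehicle": (0.0, 50.0, -1.6,  1.6),
}

def detected(area, local_xy):
    """An object is 'detected' if it falls into the observation area."""
    x_min, x_max, y_min, y_max = OBSERVATION_AREAS[area]
    x, y = local_xy
    return x_min <= x <= x_max and y_min <= y <= y_max
```

For example, a pedestrian 4 m ahead of the car on its centerline falls into the hazard_stop area, while a traffic light 10 m ahead on the right sidewalk falls into the red_light area.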

2.2 Directional Input

CARLA provides a high-level topological planner based on the A* algorithm. It takes the agent’s position and the coordinates of the destination and calculates a list of commands. This “plan” advises the agent to turn left, turn right or keep straight at intersections.

3 Additional Experiments

In this section, we provide additional experiments. First, we compare temporal to non-temporal task blocks to assess the influence of the additional temporal information provided by video data. Second, we provide a qualitative evaluation of our agent’s driving behavior.

3.1 Comparison of Temporal and Non-temporal Task Blocks

Table 6 shows the best performing task blocks for each task. By using a temporal task block, all the classification and regression results improve. This demonstrates that each task profits from the additional temporal information.

Best performing task blocks
Task (metric)           Non-temporal  Temporal   Relative change
Hazard stop (%)         84.96         87.41      +2.88 %
Speed sign (%)          91.95         92.71      +0.83 %
Red light (%)           92.41         93.95      +1.67 %
Relative angle (MAE)    0.00797       0.00207    -74.03 %
Center distance (MAE)   0.09642       0.08465    -12.21 %
Vehicle distance (MAE)  0.04497       0.03289    -26.86 %
Table 6: Comparison of Temporal and Non-temporal Task Blocks. The last column shows the relative change in performance. For the classification tasks (in %), higher is better; for the regression tasks, a lower MAE is better.

The biggest relative improvement can be seen for the relative_angle task block. The error of the temporal task block is almost four times lower than that of the non-temporal task block. This suggests that this task profits more from the temporal context than the other tasks. The smallest improvement is achieved for the speed_sign task. To keep computation time low during training and inference, we therefore use the non-temporal task block for this task in our final model.

In addition, we empirically observed that there is no dominating temporal layer in terms of performance. LSTMs, GRUs and temporal convolution layers perform very similarly.

3.2 Driving Behaviour

The experiments in the main text examined whether the goal was reached and whether there were any rule violations. This section focuses on the qualitative driving experience, i.e., how the driving would be perceived by a passenger. The evaluation is done for the task “navigation without dynamic objects” in order to evaluate the general driving behavior without distorting the results by the challenges of stopping for cars or pedestrians.

We use the following metrics for evaluation:

  • Centerline distance: Staying in the middle of the road is the main task of every lane keeping system. We evaluate the ability of the algorithm to minimize the deviation from the centerline. The reported result is the median over all episodes. The median is more descriptive of the qualitative driving experience than the mean, since failed episodes during which an agent drifts off the road produce large outliers.

  • Jerk is the rate of change of acceleration. Jerk can be felt during driving in the form of a jolt or sudden shock. It is commonly used to quantify ride comfort [45]. A smoother ride results in lower fuel consumption, increased passenger comfort and more safety. A way to quantify jerk is to compare the root mean square (RMS) jerk [44]. The analysis further distinguishes between longitudinal and lateral jerk, with lateral jerk evaluated separately for straight roads and turns.
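RMS jerk [44] can be computed from logged, uniformly sampled acceleration values; a minimal sketch (the function name and the finite-difference approximation are our assumptions):

```python
import math

def rms_jerk(acceleration, dt):
    """Root-mean-square jerk from a uniformly sampled acceleration
    signal (m/s^2) with sampling interval dt (s). Jerk, the time
    derivative of acceleration, is approximated by finite differences
    of consecutive samples."""
    jerk = [(a1 - a0) / dt for a0, a1 in zip(acceleration, acceleration[1:])]
    return math.sqrt(sum(j * j for j in jerk) / len(jerk))
```

A constant acceleration profile yields zero jerk, while abrupt throttle or steering changes show up as large finite differences and hence a high RMS value.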

Table 7 reports our results. In contrast to the previous evaluations, the results are not reported separately per weather condition or environment, as they are very similar under all conditions for all agents. The CAL agent achieves the best performance on all four metrics.

Distance to Centerline

All agents perform on a similar level and are able to keep the distance to the centerline low. The exception is the RL agent: when driving on a straight road, it regularly starts swaying from side to side over the entire lane, resulting in its large centerline distance.

Longitudinal Jerk

The CAL agent performs best, followed by CIL and RL. The control parameters of the CAL agent are freely adjustable, which allows it to accelerate and decelerate smoothly as well as drive at a constant pace. The RL agent is only able to set the throttle value to either 0 or 1. This results in a sudden jerk every time the agent applies the throttle.

Lateral Jerk while Driving Straight

On straight roads, both the CAL and the CIL agent perform similarly. When driving straight, the RL agent often outputs a steering value of 0. This leads to the agent drifting off the road. When correcting against the drift, the RL agent steers abruptly, resulting in large jerk values.

Lateral Jerk while Turning

The CAL agent performs exceptionally well. There are several reasons for this. First, the agent slows down when approaching a turn. Second, our agent turns smoothly without abrupt changes in steering. Third, the jerk peaks are generally lower than for the other approaches. Despite this good performance, the transition from turns to straight roads leaves room for improvement: in some cases, a change in the directional command results in a sudden jump in the prediction of the relative angle, causing slight, short-lived jerk. The CIL agent is not as good as the CAL agent, but it is generally able to drive through turns smoothly. RL, in contrast, performs strong and abrupt steering movements, resulting in a higher jerk value.

Metric                          Unit   CIL    RL     CAL
Distance to centerline          m      0.390  0.755  0.334
Longitudinal jerk               m/s³   0.449  1.368  0.333
Lateral jerk, driving straight  m/s³   0.084  0.336  0.052
Lateral jerk, driving turns     m/s³   0.242  0.548  0.065
Table 7: Driving Behaviour. Qualitative evaluation of the general driving performance; the lower a metric, the better.

We would like to thank Vanessa Diedrichs for many helpful discussions about autonomous driving and data visualization and for her general support.

References

  • Russell and Norvig [2016] S. J. Russell and P. Norvig. Artificial intelligence: a modern approach. 2016.
  • Chen et al. [2015] C. Chen, A. Seff, A. Kornhauser, and J. Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In ICCV, 2015.
  • Metz [2018] R. Metz. And the award for most nauseating self-driving car goes to … MIT Technology Review, Jan 2018.
  • Thrun et al. [2006] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, and G. Hoffmann. Stanley: The robot that won the DARPA grand challenge. Journal of Field Robotics, 23(9):661–692, 2006.
  • Ziegler et al. [2014] J. Ziegler, P. Bender, M. Schreiber, H. Lategahn, T. Strauss, C. Stiller, T. Dang, U. Franke, N. Appenrodt, and C. G. Keller. Making Bertha drive — An autonomous journey on a historic route. IEEE Intelligent Transportation Systems Magazine, 6(2):8–20, 2014.
  • Geiger et al. [2013] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11):1231–1237, 2013.
  • Redmon et al. [2016] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In CVPR, 2016.
  • Ren et al. [2015] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  • He et al. [2017] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017.
  • Li et al. [2017] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei. Fully convolutional instance-aware semantic segmentation. In CVPR, 2017.
  • Kendall et al. [2017] A. Kendall, H. Martirosyan, S. Dasgupta, and P. Henry. End-to-end learning of geometry and context for deep stereo regression. In ICCV, 2017.
  • Žbontar and LeCun [2016] J. Žbontar and Y. LeCun. Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research (JMLR), 17(65):1–32, 2016.
  • Sun et al. [2017] D. Sun, X. Yang, M.-Y. Liu, and J. Kautz. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. arXiv.org, 2017.
  • Güney and Geiger [2016] F. Güney and A. Geiger. Deep discrete flow. In ACCV, 2016.
  • Zhang et al. [2013] H. Zhang, A. Geiger, and R. Urtasun. Understanding high-level semantics by modeling traffic patterns. In ICCV, 2013.
  • Geiger et al. [2014] A. Geiger, M. Lauer, C. Wojek, C. Stiller, and R. Urtasun. 3D traffic scene understanding from movable platforms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5):1012–1025, 2014.
  • Schwarting et al. [2018] W. Schwarting, J. Alonso-Mora, and D. Rus. Planning and decision-making for autonomous vehicles. Annual Review of Control, Robotics, and Autonomous Systems, 1(1):187–210, 2018.
  • Pomerleau [1989] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In NIPS, 1989.
  • Muller et al. [2006] U. Muller, J. Ben, E. Cosatto, B. Flepp, and Y. L. Cun. Off-road obstacle avoidance through end-to-end learning. In NIPS, 2006.
  • Bojarski et al. [2016] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, and J. Zhang. End to end learning for self-driving cars. arXiv.org, 2016.
  • Codevilla et al. [2018] F. Codevilla, M. Müller, A. Dosovitskiy, A. Lopez, and V. Koltun. End-to-end driving via conditional imitation learning. In ICRA, 2018.
  • Gibson [1966] J. J. Gibson. The senses considered as perceptual systems. 1966.
  • Al-Qizwini et al. [2017] M. Al-Qizwini, I. Barjasteh, H. Al-Qassab, and H. Radha. Deep learning algorithm for autonomous driving using GoogLeNet. In Intelligent Vehicles Symposium, 2017.
  • Szegedy et al. [2015] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
  • Simonyan and Zisserman [2014] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv.org, 2014.
  • Huang et al. [2015] J. Huang, I. Tanev, and K. Shimohara. Evolving a general electronic stability program for car simulated in TORCS. In CIG, 2015.
  • Richter et al. [2016] S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016.
  • Ebrahimi et al. [2017] S. Ebrahimi, A. Rohrbach, and T. Darrell. Gradient-free policy architecture search and adaptation. In CoRL, 2017.
  • Dosovitskiy et al. [2017] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In CoRL, 2017.
  • Lin et al. [2018] S. Lin, Y. Zhang, C. Hsu, M. Skach, M. Haque, L. Tang, and J. Mars. The architectural implications of autonomous driving: Constraints and acceleration. In ASPLOS, 2018.
  • Caruana [1993] R. Caruana. Multitask learning: A knowledge-based source of inductive bias. In ICML, 1993.
  • Xu et al. [2017] H. Xu, Y. Gao, F. Yu, and T. Darrell. End-to-end learning of driving models from large-scale video datasets. In CVPR, 2017.
  • Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
  • Ioffe and Szegedy [2015] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv.org, 2015.
  • Hochreiter and Schmidhuber [1997] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • Cho et al. [2014] K. Cho, B. van Merrienboer, Ç. Gülçehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv.org, 2014.
  • Kingma and Ba [2014] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv.org, 2014.
  • Müller et al. [2018] M. Müller, V. Casser, J. Lahoud, N. Smith, and B. Ghanem. Sim4cv: A photo-realistic simulator for computer vision applications. International Journal of Computer Vision, 2018.
  • Kosecka et al. [1998] J. Kosecka, R. Blasi, C. J. Taylor, and J. Malik. A comparative study of vision-based lateral control strategies for autonomous highway driving. In ICRA, 1998.
  • Ang et al. [2005] K. H. Ang, G. Chong, and Y. Li. PID control system analysis, design, and technology. IEEE Transactions on Control Systems Technology, 13(4):559–576, 2005.
  • Selvaraju et al. [2016] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. arXiv.org, 2016.
  • Müller et al. [2018] M. Müller, A. Dosovitskiy, B. Ghanem, and V. Koltun. Driving policy transfer via modularity and abstraction. arXiv.org, 2018.
  • [43] K. H. Ang, G. Chong, and Y. Li. PID control system analysis, design, and technology. IEEE Transactions on Control Systems Technology, 13(4):559–576, 2005.
  • [44] D. Hrovat and M. Hubbard. Optimum vehicle suspensions minimizing RMS rattlespace, sprung-mass acceleration and jerk. Journal of Dynamic Systems, Measurement, and Control, 103(3):228–236, 1981.
  • [45] Q. Huang and H. Wang. Fundamental study of jerk: Evaluation of shift quality and ride comfort. SAE Technical Paper, 2004.
  • [46] J. G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Transactions of the ASME, 64(11), 1942.