Model-based graph reinforcement learning for inductive traffic signal control

Most reinforcement learning methods for adaptive traffic signal control require training from scratch to be applied to any new intersection, or after any modification to the road network, traffic distribution, or behavioral constraints experienced during training. Considering 1) the massive amount of experience required to train such methods, and 2) that this experience must be gathered by interacting in an exploratory fashion with real road-network users, such a lack of transferability limits experimentation and applicability. Recent approaches enable learning policies that generalize to unseen road-network topologies and traffic distributions, partially tackling this challenge. However, the literature remains divided between the learning of cyclic policies (the evolution of connectivity at an intersection must respect a cycle) and acyclic policies (less constrained), and these transferable methods 1) are only compatible with cyclic constraints and 2) do not enable coordination. We introduce a new model-based method, MuJAM, which, on top of enabling explicit coordination at scale for the first time, pushes generalization further by also generalizing to the controllers' constraints. In a zero-shot transfer setting involving both road networks and traffic settings never experienced during training, and in a larger transfer experiment involving the control of 3,971 traffic signal controllers in Manhattan, we show that MuJAM, using both cyclic and acyclic constraints, outperforms domain-specific baselines as well as another transferable approach.
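The cyclic versus acyclic distinction mentioned above can be made concrete with a small, hedged sketch. The snippet below is not MuJAM's implementation; it only illustrates, under assumed names (`cyclic_mask`, `acyclic_mask`, `select_phase`) and made-up logit values, how per-controller phase scores (such as those a graph neural network might produce from an intersection's lane/connection graph) could be masked differently depending on the constraint type: a cyclic controller may only keep its current phase or advance to the next phase in a fixed cycle, while an acyclic controller may switch to any phase.

```python
import numpy as np

def cyclic_mask(num_phases: int, current_phase: int) -> np.ndarray:
    """Cyclic constraint: only 'stay' or 'advance to the next phase in the cycle' are admissible."""
    mask = np.zeros(num_phases, dtype=bool)
    mask[current_phase] = True
    mask[(current_phase + 1) % num_phases] = True
    return mask

def acyclic_mask(num_phases: int, current_phase: int) -> np.ndarray:
    """Acyclic (less constrained) setting: any phase may be selected."""
    return np.ones(num_phases, dtype=bool)

def select_phase(logits: np.ndarray, mask: np.ndarray) -> int:
    """Greedy phase selection restricted to the admissible actions."""
    masked = np.where(mask, logits, -np.inf)
    return int(np.argmax(masked))

# Hypothetical per-phase logits for one controller, e.g. the output of a
# graph-based policy reading the intersection's connectivity graph.
logits = np.array([0.2, 1.4, -0.3, 0.9])
current = 2

print(select_phase(logits, cyclic_mask(4, current)))   # cyclic: phase 2 or 3 only -> 3
print(select_phase(logits, acyclic_mask(4, current)))  # acyclic: any phase -> 1
```

Because the constraint enters only as an action mask over otherwise identical per-controller outputs, the same policy could in principle be queried under either regime, which is one way to read the abstract's claim of generalizing to the controllers' constraints.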


