Estimating Link Flows in Road Networks with Synthetic Trajectory Data Generation: Reinforcement Learning-based Approaches

06/26/2022
by Miner Zhong, et al.

This paper addresses the problem of estimating link flows in a road network by combining limited traffic volume and vehicle trajectory data. While traffic volume data from loop detectors have been the most common data source for link flow estimation, detectors cover only a subset of links. Vehicle trajectory data collected from vehicle tracking sensors have increasingly been incorporated as well. However, trajectory data are often sparse: the observed trajectories represent only a small subset of the whole population, and the exact sampling rate is unknown and may vary over space and time. This study proposes a novel generative modelling framework in which the link-to-link movements of a vehicle are formulated as a sequential decision-making problem using the Markov Decision Process (MDP) framework, and an agent is trained to make these decisions so as to generate realistic synthetic vehicle trajectories. We use Reinforcement Learning (RL)-based methods to find the optimal behaviour of the agent, from which synthetic population vehicle trajectories can be generated to estimate link flows across the whole network. To ensure that the generated population trajectories are consistent with the observed traffic volume and trajectory data, two methods, based on Inverse Reinforcement Learning and Constrained Reinforcement Learning, are proposed. The proposed generative modelling framework, solved by either of these RL-based methods, is validated on the link flow estimation problem in a real road network. Additionally, we perform comprehensive experiments to compare its performance with two existing methods. The results show that the proposed framework achieves higher estimation accuracy and robustness under realistic scenarios in which certain behavioural assumptions about drivers are not met or the network coverage and penetration rate of trajectory data are low.
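To illustrate the core idea of the generative framework, the sketch below shows how a learned link-to-link policy might be rolled out to produce synthetic trajectories and how those trajectories could be aggregated into link flow estimates. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy network, the rollout and estimate_link_flows functions, the origin counts, and the uniform placeholder policy (standing in for a policy learned via Inverse RL or Constrained RL) are all hypothetical.

# Minimal sketch (not the paper's code): roll out a link-to-link policy
# to generate synthetic trajectories, then count link traversals to
# estimate link flows. Network, policy, and demand are placeholders.
import random
from collections import defaultdict

# Hypothetical road network: each link maps to the downstream links a
# vehicle can turn onto, plus a terminal "END" action.
NETWORK = {
    "A": ["B", "C"],
    "B": ["D", "END"],
    "C": ["D", "END"],
    "D": ["END"],
}

def rollout(policy, origin, max_steps=50):
    """Generate one synthetic trajectory (a sequence of links) by
    sampling link-to-link transitions from the given policy.
    policy(link, candidates) returns one weight per candidate."""
    trajectory = [origin]
    link = origin
    for _ in range(max_steps):
        candidates = NETWORK[link]
        weights = policy(link, candidates)
        nxt = random.choices(candidates, weights=weights, k=1)[0]
        if nxt == "END":
            break
        trajectory.append(nxt)
        link = nxt
    return trajectory

def estimate_link_flows(policy, origin_counts):
    """Estimate link flows by simulating the assumed trip population
    (origin_counts) and counting how often each link is traversed."""
    flows = defaultdict(int)
    for origin, n_trips in origin_counts.items():
        for _ in range(n_trips):
            for link in rollout(policy, origin):
                flows[link] += 1
    return dict(flows)

def uniform_policy(link, candidates):
    # Stand-in for a policy learned with Inverse RL or Constrained RL.
    return [1.0 / len(candidates)] * len(candidates)

if __name__ == "__main__":
    print(estimate_link_flows(uniform_policy, {"A": 1000, "B": 500}))

In the paper's setting, the uniform placeholder would be replaced by a policy trained so that the flows implied by the generated population trajectories are consistent with the observed loop detector volumes and the partially observed trajectories.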


