Traffic Forecasting on New Roads Unseen in the Training Data Using Spatial Contrastive Pre-Training

05/09/2023
by Arian Prabowo, et al.

New roads are constructed all the time, yet the ability of deep traffic forecasting models to generalize to roads not seen in the training data (unseen roads) is rarely explored. In this paper, we introduce a novel experimental setup, the spatio-temporal (ST) split, to evaluate models' ability to generalize to unseen roads: models are trained on data from a sample of roads but tested on roads absent from the training data. We also present a novel framework called Spatial Contrastive Pre-Training (SCPT), which introduces a spatial encoder module to extract latent features from unseen roads at inference time. This spatial encoder is pre-trained using contrastive learning; during inference, it requires only two days of traffic data on the new roads and no re-training. We show that the spatial encoder's output can be used effectively to infer latent node embeddings for unseen roads at inference time. The SCPT framework also incorporates a new layer, named the spatially gated addition (SGA) layer, to effectively combine the spatial encoder's latent features with existing backbones. Additionally, because data on unseen roads is limited, we argue that it is better to decouple traffic signals into trivial-to-capture periodic signals and difficult-to-capture Markovian signals, and to have the spatial encoder learn only the Markovian signals. Finally, we empirically evaluate SCPT using the ST split setup on four real-world datasets. The results show that adding SCPT to a backbone consistently improves forecasting performance on unseen roads, with larger improvements when forecasting further into the future.
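The gating idea behind the spatially gated addition (SGA) layer can be illustrated with a minimal sketch. The paper's exact formulation is not given in this abstract, so the function below (the names `sga`, `gate_w`, and `gate_b` are illustrative assumptions) only shows the general mechanism: a learned sigmoid gate controls, per dimension, how much of the spatial encoder's output is added to a backbone's hidden features.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sga(hidden, spatial, gate_w, gate_b):
    """Illustrative spatially gated addition: a sigmoid gate decides,
    per dimension, how much of the spatial-encoder feature to add to
    the backbone's hidden feature. Gate parameters would be learned."""
    return [
        h + sigmoid(w * s + b) * s
        for h, s, w, b in zip(hidden, spatial, gate_w, gate_b)
    ]

# With a strongly negative gate bias the spatial feature is suppressed;
# with a strongly positive one it is passed through almost unchanged.
out = sga(hidden=[0.5, 0.5], spatial=[1.0, 1.0],
          gate_w=[0.0, 0.0], gate_b=[-10.0, 10.0])
```

A gate like this lets the model fall back to the backbone's own features when the inferred spatial embedding of a new road is uninformative, which is one plausible motivation for combining features by gated addition rather than plain addition.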

