Hierarchical Transformer with Spatio-Temporal Context Aggregation for Next Point-of-Interest Recommendation

09/04/2022
by   Jiayi Xie, et al.

Next point-of-interest (POI) recommendation is a critical task in location-based social networks, yet it remains challenging due to the high degree of variation and personalization exhibited in user movements. In this work, we explore the latent hierarchical structure composed of multi-granularity short-term structural patterns in user check-in sequences. We propose a Spatio-Temporal context AggRegated Hierarchical Transformer (STAR-HiT) for next POI recommendation, which employs stacked hierarchical encoders to recursively encode the spatio-temporal context and explicitly locate subsequences of different granularities. More specifically, in each encoder, a global attention layer captures the spatio-temporal context of the whole sequence, while a local attention layer, applied within each subsequence, enhances subsequence modeling using the local context. A sequence partition layer adaptively infers the positions and lengths of subsequences from the global context, so that the semantics within subsequences are well preserved. Finally, a subsequence aggregation layer fuses the representations within each subsequence to form the corresponding subsequence representation, thereby generating a new sequence of higher-level granularity. Stacking the encoders captures the latent hierarchical structure of the check-in sequence, which is then used to predict the next POI to visit. Extensive experiments on three public datasets demonstrate that the proposed model achieves superior performance while providing explanations for its recommendations. Code is available at https://github.com/JennyXieJiayi/STAR-HiT.
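The per-encoder pipeline described above (global attention, sequence partition, local attention, subsequence aggregation) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the partition boundaries are passed in by hand here, whereas STAR-HiT infers them adaptively from the global context, and the spatio-temporal context encoding and learned projections are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention (projections omitted).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def hierarchical_encoder(x, boundaries):
    # Global attention layer: each check-in attends over the full sequence.
    g = self_attention(x)
    # Sequence partition layer: split at the (here hand-picked) boundaries;
    # the paper infers positions and lengths from the global context.
    segments = np.split(g, boundaries)
    # Local attention layer: attention restricted to each subsequence.
    locals_ = [self_attention(s) for s in segments]
    # Subsequence aggregation layer: pool each subsequence into one vector,
    # yielding a shorter sequence of higher-level granularity (mean-pooling
    # stands in for the learned fusion).
    return np.stack([s.mean(axis=0) for s in locals_])

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))                      # 8 check-ins, 16-d embeddings
h1 = hierarchical_encoder(x, boundaries=[3, 6])   # level 1: 8 -> 3 subsequences
h2 = hierarchical_encoder(h1, boundaries=[2])     # level 2: 3 -> 2 subsequences
```

Stacking the encoder twice shrinks the 8-step sequence to 3 and then 2 higher-level representations, mirroring how the stacked encoders expose the latent hierarchy used for the final prediction.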


10/14/2022
STAR-Transformer: A Spatio-temporal Cross Attention Transformer for Human Action Recognition
In action recognition, although the combination of spatio-temporal video...

08/09/2023
Hierarchical Representations for Spatio-Temporal Visual Attention Modeling and Understanding
This PhD. Thesis concerns the study and development of hierarchical repr...

06/18/2018
Where to Go Next: A Spatio-temporal LSTM model for Next POI Recommendation
Next Point-of-Interest (POI) recommendation is of great value for both l...

11/23/2021
PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer
Remote photoplethysmography (rPPG), which aims at measuring heart activi...

07/20/2023
GLSFormer: Gated - Long, Short Sequence Transformer for Step Recognition in Surgical Videos
Automated surgical step recognition is an important task that can signif...

03/03/2023
GETNext: Trajectory Flow Map Enhanced Transformer for Next POI Recommendation
Next POI recommendation intends to forecast users' immediate future move...

07/13/2021
ST-DETR: Spatio-Temporal Object Traces Attention Detection Transformer
We propose ST-DETR, a Spatio-Temporal Transformer-based architecture for...
