Generating Individual Trajectories Using GPT-2 Trained from Scratch on Encoded Spatiotemporal Data

08/14/2023
by Taizo Horikomi, et al.

Following Mizuno, Fujimoto, and Ishikawa's research (Front. Phys. 2022), we convert geographical coordinates given in latitude and longitude into distinct location tokens that represent positions at multiple spatial scales. We encode an individual's daily trajectory as a sequence of tokens by adding unique time-interval tokens to the location tokens. Training on these token sequences from scratch with GPT-2, an autoregressive language-model architecture, we build a deep learning model that generates an individual daily trajectory token by token. Environmental factors such as meteorological conditions and individual attributes such as gender and age are represented by unique special tokens; by training these tokens together with the trajectories, the model can generate trajectories conditioned on both environmental factors and individual attributes.
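The abstract does not specify the exact token scheme, so the following is a minimal sketch of the kind of encoding it describes, assuming single-scale Web-Mercator grid cells for the location tokens and fixed-width bins for the time-interval tokens. All helper names, the zoom level, the 30-minute bin width, and the GPT-2 hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of trajectory tokenization (illustrative assumptions throughout).
import math

from transformers import GPT2Config, GPT2LMHeadModel

def location_token(lat, lon, zoom=16):
    """Discretize a (lat, lon) pair into one grid-cell token, here a
    Web-Mercator tile index. The paper uses location tokens at multiple
    spatial scales following Mizuno et al. 2022; one scale is shown."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return f"<loc_{zoom}_{x}_{y}>"

def time_token(minutes_since_prev, step=30):
    """Quantize the elapsed time since the previous point into a
    fixed-width time-interval token."""
    return f"<dt_{int(minutes_since_prev // step) * step}>"

def encode_trajectory(points, conditions=()):
    """Serialize one daily trajectory: special condition/attribute tokens
    first, then alternating time-interval and location tokens."""
    tokens = [f"<{c}>" for c in conditions]  # e.g. gender, age band, weather
    prev = None
    for t_min, lat, lon in points:  # t_min = minutes since midnight
        if prev is not None:
            tokens.append(time_token(t_min - prev))
        tokens.append(location_token(lat, lon))
        prev = t_min
    return tokens

# Example: two GPS fixes for a woman in her 30s on a rainy day.
seq = encode_trajectory(
    [(480, 35.6812, 139.7671), (540, 35.6586, 139.7454)],
    conditions=("female", "age_30s", "rain"),
)
print(seq)

# "Trained from scratch" means a randomly initialized GPT-2 over this
# synthetic vocabulary, not a pretrained language checkpoint:
config = GPT2Config(vocab_size=50_000, n_positions=512)
model = GPT2LMHeadModel(config)
```

Because every token in the vocabulary is synthetic, pretrained GPT-2 weights would carry no useful inductive bias here; prepending the condition tokens to each sequence is one natural way to make generation depend on environment and attributes, though the paper may condition differently.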
