Transformer Based Geocoding

01/02/2023
by Yuval Solaz, et al.

In this paper, we formulate the problem of predicting a geolocation from free text as a sequence-to-sequence problem. Using this formulation, we obtain a geocoding model by training a T5 encoder-decoder transformer that takes free text as input and produces a geolocation as output. The model was trained on geo-tagged Wikipedia dump data, using adaptive cell partitioning to represent geolocations. All of the code, including a REST-based application, the dataset, and the model checkpoints used in this work, is publicly available.
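The abstract mentions adaptive cell partitioning for representing geolocations but does not spell out the scheme. A common approach is quadtree-style splitting: a cell is subdivided only while it contains more than a threshold number of training coordinates, so dense regions get finer cells. The sketch below is an assumption of that style, not the paper's exact method; the function name `partition`, the parameters, and the cell-label encoding are all illustrative.

```python
# Hedged sketch of adaptive cell partitioning (assumed quadtree-style
# splitting; the paper states cells are adaptive but not this exact scheme).

def partition(points, bounds, max_points=2, max_depth=8, prefix=""):
    """Recursively split a lat/lon bounding box until each cell holds at
    most max_points training coordinates; return {cell_label: points}.
    bounds = (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bounds
    inside = [(la, lo) for la, lo in points
              if lat_min <= la < lat_max and lon_min <= lo < lon_max]
    if not inside:
        return {}
    if len(inside) <= max_points or max_depth == 0:
        return {prefix or "root": inside}
    lat_mid = (lat_min + lat_max) / 2
    lon_mid = (lon_min + lon_max) / 2
    cells = {}
    # Label each quadrant with a digit; concatenated digits form the
    # cell label, which could serve as an output token for the decoder.
    for digit, sub in enumerate([
        (lat_min, lat_mid, lon_min, lon_mid),
        (lat_min, lat_mid, lon_mid, lon_max),
        (lat_mid, lat_max, lon_min, lon_mid),
        (lat_mid, lat_max, lon_mid, lon_max),
    ]):
        cells.update(partition(inside, sub, max_points,
                               max_depth - 1, prefix + str(digit)))
    return cells
```

Under this representation, each cell label is a short string, so a seq2seq model can decode it token by token; dense areas yield longer labels (finer cells) while sparse areas stay coarse.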


