A Review of Location Encoding for GeoAI: Methods and Applications

by Gengchen Mai, et al.

A common need for artificial intelligence models in the broader geosciences is to represent and encode various types of spatial data, such as points (e.g., points of interest), polylines (e.g., trajectories), polygons (e.g., administrative regions), graphs (e.g., transportation networks), and rasters (e.g., remote sensing images), in a hidden embedding space so that they can be readily incorporated into deep learning models. One fundamental step is to encode a single point location into an embedding space such that the embedding is learning-friendly for downstream machine learning models such as support vector machines and neural networks. We call this process location encoding. However, a systematic review of the concept of location encoding, its potential applications, and the key challenges that remain is still lacking. This paper aims to fill that gap. We first provide a formal definition of location encoding and discuss its necessity for GeoAI research from a machine learning perspective. Next, we present a comprehensive survey and discussion of the current landscape of location encoding research. We classify location encoding models into categories based on their inputs and encoding methods, and compare them on whether they are parametric, multi-scale, distance preserving, and direction aware. We show that existing location encoding models can be unified under a shared formulation framework. We also discuss the application of location encoding to different types of spatial data. Finally, we point out several challenges in location encoding research that need to be addressed in the future.
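To make the idea concrete, the following is a minimal sketch of a multi-scale sinusoidal location encoder in the spirit of the encoders the survey covers (e.g., grid-cell-inspired models): a 2D coordinate is mapped to sine and cosine features at several geometrically spaced wavelengths, yielding a fixed-length, learning-friendly embedding. The function name, parameter names, and wavelength choices here are illustrative, not the paper's specific formulation.

```python
import numpy as np

def location_encode(coords, num_scales=4, min_lambda=1.0, max_lambda=1000.0):
    """Multi-scale sinusoidal location encoding (illustrative sketch).

    coords: array-like of shape (N, 2) holding (x, y) positions.
    Returns an embedding of shape (N, 2 * 2 * num_scales):
    sin and cos features per axis, per scale.
    """
    coords = np.asarray(coords, dtype=float)
    # Geometrically spaced wavelengths between min_lambda and max_lambda,
    # so the encoding captures both fine and coarse spatial structure.
    scales = min_lambda * (max_lambda / min_lambda) ** (
        np.arange(num_scales) / max(num_scales - 1, 1)
    )
    feats = []
    for lam in scales:
        angle = 2 * np.pi * coords / lam  # shape (N, 2)
        feats.append(np.sin(angle))
        feats.append(np.cos(angle))
    # Concatenate all scales into one fixed-length embedding per point.
    return np.concatenate(feats, axis=1)

emb = location_encode([[10.0, 20.0], [10.5, 20.0]])
print(emb.shape)  # (2, 16)
```

Nearby points receive similar embeddings at coarse scales while the fine scales still separate them, which is the property that makes such encodings useful inputs to downstream neural networks.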

