
A Review of Location Encoding for GeoAI: Methods and Applications

by Gengchen Mai et al.

A common need for artificial intelligence models in the broader geosciences is to represent and encode various types of spatial data, such as points (e.g., points of interest), polylines (e.g., trajectories), polygons (e.g., administrative regions), graphs (e.g., transportation networks), and rasters (e.g., remote sensing images), in a hidden embedding space so that they can be readily incorporated into deep learning models. One fundamental step is to encode a single point location into an embedding space such that the embedding is learning-friendly for downstream machine learning models such as support vector machines and neural networks. We call this process location encoding. However, there is as yet no systematic review of the concept of location encoding, its potential applications, and the key challenges that need to be addressed. This paper aims to fill this gap. We first provide a formal definition of location encoding and discuss its necessity for GeoAI research from a machine learning perspective. Next, we provide a comprehensive survey and discussion of the current landscape of location encoding research. We classify location encoding models into different categories based on their inputs and encoding methods, and compare them on whether they are parametric, multi-scale, distance preserving, and direction aware. We show that existing location encoding models can be unified under a shared formulation framework. We also discuss the application of location encoding to different types of spatial data. Finally, we point out several challenges in location encoding research that remain to be solved.
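To make the idea concrete, the sketch below encodes a 2D point into a multi-scale sinusoidal embedding, in the spirit of the sinusoidal position encodings the survey covers. The geometric scale schedule, the `min_lambda`/`max_lambda` parameters, and the output layout are illustrative assumptions, not the exact formulation of any one model reviewed in the paper.

```python
import numpy as np

def location_encode(coord, num_scales=8, min_lambda=1.0, max_lambda=10000.0):
    """Encode a 2D point into a multi-scale sinusoidal embedding.

    A minimal sketch of a non-parametric, multi-scale location encoder;
    the wavelength schedule and feature layout are illustrative choices.
    """
    coord = np.asarray(coord, dtype=float)  # shape (2,), e.g. (x, y) or (lon, lat)
    # Geometrically spaced wavelengths from min_lambda to max_lambda.
    g = (max_lambda / min_lambda) ** (1.0 / max(num_scales - 1, 1))
    wavelengths = min_lambda * g ** np.arange(num_scales)
    feats = []
    for lam in wavelengths:
        phase = 2.0 * np.pi * coord / lam   # per-dimension phase at this scale
        feats.append(np.sin(phase))
        feats.append(np.cos(phase))
    return np.concatenate(feats)            # shape (2 coords * 2 trig * num_scales,)

emb = location_encode((-122.42, 37.77))     # e.g. a point of interest (lon, lat)
print(emb.shape)                            # (32,)
```

Such an encoding is deterministic and bounded in [-1, 1], so it can be fed directly into a downstream neural network or kernel model; a learnable (parametric) encoder would typically stack a small multilayer perceptron on top of these features.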



