Self-Supervised Masked Digital Elevation Models Encoding for Low-Resource Downstream Tasks

09/06/2023
by Priyam Mazumdar, et al.

The lack of quality labeled data is one of the main bottlenecks for training deep learning models. As the task increases in complexity, there is a higher penalty for overfitting and unstable learning. The typical paradigm employed today is self-supervised learning, where the model learns from a large corpus of unstructured and unlabeled data and then transfers that knowledge to the required task. Notable examples of self-supervision in other modalities are BERT for large language models, Wav2Vec for speech recognition, and the Masked Autoencoder (MAE) for vision, all of which use Transformers to solve a masked-prediction task. GeoAI is uniquely poised to take advantage of the self-supervised methodology due to the decades of data already collected, little of which is precisely and dependably annotated. Our goal is to extract building and road segmentations from Digital Elevation Models (DEMs), which provide a detailed topography of the earth's surface. The proposed architecture is the Masked Autoencoder pre-trained on ImageNet (with the limitation that there is a large domain discrepancy between ImageNet and DEMs) with a UperNet head for decoding segmentations. We tested this model with only 450 and 50 training images, roughly 5% and 0.5% of the original dataset respectively. On the building segmentation task, the model obtains an 82.1% Intersection over Union (IoU) with 450 images and 69.1% with only 50 images. On the more challenging road detection task, the model obtains an 82.7% IoU with 450 images and 73.2% with only 50 images. Any annotation of the earth's surface made today will quickly become obsolete due to the constantly changing nature of the landscape. This motivates the clear necessity for data-efficient learners that can be used for a wide variety of downstream tasks.
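As a rough illustration of the described pipeline, the sketch below wires an ImageNet MAE-pretrained ViT-Base encoder (the `vit_base_patch16_224.mae` weights shipped with recent versions of timm) to a small convolutional segmentation head. This is not the authors' code: `DemSegmenter` is a hypothetical name, and the simple conv head stands in for the UperNet decoder used in the paper.

```python
# Minimal sketch (assumptions: timm >= 0.9 for the MAE checkpoint;
# the conv head is a stand-in for the paper's UperNet decoder).
import torch
import torch.nn as nn
import timm


class DemSegmenter(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # ViT-Base encoder with MAE-pretrained ImageNet weights; the
        # ImageNet-to-DEM domain gap is bridged during fine-tuning.
        self.encoder = timm.create_model(
            "vit_base_patch16_224.mae", pretrained=True, num_classes=0
        )
        embed_dim = self.encoder.embed_dim  # 768 for ViT-Base
        self.head = nn.Sequential(
            nn.Conv2d(embed_dim, 256, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, _, H, W = x.shape
        tokens = self.encoder.forward_features(x)  # (B, 1 + N, C), CLS first
        patch_tokens = tokens[:, 1:, :]            # drop the CLS token
        h, w = H // 16, W // 16                    # 16x16 patch grid
        feat = patch_tokens.transpose(1, 2).reshape(B, -1, h, w)
        logits = self.head(feat)
        # Upsample patch-level logits back to full resolution.
        return nn.functional.interpolate(
            logits, size=(H, W), mode="bilinear", align_corners=False
        )


model = DemSegmenter(num_classes=2)
dem = torch.randn(1, 3, 224, 224)  # single-band DEM replicated to 3 channels
print(model(dem).shape)            # torch.Size([1, 2, 224, 224])
```

In the low-resource setting the abstract describes, only this model would then be fine-tuned on the 450- or 50-image labeled subsets with a standard per-pixel cross-entropy loss.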


