Self-Supervised Material and Texture Representation Learning for Remote Sensing Tasks

by Peri Akiva, et al.

Self-supervised learning aims to learn image feature representations without manually annotated labels. It is often used as a precursor step to obtain useful initial network weights that contribute to faster convergence and superior performance on downstream tasks. While self-supervision reduces the domain gap between supervised and unsupervised learning without labels, the self-supervised objective still requires a strong inductive bias toward downstream tasks for effective transfer learning. In this work, we present our material- and texture-based self-supervision method named MATTER (MATerial and TExture Representation Learning), which is inspired by classical material and texture methods. Material and texture can effectively describe any surface, including its tactile properties, color, and specularity. By extension, an effective representation of material and texture can describe other semantic classes strongly associated with that material and texture. MATTER leverages multi-temporal, spatially aligned remote sensing imagery over unchanged regions to learn invariance to illumination and viewing angle as a mechanism for achieving consistency of material and texture representation. We show that our self-supervision pre-training method allows for up to 24.22% and 6.33% performance increases, as well as faster convergence, on change detection, land cover classification, and semantic segmentation tasks.
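The core idea above, enforcing consistent features across acquisitions of the same unchanged region, can be illustrated with a toy consistency objective. The sketch below is a hypothetical simplification (function names and the cosine-distance loss are assumptions for illustration, not the authors' exact objective): features extracted from two temporal views of the same patches should agree despite illumination-like perturbations.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Project feature vectors onto the unit sphere.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def consistency_loss(feat_t1, feat_t2):
    """Mean cosine distance between features of two acquisitions of the
    same (unchanged) patches -- a hypothetical stand-in for a
    material/texture consistency objective."""
    a = l2_normalize(feat_t1)
    b = l2_normalize(feat_t2)
    cos = np.sum(a * b, axis=-1)        # per-patch cosine similarity
    return float(np.mean(1.0 - cos))    # approaches 0 as views agree

# Toy example: two "temporal views" of the same four patches.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(4, 128))
f2 = f1 + 0.05 * rng.normal(size=(4, 128))  # slight illumination-like shift

print(consistency_loss(f1, f1))  # identical views: loss is ~0
print(consistency_loss(f1, f2))  # perturbed views: small positive loss
```

Minimizing such a loss over unchanged regions pushes the encoder toward representations invariant to acquisition conditions, which is the behavior the abstract attributes to MATTER's pre-training.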



Self-Supervised Learning of Remote Sensing Scene Representations Using Contrastive Multiview Coding

Geographical Knowledge-driven Representation Learning for Remote Sensing Images

Semantic decoupled representation learning for remote sensing image change detection

Pruning Convolutional Neural Networks with Self-Supervision

MarioNette: Self-Supervised Sprite Learning

S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency

Semantic-aware Dense Representation Learning for Remote Sensing Image Change Detection
