Transfer Learning with Pretrained Remote Sensing Transformers

09/28/2022
by Anthony Fuller et al.

Although the remote sensing (RS) community has begun to pretrain transformers (intended to be fine-tuned on RS tasks), it is unclear how these models perform under distribution shifts. Here, we pretrain a new RS transformer, called SatViT-V2, on 1.3 million satellite-derived RS images, then fine-tune it (along with five other models) to investigate how it performs on distributions not seen during training. We split an expertly labeled land cover dataset into 14 datasets based on source biome. We train each model on each biome separately and test them on all other biomes. In all, this amounts to 1,638 biome transfer experiments. After fine-tuning, we find that SatViT-V2 outperforms SatViT-V1 by 3.1% on in-distribution (matching biomes) and 2.8% on out-of-distribution (mismatching biomes) data. Additionally, we find that initializing fine-tuning from the linear probed solution (i.e., leveraging LPFT [1]) improves SatViT-V2's performance by another 1.2% on in-distribution and 2.4% on out-of-distribution data. Next, we find that pretrained RS transformers are better calibrated under distribution shifts than non-pretrained models, and that leveraging LPFT yields further improvements in model calibration. Lastly, we find that five measures of distribution shift are moderately correlated with biome transfer performance. We share code and pretrained model weights at https://github.com/antofuller/SatViT.
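
The LPFT recipe referenced above (train a linear probe on frozen features, then fine-tune the whole network starting from the probed head) is straightforward to implement. Below is a minimal PyTorch sketch; it assumes the `backbone` returns pooled features, and the epoch counts and learning rates are illustrative assumptions, not the paper's training recipe.

```python
import torch
import torch.nn as nn

def lpft(backbone: nn.Module, feat_dim: int, n_classes: int,
         train_loader, probe_epochs: int = 5, ft_epochs: int = 20,
         device: str = "cuda"):
    """Linear probing then fine-tuning (LPFT), sketched.

    Stage 1 fits only a linear head on frozen pretrained features;
    stage 2 fine-tunes the full network from that probed solution,
    which tends to better preserve pretrained features under shift.
    """
    backbone = backbone.to(device)
    head = nn.Linear(feat_dim, n_classes).to(device)
    criterion = nn.CrossEntropyLoss()

    # --- Stage 1: linear probe (backbone frozen) ---
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.eval()
    opt = torch.optim.AdamW(head.parameters(), lr=1e-3)  # assumed lr
    for _ in range(probe_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = backbone(x)  # assumed to return pooled features
            loss = criterion(head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()

    # --- Stage 2: full fine-tune, initialized from the probed head ---
    for p in backbone.parameters():
        p.requires_grad = True
    backbone.train()
    opt = torch.optim.AdamW(
        list(backbone.parameters()) + list(head.parameters()), lr=1e-5)
    for _ in range(ft_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            loss = criterion(head(backbone(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return backbone, head
```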
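
The biome transfer protocol itself is a simple sweep: fine-tune on one biome, then evaluate on each of the other 13. With 14 biomes that is 14 x 13 = 182 ordered source-target pairs per model configuration, and the reported total of 1,638 experiments corresponds to nine such sweeps across models and fine-tuning variants. A sketch, where `make_model`, `train_fn`, and `eval_fn` are hypothetical helpers standing in for the paper's actual fine-tuning and evaluation code:

```python
from itertools import permutations

def biome_transfer(models: dict, biome_datasets: dict, train_fn, eval_fn):
    """Train on each source biome, test on every other biome.

    models:         name -> factory returning a freshly initialized model
    biome_datasets: biome name -> (train_split, test_split) or similar
    """
    results = {}
    for name, make_model in models.items():
        # 182 ordered (source, target) pairs for 14 biomes
        for src, tgt in permutations(biome_datasets.keys(), 2):
            model = make_model()                  # fresh pretrained init
            train_fn(model, biome_datasets[src])  # fine-tune on source biome
            acc = eval_fn(model, biome_datasets[tgt])  # out-of-biome test
            results[(name, src, tgt)] = acc
    return results
```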
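
The abstract reports calibration results but does not name the metric here; expected calibration error (ECE) is a standard choice for this kind of analysis. A self-contained sketch, assuming softmax outputs:

```python
import torch

def expected_calibration_error(probs: torch.Tensor,
                               labels: torch.Tensor,
                               n_bins: int = 15) -> float:
    """ECE over equal-width confidence bins.

    probs:  (N, C) softmax probabilities; labels: (N,) integer targets.
    ECE = sum_b (|B_b| / N) * |accuracy(B_b) - confidence(B_b)|.
    """
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # weight each bin's |acc - conf| gap by its sample fraction
            ece += in_bin.float().mean() * \
                   (correct[in_bin].mean() - conf[in_bin].mean()).abs()
    return ece.item()
```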
