Exploring Self-Attention for Crop-type Classification Explainability

10/24/2022
by Ivica Obadić, et al.

Automated crop-type classification using Sentinel-2 satellite time series is essential to support agriculture monitoring. Recently, deep learning models based on transformer encoders have become a promising approach for crop-type classification. Using explainable machine learning to reveal the inner workings of these models is an important step towards improving stakeholders' trust and enabling efficient agriculture monitoring. In this paper, we introduce a novel explainability framework that aims to shed light on the essential crop-disambiguation patterns learned by a state-of-the-art transformer encoder model. More specifically, we process the attention weights of a trained transformer encoder to reveal the critical dates for crop disambiguation, and we use domain knowledge to uncover the phenological events that support the model's performance. We also present a sensitivity analysis approach to better understand the capability of attention weights to reveal crop-specific phenological events. We report compelling results showing that attention patterns strongly relate to key dates and, consequently, to the critical phenological events for crop-type classification. These findings may be relevant for improving stakeholder trust and optimizing agriculture monitoring processes. Additionally, our sensitivity analysis demonstrates a limitation of attention weights for identifying important events in crop phenology: we empirically show that the unveiled phenological events depend on the other crops in the data considered during training.
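The core idea of processing attention weights to reveal critical dates can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-layer attention matrices have already been extracted from a trained encoder, and the function name and toy dates are purely illustrative.

```python
import numpy as np

def temporal_attention_importance(attn, dates):
    """Aggregate attention weights into per-date importance scores.

    attn:  array of shape (heads, T, T) -- attention matrix from one
           encoder layer over a time series of T observations.
    dates: list of T observation dates.
    Returns (date, score) pairs ranked by total attention received,
    a rough proxy for how critical each date is for classification.
    """
    # Average over heads, then over query positions: how much each
    # key position (observation date) is attended to overall.
    received = attn.mean(axis=0).mean(axis=0)   # shape (T,)
    order = np.argsort(received)[::-1]
    return [(dates[i], float(received[i])) for i in order]

# Toy example: 2 heads, 4 observation dates; softmax makes each
# attention row a valid probability distribution.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 4))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
dates = ["2018-04-10", "2018-06-14", "2018-08-02", "2018-10-01"]
ranking = temporal_attention_importance(attn, dates)
```

Because each attention row sums to one, the aggregated scores also sum to one, so the ranking can be read as a distribution of model focus over the growing season; dates that dominate it can then be compared against known phenological events.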


