DBAT: Dynamic Backward Attention Transformer for Material Segmentation with Cross-Resolution Patches

05/06/2023
by Yuwen Heng, et al.

The objective of dense material segmentation is to identify the material category of every image pixel. Recent studies adopt image patches to extract material features. Although the trained networks improve segmentation performance, these methods choose a fixed patch resolution, which fails to account for the variation in the pixel area covered by each material. In this paper, we propose the Dynamic Backward Attention Transformer (DBAT) to aggregate cross-resolution features. Instead of fixing the patch resolution during training, the DBAT takes cropped image patches as input and gradually increases the patch resolution by merging adjacent patches at each transformer stage. We explicitly gather the intermediate features extracted from cross-resolution patches and merge them dynamically with predicted attention masks. Experiments show that the DBAT achieves an accuracy of 86.85%, the best performance among state-of-the-art real-time models. Like other successful deep learning solutions with complex architectures, the DBAT suffers from a lack of interpretability. To address this problem, this paper examines the properties that the DBAT makes use of. By analysing the cross-resolution features and the attention weights, we interpret how the DBAT learns from image patches. We further align features with semantic labels through network dissection, showing that the proposed model extracts material-related features better than other methods. Finally, we show that the DBAT is more robust to network initialisation and produces less variable predictions than other models. The project code is available at https://github.com/heng-yuwen/Dynamic-Backward-Attention-Transformer.
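Although the abstract does not give implementation details, the backward attention aggregation it describes can be sketched as follows: intermediate feature maps from transformer stages that operate on progressively merged (coarser) patches are gathered back to the finest patch grid and fused with per-stage attention masks predicted from the features themselves. The PyTorch module below is a minimal illustration of that idea, not the authors' code; the module names, channel widths, and the mask-prediction head are assumptions made for the sketch.

# A minimal sketch (not the authors' implementation) of the backward
# attention aggregation described above. Stage features extracted from
# progressively merged patches are projected to a shared width, upsampled
# back to the finest grid, and fused with predicted per-stage attention
# masks. All names and channel widths here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BackwardAttentionFusion(nn.Module):
    def __init__(self, stage_channels, fused_channels):
        super().__init__()
        # Project each stage's features to a common channel width.
        self.projs = nn.ModuleList(
            nn.Conv2d(c, fused_channels, kernel_size=1) for c in stage_channels
        )
        # Predict one spatial attention mask per stage from the
        # concatenated (aligned) stage features.
        self.mask_head = nn.Conv2d(
            fused_channels * len(stage_channels), len(stage_channels), kernel_size=1
        )

    def forward(self, stage_feats):
        # stage_feats: list of (B, C_i, H_i, W_i); later stages are coarser
        # because adjacent patches were merged at each transformer stage.
        target_hw = stage_feats[0].shape[-2:]  # finest patch grid
        aligned = [
            F.interpolate(proj(f), size=target_hw, mode="bilinear",
                          align_corners=False)
            for proj, f in zip(self.projs, stage_feats)
        ]
        stacked = torch.stack(aligned, dim=1)              # (B, S, C, H, W)
        masks = self.mask_head(torch.cat(aligned, dim=1))  # (B, S, H, W)
        masks = masks.softmax(dim=1).unsqueeze(2)          # weights over stages
        return (stacked * masks).sum(dim=1)                # (B, C, H, W)


if __name__ == "__main__":
    # Stage shapes loosely follow a Swin-Tiny-like backbone (assumed).
    feats = [torch.randn(2, 96, 56, 56), torch.randn(2, 192, 28, 28),
             torch.randn(2, 384, 14, 14), torch.randn(2, 768, 7, 7)]
    fusion = BackwardAttentionFusion([96, 192, 384, 768], fused_channels=128)
    print(fusion(feats).shape)  # torch.Size([2, 128, 56, 56])

The softmax over the stage dimension makes the fusion weights sum to one at each pixel, so every pixel can favour whichever patch resolution best matches the area its material covers, which is the motivation the abstract gives for dynamic, rather than fixed, patch resolutions.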


