Kernel Attention Transformer (KAT) for Histopathology Whole Slide Image Classification

06/27/2022
by Yushan Zheng, et al.

The Transformer has been widely used in histopathology whole slide image (WSI) classification for tasks such as tumor grading and prognosis analysis. However, the token-wise self-attention and positional embedding strategies of the standard Transformer limit its effectiveness and efficiency when applied to gigapixel histopathology images. In this paper, we propose a kernel attention Transformer (KAT) for histopathology WSI classification. Information is transmitted among the tokens by cross-attention between the tokens and a set of kernels bound to a set of positional anchors on the WSI. Compared with the standard Transformer structure, the proposed KAT better describes the hierarchical context information of the local regions of the WSI while maintaining lower computational complexity. The proposed method was evaluated on a gastric dataset with 2040 WSIs and an endometrial dataset with 2560 WSIs, and was compared with six state-of-the-art methods. The experimental results demonstrate that the proposed KAT is effective and efficient in histopathology WSI classification and is superior to the state-of-the-art methods. The code is available at https://github.com/zhengyushan/kat.
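The key idea, as the abstract describes it, is that a small set of kernels tied to positional anchors attends to the patch tokens, rather than every token attending to every other token. A minimal NumPy sketch of this kind of anchor-conditioned cross-attention is shown below; the function name, the Gaussian distance weighting, and all shapes are illustrative assumptions, not the paper's actual implementation (see the linked repository for that). With N tokens and K kernels, the cost is O(NK) instead of the O(N^2) of token-wise self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kernel_cross_attention(tokens, kernels, token_xy, anchor_xy, sigma=3.0):
    """One kernel cross-attention step (illustrative sketch, not KAT itself).

    Each kernel (query) attends over all patch tokens (keys/values), with the
    attention logits down-weighted by the squared distance between a token's
    slide position and the kernel's positional anchor -- an assumed Gaussian
    stand-in for the paper's anchor-based locality.
    """
    d = tokens.shape[-1]
    # content similarity between kernels and tokens: (K, N)
    logits = kernels @ tokens.T / np.sqrt(d)
    # spatial term: squared anchor-to-token distances, (K, N)
    dist2 = ((anchor_xy[:, None, :] - token_xy[None, :, :]) ** 2).sum(-1)
    attn = softmax(logits - dist2 / (2 * sigma**2), axis=-1)
    # updated kernel representations: (K, d)
    return attn @ tokens

# toy example: N = 6 patch tokens, K = 2 kernels/anchors, d = 4 features
rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 4))
kernels = rng.normal(size=(2, 4))
token_xy = rng.uniform(0, 10, size=(6, 2))   # patch coordinates on the slide
anchor_xy = np.array([[2.0, 2.0], [8.0, 8.0]])
out = kernel_cross_attention(tokens, kernels, token_xy, anchor_xy)
print(out.shape)  # (2, 4)
```

Because each anchor mostly gathers information from nearby patches, the kernels summarize local regions at different slide locations, which is one way to read the abstract's claim about capturing hierarchical context at reduced cost.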


