The Devil Is in the Details: Window-based Attention for Image Compression

03/16/2022
by Renjie Zou, et al.

Learned image compression methods have exhibited rate-distortion performance superior to that of classical image compression standards. Most existing learned image compression models are based on Convolutional Neural Networks (CNNs). Despite their great contributions, a main drawback of CNN-based models is that their structure is not designed for capturing local redundancy, especially non-repetitive textures, which severely affects reconstruction quality. Therefore, making full use of both global structure and local texture becomes the core problem for learning-based image compression. Inspired by recent progress on the Vision Transformer (ViT) and Swin Transformer, we found that combining a local-aware attention mechanism with global-related feature learning could meet this expectation in image compression. In this paper, we first extensively study the effects of multiple kinds of attention mechanisms for local feature learning, then introduce a more straightforward yet effective window-based local attention block. The proposed window-based attention is very flexible and can work as a plug-and-play component to enhance CNN and Transformer models. Moreover, we propose a novel Symmetrical TransFormer (STF) framework with absolute transformer blocks in the down-sampling encoder and up-sampling decoder. Extensive experimental evaluations show that the proposed method is effective and outperforms state-of-the-art methods. The code is publicly available at https://github.com/Googolxx/STF.
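The window-based local attention the abstract describes restricts self-attention to non-overlapping spatial windows, so each pixel attends only to its local neighborhood. A minimal NumPy sketch of the idea follows; it assumes a single head with no learned query/key/value projections and no relative position bias (all of which the actual STF model would include), so it illustrates only the window partition / attend / reverse pattern.

```python
import numpy as np

def window_partition(x, ws):
    # Split an (H, W, C) feature map into (num_windows, ws*ws, C) tokens.
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def window_attention(windows):
    # Scaled dot-product self-attention computed independently per window.
    d = windows.shape[-1]
    scores = windows @ windows.transpose(0, 2, 1) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ windows

def window_reverse(windows, ws, H, W):
    # Undo window_partition, restoring the (H, W, C) layout.
    C = windows.shape[-1]
    x = windows.reshape(H // ws, W // ws, ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

H, W, C, ws = 8, 8, 4, 4
feat = np.random.default_rng(0).normal(size=(H, W, C))
out = window_reverse(window_attention(window_partition(feat, ws)), ws, H, W)
```

Because attention is confined to `ws * ws` tokens, the cost grows linearly with image area rather than quadratically, which is what makes such a block a cheap plug-and-play addition to CNN or Transformer backbones.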


Related research

03/27/2023
Learned Image Compression with Mixed Transformer-CNN Architectures
Learned image compression (LIC) methods have exhibited promising progres...

07/05/2022
FishFormer: Annulus Slicing-based Transformer for Fisheye Rectification with Efficacy Domain Exploration
Numerous significant progress on fisheye image rectification has been ac...

04/10/2023
High Dynamic Range Imaging with Context-aware Transformer
Avoiding the introduction of ghosts when synthesising LDR images as high...

06/06/2021
Uformer: A General U-Shaped Transformer for Image Restoration
In this paper, we present Uformer, an effective and efficient Transforme...

03/04/2022
Contextformer: A Transformer with Spatio-Channel Attention for Context Modeling in Learned Image Compression
Entropy modeling is a key component for high-performance image compressi...

09/19/2023
Multi-spectral Entropy Constrained Neural Compression of Solar Imagery
Missions studying the dynamic behaviour of the Sun are defined to captur...

10/12/2022
Attention-Based Generative Neural Image Compression on Solar Dynamics Observatory
NASA's Solar Dynamics Observatory (SDO) mission gathers 1.4 terabytes of...
