Efficient Contextformer: Spatio-Channel Window Attention for Fast Context Modeling in Learned Image Compression

06/25/2023
by   A. Burakhan Koyuncu, et al.

In this work, we introduce the Efficient Contextformer (eContextformer) for context modeling in lossy learned image compression, building on our previous work, the Contextformer. The eContextformer combines recent advances in efficient transformers and fast context models with a spatio-channel attention mechanism. The proposed model enables content-adaptive exploitation of spatial and channel-wise latent dependencies for high-performance and efficient entropy modeling. Thanks to several architectural innovations, the eContextformer offers lower model complexity, faster decoding, and better rate-distortion performance than the previous work. For instance, compared to the Contextformer, the eContextformer requires 145x less model complexity and 210x less decoding time, while achieving higher average bit savings on the Kodak, CLIC2020, and Tecnick datasets. Compared to the standard Versatile Video Coding (VVC) Test Model (VTM) 16.2, the proposed model provides up to 17.1% bitrate savings.
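
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of spatio-channel window attention: the latent is split into channel segments and non-overlapping spatial windows, each window is flattened into a sequence of spatio-channel tokens, and multi-head attention runs over that sequence. Class and parameter names (SpatioChannelWindowAttention, num_segments, window) are illustrative assumptions and cover only the attention step, not the authors' full autoregressive entropy model.

# Minimal sketch of spatio-channel window attention (assumed shapes and
# names; not the authors' exact implementation).
import torch
import torch.nn as nn

class SpatioChannelWindowAttention(nn.Module):
    def __init__(self, channels, num_segments=4, window=8, heads=8):
        super().__init__()
        assert channels % num_segments == 0
        self.seg = channels // num_segments        # channels per segment (token dim)
        self.num_segments = num_segments
        self.window = window                       # spatial window size
        self.attn = nn.MultiheadAttention(self.seg, heads, batch_first=True)

    def forward(self, y):
        # y: latent tensor of shape (B, C, H, W); H, W assumed divisible by window
        B, C, H, W = y.shape
        w, s = self.window, self.num_segments
        # Partition into channel segments and non-overlapping spatial windows,
        # then flatten each window into one token per (segment, spatial position).
        t = y.view(B, s, self.seg, H // w, w, W // w, w)
        t = t.permute(0, 3, 5, 1, 4, 6, 2)          # (B, H/w, W/w, s, w, w, seg)
        t = t.reshape(-1, s * w * w, self.seg)      # (B*windows, tokens, seg)
        out, _ = self.attn(t, t, t, need_weights=False)
        # Restore the original (B, C, H, W) layout.
        out = out.view(B, H // w, W // w, s, w, w, self.seg)
        out = out.permute(0, 3, 6, 1, 4, 2, 5).reshape(B, C, H, W)
        return out

# Usage on a toy latent of shape (1, 192, 16, 16).
y = torch.randn(1, 192, 16, 16)
print(SpatioChannelWindowAttention(192)(y).shape)  # torch.Size([1, 192, 16, 16])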


Related research

Contextformer: A Transformer with Spatio-Channel Attention for Context Modeling in Learned Image Compression (03/04/2022)
Entropy modeling is a key component for high-performance image compressi...

ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding (03/21/2022)
Recently, learned image compression techniques have achieved remarkable ...

Leveraging progressive model and overfitting for efficient learned image compression (10/08/2022)
Deep learning is overwhelmingly dominant in the field of computer vision...

Channel-wise Autoregressive Entropy Models for Learned Image Compression (07/17/2020)
In learning-based approaches to image compression, codecs are developed ...

Checkerboard Context Model for Efficient Learned Image Compression (03/29/2021)
For learned image compression, the autoregressive context model is prove...

MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression (07/28/2023)
Recently, multi-reference entropy model has been proposed, which capture...

Conditional Entropy Coding for Efficient Video Compression (08/20/2020)
We propose a very simple and efficient video compression framework that ...
