A Cross Channel Context Model for Latents in Deep Image Compression

03/04/2021
by Changyue Ma, et al.

This paper presents a cross channel context model for latents in deep image compression. Generally, deep image compression is based on an autoencoder framework, which transforms the original image into latents at the encoder and recovers the reconstructed image from the quantized latents at the decoder. The transform is usually combined with an entropy model, which estimates the probability distribution of the quantized latents for arithmetic coding. Currently, joint autoregressive and hierarchical prior entropy models are widely adopted to capture both the global contexts from the hyper latents and the local contexts from the quantized latent elements. For the local contexts, the widely adopted 2D mask convolution can only capture the spatial context. However, we observe that there are strong correlations between different channels in the latents. To exploit these cross channel correlations, we propose to divide the latents into several groups according to channel index and code the groups one by one, where previously coded groups are utilized to provide cross channel context for the current group. The proposed cross channel context model is combined with the joint autoregressive and hierarchical prior entropy model. Experimental results show that, using PSNR as the distortion metric, the combined model achieves BD-rate reductions of 6.30% over the baseline entropy model and 2.50% over Versatile Video Coding (VVC) on the Kodak and CVPR CLIC2020 professional datasets, respectively. In addition, when optimized for the MS-SSIM metric, our approach produces visually more pleasing reconstructed images.
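To make the grouping scheme concrete, below is a minimal PyTorch sketch of a channel-grouped context model in the spirit of the abstract: the quantized latents are split into groups along the channel dimension, each group obtains spatial context from a 2D mask convolution, and every group after the first additionally receives cross channel context computed from the previously coded groups. All module names, the group count, and the channel widths (MaskedConv2d, CrossChannelContextModel, latent_channels=192, num_groups=4, ctx_channels=128) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal PyTorch sketch of a channel-grouped (cross channel) context model.
# Module names, group count, and channel widths are illustrative assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn


class MaskedConv2d(nn.Conv2d):
    """2D mask convolution (type 'A'): each position sees only spatial
    neighbours that have already been decoded, never the current element."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        mask = torch.ones_like(self.weight)
        _, _, kh, kw = self.weight.shape
        mask[:, :, kh // 2, kw // 2:] = 0   # current element and to its right
        mask[:, :, kh // 2 + 1:, :] = 0     # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)


class CrossChannelContextModel(nn.Module):
    """Split the quantized latents into channel groups and model them one by one.

    Group 0 uses only spatial context; each later group additionally uses
    cross channel context extracted from all previously coded groups.
    """

    def __init__(self, latent_channels=192, num_groups=4, ctx_channels=128):
        super().__init__()
        assert latent_channels % num_groups == 0
        self.group_size = latent_channels // num_groups
        self.num_groups = num_groups
        # Spatial context within the current group (masked convolution).
        self.spatial_ctx = nn.ModuleList(
            MaskedConv2d(self.group_size, ctx_channels, kernel_size=5, padding=2)
            for _ in range(num_groups)
        )
        # Cross channel context from already-coded groups (ordinary convolution,
        # no spatial mask needed because those groups are fully available).
        self.channel_ctx = nn.ModuleList(
            nn.Conv2d(g * self.group_size, ctx_channels, kernel_size=3, padding=1)
            for g in range(1, num_groups)
        )
        # Predict (mean, scale) of each latent element for the arithmetic coder.
        self.param_nets = nn.ModuleList(
            nn.Conv2d(ctx_channels * (1 if g == 0 else 2), 2 * self.group_size, 1)
            for g in range(num_groups)
        )

    def forward(self, y_hat):
        """y_hat: quantized latents of shape (B, C, H, W).
        Returns a list of per-group (mean, scale) tensors."""
        groups = torch.split(y_hat, self.group_size, dim=1)
        params = []
        for g, y_g in enumerate(groups):
            ctx = [self.spatial_ctx[g](y_g)]
            if g > 0:
                coded = torch.cat(groups[:g], dim=1)        # previously coded groups
                ctx.append(self.channel_ctx[g - 1](coded))  # cross channel context
            mean, scale = self.param_nets[g](torch.cat(ctx, dim=1)).chunk(2, dim=1)
            params.append((mean, scale))
        return params


if __name__ == "__main__":
    model = CrossChannelContextModel()
    y_hat = torch.round(torch.randn(1, 192, 16, 16))  # stand-in for quantized latents
    for g, (mean, scale) in enumerate(model(y_hat)):
        print(g, mean.shape, scale.shape)  # each: (1, 48, 16, 16)
```

In an actual codec, the groups would be encoded and decoded sequentially, so that at decoding time the cross channel context is computed only from groups that have already been reconstructed; the global context from the hyper latents described in the abstract would be concatenated with these local contexts before parameter prediction.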

