GOLLIC: Learning Global Context beyond Patches for Lossless High-Resolution Image Compression

10/07/2022
by Yuan Lan, et al.

Neural-network-based approaches have recently emerged in the field of data compression and have already led to significant progress in image compression, especially in achieving higher compression ratios. In the lossless image compression scenario, however, existing methods often struggle to learn a probability model of full-size high-resolution images due to limited computational resources. The common strategy is to crop high-resolution images into multiple non-overlapping patches and process them independently. This strategy ignores long-term dependencies beyond patches, limiting modeling performance. To address this problem, we propose a hierarchical latent variable model with a global context to capture the long-term dependencies of high-resolution images. In addition to the latent variables unique to each patch, we introduce shared latent variables between patches to construct the global context. The shared latent variables are extracted by a self-supervised clustering module inside the model's encoder. This clustering module assigns each patch a confidence score for each cluster. The shared latent variables are then learned from the patches' latent variables weighted by these confidences, which reflects the similarity of patches in the same cluster and benefits global context modeling. Experimental results show that our global context model improves the compression ratio over engineered codecs and deep learning models on three benchmark high-resolution image datasets: DIV2K, CLIC.pro, and CLIC.mobile.
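The clustering idea in the abstract can be illustrated with a minimal sketch: patches are soft-assigned to clusters, and one shared latent per cluster is formed as a confidence-weighted average of patch latents. This is an illustrative NumPy toy, not the paper's implementation; the softmax-over-distances assignment, the cluster centers, and all shapes are assumptions for demonstration.

```python
import numpy as np

def soft_assign(patch_latents, centers, temperature=1.0):
    """Soft-assign each patch latent to clusters.

    Returns an (n_patches, n_clusters) confidence matrix whose rows sum
    to 1 (softmax over negative squared distances to cluster centers).
    """
    # squared Euclidean distance from each patch latent to each center
    d2 = ((patch_latents[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def shared_latents(patch_latents, confidence):
    """One shared latent per cluster: a confidence-weighted average of
    the patch latents assigned to that cluster."""
    # normalize confidences per cluster so weights over patches sum to 1
    weights = confidence / (confidence.sum(axis=0, keepdims=True) + 1e-8)
    return weights.T @ patch_latents  # (n_clusters, latent_dim)

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 4))   # 6 patch latents of dimension 4 (toy sizes)
c = rng.normal(size=(2, 4))   # 2 hypothetical cluster centers
conf = soft_assign(z, c)
s = shared_latents(z, conf)
print(conf.shape, s.shape)    # (6, 2) (2, 4)
```

In the actual model, such shared latents would condition the per-patch probability model, letting the entropy coder exploit similarity between patches in the same cluster.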


