Online Meta Adaptation for Variable-Rate Learned Image Compression

11/16/2021
by Wei Jiang, et al.

This work addresses two major issues in end-to-end learned image compression (LIC) based on deep neural networks: variable-rate learning, where separate networks are conventionally required to generate compressed images at different qualities, and the train-test mismatch between the differentiable approximate quantization used in training and the true hard quantization used in inference. We introduce an online meta-learning (OML) setting for LIC that combines ideas from meta-learning and online learning within the conditional variational auto-encoder (CVAE) framework. By treating the conditional variables as meta parameters and the generated conditional features as meta priors, the reconstruction can be controlled by the meta parameters to accommodate compression at variable qualities. The online learning component updates the meta parameters so that the conditional reconstruction is adaptively tuned to the current image; through the OML mechanism, the meta parameters can be updated efficiently by SGD. Because the conditional reconstruction operates directly on the quantized latent representation in the decoder network, it helps bridge the gap between the quantization approximation used during training and the true quantized latent distribution. Experiments demonstrate that our OML approach can be flexibly applied to different state-of-the-art LIC methods to achieve additional performance gains with little computational and transmission overhead.
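To make the adaptation loop concrete, below is a minimal PyTorch sketch of per-image online meta adaptation in the spirit of the abstract. It assumes a pretrained CVAE-style LIC model; the module names (`encoder`, `decoder`), the meta-parameter tensor, and the hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch of per-image online meta adaptation, assuming a
# pretrained CVAE-style LIC model. `encoder`, `decoder`, and
# `meta_params` are hypothetical placeholders.
import torch

def online_meta_adapt(image, encoder, decoder, meta_params,
                      steps=10, lr=1e-3):
    """Refine the meta parameters that condition the decoder on a
    single test image, using the true hard-quantized latents."""
    # The latents do not depend on the meta parameters, so encode and
    # hard-quantize once. This is exactly the distribution the decoder
    # sees at test time, which closes the train-test quantization gap.
    with torch.no_grad():
        y_hat = torch.round(encoder(image))

    # Only the small meta tensor is optimized; network weights stay frozen.
    meta = meta_params.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([meta], lr=lr)
    for _ in range(steps):
        x_hat = decoder(y_hat, meta)             # conditional reconstruction
        loss = torch.mean((x_hat - image) ** 2)  # distortion on this image
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The adapted meta parameters are sent as lightweight side
    # information with the bitstream, so the receiver's decoder can
    # reproduce the tuned reconstruction.
    return meta.detach()
```

Because only the small meta-parameter tensor is updated and transmitted as side information, the per-image cost stays low, consistent with the claim of little computational and transmission overhead.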

Related research

09/11/2019 · Variable Rate Deep Image Compression With a Conditional Autoencoder
In this paper, we propose a novel variable-rate learned image compressio...

07/31/2020 · L^2C – Learning to Learn to Compress
In this paper we present an end-to-end meta-learned system for image com...

06/22/2018 · Virtual Codec Supervised Re-Sampling Network for Image Compression
In this paper, we propose an image re-sampling compression method by lea...

04/24/2020 · Automatic low-bit hybrid quantization of neural networks through meta learning
Model quantization is a widely used technique to compress and accelerate...

09/18/2019 · Meta-Neighborhoods
Traditional methods for training neural networks use training data just ...

08/21/2021 · Fairness-Aware Online Meta-learning
In contrast to offline working fashions, two research paradigms are devi...

11/05/2018 · Deep Multiple Description Coding by Learning Scalar Quantization
In this paper, we propose a deep multiple description coding framework, ...
