L^2C – Learning to Learn to Compress

07/31/2020
by Nannan Zou, et al.

In this paper we present an end-to-end meta-learned system for image compression. Traditional machine-learning-based approaches to image compression train one or more neural networks for generalization performance. However, at inference time, the encoder or the latent tensor output by the encoder can be optimized for each test image. This optimization can be regarded as a form of adaptation, or benevolent overfitting, to the input content. In order to reduce the gap between training and inference conditions, we propose a new training paradigm for learned image compression, which is based on meta-learning. In the first phase, the neural networks are trained normally. In the second phase, the Model-Agnostic Meta-Learning (MAML) approach is adapted to the specific case of image compression, where the inner loop performs latent tensor overfitting and the outer loop updates both the encoder and decoder neural networks based on the overfitting performance. Furthermore, after meta-learning, we propose to overfit and cluster the bias terms of the decoder on training image patches, so that at inference time the optimal content-specific bias terms can be selected at the encoder side. Finally, we propose a new probability model for lossless compression, which combines concepts from both multi-scale and super-resolution probability model approaches. We show the benefits of all our proposed ideas via carefully designed experiments.
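The two-phase scheme in the abstract can be illustrated with a minimal first-order sketch. This is not the authors' implementation: it uses a toy linear encoder/decoder, an MSE loss with hand-derived gradients instead of autodiff, and a first-order approximation of the MAML meta-gradient. The inner loop overfits only the latent tensor to one "image patch"; the outer loop then updates the decoder at the adapted latent and pulls the encoder's output toward it. All names (`inner_overfit`, `meta_step`, dimensions `D`, `K`, step sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 4                                # patch dimension, latent dimension
We = rng.normal(scale=0.1, size=(K, D))    # toy linear encoder: z = We @ x
Wd = rng.normal(scale=0.1, size=(D, K))    # toy linear decoder: x_hat = Wd @ z

def recon_loss(Wd, z, x):
    # squared reconstruction error ||Wd z - x||^2
    r = Wd @ z - x
    return float(r @ r)

def inner_overfit(Wd, z, x, alpha=0.05, steps=10):
    # inner loop: gradient descent on the latent tensor only
    # (the "latent overfitting" step; encoder/decoder stay fixed)
    for _ in range(steps):
        z = z - alpha * 2.0 * Wd.T @ (Wd @ z - x)
    return z

def meta_step(We, Wd, x, lr=0.01):
    z0 = We @ x                        # initial latent from the encoder
    z_star = inner_overfit(Wd, z0, x)  # adapted (overfitted) latent
    # outer loop, first-order approximation:
    # update the decoder using the loss gradient at the adapted latent ...
    r = Wd @ z_star - x
    Wd = Wd - lr * 2.0 * np.outer(r, z_star)
    # ... and pull the encoder's output toward the adapted latent
    We = We - lr * 2.0 * np.outer(We @ x - z_star, x)
    return We, Wd

patches = [rng.normal(size=D) for _ in range(200)]

def mean_post_overfit_loss(We, Wd):
    # evaluation mirrors inference: overfit the latent, then measure loss
    return float(np.mean(
        [recon_loss(Wd, inner_overfit(Wd, We @ x, x), x) for x in patches]))

before = mean_post_overfit_loss(We, Wd)
for x in patches:
    We, Wd = meta_step(We, Wd, x)
after = mean_post_overfit_loss(We, Wd)
```

The key point the sketch captures is that the outer loop optimizes the networks for *post-overfitting* performance, so training and inference conditions match. A full implementation would use autodiff (e.g. differentiating through the inner loop for exact MAML gradients) and a real rate-distortion loss rather than plain MSE.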


Related research

- 11/16/2021: Online Meta Adaptation for Variable-Rate Learned Image Compression. "This work addresses two major issues of end-to-end learned image compres..."
- 06/22/2022: LECA: A Learned Approach for Efficient Cover-agnostic Watermarking. "In this work, we present an efficient multi-bit deep image watermarking ..."
- 03/03/2020: End-to-End Fast Training of Communication Links Without a Channel Model via Online Meta-Learning. "When a channel model is not available, the end-to-end training of encode..."
- 01/21/2021: Overfitting for Fun and Profit: Instance-Adaptive Data Compression. "Neural data compression has been shown to outperform classical methods i..."
- 04/19/2022: Metappearance: Meta-Learning for Visual Appearance Reproduction. "There currently are two main approaches to reproducing visual appearance..."
- 05/18/2022: Meta-Learning Sparse Compression Networks. "Recent work in Deep Learning has re-imagined the representation of data ..."
- 09/08/2021: Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Task Generalization. "While artificial neural networks (ANNs) have been widely adopted in mach..."
