A Compression Objective and a Cycle Loss for Neural Image Compression

05/24/2019
by Caglar Aytekin, et al.

In this manuscript we propose two objective terms for neural image compression: a compression objective and a cycle loss. Both terms are applied to the encoder output of an autoencoder and are used in combination with reconstruction losses. The compression objective encourages sparsity and low entropy in the activations. The cycle loss measures the distortion between the encoder outputs computed from the original image and from the reconstructed image (code-domain distortion). We train different autoencoders using the compression objective in combination with different reconstruction losses: (a) MSE; (b) MSE and MS-SSIM; (c) MSE, MS-SSIM, and cycle loss. We observe that images encoded by these differently trained autoencoders fall at different points of the perception-distortion curve while having similar bit-rates. In particular, MSE-only training favors low image-domain distortion, whereas training with the cycle loss favors high perceptual quality.
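The abstract describes a cycle loss (code-domain distortion between the code of the original image and the code of its reconstruction) and a compression objective that encourages sparse, low-entropy activations. A minimal sketch of how these terms could be computed, assuming a toy linear autoencoder; the shapes, the L1 sparsity proxy, and the unit weighting of the terms are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Toy linear autoencoder (placeholder for the paper's architecture).
rng = np.random.default_rng(0)
W_enc = 0.1 * rng.standard_normal((8, 12))   # encoder weights: 12-dim input -> 8-dim code
W_dec = 0.1 * rng.standard_normal((12, 8))   # decoder weights: 8-dim code -> 12-dim output

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

x = rng.standard_normal(12)                  # stand-in for a flattened image
z = encode(x)                                # code of the original image
x_hat = decode(z)                            # reconstruction
z_hat = encode(x_hat)                        # code of the reconstructed image

recon_mse = np.mean((x_hat - x) ** 2)        # image-domain distortion (reconstruction loss)
cycle = np.mean((z_hat - z) ** 2)            # code-domain distortion (cycle loss)
sparsity = np.mean(np.abs(z))                # L1 proxy for the compression objective (assumption)

total = recon_mse + cycle + sparsity         # loss weights omitted for brevity
```

Note that the L1 term here is only a common stand-in for "sparsity and low entropy in the activations"; the paper's actual compression objective may be defined differently.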

Related research

06/05/2021 — On Perceptual Lossy Compression: The Cost of Perceptual Reconstruction and An Optimal Training Framework
Lossy compression algorithms are typically designed to achieve the lowes...

01/23/2019 — Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff
Lossy compression algorithms are typically designed and analyzed through...

09/06/2023 — EGIC: Enhanced Low-Bit-Rate Generative Image Compression Guided by Semantic Segmentation
We introduce EGIC, a novel generative image compression method that allo...

05/30/2023 — On the Choice of Perception Loss Function for Learned Video Compression
We study causal, low-latency, sequential video compression when the outp...

12/18/2018 — Hybrid Loss for Learning Single-Image-based HDR Reconstruction
This paper tackles high-dynamic-range (HDR) image reconstruction given o...

01/10/2020 — Improving Image Autoencoder Embeddings with Perceptual Loss
Autoencoders are commonly trained using element-wise loss. However, elem...

10/26/2017 — Image Compression: Sparse Coding vs. Bottleneck Autoencoders
Bottleneck autoencoders have been actively researched as a solution to i...
