
Image Compression: Sparse Coding vs. Bottleneck Autoencoders

by   Yijing Watkins, et al.

Bottleneck autoencoders have been actively researched as a solution to image compression tasks. In this work, we explore the ability of sparse coding to improve reconstructed image quality at the same degree of compression. We observe that sparse coding yields qualitatively superior visual quality in reconstructed images, yet scores lower on PSNR and SSIM than bottleneck autoencoders. We hypothesize that an additional evaluation criterion is needed to support our subjective observations. To test this hypothesis, we fed reconstructed images from both methods into a DCNN classifier and found that images reconstructed via sparse coding achieved on average 1.5% higher classification accuracy than those from bottleneck autoencoders, implying that sparse coding preserves more content-relevant information.
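As a minimal illustration of one of the metrics the abstract refers to, the sketch below computes PSNR between an original image and its reconstruction. This is a generic NumPy implementation, not the authors' code; the function name and test images are illustrative.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    # Mean squared error between the two images (in float to avoid overflow)
    mse = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

# Illustrative usage: a random "image" and a noisy "reconstruction"
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
noisy = np.clip(img + rng.normal(0.0, 5.0, size=img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.1f} dB")
```

Note that, as the abstract argues, a higher PSNR does not necessarily imply that more content-relevant information survives compression, which is why the authors also evaluate downstream classification accuracy.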

