Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding

02/03/2012
by Ramji Venkataramanan, et al.

We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a Sparse Superposition or Sparse Regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an AWGN channel. For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a code can attain the Shannon rate-distortion function with the optimal error exponent, for all distortions below a specified value. It is also shown that sparse regression codes are robust in the following sense: a codebook designed to compress an i.i.d. Gaussian source of variance σ^2 with (squared-error) distortion D can compress any ergodic source of variance less than σ^2 to within distortion D. Thus the sparse regression ensemble retains many of the good covering properties of the i.i.d. random Gaussian ensemble, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block-length.
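The construction described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the paper's exact construction: the section sizes, the equal per-section coefficient, and the tiny parameters are assumed for readability. A sparse regression codeword is the sum of one scaled column from each of L sections of an n × ML Gaussian design matrix, and minimum-distance encoding picks the codeword closest in Euclidean distance to the source block (here by brute-force search, which is only feasible for toy sizes).

```python
import itertools
import numpy as np

# Illustrative (assumed) parameters: block length n, L sections, M columns each.
rng = np.random.default_rng(0)
n, L, M = 8, 2, 4
A = rng.standard_normal((n, L * M))  # i.i.d. Gaussian design matrix
c = 1.0                              # simplified equal coefficient per section

def codeword(indices):
    """Codeword = sum over sections of one scaled column per section."""
    return sum(c * A[:, sec * M + j] for sec, j in enumerate(indices))

def min_distance_encode(x):
    """Brute-force minimum-distance encoding over all M^L codewords."""
    best = min(itertools.product(range(M), repeat=L),
               key=lambda idx: np.sum((x - codeword(idx)) ** 2))
    return best, codeword(best)

x = rng.standard_normal(n)            # i.i.d. Gaussian source block
idx, xhat = min_distance_encode(x)
rate = L * np.log2(M) / n             # bits per source symbol
distortion = np.mean((x - xhat) ** 2)
```

Note the compact representation the abstract refers to: the encoder stores only the n × ML matrix A (polynomial in the block length), rather than an explicit list of all M^L codewords.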


Related research:

- 12/07/2012: Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding. "We propose computationally efficient encoders and decoders for lossy com..."
- 11/02/2019: Sparse Regression Codes. "Developing computationally-efficient codes that approach the Shannon-the..."
- 04/25/2018: The Dispersion of the Gauss-Markov Source. "The Gauss-Markov source produces U_i = aU_i-1 + Z_i for i ≥ 1, where U_0 ..."
- 03/15/2018: Reconstructing Gaussian sources by spatial sampling. "Consider a Gaussian memoryless multiple source with m components with jo..."
- 02/02/2022: Shannon Bounds on Lossy Gray-Wyner Networks. "The Gray-Wyner network subject to a fidelity criterion is studied. Upper..."
- 01/16/2017: High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition. "We consider a sparse linear regression model Y = Xβ^* + W where X has a Gaus..."
- 11/05/2020: Incremental Refinements and Multiple Descriptions with Feedback. "It is well known that independent (separate) encoding of K correlated so..."
