Lossy Compression for Lossless Prediction

06/21/2021
by Yann Dubois, et al.

Most data is automatically collected and only ever "seen" by algorithms. Yet, data compressors preserve perceptual fidelity rather than just the information needed by algorithms performing downstream tasks. In this paper, we characterize the bit-rate required to ensure high performance on all predictive tasks that are invariant under a set of transformations, such as data augmentations. Based on our theory, we design unsupervised objectives for training neural compressors. Using these objectives, we train a generic image compressor that achieves substantial rate savings (more than 1000× on ImageNet) compared to JPEG on 8 datasets, without decreasing downstream classification performance.
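The core idea, a rate-distortion trade-off where "distortion" is measured between codes of augmented views rather than between pixels, can be illustrated with a toy sketch. This is a minimal, hypothetical illustration (the encoder, the entropy proxy, and the trade-off weight `beta` are assumptions for illustration, not the paper's actual objective): a linear encoder with scalar quantization, an empirical-entropy rate term, and an invariance penalty that pushes two augmentations of the same input to the same code.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # linear encoder followed by rounding (scalar quantization)
    return np.round(x @ W)

def rate(z):
    # crude rate proxy: empirical entropy (in bits) of the quantized symbols
    _, counts = np.unique(z, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def invariance_distortion(z1, z2):
    # codes of two augmentations of the same input should coincide
    return float(np.mean((z1 - z2) ** 2))

# toy data: inputs plus a small perturbation standing in for a
# task-preserving augmentation (downstream tasks are invariant to it)
x = rng.normal(size=(256, 8))
x_aug = x + 0.01 * rng.normal(size=x.shape)

W = rng.normal(size=(8, 4))        # untrained encoder weights
z, z_aug = encode(x, W), encode(x_aug, W)

beta = 1.0                         # rate vs. invariance trade-off
loss = rate(z.ravel()) + beta * invariance_distortion(z, z_aug)
```

In a real neural compressor the encoder and the entropy model would be learned jointly by minimizing a loss of this shape, so the code discards augmentation-specific detail (saving bits) while keeping everything the invariant tasks need.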


Related research

Semantic-assisted image compression (01/29/2022)
Conventional image compression methods typically aim at pixel-level cons...

CIPER: Combining Invariant and Equivariant Representations Using Contrastive and Predictive Learning (02/05/2023)
Self-supervised representation learning (SSRL) methods have shown great ...

BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (05/05/2023)
Recently, the Segment Anything Model (SAM) has gained significant attent...

Learning to Navigate Using Mid-Level Visual Priors (12/23/2019)
How much does having visual priors about the world (e.g. the fact that t...

ALP: Action-Aware Embodied Learning for Perception (06/16/2023)
Current methods in training and benchmarking vision models exhibit an ov...

Dynamic-Deep: ECG Task-Aware Compression (05/30/2021)
Monitoring medical data, e.g., Electrocardiogram (ECG) signals, is a com...

GULP: a prediction-based metric between representations (10/12/2022)
Comparing the representations learned by different neural networks has r...
