Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations

06/01/2023
by Yang Sui, et al.

Learned Image Compression (LIC) has recently become the trending technique for image transmission due to its notable performance. Despite its popularity, the robustness of LIC with respect to the quality of image reconstruction remains under-explored. In this paper, we introduce an imperceptible attack approach designed to effectively degrade the reconstruction quality of LIC, such that the reconstructed image is severely disrupted by noise and any object in it is virtually unrecognizable. More specifically, we generate adversarial examples by introducing a Frobenius norm-based loss function that maximizes the discrepancy between the original images and the reconstructed adversarial examples. Further, leveraging the insensitivity of human vision to high-frequency components, we introduce an Imperceptibility Constraint (IC) to ensure that the perturbations remain inconspicuous. Experiments conducted on the Kodak dataset with various LIC models demonstrate the effectiveness of our approach. In addition, we provide several findings and suggestions for designing future defenses.
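The core idea of the abstract, gradient ascent on a Frobenius-norm reconstruction loss under a small perturbation budget, can be illustrated with a toy sketch. Everything below is an assumption for illustration only: `recon` is a random linear map standing in for the decode(encode(·)) round trip of a real LIC model, the gradient is derived analytically for that linear stand-in rather than by autodiff through a learned codec, and the paper's high-frequency Imperceptibility Constraint is not modeled.

```python
import numpy as np

# Toy, hand-differentiable stand-in for a LIC round trip. In the actual
# attack the gradient would come from autodiff through the learned
# encoder/decoder; here the codec is a fixed linear map for illustration.
rng = np.random.default_rng(0)
n, m = 64, 16
A = rng.standard_normal((m, n)) / np.sqrt(n)  # toy "encoder" (analysis transform)
M = A.T @ A                                   # decode(encode(x)) collapsed to one linear map

def recon(x):
    return M @ x                              # toy codec round trip

def frob_loss(x_adv, x):
    r = recon(x_adv) - x                      # reconstruction error
    return float(r @ r)                       # squared Frobenius norm of the error

def grad(x_adv, x):
    return 2.0 * M.T @ (recon(x_adv) - x)     # analytic gradient for the linear codec

eps, alpha, steps = 0.03, 0.005, 50
x = eps + (1.0 - 2.0 * eps) * rng.random(n)   # flattened "image", kept away from [0, 1] edges
delta = np.zeros_like(x)
for _ in range(steps):                        # PGD-style sign-gradient ascent on the loss
    delta = np.clip(delta + alpha * np.sign(grad(x + delta, x)), -eps, eps)
x_adv = np.clip(x + delta, 0.0, 1.0)

assert frob_loss(x_adv, x) > frob_loss(x, x)    # reconstruction distortion grew
assert np.max(np.abs(x_adv - x)) <= eps + 1e-9  # perturbation stays within budget
```

Because the loss is convex in the perturbation for a linear codec, each sign-gradient step cannot decrease it, so the round-trip distortion is guaranteed to grow while the perturbation stays within the ±eps budget.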
