Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning

12/16/2021
by Tong Chen, et al.
Deep neural network based image compression has been studied extensively, yet model robustness remains largely overlooked, even though it is crucial to enabling practical services. We perform adversarial attacks by injecting small noise perturbations into original source images and then encode these adversarial examples using prevailing learnt image compression models. Experiments report severe distortion in the reconstruction of adversarial examples, revealing the general vulnerability of existing methods regardless of the settings of the underlying compression model (e.g., network architecture, loss function, quality scale) and of the optimization strategy used to inject the perturbation (e.g., noise threshold, signal distance measurement). We then apply iterative adversarial finetuning to refine pretrained models: in each iteration, random source images and adversarial examples are mixed to update the underlying model. Results show that the proposed finetuning strategy substantially improves compression model robustness. Overall, our methodology is simple, effective, and generalizable, making it attractive for developing robust learnt image compression solutions. All materials are publicly accessible at https://njuvision.github.io/RobustNIC for reproducible research.
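The attack described in the abstract can be sketched in miniature: inject a bounded perturbation into a source image so that the codec's reconstruction of the perturbed input diverges maximally from the reconstruction of the original. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the "codec" is a toy lossy linear auto-encoder, the noise threshold is an L-infinity bound `eps`, and the distortion measure is squared L2 distance between reconstructions; the names `reconstruct`, `attack`, `eps`, and `alpha` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned codec: a lossy linear auto-encoder that
# compresses a 16-dim "image" to 4 dims and decodes it back.
# Purely illustrative, not a real neural compression model.
W_enc = rng.normal(size=(4, 16)) / 4.0
W_dec = rng.normal(size=(16, 4)) / 2.0

def reconstruct(x):
    """The compression round trip: encode, then decode."""
    return W_dec @ (W_enc @ x)

def attack(x, eps=0.05, alpha=0.01, steps=20):
    """Find a perturbation d with ||d||_inf <= eps (the noise threshold)
    that maximizes reconstruction distortion
    ||reconstruct(x + d) - reconstruct(x)||^2.
    For this linear codec the gradient is analytic: 2 A^T A d, A = W_dec W_enc;
    a real attack would obtain it by backpropagation through the network."""
    A = W_dec @ W_enc
    d = rng.uniform(-eps, eps, size=x.shape) * 0.1  # small random start
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ d)                  # gradient of ||A d||^2
        d = np.clip(d + alpha * np.sign(grad), -eps, eps)  # projected ascent
    return d

x = rng.uniform(0.0, 1.0, size=16)  # a toy source "image" in [0, 1]
d = attack(x)
dist_adv = np.sum((reconstruct(x + d) - reconstruct(x)) ** 2)

# Baseline: random noise of the same magnitude distorts far less.
d_rand = rng.uniform(-0.05, 0.05, size=16)
dist_rand = np.sum((reconstruct(x + d_rand) - reconstruct(x)) ** 2)
print(dist_adv > dist_rand)
```

The finetuning stage described next would then mix such adversarial examples with clean source images in each training batch and update the codec on both, which is the standard adversarial-training recipe the iterative procedure builds on.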


