On Adversarial Robustness of Deep Image Deblurring

Recent approaches employ deep learning-based solutions to recover a sharp image from its blurry observation. This paper introduces adversarial attacks against deep learning-based image deblurring methods and evaluates the robustness of these neural networks to untargeted and targeted attacks. We demonstrate that an imperceptible distortion can significantly degrade the performance of state-of-the-art deblurring networks and even produce drastically different content in the output, indicating a strong need for adversarially robust training not only in classification but also in image recovery.
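The untargeted attack described above can be sketched as projected gradient ascent on the restoration error within an L-infinity ball. The snippet below is a minimal illustration, not the paper's method: the deblurring network is replaced by a hypothetical linear map `W` so the gradient is analytic, and the epsilon/step-size values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-in for a deblurring network: f(x) = W @ x.
# The paper attacks deep CNNs; a linear map keeps this sketch self-contained.
n = 64                                  # flattened image size (e.g. 8x8 grayscale)
W = rng.normal(size=(n, n)) / np.sqrt(n)

def untargeted_pgd(x_blurry, y_sharp, eps=8 / 255, alpha=2 / 255, steps=20):
    """Find an imperceptible delta (max|delta| <= eps) that maximizes the
    restoration MSE ||f(x + delta) - y||^2 via sign-gradient ascent."""
    delta = np.zeros_like(x_blurry)
    for _ in range(steps):
        residual = W @ (x_blurry + delta) - y_sharp
        grad = 2.0 * W.T @ residual     # analytic gradient of the MSE w.r.t. delta
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)  # ascend, then project
    return np.clip(x_blurry + delta, 0.0, 1.0)  # keep a valid image range

x = rng.uniform(size=n)                 # blurry observation
y = rng.uniform(size=n)                 # ground-truth sharp image
x_adv = untargeted_pgd(x, y)
clean_err = float(np.sum((W @ x - y) ** 2))
adv_err = float(np.sum((W @ x_adv - y) ** 2))
```

A targeted variant would instead *descend* on the distance to an attacker-chosen output, steering the network toward drastically different content rather than merely degrading it.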


Related research

Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons (01/31/2022)
We identify fragile and robust neurons of deep learning architectures us...

Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks (04/12/2019)
Single-image super-resolution aims to generate a high-resolution version...

Adversarial Attack on Deep Learning-Based Splice Localization (04/17/2020)
Regarding image forensics, researchers have proposed various approaches ...

ItoV: Efficiently Adapting Deep Learning-based Image Watermarking to Video Watermarking (05/04/2023)
Robust watermarking tries to conceal information within a cover image/vi...

Modeling Deep Learning Based Privacy Attacks on Physical Mail (12/22/2020)
Mail privacy protection aims to prevent unauthorized access to hidden co...

Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower (08/24/2019)
Access to online visual search engines implies sharing of private user c...

A Deep, Information-theoretic Framework for Robust Biometric Recognition (02/23/2019)
Deep neural networks (DNN) have been a de facto standard for nowadays bi...
