Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks

04/30/2021
by   Jun-Ho Choi, et al.
Recently, the vulnerability of deep image classification models to adversarial attacks has been investigated extensively. However, this issue has not been thoroughly studied for image-to-image models, which can differ from classifiers in quantitative evaluation, consequences of attacks, and defense strategies. To address this, we present a comprehensive investigation into the vulnerability of deep image-to-image models to adversarial attacks. For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints, such as output quality degradation due to attacks, transferability of adversarial examples across different tasks, and characteristics of the perturbations. We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks can differ largely depending on various factors, e.g., attack methods and task objectives. In addition, we analyze the effectiveness of conventional defense methods used for classification models in improving the robustness of image-to-image models.
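The attacks studied here differ from classification attacks in their objective: instead of flipping a label, the adversary maximizes distortion of the model's output image under a small input-perturbation budget. The sketch below illustrates this idea with an iterative FGSM-style attack on a toy linear "image-to-image model" (a stand-in for a real network, so the gradient can be written analytically); the model, step sizes, and budget are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image-to-image model": a fixed linear map acting on a flattened
# image. This is only a stand-in for a deep network such as a denoiser.
W = rng.normal(size=(64, 64)) / 8.0

def model(x):
    return W @ x

def attack(x, eps=0.05, alpha=0.01, steps=10):
    """Iterative FGSM-style attack: maximize the output distortion
    ||f(x + delta) - f(x)||^2 under an L-infinity budget eps.
    For the linear model the gradient w.r.t. delta is analytic."""
    delta = rng.uniform(-eps, eps, size=x.shape)  # random start
    clean_out = model(x)
    for _ in range(steps):
        grad = 2.0 * W.T @ (model(x + delta) - clean_out)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta

x = rng.uniform(0.0, 1.0, size=64)  # a "clean image" (flattened)
x_adv = attack(x)

clean_out = model(x)
adv_dist = np.linalg.norm(model(x_adv) - clean_out)
rand_dist = np.linalg.norm(model(x + rng.uniform(-0.05, 0.05, 64)) - clean_out)
print(adv_dist > rand_dist)  # adversarial noise distorts the output
                             # far more than random noise of equal budget
```

With a real network, `grad` would come from backpropagation rather than the closed form above; the comparison against an equal-budget random perturbation mirrors how output quality degradation is quantified for image-to-image tasks.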


