Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models

by Achyut Mani Tripathi, et al.

Early identification of COVID-19 using deep models trained on chest X-ray and CT images has gained considerable attention from researchers seeking to speed up the detection of active COVID-19 cases. Such deep models can aid hospitals that lack specialists or radiologists, particularly in remote areas. Various deep models have been proposed to detect COVID-19 cases, but little work has been done to defend these models against adversarial attacks, which can fool a deep model with small perturbations of image pixels. This paper evaluates the performance of deep COVID-19 models under adversarial attacks. It also proposes an efficient yet effective Fuzzy Unique Image Transformation (FUIT) technique that downsamples image pixels into intervals. The images obtained after the FUIT transformation are then used to train a secure deep model that preserves high diagnostic accuracy for COVID-19 cases while providing a reliable defense against adversarial attacks. Experiments show that the proposed approach protects the deep model against six adversarial attacks while maintaining high accuracy in classifying COVID-19 cases from chest X-ray and CT image datasets. The results also suggest that careful inspection is required before deep models are applied in practice to diagnose COVID-19 cases.
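The core idea of mapping pixel values into intervals can be sketched as a simple quantization, in the spirit of feature-squeezing defenses. Note this is only an illustrative stand-in: the paper's FUIT assigns pixels to intervals via fuzzy membership functions, whose exact form is not given in the abstract, and the `num_intervals` parameter here is a hypothetical choice.

```python
import numpy as np

def interval_transform(image, num_intervals=32):
    """Map each pixel into a fixed interval (hard-quantization sketch).

    FUIT itself uses fuzzy membership to assign pixels to intervals;
    here we use hard interval boundaries as an assumed simplification.
    Each pixel is replaced by the midpoint of its interval, so small
    adversarial perturbations that stay inside one interval are erased.
    """
    image = np.asarray(image, dtype=np.float32)
    width = 256.0 / num_intervals          # interval width for 8-bit pixels
    idx = np.clip(np.floor(image / width), 0, num_intervals - 1)
    return (idx * width + width / 2.0).astype(np.float32)

# Pixels 0 and 7 fall in the same interval and map to the same value,
# while 8 crosses an interval boundary.
x = np.array([[0, 7, 8, 255]], dtype=np.float32)
print(interval_transform(x, num_intervals=32))  # → [[  4.   4.  12. 252.]]
```

A model trained on such transformed images sees a coarser pixel space, which is what makes small-perturbation attacks less effective at inference time.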


Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks

Under the epidemic of the novel coronavirus disease 2019 (COVID-19), che...

Improved Detection of Adversarial Attacks via Penetration Distortion Maximization

This paper is concerned with the defense of deep models against adversar...

Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks

Recently, there have been several successful deep learning approaches fo...

Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks

Recently, the vulnerability of deep image classification models to adver...

RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition

Deep neural networks have empowered accurate device-free human activity ...

Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin

Deep models, while being extremely versatile and accurate, are vulnerabl...

Adversarially Trained Model Compression: When Robustness Meets Efficiency

The robustness of deep models to adversarial attacks has gained signific...