SEDA: Self-Ensembling ViT with Defensive Distillation and Adversarial Training for robust Chest X-rays Classification

08/15/2023
by   Raza Imam, et al.

Deep learning methods have recently seen increased adoption in medical imaging applications. However, recent work has exposed vulnerabilities in these solutions that could hinder future adoption. In particular, the vulnerability of Vision Transformers (ViT) to adversarial, privacy, and confidentiality attacks raises serious concerns about their reliability in medical settings. This work aims to enhance the robustness of self-ensembling ViTs for the tuberculosis chest X-ray classification task. We propose Self-Ensembling ViT with defensive Distillation and Adversarial training (SEDA). SEDA uses efficient CNN blocks to learn spatial features, at various levels of abstraction, from feature representations extracted at intermediate ViT blocks, which are largely unaffected by adversarial perturbations. Furthermore, SEDA combines adversarial training with defensive distillation for improved robustness against adversaries. Training on adversarial examples improves the model's generalizability and its ability to handle perturbations, while distillation with soft probabilities introduces uncertainty and variation into the output probabilities, making adversarial and privacy attacks more difficult. Extensive experiments with the proposed architecture and training paradigm on a publicly available tuberculosis X-ray dataset show SOTA efficacy of SEDA compared to SEViT: a 70x lighter framework in terms of computational efficiency and enhanced robustness of +9
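The two defenses named in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: defensive distillation trains a student on temperature-softened teacher probabilities, and adversarial training augments the data with FGSM-style perturbations. The temperature value and epsilon below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer probabilities."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Defensive distillation: the student is trained on the teacher's
# soft labels rather than one-hot targets.
teacher_logits = np.array([8.0, 2.0, 1.0])
hard = softmax(teacher_logits, T=1.0)    # near one-hot
soft = softmax(teacher_logits, T=20.0)   # softened targets for the student

def fgsm(x, grad_wrt_x, eps=0.03):
    """Fast Gradient Sign Method perturbation, the standard way to
    generate the adversarial examples used during adversarial training."""
    return np.clip(x + eps * np.sign(grad_wrt_x), 0.0, 1.0)
```

The softened distribution carries more inter-class information and flattens the gradients an attacker can exploit, which is why distillation is paired with adversarial training here.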


