Now You See It, Now You Don't: Adversarial Vulnerabilities in Computational Pathology

06/14/2021
by Alex Foote, et al.

Deep learning models are routinely employed in computational pathology (CPath) for solving problems of diagnostic and prognostic significance. Typically, the generalization performance of CPath models is analyzed using evaluation protocols such as cross-validation and testing on multi-centric cohorts. However, to ensure that such CPath solutions are robust and safe for use in a clinical setting, a critical analysis of their predictive performance and vulnerability to adversarial attacks is required, which is the focus of this paper. Specifically, we show that a highly accurate model for classification of tumour patches in pathology images (AUC > 0.95) can easily be attacked with minimal perturbations that are imperceptible to lay humans and trained pathologists alike. Our analytical results show that it is possible to generate single-instance white-box attacks on specific input images with a high success rate and low perturbation energy. Furthermore, we have also generated a single universal perturbation matrix using the training dataset only which, when added to unseen test images, forces the trained neural network to flip its prediction labels with high confidence at a success rate of > 84%. We systematically analyze the relationship between the perturbation energy of an adversarial attack, its impact on morphological constructs of clinical significance, their perceptibility by a trained pathologist, and saliency maps obtained using deep learning models. Based on our analysis, we strongly recommend that computational pathology models be critically analyzed using the proposed adversarial validation strategy prior to clinical adoption.
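As a rough illustration of the two attack settings described in the abstract, the sketch below shows a gradient-sign (FGSM-style) single-instance white-box attack on a tumour-patch classifier and the application of a precomputed universal perturbation to an unseen test patch. This is only a minimal sketch: the model, tensor shapes, function names, and epsilon budgets are illustrative assumptions, not the authors' exact method.

import torch
import torch.nn.functional as F

def fgsm_attack(model, patch, label, epsilon=2.0 / 255):
    # Single-instance white-box attack: perturb one patch in the direction of
    # the loss gradient so the classifier flips its prediction.
    # patch: (1, C, H, W) float tensor in [0, 1]; label: (1,) long tensor.
    patch = patch.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(patch), label)
    loss.backward()
    # epsilon bounds the perturbation energy, keeping the change visually
    # imperceptible while still pushing the patch across the decision boundary.
    adversarial = (patch + epsilon * patch.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()

def apply_universal_perturbation(patch, universal_delta, epsilon=4.0 / 255):
    # Universal attack at test time: a single perturbation matrix, estimated
    # beforehand on the training set only, is added to any unseen patch.
    delta = universal_delta.clamp(-epsilon, epsilon)
    return (patch + delta).clamp(0.0, 1.0)

In both cases the attack succeeds when the model's predicted label for the returned patch differs from its prediction on the clean input, and the perturbation energy can be measured as the L2 or L-infinity norm of the difference between the two patches.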

Related research

07/04/2019  Adversarial Attacks in Sound Event Classification
Adversarial attacks refer to a set of methods that perturb the input to ...

09/15/2021  Universal Adversarial Attack on Deep Learning Based Prognostics
Deep learning-based time series models are being extensively utilized in...

12/01/2018  FineFool: Fine Object Contour Attack via Attention
Machine learning models have been shown vulnerable to adversarial attack...

05/18/2020  Universalization of any adversarial attack using very few test examples
Deep learning models are known to be vulnerable not only to input-depend...

02/15/2018  ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction
With the excellent accuracy and feasibility, the Neural Networks have be...

02/12/2021  Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective
The booming interest in adversarial attacks stems from a misalignment be...

12/09/2021  Amicable Aid: Turning Adversarial Attack to Benefit Classification
While adversarial attacks on deep image classification models pose serio...
