Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis

07/10/2021
by Anisie Uwimana, et al.

Deep learning models have become a popular choice for medical image analysis. However, their poor generalization performance limits real-world deployment, as robustness is critical in medical applications. For instance, state-of-the-art Convolutional Neural Networks (CNNs) fail to detect adversarial samples or samples drawn statistically far from the training distribution. In this work, we experimentally evaluate the robustness of a Mahalanobis distance-based confidence score, a simple yet effective method for detecting abnormal input samples, on the task of classifying malaria-parasitized versus uninfected cells. Results indicate that the Mahalanobis confidence-score detector improves the performance and robustness of deep learning models, achieving state-of-the-art performance on both out-of-distribution (OOD) and adversarial samples.
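The Mahalanobis confidence score referenced in the abstract (following Lee et al.'s unified framework) fits a class-conditional Gaussian with per-class means and a shared covariance over a network's feature representations, then scores a test input by its negative Mahalanobis distance to the closest class. The sketch below is an illustrative, minimal version over synthetic 2-D features; the function names and toy data are assumptions, not the authors' code, and a real pipeline would use penultimate-layer CNN features instead.

```python
import numpy as np

def fit_gaussian_stats(features, labels):
    """Fit per-class means and a shared (tied) covariance over feature
    vectors, as in the Mahalanobis-detector setup of Lee et al."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    # Small ridge term keeps the inverse numerically stable.
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_confidence(x, means, precision):
    """Confidence score: negative Mahalanobis distance to the closest
    class-conditional Gaussian. More negative => more likely OOD."""
    dists = [float((x - mu) @ precision @ (x - mu)) for mu in means.values()]
    return -min(dists)

# Toy demonstration with synthetic 2-D "features" (hypothetical data):
rng = np.random.default_rng(0)
in_a = rng.normal([0.0, 0.0], 0.5, size=(200, 2))  # class 0 cluster
in_b = rng.normal([3.0, 3.0], 0.5, size=(200, 2))  # class 1 cluster
feats = np.vstack([in_a, in_b])
labels = np.array([0] * 200 + [1] * 200)

means, precision = fit_gaussian_stats(feats, labels)
in_score = mahalanobis_confidence(np.array([0.1, -0.2]), means, precision)
ood_score = mahalanobis_confidence(np.array([10.0, -8.0]), means, precision)
assert in_score > ood_score  # the in-distribution sample scores higher
```

In practice a threshold on this score (chosen on a validation set) separates in-distribution inputs from OOD and adversarial ones; the paper's detector additionally aggregates scores across multiple network layers.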


Related research

- Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images (02/04/2021)
- A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks (07/10/2018)
- Deep-learning models in medical image analysis: Detection of esophagitis from the Kvasir Dataset (01/06/2023)
- TrustGAN: Training safe and trustworthy deep learning models through generative adversarial networks (11/25/2022)
- Towards an Intrinsic Definition of Robustness for a Classifier (06/09/2020)
- Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation (07/14/2023)
- Adversarially Robust Medical Classification via Attentive Convolutional Neural Networks (10/26/2022)
