Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images

02/04/2021
by Jiasong Chen, et al.

Machine learning technologies using deep neural networks (DNNs), especially convolutional neural networks (CNNs), have made automated, accurate, and fast medical image analysis a reality for many applications, and some DNN-based medical image analysis systems have even been FDA-cleared. Despite this progress, challenges remain in building DNNs that are as reliable as human expert doctors. It is known that DNN classifiers may not be robust to noise: by adding a small amount of noise to an input image, an attacker can cause a DNN classifier to misclassify the noisy image (i.e., an in-distribution adversarial sample) even though it classifies the clean image correctly. Another issue is caused by out-of-distribution samples, which are not similar to any sample in the training set; given such a sample as input, the output of a DNN becomes meaningless. In this study, we investigated the in-distribution (IND) and out-of-distribution (OOD) adversarial robustness of a representative CNN for lumbar disk shape reconstruction from spine MR images. To study the relationship between dataset size and robustness to IND adversarial attacks, we used a data augmentation method to create training sets with different levels of shape variations. We used a projected gradient descent (PGD)-based algorithm for IND adversarial attacks and extended it to generate OOD adversarial samples for model testing. The results show that IND adversarial training can improve the CNN's robustness to IND adversarial attacks, and that larger training datasets may lead to higher IND robustness. However, defending against OOD adversarial attacks remains a challenge.
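To make the attack setting concrete, below is a minimal sketch of an untargeted PGD attack in PyTorch. It assumes an L-infinity perturbation budget, image intensities normalized to [0, 1], and a mean-squared-error loss between the predicted and ground-truth disk shapes (shape reconstruction is a regression task, so the loss differs from the cross-entropy used in classification attacks); the budget eps, step size alpha, and step count are illustrative placeholders, not the values used in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y_shape, eps=8/255, alpha=2/255, steps=10):
    """Untargeted PGD attack under an L-infinity budget.

    Repeatedly steps along the sign of the input gradient to increase
    the reconstruction loss, then projects back into the eps-ball
    around the clean image and into the valid intensity range.
    eps, alpha, and steps are illustrative, not the paper's settings.
    """
    x0 = x.detach()
    # Random start inside the eps-ball around the clean image.
    x_adv = x0 + torch.empty_like(x0).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Regression loss between predicted and ground-truth shapes
        # (an assumption; the paper's exact loss may differ).
        loss = F.mse_loss(model(x_adv), y_shape)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                # ascend the loss
            x_adv = x0 + (x_adv - x0).clamp(-eps, eps)         # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                      # keep valid intensities
    return x_adv.detach()
```

In IND adversarial training, each clean minibatch would be replaced (or augmented) by `pgd_attack` outputs before the usual gradient step. The abstract does not specify how the OOD extension works; one plausible variant, stated here purely as an assumption, starts from an input far from the training distribution (e.g., random noise) and runs the same loss-ascent loop without the proximity constraint to a clean image.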


Related research

- 06/02/2022: Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection
- 05/19/2020: Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks
- 07/10/2021: Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis
- 10/13/2022: AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient
- 08/19/2019: Human uncertainty makes classification more robust
- 03/27/2023: Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection
- 12/11/2019: Detecting and Correcting Adversarial Images Using Image Processing Operations and Convolutional Neural Networks
