Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks

05/19/2020
by Linhai Ma, et al.

Convolutional neural networks (CNNs) have surpassed traditional methods for medical image classification. However, CNNs are vulnerable to adversarial attacks, which may lead to disastrous consequences in medical applications. Although adversarial noise is usually generated by attack algorithms, white-noise-induced adversarial samples can also exist, so the threat is real. In this study, we propose a novel training method, named IMA, to improve the robustness of CNNs against adversarial noise. During training, the IMA method increases the margins of training samples in the input space, i.e., it moves the CNN decision boundaries far away from the training samples to improve robustness. The IMA method is evaluated on four publicly available datasets under strong 100-iteration PGD white-box adversarial attacks, and the results show that the proposed method significantly improves CNN classification accuracy on noisy data while keeping relatively high accuracy on clean data. We hope our approach may facilitate the development of robust applications in the medical field.
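To make the evaluation setup concrete, the sketch below illustrates the kind of white-box L-infinity PGD attack (run for 100 iterations in the paper's evaluation) on a toy logistic-regression classifier. The model, data, and hyperparameters here are hypothetical stand-ins, not the paper's CNNs or datasets; IMA itself (margin estimation during training) is not implemented here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.01, steps=100):
    """L_inf PGD: maximize the logistic loss within an eps-ball around x.

    For a linear model z = x @ w + b with logistic loss,
    d(loss)/d(x) = (sigmoid(z) - y) * w, so each step ascends the loss
    by the sign of this gradient and then projects back into the ball.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad = (p - y) * w                        # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad)     # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy example: one 2-D sample with label 1 (all values hypothetical).
w = np.array([1.0, -1.0]); b = 0.0
x = np.array([0.5, -0.5]); y = 1.0

x_adv = pgd_attack(x, y, w, b)
clean_loss = -np.log(sigmoid(x @ w + b))
adv_loss = -np.log(sigmoid(x_adv @ w + b))
# The perturbation stays inside the eps-ball and strictly increases the loss.
assert np.max(np.abs(x_adv - x)) <= 0.3 + 1e-9
assert adv_loss > clean_loss
```

A model with a larger input-space margin, which is what IMA training aims for, forces such an attack to use a larger eps before it can flip the prediction.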


