MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer

05/02/2022
by Shoukun Sun, et al.

Robust self-training (RST) can improve the adversarial robustness of image classification models without significantly sacrificing their generalizability. However, RST and other state-of-the-art defense approaches fail to preserve generalizability and to reproduce their strong adversarial robustness on small medical image datasets. In this work, we propose Multi-Instance RST with a Drop-Max layer (MIRST-DM), which trains on a sequence of iteratively generated adversarial instances to learn smoother decision boundaries on small datasets. The proposed drop-max layer eliminates unstable features and helps learn representations that are robust to image perturbations. The approach was validated on a small breast ultrasound dataset of 1,190 images, and the results demonstrate that it achieves state-of-the-art adversarial robustness against three prevalent attacks.
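The abstract describes the two components only at a high level, so the sketch below is a minimal PyTorch illustration under assumptions of our own: the drop-max layer is read here as zeroing the single strongest activation in each feature map during training, and the "sequence of iteratively generated adversarial instances" is read as the intermediate iterates of a PGD-style attack collected as extra training instances. The names (DropMax, pgd_instances) and all hyperparameters are hypothetical placeholders, not taken from the paper.

```python
import torch
import torch.nn as nn


class DropMax(nn.Module):
    """Illustrative drop-max layer (assumption): during training, zero the
    largest activation in each feature map, on the reading that the strongest
    responses are the "unstable features" most affected by perturbations."""

    def forward(self, x):  # x: (N, C, H, W)
        if not self.training:
            return x
        n, c, h, w = x.shape
        flat = x.reshape(n, c, -1)
        # index of the maximum activation in each (sample, channel) map
        max_idx = flat.argmax(dim=-1, keepdim=True)
        mask = torch.ones_like(flat)
        mask.scatter_(-1, max_idx, 0.0)  # drop the max activation
        return (flat * mask).reshape(n, c, h, w)


def pgd_instances(model, x, y, eps=0.03, step=0.01, iters=5):
    """Collect the intermediate iterates of a PGD-style attack as extra
    training instances (one possible reading of the multi-instance idea;
    eps, step, and iters are placeholder values)."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    instances = []
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + step * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
        instances.append(x_adv)
    return instances
```

In such a training loop, the instances returned by pgd_instances would typically be appended to the clean mini-batch, with DropMax inserted after a convolutional block; where the paper actually places the layer and how it selects features to drop are not specified in the abstract.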

