Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning

05/18/2023
by Elise Bishoff, et al.

In overhead image segmentation tasks, including additional spectral bands beyond the traditional RGB channels can improve model performance. However, it is still unclear how incorporating this additional data impacts model robustness to adversarial attacks and natural perturbations. For adversarial robustness, the additional information could improve the model's ability to distinguish malicious inputs, or it could simply open new attack avenues and vulnerabilities. For natural perturbations, the additional information could better inform model decisions and dampen perturbation effects, or it could have no significant influence at all. In this work, we characterize the performance and robustness of a multispectral (RGB and near-infrared) image segmentation model subjected to adversarial attacks and natural perturbations. While existing adversarial and natural robustness research has focused primarily on digital perturbations, we prioritize creating realistic perturbations designed with physical-world conditions in mind. For adversarial robustness, we focus on data poisoning attacks; for natural robustness, we extend the ImageNet-C common corruptions for fog and snow so that they perturb the input data coherently and self-consistently across bands. Overall, we find that both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architecture, and that while physically realizable natural perturbations still degrade model performance, the impact differs based on fusion architecture and input data.
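To make the natural-perturbation setup concrete, the following is a minimal, hypothetical sketch of how a fog corruption in the spirit of ImageNet-C could be applied self-consistently to a co-registered RGB and NIR pair: a single shared fog field drives the perturbation in every band, so the two modalities stay physically coherent. The function name coherent_fog, the block-upsampled noise field (standing in for ImageNet-C's plasma-fractal fog), and the reduced NIR scattering factor are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def coherent_fog(rgb, nir, severity=0.4, seed=0):
        # rgb: (H, W, 3) float array in [0, 1]; nir: (H, W) float array in [0, 1].
        rng = np.random.default_rng(seed)
        h, w = nir.shape
        # One low-frequency fog field shared by all bands (block-upsampled
        # noise standing in for ImageNet-C's plasma-fractal fog).
        coarse = rng.random((h // 8 + 1, w // 8 + 1))
        fog = np.kron(coarse, np.ones((8, 8)))[:h, :w] * severity
        # RGB bands receive the full fog contribution.
        fogged_rgb = np.clip(rgb * (1.0 - fog[..., None]) + fog[..., None], 0.0, 1.0)
        # NIR scatters less in haze, so attenuate the same shared field
        # (the 0.5 factor is an illustrative assumption).
        nir_fog = 0.5 * fog
        fogged_nir = np.clip(nir * (1.0 - nir_fog) + nir_fog, 0.0, 1.0)
        return fogged_rgb, fogged_nir

Calling coherent_fog(rgb, nir) on a co-registered image pair returns fogged versions of both modalities driven by the same spatial field, which is the self-consistency property described above; band-specific attenuation can then model how each wavelength responds to the same physical weather condition.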


