Improving Hyperspectral Adversarial Robustness using Ensemble Networks in the Presence of Multiple Attacks

10/28/2022
by   Nicholas Soucy, et al.

Semantic segmentation of hyperspectral images (HSI) has seen great strides in recent years by incorporating knowledge from deep learning RGB classification models. Like their classification counterparts, semantic segmentation models are vulnerable to adversarial examples and require adversarial training to counteract them. Traditional approaches to adversarial robustness focus on training or retraining a single network on attacked data; however, in the presence of multiple attacks these approaches perform worse than networks trained individually on each attack. To combat this issue we propose the Adversarial Discriminator Ensemble Network (ADE-Net), which unifies attack-type detection and adversarial robustness in a single model, preserving optimal per-attack-type weights while robustifying the overall network. In the proposed method, a discriminator network separates data by attack type and routes each sample to its corresponding attack-expert ensemble network. Our approach handles mixtures of multiple attacks while also labeling attack types during testing. We experimentally show that ADE-Net outperforms the baseline, a single network adversarially trained on a mix of multiple attacks, on the Indian Pines, Kennedy Space Center, and Houston HSI datasets.
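The abstract's two-stage design, detect the attack type, then dispatch to an expert trained on that attack, can be sketched in a few lines. This is a hypothetical illustration of the routing logic only, not the paper's implementation: the names (`discriminator`, `make_expert`, `ade_net`) and the stub models are assumptions standing in for trained networks.

```python
# Illustrative sketch of ADE-Net's routing logic (all names are
# hypothetical): a discriminator predicts each input's attack type,
# which selects the matching attack-expert segmentation network.

ATTACK_TYPES = ["fgsm", "pgd", "clean"]  # assumed attack-type labels

def discriminator(sample):
    """Stand-in attack-type detector. In ADE-Net this would be a
    trained classification network; here we pretend it recovers the
    true attack type from a tag on the sample."""
    return sample["attack"]

def make_expert(attack_type):
    """Stand-in attack-expert segmenter, conceptually a network
    adversarially trained on exactly one attack type."""
    def expert(sample):
        return f"segmentation by {attack_type}-expert"
    return expert

# One expert per attack type, as in the ensemble described above.
experts = {t: make_expert(t) for t in ATTACK_TYPES}

def ade_net(sample):
    attack_type = discriminator(sample)           # step 1: detect attack
    prediction = experts[attack_type](sample)     # step 2: route to expert
    return attack_type, prediction                # attack label + output

label, seg = ade_net({"attack": "pgd", "pixels": [0.1, 0.2, 0.3]})
```

Because the discriminator's output is returned alongside the segmentation, the model labels the attack type at test time as a side effect of routing, which matches the behavior the abstract describes.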

Related research

07/25/2022 · SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
Deep neural network-based image classifications are vulnerable to advers...

03/09/2022 · CEU-Net: Ensemble Semantic Segmentation of Hyperspectral Images Using Clustering
Most semantic segmentation approaches of Hyperspectral images (HSIs) use...

10/06/2022 · Towards Out-of-Distribution Adversarial Robustness
Adversarial robustness continues to be a major challenge for deep learni...

06/09/2021 · Towards Defending against Adversarial Examples via Attack-Invariant Features
Deep neural networks (DNNs) are vulnerable to adversarial noise. Their a...

08/25/2021 · Backdoor Attacks on Network Certification via Data Poisoning
Certifiers for neural networks have made great progress towards provable...

09/21/2020 · Adversarial Training with Stochastic Weight Average
Adversarial training deep neural networks often experience serious overf...

10/07/2021 · Improving Adversarial Robustness for Free with Snapshot Ensemble
Adversarial training, as one of the few certified defenses against adver...
