
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training

by   Haichao Zhang, et al.

We introduce a feature scattering-based adversarial training approach for improving model robustness against adversarial attacks. Conventional adversarial training approaches leverage a supervised scheme (either targeted or non-targeted) when generating attacks for training, which typically suffers from issues such as label leaking, as noted in recent works. In contrast, the proposed approach generates adversarial images for training through feature scattering in the latent space, which is unsupervised in nature and avoids label leaking. More importantly, this new approach generates perturbed images in a collaborative fashion, taking inter-sample relationships into consideration. We analyze model robustness and demonstrate the effectiveness of the proposed approach through extensive experiments on different datasets, comparing against state-of-the-art approaches.
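The core idea, maximizing a feature-space distance between clean and perturbed inputs under an L-infinity budget without using labels, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a fixed linear-tanh feature extractor and a per-sample squared feature distance in place of the paper's collaborative, optimal-transport-based scattering objective, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def feature_scattering_perturb(X, W, eps=0.1, alpha=0.02, steps=5, seed=0):
    """Sketch of feature-scattering perturbation generation.

    Perturbs a batch X so that its latent features, tanh(X @ W), move
    away from the clean features, while each perturbation stays inside
    an L-infinity ball of radius eps. No labels are used anywhere.
    """
    rng = np.random.default_rng(seed)
    Z_clean = np.tanh(X @ W)                       # clean latent features
    delta = rng.uniform(-eps, eps, size=X.shape)   # random start (delta = 0 has zero gradient)
    for _ in range(steps):
        Z_pert = np.tanh((X + delta) @ W)
        # Gradient of the scattering loss ||Z_pert - Z_clean||^2
        # with respect to the perturbed input (manual backprop
        # through the tanh and the linear map).
        grad = 2.0 * ((Z_pert - Z_clean) * (1.0 - Z_pert**2)) @ W.T
        # Ascent step on the loss, then projection back into the eps-ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return X + delta

# Toy usage: 8 samples, 4 input dims, 3 latent dims.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))
W = rng.normal(size=(4, 3))
X_adv = feature_scattering_perturb(X, W)
scatter = np.sum((np.tanh(X_adv @ W) - np.tanh(X @ W))**2)
```

In the full method, the perturbed images produced this way are fed to a standard training step; because the objective above only compares feature distributions, no label is ever consulted during attack generation, which is what sidesteps label leaking.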
