Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness

03/30/2023
by Timothy Redgrave, et al.

Neural networks have proven to be both highly effective in computer vision and highly vulnerable to adversarial attacks. Consequently, as the use of neural networks grows on the strength of their unrivaled performance, so too does the threat posed by adversarial attacks. In this work, we work towards addressing the challenge of adversarial robustness by exploring the relationship between the mini-batch size used during adversarial sample generation and the strength of the adversarial samples produced. We demonstrate that increasing the mini-batch size decreases the efficacy of the samples produced, and we draw connections between this observation and the phenomenon of vanishing gradients. We then formulate loss functions such that adversarial sample strength is not degraded by mini-batch size. Our findings highlight the risk of underestimating the true (practical) strength of adversarial attacks, and correspondingly of overestimating a model's robustness. We share our code so that others can replicate our experiments and further explore the connection between batch size and adversarial sample strength.
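As a rough illustration of the setup the abstract describes (not the authors' implementation), the sketch below generates adversarial samples for an entire mini-batch at once, with the per-sample losses combined by a configurable reduction before the gradient is taken. Under a "mean" reduction, each sample's gradient is scaled by 1/batch_size, which is one simple way the effective per-sample signal can shrink as the batch grows. The function name, attack (L_inf PGD with cross-entropy), step sizes, and epsilon are illustrative placeholders, not details taken from the paper.

```python
# Hypothetical sketch: batched adversarial sample generation where the attack
# loss is reduced over the mini-batch before backpropagation.
import torch
import torch.nn.functional as F

def generate_batch_adversarial(model, x, y, eps=8 / 255, alpha=2 / 255,
                               iters=10, reduction="mean"):
    """Illustrative L_inf PGD over a mini-batch; `reduction` controls how the
    per-sample losses are combined before the gradient is computed."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y, reduction=reduction)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # A sign step ignores gradient scale; attacks that use the raw
            # gradient magnitude are the ones directly affected by the
            # 1/batch_size factor introduced by reduction="mean".
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```

Comparing samples generated one at a time against samples generated in a large mini-batch, with the attack budget held fixed, is the kind of controlled comparison the abstract alludes to when discussing how batch size can affect adversarial sample strength.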

