Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles

11/22/2018
by Edward Grefenstette, et al.

While deep learning has led to remarkable results on a number of challenging problems, researchers have discovered a vulnerability of neural networks in adversarial settings, where small but carefully chosen perturbations to the input can make the models produce extremely inaccurate outputs. This makes these models particularly unsuitable for safety-critical application domains (e.g., self-driving cars), where robustness is extremely important. Recent work has shown that augmenting training with adversarially generated data provides some degree of robustness against test-time attacks. In this paper we investigate how this approach scales as we increase the computational budget given to the defender. We show that increasing the number of parameters in adversarially-trained models increases their robustness, and in particular that ensembling smaller models while adversarially training the entire ensemble as a single model is a more efficient way of spending that budget than simply using a larger single model. Crucially, we show that it is the adversarial training of the ensemble, rather than the ensembling of adversarially trained models, which provides robustness.
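The distinction the abstract draws, adversarially training the ensemble as a single model rather than ensembling separately adversarially-trained members, can be made concrete with a short sketch. The snippet below is a minimal illustration and not the authors' code: it assumes a PyTorch setup, averages member logits so the ensemble acts as one model, and crafts L-infinity PGD adversarial examples against that combined output at each training step. The Ensemble class, pgd_attack helper, and hyperparameter values (eps, alpha, steps) are illustrative assumptions.

```python
# Sketch (assumed PyTorch): adversarially train an ensemble as one model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Ensemble(nn.Module):
    """Wraps several member networks and averages their logits,
    so attacks and gradients see a single combined model."""

    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        return torch.stack([m(x) for m in self.members]).mean(dim=0)


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD attack against the (ensemble) model."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x.detach() + (x_adv - x.detach()).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()


def adversarial_train_step(ensemble, optimizer, x, y):
    """One training step: attack the whole ensemble, then fit it on the
    resulting adversarial examples."""
    ensemble.train()
    x_adv = pgd_attack(ensemble, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(ensemble(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The baseline the paper contrasts this with would instead run the PGD attack and training loop on each member in isolation and only average the members at test time; per the abstract, that variant does not yield the same robustness.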


