AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks

02/19/2020
by   Xiao Wang, et al.

Designing effective defenses against adversarial attacks is a crucial topic, as deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars. Conventional defense methods, although promising, are largely limited by their single-source, single-cost nature: robustness gains tend to plateau as the defenses are made increasingly stronger, while the cost tends to amplify. In this paper, we study principles for designing multi-source, multi-cost schemes in which defense performance is boosted by multiple defending components. Based on this motivation, we propose a multi-source, multi-cost defense scheme, Adversarially Trained Model Switching (AdvMS), that inherits the advantages of two leading schemes: adversarial training and random model switching. We show that the multi-source nature of AdvMS mitigates the performance-plateauing issue, and its multi-cost nature enables improving robustness at a flexible, adjustable combination of costs across different factors, which can better suit specific restrictions and needs in practice.
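The random model switching component of AdvMS can be illustrated with a minimal sketch. Note this is a hypothetical simplification, not the paper's implementation: the class name `AdvMSEnsemble` and the toy stand-in "models" are assumptions for illustration only. The idea is that each query is answered by a sub-model drawn at random from a pool of adversarially trained models, so an attacker cannot know in advance which model's gradients to exploit.

```python
import random

class AdvMSEnsemble:
    """Illustrative sketch (hypothetical): random switching over a pool
    of adversarially trained models, one chosen per query."""

    def __init__(self, models, seed=None):
        # Each entry stands in for an independently adversarially
        # trained network; here, any callable input -> prediction.
        self.models = list(models)
        self.rng = random.Random(seed)

    def predict(self, x):
        # Switch to a randomly selected sub-model for every query,
        # so the attacker faces a stochastic target.
        model = self.rng.choice(self.models)
        return model(x)

# Toy stand-ins for adversarially trained classifiers.
model_a = lambda x: ("a", x > 0)
model_b = lambda x: ("b", x > 0)

defense = AdvMSEnsemble([model_a, model_b], seed=0)
tag, pred = defense.predict(3.0)
```

In practice the pool members would be full networks trained with adversarial training rather than simple callables; the switching logic itself is the part this sketch captures.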


Related research

- 06/14/2022: Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO ("This work conducts the first analysis on the robustness against adversar...")
- 08/20/2019: Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses ("Despite achieving remarkable success in various domains, recent studies ...")
- 12/10/2020: An Empirical Review of Adversarial Defenses ("From face recognition systems installed in phones to self-driving cars, ...")
- 01/11/2020: Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses ("In this paper, we explore the robustness of the Multi-Task Deep Neural N...")
- 06/15/2018: Non-Negative Networks Against Adversarial Attacks ("Adversarial attacks against Neural Networks are a problem of considerabl...")
- 06/03/2022: Gradient Obfuscation Checklist Test Gives a False Sense of Security ("One popular group of defense techniques against adversarial attacks is b...")
- 04/03/2021: Mitigating Gradient-based Adversarial Attacks via Denoising and Compression ("Gradient-based adversarial attacks on deep neural networks pose a seriou...")
