Learning with Multiplicative Perturbations

12/04/2019
by Xiulong Yang, et al.

Adversarial Training (AT) and Virtual Adversarial Training (VAT) are regularization techniques that train Deep Neural Networks (DNNs) with adversarial examples generated by adding small but worst-case perturbations to input examples. In this paper, we propose xAT and xVAT, new adversarial training algorithms that generate multiplicative perturbations to input examples for robust training of DNNs. Such perturbations are much more perceptible and interpretable than their additive counterparts exploited by AT and VAT. Furthermore, the multiplicative perturbations can be generated transductively or inductively, while standard AT and VAT only support a transductive implementation. We conduct a series of experiments that analyze the behavior of the multiplicative perturbations and demonstrate that xAT and xVAT match or outperform state-of-the-art classification accuracies across multiple established benchmarks, while being about 30% faster than their additive counterparts. Moreover, the resulting DNNs exhibit distinct weight distributions.
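
For intuition, here is a minimal sketch (assuming a PyTorch setup) that contrasts an additive adversarial perturbation, as used by AT/VAT, with a multiplicative one in the spirit of xAT/xVAT. This is not the paper's implementation: the function names are hypothetical, the loss is AT-style (supervised) rather than VAT's virtual labels, and the one-step gradient-sign construction is only a stand-in for how the perturbation direction might be chosen.

```python
# Conceptual sketch only -- not the authors' xAT/xVAT implementation.
import torch
import torch.nn.functional as F

def additive_perturbation(model, x, y, eps=0.03):
    """AT-style additive perturbation: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def multiplicative_perturbation(model, x, y, eps=0.1):
    """Multiplicative analogue: rescale each input element instead of shifting it."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # x_adv = x * (1 + eps * sign(grad)), i.e. a per-element multiplicative mask
    return (x * (1.0 + eps * grad.sign())).detach()
```

Note that a gradient-based construction like the one above, like standard AT and VAT, is inherently transductive (it needs the specific input at hand); the paper's xAT/xVAT generate their multiplicative perturbations differently, which is what enables the inductive mode mentioned in the abstract.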

