
Improving Adversarial Robustness via Guided Complement Entropy

03/23/2019
by   Hao-Yun Chen, et al.

Model robustness is an important concern, since adding small adversarial perturbations to images can be sufficient to drive model accuracy down to nearly zero. In this paper, we propose a new training objective, "Guided Complement Entropy" (GCE), with two desirable effects: (a) neutralizing the predicted probabilities of incorrect classes, and (b) maximizing the predicted probability of the ground-truth class, particularly once (a) is achieved. Training with GCE encourages models to learn latent representations in which samples of different classes form distinct clusters, which, we argue, improves robustness against adversarial perturbations. Furthermore, compared with state-of-the-art models trained with cross-entropy, the same architectures trained with GCE achieve significantly improved robustness against white-box adversarial attacks, both with and without adversarial training. When no attack is present, training with GCE also outperforms cross-entropy in terms of model accuracy.
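To make effect (a) concrete, the sketch below shows one plausible reading of a GCE-style loss in plain NumPy: the probabilities of the incorrect ("complement") classes are renormalized into a distribution whose entropy is pushed toward uniform, weighted by the ground-truth probability raised to a power `alpha` as a guiding factor. This is an illustrative reconstruction based on the abstract, not the paper's reference implementation; the name `guided_complement_entropy` and the default `alpha=0.2` are assumptions.

```python
import numpy as np

def guided_complement_entropy(probs, labels, alpha=0.2):
    """Sketch of a GCE-style loss (interpretation of Chen et al., 2019).

    probs:  (N, C) softmax probabilities.
    labels: (N,) ground-truth class indices.
    Minimizing this loss flattens the distribution over incorrect
    classes (maximizes its entropy), with each sample weighted by
    its ground-truth probability raised to alpha.
    """
    n, _ = probs.shape
    eps = 1e-12
    p_true = probs[np.arange(n), labels]                     # (N,)
    # Renormalize the incorrect-class probabilities into a distribution.
    comp = probs / np.maximum(1.0 - p_true, eps)[:, None]
    comp[np.arange(n), labels] = 0.0
    # Negative entropy of the complement distribution, per sample
    # (zero entries contribute 0 because comp * log(comp + eps) -> 0).
    neg_entropy = np.sum(comp * np.log(comp + eps), axis=1)
    # Guiding factor: samples already classified confidently weigh more.
    return np.mean((p_true ** alpha) * neg_entropy)
```

Note the loss is non-positive and reaches its minimum when the complement distribution is uniform over the C-1 incorrect classes, i.e. when the model assigns equal residual probability to every wrong class.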


Related research

- Complement Objective Training (03/04/2019): Learning with a primary objective, such as softmax cross entropy for cla...
- Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations (08/22/2021): Upon the discovery of adversarial attacks, robust models have become obl...
- Imbalanced Image Classification with Complement Cross Entropy (09/04/2020): Recently, deep learning models have achieved great success in computer v...
- Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks (08/26/2021): Adversarial defenses train deep neural networks to be invariant to the i...
- Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints (12/14/2020): Convolutional neural networks (CNNs) have achieved state-of-the-art perf...
- On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron (06/16/2020): Deep neural networks have achieved human-level accuracy on almost all pe...
- Effects of Loss Functions And Target Representations on Adversarial Robustness (12/01/2018): Understanding and evaluating the robustness of neural networks against a...