Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints

12/14/2020
by Xin Li, et al.

Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various computer vision tasks. However, recent studies show that these models are vulnerable to carefully crafted adversarial samples and suffer a significant performance drop when predicting them. Many methods have been proposed to improve adversarial robustness, e.g., adversarial training and new loss functions that learn adversarially robust feature representations. Here we offer a unique insight into the predictive behavior of CNNs: they tend to misclassify adversarial samples into the most probable false classes. This inspires us to propose a new Probabilistically Compact (PC) loss with logit constraints, which can be used as a drop-in replacement for the cross-entropy (CE) loss to improve a CNN's adversarial robustness. Specifically, the PC loss enlarges the probability gaps between the true class and the false classes, while the logit constraints prevent these gaps from being eroded by a small perturbation. We extensively compare our method with the state of the art on large-scale datasets under both white-box and black-box attacks to demonstrate its effectiveness. The source code is available at https://github.com/xinli0928/PC-LC.
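The core idea described above, enlarging the probability gap between the true class and every false class, can be sketched as a hinge penalty on softmax probabilities. This is an illustrative reconstruction under stated assumptions, not the authors' exact formulation (see the linked repository for that): the margin hyperparameter `xi` and the function name `pc_style_loss` are assumptions, and the logit-constraint component is omitted.

```python
import numpy as np

def pc_style_loss(logits, targets, xi=0.05):
    """Sketch of a probability-gap (margin) loss in the spirit of PC loss.

    Each false-class probability is pushed to sit at least `xi` below the
    true-class probability; classes already separated by the margin incur
    no penalty. `xi` is a hypothetical margin hyperparameter.
    """
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)   # (N, C)

    n = len(targets)
    true_p = probs[np.arange(n), targets][:, None]             # (N, 1)

    # Hinge penalty on every false class whose probability comes
    # within `xi` of the true-class probability.
    margins = probs + xi - true_p                              # (N, C)
    margins[np.arange(n), targets] = 0.0                       # skip true class
    return np.maximum(margins, 0.0).sum(axis=1).mean()
```

A confidently correct prediction yields a near-zero loss, while a uniform (maximally ambiguous) prediction is penalized on every false class, which is what drives the probability gaps apart during training.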


