Effects of Loss Functions And Target Representations on Adversarial Robustness

12/01/2018
by Sean Saito, et al.

Understanding and evaluating the robustness of neural networks against adversarial attacks is a subject of growing interest. Attacks proposed in the literature usually target models that are trained to minimize cross-entropy loss and use softmax activations. In this work, we present experimental results that suggest the importance of considering other loss functions and target representations. Specifically, (1) training on mean-squared error and (2) representing targets as codewords generated from a random codebook yield a marked increase in robustness against targeted and untargeted attacks under both white-box and black-box settings. Our results show an increase in accuracy against untargeted attacks of up to 98.7% and a decrease in targeted-attack success rates of up to 99.8%. For our experiments, we use the DenseNet architecture trained on three datasets (CIFAR-10, MNIST, and Fashion-MNIST).
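The two ideas in the abstract, MSE loss and codeword targets drawn from a random codebook, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the codebook dimension, the Gaussian construction, and the nearest-codeword decoding rule here are assumptions for demonstration purposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters: 10 classes, 32-dimensional codewords
# (the paper's exact codebook construction may differ).
NUM_CLASSES, CODE_DIM = 10, 32

# Random codebook: each class is assigned one fixed random codeword.
codebook = rng.standard_normal((NUM_CLASSES, CODE_DIM))

def targets_from_labels(labels):
    """Replace one-hot targets with the classes' codewords."""
    return codebook[labels]

def mse_loss(outputs, targets):
    """Mean-squared error between network outputs and codeword targets."""
    return np.mean((outputs - targets) ** 2)

def predict(outputs):
    """Decode a prediction by finding the nearest codeword (Euclidean)."""
    dists = np.linalg.norm(outputs[:, None, :] - codebook[None, :, :], axis=-1)
    return np.argmin(dists, axis=1)

# Toy usage: outputs that exactly match the target codewords decode
# back to the original labels and incur zero loss.
labels = np.array([3, 7])
outputs = targets_from_labels(labels)
assert mse_loss(outputs, targets_from_labels(labels)) == 0.0
assert (predict(outputs) == labels).all()
```

During training, the network's final layer would regress onto these codewords instead of producing softmax probabilities; at test time, classification reduces to nearest-codeword decoding as above.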


Related research

08/20/2020 · Towards adversarial robustness with 01 loss neural networks
Motivated by the general robustness properties of the 01 loss we propose...

07/25/2018 · Unbounded Output Networks for Classification
We proposed the expected energy-based restricted Boltzmann machine (EE-R...

06/29/2023 · Group-based Robustness: A General Framework for Customized Robustness in the Real World
Machine-learning models are known to be vulnerable to evasion attacks th...

03/22/2023 · Distribution-restrained Softmax Loss for the Model Robustness
Recently, the robustness of deep learning models has received widespread...

08/24/2019 · Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower
Access to online visual search engines implies sharing of private user c...

03/14/2021 · BreakingBED – Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks
Deploying convolutional neural networks (CNNs) for embedded applications...

03/23/2019 · Improving Adversarial Robustness via Guided Complement Entropy
Model robustness has been an important issue, since adding small adversa...
