Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness

12/05/2021
by   Konstantinos P. Panousis, et al.

This work explores the potency of stochastic competition-based activations, namely Stochastic Local Winner-Takes-All (LWTA), against powerful (gradient-based) white-box and black-box adversarial attacks, with a particular focus on Adversarial Training settings. We replace the conventional ReLU-based nonlinearities with blocks comprising locally and stochastically competing linear units. Each network layer thus yields a sparse output, determined by the outcome of winner sampling in each block. We rely on the Variational Bayesian framework for training and inference, and incorporate conventional PGD-based adversarial training to increase the overall adversarial robustness. As we experimentally show, the resulting networks yield state-of-the-art robustness against powerful adversarial attacks while retaining a very high classification rate on benign data.
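To make the competition mechanism concrete, below is a minimal PyTorch sketch of a stochastic LWTA layer. This is not the authors' exact formulation: it omits the paper's full variational Bayesian treatment and instead uses a Gumbel-Softmax relaxation for winner sampling; the class and parameter names (StochasticLWTA, block_size, temperature) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticLWTA(nn.Module):
    """Sketch of a stochastic Local Winner-Takes-All (LWTA) layer.

    Linear units are grouped into blocks of `block_size`. Within each block,
    a single "winner" is sampled from a categorical distribution whose logits
    are the unit activations; the remaining units are zeroed, so the layer
    output is sparse. (Assumed simplification of the paper's mechanism.)
    """

    def __init__(self, in_features, out_features, block_size=2, temperature=0.67):
        super().__init__()
        assert out_features % block_size == 0, "out_features must be divisible by block_size"
        self.linear = nn.Linear(in_features, out_features)
        self.block_size = block_size
        self.temperature = temperature

    def forward(self, x):
        h = self.linear(x)                                  # (batch, out_features)
        blocks = h.view(x.size(0), -1, self.block_size)     # (batch, n_blocks, block_size)
        if self.training:
            # Relaxed categorical (Gumbel-Softmax) sampling of one winner per block,
            # keeping the operation differentiable for gradient-based training.
            mask = F.gumbel_softmax(blocks, tau=self.temperature, hard=True, dim=-1)
        else:
            # At test time, pick the most probable unit deterministically
            # (one common choice; sampling could also be retained here).
            idx = blocks.argmax(dim=-1, keepdim=True)
            mask = torch.zeros_like(blocks).scatter_(-1, idx, 1.0)
        return (blocks * mask).view(x.size(0), -1)           # sparse layer output
```

In a full model along these lines, each ReLU nonlinearity would be swapped for such a block-competing layer, with PGD-based adversarial training applied on top as described in the abstract.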


