Learn Robust Features via Orthogonal Multi-Path

10/23/2020
by   Kun Fang, et al.

It is now widely known that adversarial attacks can fool deep neural networks with clean images carrying invisible perturbations. To defend against adversarial attacks, we design a block containing multiple paths to learn robust features, where the parameters of these paths are required to be mutually orthogonal. The so-called Orthogonal Multi-Path (OMP) block can be placed in any layer of a neural network. Via forward learning and backward correction, one OMP block makes the neural network learn features that suit all the paths and hence are expected to be robust. With careful design and thorough experiments on, e.g., the positions at which the orthogonality constraint is imposed and the trade-off between variety and accuracy, the robustness of the neural networks is significantly improved. For example, under a white-box PGD attack with l_∞ bound 8/255 (a fierce attack that drops the accuracy of many vanilla neural networks to nearly 10% on CIFAR10), VGG16 with the proposed OMP block keeps over 50% accuracy. Under black-box attacks, neural networks equipped with an OMP block achieve accuracy over 80%. The performance under both white-box and black-box attacks is much better than that of existing state-of-the-art adversarial defenders.
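The abstract does not spell out the block's implementation, but the core idea, several parallel paths whose parameters are pushed toward mutual orthogonality, can be sketched as follows. This is a minimal NumPy illustration under assumptions of ours: each path is taken to be a linear map, the path outputs are averaged, and the orthogonality constraint is modeled as a soft penalty on the cross-Gram matrices W_i W_j^T. The function names (`omp_forward`, `orthogonality_penalty`) are hypothetical, not from the paper.

```python
import numpy as np

def omp_forward(x, weights):
    """Sketch of an OMP-style multi-path block (assumed design):
    each path applies its own linear map to the input, and the
    path outputs are averaged into a single feature vector."""
    return np.mean([W @ x for W in weights], axis=0)

def orthogonality_penalty(weights):
    """Soft penalty encouraging the parameters of different paths to be
    mutually orthogonal: the sum of squared Frobenius norms of the
    cross-Gram matrices W_i @ W_j.T for all pairs i != j. The penalty
    is zero exactly when every row of one path's weight matrix is
    orthogonal to every row of the others."""
    penalty = 0.0
    for i in range(len(weights)):
        for j in range(i + 1, len(weights)):
            cross = weights[i] @ weights[j].T
            penalty += np.sum(cross ** 2)
    return penalty

# Two paths whose weight rows are mutually orthogonal: zero penalty.
weights = [np.array([[1.0, 0.0, 0.0]]),
           np.array([[0.0, 1.0, 0.0]])]
print(orthogonality_penalty(weights))            # zero for orthogonal paths
print(omp_forward(np.array([1.0, 2.0, 3.0]), weights))
```

In training, such a penalty would be added to the task loss so that gradient descent steers the paths apart; the paper's "forward learning and backward correction" procedure is not reproduced here.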


