Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation

Adversarial examples have shown that, although highly accurate, machine-learned models, unlike humans, have many weaknesses. Human perception, however, is also fundamentally different from machine perception: we do not see the signals that arrive at the retina but rather a complex recreation of them. In this paper, we explore how machines could likewise recreate their input and investigate the benefits of such augmented perception. To this end, we propose Perceptual Deep Neural Networks (φDNN), which recreate their own input before further processing. The concept is formalized mathematically, and two variants are developed: one based on inpainting the whole image and the other based on a noisy, resized super-resolution recreation. Experiments reveal that φDNNs substantially reduce the accuracy of attacks, surpassing state-of-the-art defenses across 87 adversarial training variations and 100 other pre-processing defenses. Moreover, the recreation process intentionally corrupts the input image. Interestingly, ablation tests show that this corruption, although counter-intuitive, is beneficial. This suggests that the blind spot in vertebrates might analogously be a precursor of visual robustness. Thus, φDNNs reveal that input recreation benefits artificial neural networks much as it does biological ones, shedding light on the importance of the blind spot and opening an area of perception models for robust recognition in artificial intelligence.
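To make the corrupt-then-recreate pipeline concrete, here is a minimal sketch in PyTorch. The `PhiDNN` wrapper, the `inpaint_recreate` and `noisy_resize_recreate` helpers, and the placeholder recreator/classifier modules are illustrative assumptions, not the paper's actual architectures; they only show the shared flow of the two variants: corrupt the input, recreate it, and classify the recreation.

```python
# Illustrative sketch of the phi-DNN idea (assumed names, not the paper's code):
# the classifier never sees the raw signal, only the network's own recreation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def noisy_resize_recreate(x, recreator, scale=0.5, noise_std=0.1):
    """Variant 2 (illustrative): downsample, add noise, then let a
    super-resolution-style network recreate the full-size image."""
    small = F.interpolate(x, scale_factor=scale, mode="bilinear",
                          align_corners=False)
    small = small + noise_std * torch.randn_like(small)
    upsampled = F.interpolate(small, size=x.shape[-2:], mode="bilinear",
                              align_corners=False)
    return recreator(upsampled)

def inpaint_recreate(x, recreator, mask_ratio=0.5):
    """Variant 1 (illustrative): mask random pixels and let an
    inpainting network recreate the whole image."""
    mask = (torch.rand_like(x[:, :1]) > mask_ratio).float()  # (B,1,H,W)
    return recreator(x * mask)

class PhiDNN(nn.Module):
    """Wrap any classifier so it only processes a recreation of its input."""
    def __init__(self, recreator, classifier, recreate_fn=noisy_resize_recreate):
        super().__init__()
        self.recreator = recreator
        self.classifier = classifier
        self.recreate_fn = recreate_fn

    def forward(self, x):
        return self.classifier(self.recreate_fn(x, self.recreator))

# Toy usage with stand-in modules (hypothetical, for shape-checking only).
recreator = nn.Conv2d(3, 3, kernel_size=3, padding=1)
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(3, 10))
model = PhiDNN(recreator, classifier)
logits = model(torch.randn(2, 3, 32, 32))  # -> shape (2, 10)
```

The design point this sketch captures is that the intentional corruption (masking or noisy downsampling) sits in front of the recreation step, so any adversarial perturbation is degraded before the classifier processes the recreated image.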


Related research

Generalized Adversarial Examples: Attacks and Defenses (11/28/2020)
Most of the works follow such definition of adversarial example that is ...

Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity (07/03/2021)
Deep neural networks (DNNs) have been found to be vulnerable to adversar...

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training (06/10/2021)
Deep neural networks (DNNs) are vulnerable to adversarial noise. A range...

On the notion of number in humans and machines (06/27/2019)
In this paper, we performed two types of software experiments to study t...

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients (11/26/2017)
Deep neural networks have proven remarkably effective at solving many cl...

Robust Perception through Analysis by Synthesis (05/23/2018)
The intriguing susceptibility of deep neural networks to minimal input p...

Robustified ANNs Reveal Wormholes Between Human Category Percepts (08/14/2023)
The visual object category reports of artificial neural networks (ANNs) ...
