Towards an Accurate and Secure Detector against Adversarial Perturbations

05/18/2023
by Chao Wang et al.

The vulnerability of deep neural networks to adversarial perturbations is widely recognized in the computer vision community. From a security perspective, it poses a critical risk to modern vision systems, e.g., the popular Deep Learning as a Service (DLaaS) frameworks. To protect off-the-shelf deep models without modifying them, current algorithms typically detect adversarial patterns through a decomposition that discriminates natural from artificial data. However, these decompositions are biased towards either frequency or spatial discriminability and thus fail to capture subtle adversarial patterns comprehensively. More seriously, they are typically invertible, meaning a successful defense-aware (secondary) adversarial attack (i.e., one that evades the detector while still fooling the model) is practical under the assumption that the adversary is fully aware of the detector (i.e., Kerckhoffs's principle). Motivated by these facts, we propose an accurate and secure adversarial example detector that relies on a spatial-frequency discriminative decomposition with secret keys. It extends the above works in two respects: 1) the introduced Krawtchouk basis provides better spatial-frequency discriminability and is therefore more suitable for capturing adversarial patterns than the common trigonometric or wavelet bases; 2) the extensive set of decomposition parameters is generated by a pseudo-random function with secret keys, thereby blocking defense-aware adversarial attacks. Theoretical and numerical analysis demonstrates the improved accuracy and security of our detector relative to a number of state-of-the-art algorithms.
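The key technical ingredient is the decomposition onto a weighted Krawtchouk basis: the parameter p of the underlying binomial weight shifts where the basis functions concentrate spatially, while the polynomial order acts as a frequency index, which is what gives the basis its joint spatial-frequency discriminability. Below is a minimal NumPy/SciPy sketch of such a decomposition; the function names and the moment layout are our own illustration, not the authors' released code.

```python
import numpy as np
from scipy.special import comb

def weighted_krawtchouk(N, p):
    """Orthonormal weighted Krawtchouk basis of size (N+1) x (N+1).

    Row n samples the weighted polynomial K~_n(x; p, N) at x = 0..N.
    `p` in (0, 1) shifts the spatial localization of the basis
    functions; the order n behaves like a frequency index. The plain
    recurrence below is numerically fine for small patch sizes only.
    """
    x = np.arange(N + 1, dtype=float)
    n = np.arange(N + 1, dtype=float)
    w = comb(N, x) * p**x * (1 - p)**(N - x)          # binomial weight
    rho = ((1 - p) / p)**n / comb(N, n)               # squared norms
    K = np.zeros((N + 1, N + 1))
    K[0] = 1.0
    K[1] = 1.0 - x / (p * N)
    for k in range(1, N):                             # three-term recurrence
        K[k + 1] = ((p * (N - k) + k * (1 - p) - x) * K[k]
                    - k * (1 - p) * K[k - 1]) / (p * (N - k))
    return K * np.sqrt(w / rho[:, None])              # rows are orthonormal

def krawtchouk_moments(f, p1=0.5, p2=0.5):
    """2D Krawtchouk moments of an image patch f (H x W)."""
    Kr = weighted_krawtchouk(f.shape[0] - 1, p1)
    Kc = weighted_krawtchouk(f.shape[1] - 1, p2)
    return Kr @ f @ Kc.T                              # f = Kr.T @ M @ Kc
```

For any fixed (p1, p2) the transform is orthonormal and hence exactly invertible, which is the weakness the abstract points out for fixed decompositions: a defense-aware adversary can push perturbations back through the inverse. The remedy is to make the parameters secret, as sketched next.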

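Under Kerckhoffs's principle the adversary knows the detector's design, so the only thing that can remain secret is a key. Below is a hedged sketch of keyed parameter generation using HMAC-SHA256 as the pseudo-random function; the key schedule, the (p1, p2) range, and the function names are illustrative assumptions rather than the paper's exact construction.

```python
import hashlib
import hmac
import struct

def keyed_decomposition_params(key: bytes, n_units: int,
                               lo: float = 0.2, hi: float = 0.8):
    """Derive per-unit Krawtchouk parameters (p1, p2) from a secret key.

    HMAC-SHA256 serves as the pseudo-random function: without `key`,
    an adversary cannot reproduce the decomposition and therefore
    cannot mount the defense-aware (secondary) attack.
    """
    params = []
    for i in range(n_units):
        tag = hmac.new(key, struct.pack(">I", i), hashlib.sha256).digest()
        u = int.from_bytes(tag[:8], "big") / 2**64    # uniform in [0, 1)
        v = int.from_bytes(tag[8:16], "big") / 2**64
        params.append((lo + (hi - lo) * u, lo + (hi - lo) * v))
    return params

# Example: four keyed (p1, p2) pairs to drive krawtchouk_moments above.
params = keyed_decomposition_params(b"secret-detector-key", 4)
```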
