Improved Detection of Adversarial Attacks via Penetration Distortion Maximization

11/03/2019
by Shai Rozenberg, et al.

This paper is concerned with the defense of deep models against adversarial attacks. We develop an adversarial detection method, inspired by the certificate defense approach, that captures the idea of separating class clusters in the embedding space to increase the margin. The resulting defense is intuitive, effective, scalable, and can be integrated into any given neural classification model. Our method demonstrates state-of-the-art detection performance under all threat models.
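The abstract sketches the core idea at a high level: separate class clusters in the embedding space to enlarge the margin, then flag inputs that land too close to a class boundary. Below is a minimal, hypothetical PyTorch sketch of that idea; the `separation_loss` terms, the `push_margin` value, and the `margin_score` detection rule are illustrative assumptions, not the paper's actual Penetration Distortion Maximization objective.

```python
# Hypothetical sketch: train embeddings so class clusters are well
# separated, then flag inputs whose embedding sits near a boundary.
# The objective below is an illustrative assumption, not the paper's
# exact Penetration Distortion Maximization loss.
import torch
import torch.nn.functional as F


def separation_loss(embeddings, labels, centroids, push_margin=10.0):
    """Pull embeddings toward their class centroid, push centroids apart.

    embeddings: (N, D) batch of embedding vectors
    labels:     (N,) integer class labels
    centroids:  (C, D) learnable per-class centroids
    """
    # Attraction: squared distance to the correct class centroid.
    pull = ((embeddings - centroids[labels]) ** 2).sum(dim=1).mean()

    # Repulsion: hinge on pairwise centroid distances so that classes
    # are driven at least push_margin apart in the embedding space.
    dists = torch.cdist(centroids, centroids)                 # (C, C)
    off_diag = dists[~torch.eye(len(centroids), dtype=torch.bool)]
    push = F.relu(push_margin - off_diag).mean()
    return pull + push


def margin_score(embeddings, centroids):
    """Detection score: gap between the two nearest centroid distances.

    A small gap means the input sits near a class boundary, which is
    where adversarial examples tend to land.
    """
    dists = torch.cdist(embeddings, centroids)                # (N, C)
    two_nearest, _ = dists.topk(2, dim=1, largest=False)
    return two_nearest[:, 1] - two_nearest[:, 0]              # (N,)
```

In use, the encoder and the per-class centroids would be trained jointly on clean data, and a test input would be rejected as adversarial when its margin_score falls below a threshold calibrated on a clean validation set.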

Related research

12/07/2018
Adversarial Attacks, Regression, and Numerical Stability Regularization
Adversarial attacks against neural networks in a regression setting are ...

09/23/2020
A Partial Break of the Honeypots Defense to Catch Adversarial Attacks
A recent defense proposes to inject "honeypots" into neural networks in ...

06/02/2020
Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense
Deep neural networks are learning models having achieved state of the ar...

09/08/2020
Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models
Early identification of COVID-19 using a deep model trained on Chest X-R...

05/16/2022
Robust Representation via Dynamic Feature Aggregation
Deep convolutional neural network (CNN) based models are vulnerable to t...

09/03/2023
Robust Adversarial Defense by Tensor Factorization
As machine learning techniques become increasingly prevalent in data ana...

07/20/2023
FACADE: A Framework for Adversarial Circuit Anomaly Detection and Evaluation
We present FACADE, a novel probabilistic and geometric framework designe...
