SentiNet: Detecting Physical Attacks Against Deep Learning Systems

12/02/2018
by   Edward Chou, et al.
SentiNet is a novel detection framework for physical attacks on neural networks, a class of attacks that constrain an adversarial region to a visible portion of an image. Physical attacks have been shown to be robust and flexible techniques suited for deployment in real-world scenarios. Unlike most other adversarial detection work, SentiNet does not require training a model or prior knowledge of an attack. This attack-agnostic approach is appealing because of the large number of possible attack mechanisms and vectors that an attack-specific defense would have to consider. By leveraging the neural network's susceptibility to attacks, and by using techniques from model interpretability and object detection as detection mechanisms, SentiNet turns a weakness of a model into a strength. We demonstrate the effectiveness of SentiNet on three different attacks (adversarial examples, data poisoning attacks, and trojaned networks) that have large variations in deployment mechanisms, and show that our defense achieves competitive performance metrics for all three threats, even against strong adaptive adversaries with full knowledge of SentiNet.
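The core detection idea (overlaying a suspected salient region onto benign test images and measuring how often it hijacks the classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `fooled_fraction`, `toy_classifier`, and all parameter names are hypothetical, and a real deployment would extract the mask from a model-interpretability method such as a class-activation map rather than take it as given.

```python
import numpy as np

def fooled_fraction(classifier, patch, mask, benign_images, target_label):
    """Overlay the masked region of a suspect input onto benign images
    and return the fraction that flip to the suspect's predicted label.
    A high fraction suggests the region behaves like a physical attack
    rather than a benign salient feature."""
    fooled = 0
    for img in benign_images:
        # Paste the patch pixels wherever the mask is set.
        overlaid = np.where(mask, patch, img)
        if classifier(overlaid) == target_label:
            fooled += 1
    return fooled / len(benign_images)

# Toy demonstration (hypothetical classifier, not a real network):
# the classifier's decision hinges entirely on one pixel, so a patch
# covering that pixel hijacks every benign image.
def toy_classifier(img):
    return 1 if img[0, 0] > 0.5 else 0

patch = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
benign = [np.zeros((4, 4)) for _ in range(5)]
rate = fooled_fraction(toy_classifier, patch, mask, benign, target_label=1)
```

In this toy setup every overlaid benign image is misclassified, so the fooled fraction is 1.0; a detector would flag such a region as adversarial, while a benign salient region would rarely transfer its label to unrelated images.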

