Ptolemy: Architecture Support for Robust Deep Learning

08/23/2020
by Yiming Gan, et al.

Deep learning is vulnerable to adversarial attacks, where carefully crafted input perturbations can mislead a well-trained Deep Neural Network (DNN) into producing incorrect results. Today's countermeasures either cannot detect adversarial samples at inference time or introduce overheads too high to be practical at inference time. We propose Ptolemy, an algorithm-architecture co-designed system that detects adversarial attacks at inference time with low overhead and high accuracy. We exploit the synergies between DNN inference and imperative program execution: an input to a DNN uniquely activates a set of neurons that contribute significantly to the inference output, analogous to the sequence of basic blocks exercised by an input in a conventional program. Critically, we observe that adversarial samples tend to activate paths distinct from those of benign inputs. Leveraging this insight, we propose an adversarial sample detection framework that uses canary paths generated from offline profiling to detect adversarial samples at runtime. The Ptolemy compiler, along with the co-designed hardware, enables efficient execution by exploiting the framework's unique algorithmic characteristics. Extensive evaluations show that Ptolemy achieves higher or similar adversarial example detection accuracy than today's mechanisms with much lower runtime overhead (as low as 2%).
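To make the detection idea concrete, below is a minimal NumPy sketch of path-based detection, assuming a simple notion of an activation path: per layer, the smallest set of neurons accounting for a theta fraction of the layer's total activation. All names here (extract_activation_path, path_similarity, is_adversarial, theta, tau) are illustrative assumptions, not Ptolemy's actual interface; the real system extracts paths efficiently with compiler and hardware support.

import numpy as np

def extract_activation_path(layer_activations, theta=0.9):
    """Per layer, keep the smallest set of neurons whose summed (non-negative)
    activation reaches a fraction theta of the layer total -- the 'important'
    neurons forming this input's activation path. (Illustrative only.)"""
    path = []
    for acts in layer_activations:           # acts: 1-D array per layer
        order = np.argsort(acts)[::-1]       # neurons, strongest first
        cum = np.cumsum(acts[order])
        k = int(np.searchsorted(cum, theta * cum[-1])) + 1
        path.append(frozenset(order[:k].tolist()))
    return path

def path_similarity(path_a, path_b):
    """Mean per-layer Jaccard similarity between two activation paths."""
    sims = [len(a & b) / len(a | b) for a, b in zip(path_a, path_b)]
    return float(np.mean(sims))

def is_adversarial(layer_activations, predicted_class, canary_paths, tau=0.5):
    """Flag the input if its path is too dissimilar from the canary path
    profiled offline for the predicted class."""
    path = extract_activation_path(layer_activations)
    return path_similarity(path, canary_paths[predicted_class]) < tau

In this sketch, canary_paths would map each class label to a path profiled offline over benign inputs of that class; an input whose runtime path diverges from its predicted class's canary beyond the threshold tau is flagged as adversarial.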


