Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks

05/28/2019, by Pu Zhao, et al.

Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art models raises security concerns in application domains that require high reliability. We propose the fault sneaking attack on DNNs, in which an adversary misclassifies certain input images into arbitrary target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the attack's optimization problem under two constraints: 1) the classification of all other images must remain unchanged, and 2) the parameter modifications must be minimized. The first constraint requires not only injecting the designated faults (misclassifications) but also hiding them, for stealth, by maintaining the model's overall accuracy. The second constraint minimizes the parameter modifications, using the L0 norm to measure the number of modified parameters and the L2 norm to measure the magnitude of the modifications. Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without degrading overall test accuracy.

Related research:

- enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks (07/31/2022)
- CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for Emerging Memories-Based Deep Neural Networks (02/08/2023)
- Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks (06/03/2019)
- Exposing Previously Undetectable Faults in Deep Neural Networks (06/01/2021)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits (02/21/2021)
- Neural Network Trojans Analysis and Mitigation from the Input Domain (02/13/2022)
- Versatile Weight Attack via Flipping Limited Bits (07/25/2022)
