Unveiling Single-Bit-Flip Attacks on DNN Executables

09/12/2023
by   Yanzuo Chen, et al.

Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitation. Existing attacks are primarily launched over high-level DNN frameworks like PyTorch and flip bits in model weight files. However, DNNs are frequently compiled into low-level executables by deep learning (DL) compilers to fully leverage low-level hardware primitives. The compiled code is usually high-speed and manifests a dramatically different execution paradigm from high-level DNN frameworks. In this paper, we launch the first systematic study of the BFA attack surface specific to DNN executables compiled by DL compilers. We design an automated search tool to identify vulnerable bits in DNN executables, and we identify practical attack vectors that exploit the model structure in DNN executables with BFAs (whereas prior works rely on arguably strong assumptions to attack model weights). DNN executables appear more "opaque" than models in high-level DNN frameworks. Nevertheless, we find that DNN executables contain extensive, severe (e.g., single-bit-flip), and transferable attack surfaces that are absent in high-level DNN models and can be exploited to deplete full model intelligence and control output labels. Our findings call for incorporating security mechanisms into future DNN compilation toolchains.
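To make the core idea concrete, here is a minimal, hypothetical sketch of what "flipping a single bit" in an IEEE-754 float32 weight means, together with a toy exhaustive search for vulnerable bits. This is not the paper's tool (which targets compiled executables, not Python-level weight arrays); the function names and the divergence threshold are illustrative assumptions.

```python
import math
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB) in the IEEE-754 float32 encoding of `value`."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return out

def search_vulnerable_bits(weights, forward, x, threshold=1e3):
    """Exhaustively flip every bit of every weight and record flips whose
    output diverges from the baseline. A toy analogue of an automated
    vulnerable-bit search; real BFAs flip bits in memory via Rowhammer."""
    baseline = forward(weights, x)
    vulnerable = []
    for i, w in enumerate(weights):
        for bit in range(32):
            mutated = list(weights)
            mutated[i] = flip_bit(w, bit)
            out = forward(mutated, x)
            if not math.isfinite(out) or abs(out - baseline) > threshold:
                vulnerable.append((i, bit))
    return vulnerable

# Toy "model": a dot product standing in for one linear layer.
dot = lambda w, x: sum(wi * xi for wi, xi in zip(w, x))
hits = search_vulnerable_bits([0.5, -0.25], dot, [1.0, 1.0])
```

Flipping the most significant exponent bit (bit 30) of a small positive weight catapults its magnitude to roughly 2^127, which is why single exponent-bit flips dominate the vulnerable set in this toy search, mirroring the severity of the single-bit flips discussed in the abstract.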

Related research

03/30/2020
DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Security of machine learning is increasingly becoming a major concern du...

10/03/2022
Decompiling x86 Deep Neural Network Executables
Due to their widespread use on heterogeneous hardware devices, deep lear...

03/28/2019
Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search
Several important security issues of Deep Neural Network (DNN) have been...

06/23/2022
Design Exploration and Security Assessment of PUF-on-PUF Implementations
We design, implement, and assess the security of several variations of t...

06/03/2019
Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Deep neural networks (DNNs) have been shown to tolerate "brain damage": ...

09/29/2022
IvySyn: Automated Vulnerability Discovery for Deep Learning Frameworks
We present IvySyn: the first fully-automated framework for vulnerability...
