Open DNN Box by Power Side-Channel Attack

07/21/2019
by Yun Xiang et al.

Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. White-box attacks assume full knowledge of the model, while black-box attacks assume none. In general, revealing more internal information enables much more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work, we propose a side-channel-information-based technique to reveal the internal information of black-box models. Specifically, we make the following contributions: (1) we are the first to use side-channel information to reveal internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method achieves 96.50% accuracy on average. Such results suggest that we should pay close attention to the security of AI applications and develop corresponding defensive strategies in the future.
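The abstract does not detail the paper's recovery pipeline, but the core idea of inferring architecture from power traces can be illustrated with a toy sketch. The snippet below is purely hypothetical (`simulate_trace`, `count_bursts`, and all thresholds are illustrative assumptions, not the authors' method): it fakes a power trace in which each network layer produces a burst of activity separated by idle gaps, then estimates the layer count by counting threshold crossings.

```python
# Hypothetical sketch, NOT the paper's actual pipeline: estimate a coarse
# architectural property (layer count) from a simulated power trace.
import random

random.seed(0)

def simulate_trace(n_layers, samples_per_layer=50):
    """Toy power trace: each layer contributes a high-power burst,
    followed by a short low-power idle gap."""
    trace = []
    for _ in range(n_layers):
        level = random.uniform(0.6, 1.0)  # per-layer power level
        trace.extend(level + random.gauss(0, 0.03)
                     for _ in range(samples_per_layer))
        trace.extend(random.gauss(0.1, 0.02) for _ in range(10))  # idle gap
    return trace

def count_bursts(trace, threshold=0.3):
    """Estimate the layer count by counting rising edges through
    the power threshold."""
    bursts, above = 0, False
    for v in trace:
        if v > threshold and not above:
            bursts += 1
            above = True
        elif v <= threshold:
            above = False
    return bursts

trace = simulate_trace(n_layers=5)
print(count_bursts(trace))  # → 5
```

A real attack would of course work on measured traces and recover far more than the layer count (e.g., layer types and parameter estimates, as the contributions list suggests), typically by training classifiers or regressors on trace features rather than simple thresholding.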


