A Stealthy Hardware Trojan Exploiting the Architectural Vulnerability of Deep Learning Architectures: Input Interception Attack (IIA)

11/02/2019
by   Tolulope A. Odetola, et al.

Deep learning architectures (DLA) have shown impressive performance in computer vision, natural language processing, and other domains. Many DLA rely on cloud computing for classification because of their high computation and memory requirements. Privacy and latency concerns arising from cloud computing have inspired the deployment of DLA on embedded hardware accelerators. To achieve short time-to-market and gain access to global expertise, state-of-the-art DLA deployment on hardware accelerators is often outsourced to untrusted third parties. This outsourcing raises security concerns, as hardware Trojans can be inserted into the hardware design of the DLA mapped onto the accelerator. We argue that existing hardware Trojan attacks highlighted in the literature provide no means of quantifying how certain the attacker is that the Trojan will trigger. Moreover, most inserted Trojans exhibit an obvious spike in the hardware resources utilized on the accelerator at the time the Trojan is triggered or while the payload is active. In this paper, we propose a hardware Trojan attack called the Input Interception Attack (IIA). The attack exploits the statistical properties of layer-by-layer outputs to ensure that, aside from being stealthy, the IIA triggers with a measurable degree of certainty. The IIA is evaluated on DLA used to classify the MNIST and CIFAR-10 data sets. The attacked designs utilize approximately up to 2% additional hardware resources compared to the Trojan-free designs. This paper also discusses potential defense mechanisms that could be used to combat such hardware Trojan-based attacks in hardware accelerators for DLA.
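To make the trigger-and-payload idea concrete, the following is a minimal software sketch of an IIA-style mechanism. Everything here is a hypothetical illustration of the concept described in the abstract (a trigger keyed to the statistics of a layer's output, with a payload that intercepts the input seen downstream): the threshold band, function names, and wrapper structure are assumptions for illustration, not the authors' actual hardware implementation.

# Illustrative sketch only: a hypothetical software model of an
# IIA-style trigger and payload. The trigger band, names, and
# structure are assumptions, not the paper's hardware design.
import numpy as np

# Assumed trigger band, e.g. derived offline from the statistical
# distribution of a chosen layer's outputs on attacker-crafted inputs.
TRIGGER_MEAN_LOW, TRIGGER_MEAN_HIGH = 0.42, 0.45

def trojan_trigger(feature_map: np.ndarray) -> bool:
    """Fire only when the layer output's mean falls inside a narrow,
    attacker-chosen band, so the Trojan stays dormant on most inputs."""
    m = float(feature_map.mean())
    return TRIGGER_MEAN_LOW <= m <= TRIGGER_MEAN_HIGH

def forward_with_iia(layer_fn, x: np.ndarray,
                     malicious_input: np.ndarray) -> np.ndarray:
    """Wrap one layer: if the trigger condition holds on the layer's
    output, intercept it and substitute attacker-controlled data."""
    y = layer_fn(x)
    if trojan_trigger(y):
        # Payload: downstream layers see the attacker's input
        # instead of the legitimate intermediate result.
        y = layer_fn(malicious_input)
    return y

Keeping the trigger band narrow is what makes such a design stealthy: on almost all benign inputs the layer statistics fall outside the band and the accelerator behaves normally, while an attacker who knows the band can craft inputs that fire the trigger with high certainty.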

