Hardware Trojan Attacks on Neural Networks

06/14/2018
by Joseph Clements et al.

With the rising popularity of machine learning and the ever-increasing demand for computational power, there is a growing need for hardware-optimized implementations of neural networks and other machine learning models. As the technology evolves, it is also plausible that machine learning and artificial intelligence, in the form of well-trained models, will soon be embedded in consumer electronics and military equipment. Unfortunately, the modern fabless business model of hardware manufacturing, while economical, opens security gaps throughout the supply chain. In this paper, we illuminate these security issues by introducing hardware Trojan attacks on neural networks, expanding the current taxonomy of neural network security to incorporate attacks of this nature. To aid in this, we develop a novel framework for inserting malicious hardware Trojans into the implementation of a neural network classifier. We evaluate the capabilities of the adversary in this setting by implementing the attack algorithm on convolutional neural networks while controlling a variety of parameters available to the adversary. Our experimental results show that the proposed algorithm can reliably cause a selected input trigger to be classified as a specified class on the MNIST dataset by injecting hardware Trojans into, on average, 0.03% of the neurons in the 5th hidden layer of arbitrary 7-layer convolutional neural networks, while remaining undetectable on the test data. Finally, we discuss potential defenses for protecting neural networks against hardware Trojan attacks.
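The attack described in the abstract tampers with a small set of hidden-layer neurons so that their outputs are overwritten only when a trigger input arrives, leaving behavior on clean test data unchanged. The following is a minimal software simulation of that hardware behavior, written in PyTorch. It is a sketch under stated assumptions: the trigger test, the layer-wrapper structure, and the neuron indices and payload value are illustrative placeholders, not the paper's actual Trojan-insertion algorithm, which selects them via its own optimization.

import torch
import torch.nn as nn

class TrojanedLayer(nn.Module):
    """Wraps one hidden layer and overwrites a handful of its output
    neurons whenever a trigger condition fires, mimicking malicious
    circuitry spliced into the layer's datapath."""
    def __init__(self, layer, neuron_idx, payload, threshold=5.0):
        super().__init__()
        self.layer = layer            # the original, untampered layer
        self.neuron_idx = neuron_idx  # indices of the few Trojaned neurons
        self.payload = payload        # value forced onto those neurons
        self.threshold = threshold    # crude trigger detector (assumption)

    def forward(self, x):
        out = self.layer(x)
        # Hypothetical per-sample trigger: fire only on inputs carrying an
        # unusually strong pattern, so clean test data passes unmodified.
        triggered = x.flatten(1).abs().mean(dim=1) > self.threshold
        for i in torch.nonzero(triggered).flatten():
            # Force the Trojaned neurons to attacker-chosen activations,
            # steering the downstream layers toward the target class.
            out[i, self.neuron_idx] = self.payload
        return out

# Usage sketch: splice the Trojan into the 5th hidden layer of a model
# that exposes its layers as an nn.Sequential (hypothetical structure):
#   model[4] = TrojanedLayer(model[4], neuron_idx=[17, 203], payload=10.0)

Because only a tiny fraction of neurons is modified and the trigger condition is never met by ordinary inputs, a simulation of this kind also illustrates why the Trojan stays undetectable under standard test-set evaluation.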


