Review of security techniques for memristor computing systems

12/19/2022
by Minhui Zou, et al.

Neural network (NN) algorithms have become the dominant tool in visual object recognition, natural language processing, and robotics. To improve the computational efficiency of these algorithms beyond what traditional von Neumann architectures offer, researchers have turned to memristor computing systems. A major drawback of memristor computing systems today is that, in the artificial intelligence (AI) era, well-trained NN models are valuable intellectual property and, once loaded into a memristor computing system, are exposed to theft, especially on edge devices. An adversary may steal a well-trained NN model through advanced attacks such as learning attacks and side-channel analysis. In this paper, we review security techniques for protecting memristor computing systems. We describe two threat models based on the assumed capabilities of the adversary: a black-box (BB) model and a white-box (WB) model. Within these threat models, we categorize the existing security techniques into five classes: thwarting learning attacks (BB), thwarting side-channel attacks (BB), NN model encryption (WB), NN weight transformation (WB), and fingerprint embedding (WB). We also present a cross-comparison of the limitations of these techniques. This paper can serve as an aid when designing secure memristor computing systems.
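The WB-oriented defenses, such as NN weight transformation, generally store the weights on the memristor crossbars in an obfuscated form so that an adversary who reads out the device contents recovers only scrambled values. As a purely illustrative sketch (not a specific scheme from the reviewed literature), a key-seeded row permutation of a weight matrix conveys the basic idea; the function names and the use of a permutation as the transformation are assumptions for this example:

```python
import numpy as np

def permute_weights(weights, key):
    """Scramble weight-matrix rows with a key-seeded permutation.

    An adversary reading the stored contents recovers only the
    permuted matrix; correct inference requires the key (here,
    the permutation) to undo the shuffle.
    """
    rng = np.random.default_rng(key)
    perm = rng.permutation(weights.shape[0])
    return weights[perm], perm

def restore_weights(scrambled, perm):
    """Invert the permutation to recover the original weights."""
    inverse = np.argsort(perm)  # position of each row in the shuffle
    return scrambled[inverse]

# Toy 4x3 weight matrix
W = np.arange(12, dtype=float).reshape(4, 3)
W_scrambled, perm = permute_weights(W, key=42)
assert np.array_equal(restore_weights(W_scrambled, perm), W)
```

Real weight-transformation schemes must also preserve crossbar efficiency (e.g., by transforming weights in a way the peripheral circuitry can cheaply invert), which a software-level permutation like this does not capture.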

