Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune

04/13/2021
by Shanshi Huang, et al.

Compute-in-memory (CIM) has been proposed to accelerate convolutional neural network (CNN) computation by performing the parallel multiply-and-accumulate operations in the analog domain. However, the subsequent processing is still preferred to be performed in the digital domain, which makes the analog-to-digital converter (ADC) critical in CIM architectures. One drawback is the ADC error introduced by process variation. While research efforts are being made to improve ADC design and reduce the offset, we find that the accuracy loss introduced by the ADC error can be recovered by finetuning the model weights. Beyond compensating for the ADC offset, on-chip weight finetuning can be leveraged to provide additional protection against adversarial attacks that aim to fool the inference engine with manipulated input samples. Our evaluation results show that by adapting the model weights to the specific ADC offset pattern of each chip, the transferability of the adversarial attack is suppressed. For a chip attacked by the C&W method, the classification accuracy on the CIFAR-10 dataset drops to almost 0%, yet when the same adversarial examples are applied to other chips, the accuracy is still maintained at more than 62% to 85%.
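The mechanism described above, a fixed per-chip ADC offset that a brief on-chip weight finetune can absorb, can be sketched in a few lines of PyTorch. This is a minimal sketch under assumptions, not the authors' implementation: the additive offset error model, layer type, ADC resolution, offset magnitude, and optimizer settings are all illustrative.

# Minimal sketch (assumed error model, not the paper's code): the ADC adds a
# fixed, chip-specific offset to each quantized partial sum; a short finetune
# adapts the weights to that particular offset pattern.
import torch
import torch.nn as nn

class ADCOffsetLinear(nn.Linear):
    """Linear layer whose analog partial sums pass through an ADC that adds a
    fixed, chip-specific offset to each output column (hypothetical model)."""
    def __init__(self, in_features, out_features, adc_levels=32, offset_std=0.5):
        super().__init__(in_features, out_features)
        # One random but fixed offset per ADC column mimics process variation
        # on a particular chip; it is a buffer, so finetuning never changes it.
        self.register_buffer("adc_offset", torch.randn(out_features) * offset_std)
        self.adc_levels = adc_levels

    def forward(self, x):
        analog_sum = x @ self.weight.t() + self.bias              # analog MAC result
        step = analog_sum.detach().abs().max() / self.adc_levels  # ADC step size
        quantized = torch.round(analog_sum / step) * step         # ADC quantization
        # Straight-through estimator so gradients flow during finetuning.
        digital = analog_sum + (quantized - analog_sum).detach()
        return digital + self.adc_offset * step                   # chip-specific offset

def finetune_on_chip(model, loader, epochs=1, lr=1e-4):
    """Brief on-chip finetune: adapt the weights to this chip's offset pattern."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

Because each chip instantiates a different adc_offset buffer, the finetuned weights (and therefore the decision surface an attacker probes) differ from chip to chip, which is the intuition behind the suppressed transferability of adversarial examples.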

