Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers

06/10/2020
by Fangfang Yang, et al.

Being an emerging class of in-memory computing architectures, brain-inspired hyperdimensional computing (HDC) mimics brain cognition and leverages random hypervectors (i.e., vectors with a dimensionality of thousands or even more) to represent features and to perform classification tasks. The unique hypervector representation enables HDC classifiers to exhibit high energy efficiency, low inference latency, and strong robustness against hardware-induced bit errors. Consequently, they have been increasingly recognized as an appealing alternative to, or even replacement of, traditional deep neural networks (DNNs) for local on-device classification, especially on low-power Internet of Things devices. Nonetheless, unlike their DNN counterparts, state-of-the-art designs for HDC classifiers are mostly security-oblivious, casting doubt on their safety and immunity to adversarial inputs. In this paper, we study for the first time adversarial attacks on HDC classifiers and highlight that HDC classifiers can be vulnerable to even minimally perturbed adversarial samples. Concretely, using handwritten digit classification as an example, we construct an HDC classifier and formulate a grey-box attack problem, where the attacker's goal is to mislead the target HDC classifier into producing erroneous prediction labels while keeping the amount of added perturbation noise as small as possible. Then, we propose a modified genetic algorithm to generate adversarial samples within a reasonably small number of queries. Our results show that adversarial images generated by our algorithm can successfully mislead the HDC classifier into producing wrong prediction labels with a high probability (i.e., 78%). Finally, we also present two defense strategies – adversarial training and retraining – to strengthen the security of HDC classifiers.
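To make the two ingredients of the abstract concrete, the sketch below shows (1) a simple bipolar HDC classifier built from random-projection encoding and class-prototype bundling, and (2) a query-based genetic-algorithm attack that evolves a small additive perturbation until the predicted label flips. This is a minimal illustration, not the authors' implementation: the dimensionality, population size, mutation rate, fitness weighting, and all function names are assumptions chosen for readability.

```python
# Minimal sketch (illustrative assumptions, not the paper's code) of an HDC
# classifier and a genetic-algorithm grey-box attack against it.
import numpy as np

D = 10_000                      # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)


class HDCClassifier:
    """Bipolar HDC classifier: random-projection encoding + prototype bundling."""

    def __init__(self, n_features, n_classes):
        # Fixed random projection matrix maps a feature vector to a hypervector.
        self.proj = rng.choice([-1.0, 1.0], size=(D, n_features))
        self.prototypes = np.zeros((n_classes, D))

    def encode(self, x):
        # Project and binarize: sign(P @ x) yields a bipolar hypervector.
        return np.sign(self.proj @ x)

    def fit(self, X, y):
        # Bundle (sum) the hypervectors of each class into a class prototype.
        for xi, yi in zip(X, y):
            self.prototypes[yi] += self.encode(xi)

    def predict(self, x):
        # Classify by maximum cosine similarity to the class prototypes.
        h = self.encode(x)
        sims = self.prototypes @ h / (np.linalg.norm(self.prototypes, axis=1) + 1e-9)
        return int(np.argmax(sims))


def genetic_attack(model, x, true_label, pop_size=20, generations=200,
                   mutation_rate=0.05, eps=0.1):
    """Evolve small perturbations until the classifier's label flips,
    penalizing perturbation magnitude to keep the noise small."""
    n = x.size
    pop = rng.uniform(-eps, eps, size=(pop_size, n))   # initial perturbations

    def fitness(delta):
        adv = np.clip(x + delta, 0.0, 1.0)
        wrong = model.predict(adv) != true_label        # one query per evaluation
        return (1.0 if wrong else 0.0) - 0.01 * np.linalg.norm(delta)

    for _ in range(generations):
        scores = np.array([fitness(d) for d in pop])
        best = pop[np.argmax(scores)]
        if model.predict(np.clip(x + best, 0.0, 1.0)) != true_label:
            return np.clip(x + best, 0.0, 1.0)          # adversarial sample found
        # Selection: keep the top half of the population as parents.
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        children = []
        for _ in range(pop_size - len(parents)):
            # Crossover: mix two random parents element-wise.
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n) < 0.5, a, b)
            # Mutation: re-randomize a small fraction of coordinates.
            mut = rng.random(n) < mutation_rate
            child[mut] = rng.uniform(-eps, eps, size=mut.sum())
            children.append(child)
        pop = np.vstack([parents, children])
    return None                                         # query budget exhausted
```

Note that the fitness function only needs the classifier's predicted labels, not gradients or internal hypervectors, which is consistent with the query-based grey-box setting the abstract describes; the perturbation-norm penalty is what keeps the returned adversarial image close to the original.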


