On the Vulnerability of CNN Classifiers in EEG-Based BCIs

03/31/2019
by Xiao Zhang, et al.

Deep learning has been successfully used in numerous applications because of its outstanding performance and its ability to avoid manual feature engineering. One such application is the electroencephalogram (EEG) based brain-computer interface (BCI), for which multiple convolutional neural network (CNN) models have been proposed for EEG classification. However, deep learning models can be easily fooled by adversarial examples, which are normal examples with small, deliberately crafted perturbations. This paper proposes an unsupervised fast gradient sign method (UFGSM) to attack three popular CNN classifiers in BCIs and demonstrates its effectiveness. We also verify the transferability of adversarial examples in BCIs, meaning that attacks can be performed even without knowing the architecture and parameters of the target models, or the datasets on which they were trained. To our knowledge, this is the first study of the vulnerability of CNN classifiers in EEG-based BCIs, and we hope it will draw more attention to the security of BCI systems.
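Although the paper's exact UFGSM formulation is not reproduced on this page, it builds on the fast gradient sign method (FGSM), which perturbs an input in the direction of the sign of the loss gradient with respect to that input. Below is a minimal PyTorch sketch of such an attack on a CNN EEG classifier; the `model` interface, the epsilon value, and the use of the model's own predictions as surrogate labels (a stand-in for the "unsupervised" aspect) are all illustrative assumptions, not the authors' implementation.

```python
# Minimal FGSM-style attack sketch (illustrative; not the paper's exact UFGSM).
# Assumes `model` is a PyTorch CNN mapping EEG epochs of shape
# (batch, channels, time_samples) to class logits.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, epsilon=0.01):
    """Return x' = x + epsilon * sign(grad_x loss), the one-step FGSM perturbation."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    # Assumption: with no ground-truth labels available, use the model's own
    # predictions as surrogate targets and push the input away from them.
    y_surrogate = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, y_surrogate)
    loss.backward()
    # One gradient-sign step that increases the loss on the input.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

With epsilon small relative to the EEG amplitude range, the perturbed trial is nearly indistinguishable from the original yet can change the classifier's decision, which is exactly the vulnerability the abstract describes.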

Related Research

12/03/2019
Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs
Multiple convolutional neural network (CNN) classifiers have been propos...

11/07/2019
Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces
Deep learning has made significant breakthroughs in many fields, includi...

11/07/2019
White-Box Target Attack for EEG-Based BCI Regression Problems
Machine learning has achieved great success in many applications, includ...

11/05/2018
On the Transferability of Adversarial Examples Against CNN-Based Image Forensics
Recent studies have shown that Convolutional Neural Networks (CNN) are r...

08/01/2017
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
Recent studies have shown that attackers can force deep learning models ...

01/16/2017
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
Deep learning classifiers are known to be inherently vulnerable to manip...

02/18/2023
Vulnerability Analysis of CAPTCHA Using Deep Learning
Several websites improve their security and avoid dangerous Internet att...
