
On the Vulnerability of CNN Classifiers in EEG-Based BCIs

by   Xiao Zhang, et al.
Huazhong University of Science & Technology

Deep learning has been successfully used in numerous applications because of its outstanding performance and its ability to avoid manual feature engineering. One such application is the electroencephalogram (EEG) based brain-computer interface (BCI), where multiple convolutional neural network (CNN) models have been proposed for EEG classification. However, it has been found that deep learning models can be easily fooled by adversarial examples, which are normal examples with small deliberate perturbations. This paper proposes an unsupervised fast gradient sign method (UFGSM) to attack three popular CNN classifiers in BCIs, and demonstrates its effectiveness. We also verify the transferability of adversarial examples in BCIs, which means attacks can be performed even without knowing the architecture and parameters of the target models, or the datasets they were trained on. To our knowledge, this is the first study on the vulnerability of CNN classifiers in EEG-based BCIs, and we hope it will draw more attention to the security of BCI systems.
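The attack builds on the fast gradient sign method (FGSM), which perturbs an input in the direction of the sign of the loss gradient. The paper's UFGSM variant is not detailed in this abstract, so the sketch below illustrates only the standard FGSM step on a toy logistic classifier (the weights, input, and epsilon value are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM step: move each feature by eps in the direction
    that increases the loss (sign of the gradient w.r.t. x)."""
    return x + eps * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy white-box "classifier": fixed logistic weights, one clean sample.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # assumed known model weights (white-box access)
x = rng.normal(size=8)   # a clean EEG-like feature vector
y = 1.0                  # true label

def loss_and_grad(x):
    p = sigmoid(w @ x)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad = (p - y) * w   # gradient of the logistic loss w.r.t. x
    return loss, grad

loss_clean, g = loss_and_grad(x)
x_adv = fgsm_perturb(x, g, eps=0.1)
loss_adv, _ = loss_and_grad(x_adv)
assert loss_adv > loss_clean  # the small perturbation raises the loss
```

Each adversarial feature differs from the clean one by at most eps, yet the loss strictly increases, which is the mechanism that lets small perturbations flip a classifier's decision.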
