White-Box Target Attack for EEG-Based BCI Regression Problems

11/07/2019
by   Lubin Meng, et al.

Machine learning has achieved great success in many applications, including electroencephalogram (EEG) based brain-computer interfaces (BCIs). Unfortunately, many machine learning models are vulnerable to adversarial examples, which are crafted by adding deliberately designed perturbations to the original inputs. Many adversarial attack approaches have been proposed for classification problems, but few have considered target adversarial attacks for regression problems. This paper proposes two such approaches. More specifically, we consider white-box target attacks for regression problems, where we know all information about the regression model to be attacked and want to design small perturbations that change the regression output by a pre-determined amount. Experiments on two BCI regression problems verified that both approaches are effective. Moreover, adversarial examples generated by both approaches are transferable: adversarial examples generated from one known regression model can be used to attack an unknown regression model, i.e., to perform black-box attacks. To our knowledge, this is the first study on adversarial attacks for EEG-based BCI regression problems, which calls for more attention to the security of BCI systems.
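The abstract does not specify the paper's attack algorithms, but the white-box target setting it describes can be illustrated with a minimal sketch. Assuming (hypothetically) a linear regression model f(x) = w·x + b whose parameters the attacker knows, the smallest L2 perturbation that shifts the output by exactly a pre-determined amount Δ has the closed form δ = Δ·w/‖w‖², since w·δ = Δ. All variable names below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical white-box linear regression model f(x) = w @ x + b;
# the attacker knows w and b exactly.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # known model weights
b = 0.5                   # known bias
x = rng.normal(size=16)   # original input (e.g., an EEG feature vector)

target_shift = 2.0        # pre-determined change in the regression output

# Minimal-L2 perturbation that shifts the linear model's output by
# exactly target_shift: delta = target_shift * w / ||w||^2,
# because w @ delta = target_shift * (w @ w) / (w @ w) = target_shift.
delta = target_shift * w / np.dot(w, w)

y_orig = w @ x + b
y_adv = w @ (x + delta) + b
shift = y_adv - y_orig    # equals target_shift up to float rounding
```

For a nonlinear model, no such closed form exists; a gradient-based variant would iteratively step x along ∇f(x) until the output moves by Δ. Either way, the point the abstract makes is that δ can be kept small while the output change is exactly controlled.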

