Adversarial Attacks on Classifiers for Eye-based User Modelling

06/01/2020
by Inken Hagestedt, et al.

An ever-growing body of work has demonstrated the rich information content available in eye movements for user modelling, e.g. for predicting users' activities, cognitive processes, or even personality traits. We show that state-of-the-art classifiers for eye-based user modelling are highly vulnerable to adversarial examples: small artificial perturbations of the gaze input that can dramatically change a classifier's predictions. We generate these adversarial examples using the Fast Gradient Sign Method (FGSM), which linearises the loss around the input and perturbs it along the sign of the gradient. On the sample task of eye-based document type recognition, we study the success of different adversarial attack scenarios: with and without knowledge of the classifier gradients (white-box vs. black-box), as well as with and without targeting the attack at a specific class. In addition, we demonstrate the feasibility of defending against adversarial attacks by adding adversarial examples to a classifier's training data.
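For illustration, the sketch below shows how an FGSM perturbation of the kind described in the abstract could be computed in PyTorch. It is not the paper's implementation: `model`, `epsilon`, and the gaze-feature tensors are placeholders, and the targeted variant is assumed to simply descend the loss of the chosen target class.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon, target=None):
    """Compute a single FGSM perturbation for a batch of gaze features x.

    If `target` is None the attack is untargeted (it increases the loss for
    the true labels y); otherwise it is targeted (it decreases the loss for
    the chosen target class). All names here are illustrative placeholders.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    if target is None:
        loss = F.cross_entropy(logits, y)       # push away from the true class
        direction = 1.0
    else:
        loss = F.cross_entropy(logits, target)  # pull towards the target class
        direction = -1.0
    loss.backward()
    # One gradient-sign step of size epsilon.
    perturbation = direction * epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).detach()
```

Under the same assumptions, the defence mentioned in the abstract would amount to generating such examples during training and mixing them into each batch alongside the clean gaze samples.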


Related research

- 02/11/2022: Adversarial Attacks and Defense Methods for Power Quality Recognition
- 11/07/2019: White-Box Target Attack for EEG-Based BCI Regression Problems
- 09/11/2019: Sparse and Imperceivable Adversarial Attacks
- 08/14/2023: White-Box Adversarial Attacks on Deep Learning-Based Radio Frequency Fingerprint Identification
- 04/20/2022: Adversarial Scratches: Deployable Attacks to CNN Classifiers
- 05/19/2020: Adversarial Attacks for Embodied Agents
- 03/01/2021: Brain Programming is Immune to Adversarial Attacks: Towards Accurate and Robust Image Classification using Symbolic Learning
