Adversarial Transferability in Wearable Sensor Systems

03/17/2020
by Ramesh Kumar Sah, et al.

Machine learning has increasingly become the most used approach for inference and decision making in wearable sensor systems. However, recent studies have found that machine learning systems are easily fooled by the addition of adversarial perturbations to their inputs. More interesting still, the adversarial examples generated for one machine learning system can also degrade the performance of another. This property of adversarial examples is called transferability. In this work, we take the first strides in studying adversarial transferability in wearable sensor systems from the following perspectives: 1) transferability between machine learning models, 2) transferability across subjects, 3) transferability across sensor locations, and 4) transferability across datasets. With Human Activity Recognition (HAR) as an example sensor system, we found strong untargeted transferability in all cases. Specifically, gradient-based attacks were able to achieve higher misclassification rates than non-gradient attacks. The misclassification rate of untargeted adversarial examples ranged from 20% to 98%. In the targeted case, the success rate of adversarial examples was 100% for one type of transferability, while the success rate for the other types of targeted transferability ranged from 20% to 0%. These findings have serious consequences not only in sensor systems but also across the broad spectrum of ubiquitous computing.
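To make the setup concrete, the sketch below (not from the paper) shows how untargeted transferability is typically measured: a gradient-based attack such as FGSM crafts adversarial examples against one HAR model, and the examples are then fed to a second, independently trained model. PyTorch is assumed, and source_model, target_model, and the batch of sensor windows (x, y) are hypothetical placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # One-step FGSM: perturb each input in the direction that increases
    # the classification loss of `model`.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def transfer_misclassification_rate(source_model, target_model, x, y, epsilon=0.1):
    # Craft adversarial examples against the source model, then report the
    # fraction that the target model misclassifies (untargeted transfer).
    x_adv = fgsm_attack(source_model, x, y, epsilon)
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

The same harness covers the paper's other settings: training source_model and target_model on different subjects, sensor locations, or datasets and comparing the resulting misclassification rates measures each corresponding type of transferability.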


Related research

07/14/2019 · Measuring the Transferability of Adversarial Examples
Adversarial examples are of wide concern due to their impact on the reli...

03/27/2023 · Improving the Transferability of Adversarial Examples via Direction Tuning
In the transfer-based adversarial attacks, adversarial examples are only...

04/11/2017 · The Space of Transferable Adversarial Examples
Adversarial examples are maliciously perturbed inputs designed to mislea...

12/29/2021 · Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently
Deep neural networks are vulnerable to adversarial examples (AEs), which...

04/16/2019 · Reducing Adversarial Example Transferability Using Gradient Regularization
Deep learning algorithms have increasingly been shown to lack robustness...

06/25/2020 · Does Adversarial Transferability Indicate Knowledge Transferability?
Despite the immense success that deep neural networks (DNNs) have achiev...

12/03/2021 · Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models
Transferability of adversarial samples became a serious concern due to t...
