RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition

04/04/2022
by Jianfei Yang, et al.

Deep neural networks have enabled accurate device-free human activity recognition, which has wide applications. Deep models can extract robust features from various sensors and generalize well even in challenging situations such as data-insufficient cases. However, these systems can be vulnerable to input perturbations, i.e., adversarial attacks. We empirically demonstrate that both black-box Gaussian attacks and modern white-box adversarial attacks cause their accuracy to plummet. In this paper, we first show that this vulnerability poses severe safety hazards to device-free sensing systems, and then propose a novel learning framework, RobustSense, to defend against common attacks. RobustSense aims to achieve consistent predictions regardless of whether its input is attacked, alleviating the negative effect of the distribution perturbation caused by adversarial attacks. Extensive experiments demonstrate that our proposed method significantly enhances the robustness of existing deep models against such attacks. The results validate that our method works well on wireless human activity recognition and person identification systems. To the best of our knowledge, this is the first work to investigate adversarial attacks, and to develop a defense framework against them, for wireless human activity recognition in mobile computing research.
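The two attack families named in the abstract, and the consistency idea behind the defense, can be sketched on a toy linear classifier. The model, the attack budgets, and the KL-based consistency term below are illustrative assumptions, not the paper's actual architecture or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy linear "activity classifier": logits = x @ W (8-dim input, 3 classes).
W = rng.normal(size=(8, 3))

def predict(x):
    return softmax(x @ W)

def gaussian_attack(x, eps=0.5):
    """Black-box attack: add random Gaussian noise scaled to norm eps."""
    noise = rng.normal(size=x.shape)
    noise *= eps / (np.linalg.norm(noise) + 1e-12)
    return x + noise

def fgsm_attack(x, y, eps=0.5):
    """White-box FGSM-style attack: step along the sign of the loss gradient.

    For a linear model with cross-entropy loss, the gradient of the loss
    w.r.t. the input is (p - onehot(y)) @ W.T.
    """
    p = predict(x)
    onehot = np.eye(W.shape[1])[y]
    grad_x = (p - onehot) @ W.T
    return x + eps * np.sign(grad_x)

def kl_consistency(p_clean, p_attacked):
    """KL divergence between clean and attacked predictions; a defense in the
    spirit of RobustSense would drive this term toward zero during training."""
    return float(np.sum(p_clean * (np.log(p_clean + 1e-12)
                                   - np.log(p_attacked + 1e-12))))

x = rng.normal(size=(8,))
y = 1
p_clean = predict(x)
p_gauss = predict(gaussian_attack(x))
p_fgsm = predict(fgsm_attack(x, y))
```

In a full training loop, `kl_consistency` would be added to the usual classification loss so the network is penalized whenever an attacked input changes its predicted distribution; how RobustSense actually instantiates this regularization is detailed in the paper itself.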


Related research:

- BASAR: Black-box Attack on Skeletal Action Recognition (03/09/2021). Skeletal motion plays a vital role in human activity recognition as eith...
- POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm (05/01/2019). Most deep learning models are easily vulnerable to adversarial attacks. ...
- Orthogonal Deep Models As Defense Against Black-Box Attacks (06/26/2020). Deep learning has demonstrated state-of-the-art performance for a variet...
- Defense-guided Transferable Adversarial Attacks (10/22/2020). Though deep neural networks perform challenging tasks excellently, they ...
- Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin (07/26/2019). Deep models, while being extremely versatile and accurate, are vulnerabl...
- Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models (09/08/2020). Early identification of COVID-19 using a deep model trained on Chest X-R...
