BASAR: Black-box Attack on Skeletal Action Recognition

03/09/2021
by Yunfeng Diao, et al.

Skeletal motion plays a vital role in human activity recognition, either as an independent data source or as a complement. The robustness of skeleton-based activity recognizers has recently been questioned: they are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer. However, this white-box requirement is overly restrictive in most scenarios, so such attacks are not truly threatening. In this paper, we show that these threats do exist under black-box settings too. To this end, we propose BASAR, the first black-box adversarial attack method for skeletal action recognition. Through BASAR, we show that adversarial attack is not only a genuine threat but can also be extremely deceitful, because on-manifold adversarial samples are rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold. Through exhaustive evaluation and comparison, we show that BASAR can deliver successful attacks across models, data, and attack modes. Through rigorous perceptual studies, we show that it achieves effective yet imperceptible attacks. By analyzing the attack on different activity recognizers, BASAR helps identify the potential causes of their vulnerability and provides insights into which classifiers are likely to be more robust against attack.
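To make the black-box setting concrete, the sketch below illustrates a generic decision-based attack step of the kind the abstract alludes to: the attacker queries only the recognizer's predicted label (no gradients, no logits) and binary-searches between a clean motion and a far-away adversarial one to shrink the perturbation. This is a minimal illustration under assumed names (`classify`, `binary_search_to_boundary`, and the toy recognizer are hypothetical); BASAR's actual manifold-aware optimization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(motion):
    # Toy stand-in recognizer: labels a "motion" (frames x joints x 3 array)
    # by the sign of its mean coordinate. A real skeletal recognizer would
    # be a deep network; the attacker treats it as an opaque label oracle.
    return int(motion.mean() > 0)

def binary_search_to_boundary(clean, adv, true_label, steps=30):
    """Shrink the perturbation while keeping the adversarial label,
    using only top-1 label queries (black-box access)."""
    lo, hi = 0.0, 1.0  # interpolation weights between clean and adv
    for _ in range(steps):
        mid = (lo + hi) / 2
        candidate = (1 - mid) * clean + mid * adv
        if classify(candidate) != true_label:
            hi = mid  # still adversarial: move closer to the clean motion
        else:
            lo = mid
    return (1 - hi) * clean + hi * adv

# Toy example: a clean "motion" (16 frames, 25 joints, 3D) and a heavily
# perturbed starting point that is adversarial by construction.
clean = np.full((16, 25, 3), 0.1)
true_label = classify(clean)
adv = clean + rng.normal(0.0, 0.3, clean.shape) - 1.0
assert classify(adv) != true_label

refined = binary_search_to_boundary(clean, adv, true_label)
assert classify(refined) != true_label
print(np.abs(refined - clean).max())  # far smaller than the initial perturbation
```

The key point is that every step uses only hard-label feedback, which is exactly the access a deployed recognizer exposes; this is why the white-box assumption criticized in the abstract is unnecessary for a successful attack.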


