SMART: Skeletal Motion Action Recognition aTtack

11/16/2019
by Feixiang He, et al.

Adversarial attacks have attracted great interest in computer vision by showing that classification-based solutions are vulnerable to imperceptible perturbations in many tasks. In this paper, we propose SMART, a method for attacking action recognizers that rely on 3D skeletal motion. Our method involves a novel perceptual loss that ensures the imperceptibility of the attack. Empirical studies demonstrate that SMART is effective in both white-box and black-box scenarios. Its generalizability is evidenced on a variety of action recognizers and datasets; its versatility is shown across different attacking strategies; and its deceitfulness is confirmed in extensive perceptual studies. Finally, SMART shows that adversarial attacks on 3D skeletal motion, one type of time-series data, differ significantly from traditional adversarial attack problems.
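The abstract does not spell out SMART's formulation, but attacks in this family typically take iterative gradient steps that push the recognizer away from the true label while a perceptual term keeps the perturbed motion natural-looking. A minimal NumPy sketch of that idea, using a toy linear softmax classifier as a stand-in recognizer and a temporal-smoothness penalty on the perturbation as a crude stand-in for SMART's perceptual loss (all shapes, names, and parameters here are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a skeletal action recognizer: a linear softmax classifier
# over a flattened motion clip of shape (frames, joints * 3).
T, D, C = 30, 75, 10              # frames, joint coords (25 joints * xyz), classes
W = rng.normal(0, 0.1, (T * D, C))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def logits(x):
    return x.reshape(-1) @ W

def attack(x, label, steps=50, alpha=1e-2, beta=0.5):
    """Iterative attack: raise the cross-entropy of the true label while
    keeping the perturbation temporally smooth (a crude proxy for the
    perceptual loss described in the abstract)."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(logits(x_adv))
        # Gradient of cross-entropy w.r.t. the input for a linear model.
        g_ce = (W @ (p - np.eye(C)[label])).reshape(T, D)
        # Gradient of the smoothness penalty ||second temporal difference
        # of the perturbation||^2, where delta = x_adv - x.
        delta = x_adv - x
        acc = np.diff(delta, n=2, axis=0)          # shape (T-2, D)
        g_sm = np.zeros_like(delta)
        g_sm[:-2] += acc
        g_sm[1:-1] -= 2 * acc
        g_sm[2:] += acc
        g_sm *= 2.0
        # Ascend the attack objective: increase CE, decrease smoothness cost.
        x_adv += alpha * (g_ce - beta * g_sm)
    return x_adv

x = rng.normal(0, 1, (T, D))
y = int(np.argmax(logits(x)))                     # "true" label = clean prediction
x_adv = attack(x, y)
print("label flipped:", np.argmax(logits(x_adv)) != y)
print("max per-coordinate perturbation:", np.abs(x_adv - x).max())
```

The smoothness term only penalizes temporal jaggedness of the perturbation; SMART's actual perceptual loss, and its white-box versus black-box variants, are detailed in the paper itself.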

Related research:

- Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack (03/09/2021)
  Action recognition has been heavily employed in many applications such a...
- BASAR: Black-box Attack on Skeletal Action Recognition (03/09/2021)
  Skeletal motion plays a vital role in human activity recognition as eith...
- Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers (11/28/2018)
  The success of deep learning research has catapulted deep models into pr...
- A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation (05/01/2021)
  Most of the adversarial attack methods suffer from large perceptual dist...
- Efficient Action Poisoning Attacks on Linear Contextual Bandits (12/10/2021)
  Contextual bandit algorithms have many applications in a variety of scenar...
- A Black-Box Attack on Optical Character Recognition Systems (08/30/2022)
  Adversarial machine learning is an emerging area showing the vulnerabili...
- Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior (03/17/2020)
  Deep neural networks are known to be susceptible to adversarial noise, w...
