Evaluating Adversarial Robustness of Convolution-based Human Motion Prediction

06/21/2023
by Chengxu Duan, et al.

Human motion prediction has achieved strong performance with the help of CNNs, facilitating human-machine cooperation. However, no existing work evaluates the potential risk that adversarial attacks pose to human motion prediction, which may cause danger in real applications. An adversarial attack on human motion prediction faces two problems: 1) for naturalness, pose data is closely tied to the physical dynamics of the human skeleton, so Lp-norm constraints cannot restrict the adversarial example well; 2) unlike pixel values in images, pose data varies in scale across acquisition equipment and data processing pipelines, which makes it hard to set fixed attack parameters. To solve these problems, we propose a new adversarial attack method that perturbs the input human motion sequence by maximizing the prediction error under physical constraints. Specifically, we introduce a novel adaptable scheme that adjusts the attack to the scale of the target pose, together with two physical constraints that enhance the imperceptibility of the adversarial example. Evaluation experiments on three datasets show that the prediction errors of all target models are enlarged significantly, meaning that current convolution-based human motion prediction models can be easily disturbed by the proposed attack. The quantitative analysis shows that prior knowledge and semantic information modeling can be key to the adversarial robustness of human motion predictors. The qualitative results indicate that the adversarial sample is hard to notice when compared frame by frame but is relatively easy to detect when the sequence is animated.
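To make the method concrete, here is a minimal sketch of such an attack, assuming a PyTorch predictor `model` that maps an input motion sequence `x` of shape (T, J, 3) to a future sequence comparable with a ground truth `y`. The helper names (`jerkiness`, `bone_length_drift`), the skeleton topology `parents`, and all hyperparameters are illustrative assumptions, not the paper's released implementation; the paper's actual constraints and scale-adaptation scheme may differ in detail.

```python
import torch

def jerkiness(motion):
    """Second-order temporal difference; large values mean unnatural
    acceleration, so penalizing it keeps the perturbed motion smooth."""
    vel = motion[1:] - motion[:-1]
    acc = vel[1:] - vel[:-1]
    return acc.pow(2).mean()

def bone_length_drift(clean, perturbed, parents):
    """Mean squared change in bone lengths; parents[j] is joint j's parent
    (hypothetical skeleton encoding, joint 0 taken as the root)."""
    def lengths(pose):  # pose: (T, J, 3)
        return (pose[:, 1:] - pose[:, parents[1:]]).norm(dim=-1)
    return (lengths(perturbed) - lengths(clean)).pow(2).mean()

def attack(model, x, y, parents, steps=50, alpha_ratio=0.005,
           eps_ratio=0.03, lam_bone=1.0, lam_smooth=1.0):
    """PGD-style attack on a motion predictor: ascend the prediction
    error while descending two physical-plausibility penalties."""
    scale = x.abs().mean()                      # adaptive pose scale
    alpha, eps = alpha_ratio * scale, eps_ratio * scale
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        pred = model(x + delta)
        loss = (torch.nn.functional.mse_loss(pred, y)
                - lam_bone * bone_length_drift(x, x + delta, parents)
                - lam_smooth * jerkiness(x + delta))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)             # scale-adapted budget
            delta.grad.zero_()
    return (x + delta).detach()
```

The point mirrored from the abstract is that `alpha` and `eps` are ratios of the mean pose magnitude rather than fixed values, so the same attack configuration can transfer across datasets with different coordinate scales.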


