Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems

01/26/2021
by Utku Ozbulak, et al.

Given their substantial success in addressing a wide range of computer vision challenges, Convolutional Neural Networks (CNNs) are increasingly being used in smart home applications, with many of these applications relying on the automatic recognition of human activities. In this context, low-power radar devices have recently gained popularity as recording sensors, given that their usage makes it possible to mitigate a number of privacy concerns, a key issue when making use of conventional video cameras. Another concern that is often cited when designing smart home applications is the resilience of these applications against cyberattacks. It is, for instance, well known that the combination of images and CNNs is vulnerable to adversarial examples, maliciously crafted data points that force machine learning models to generate wrong classifications at test time. In this paper, we investigate the vulnerability to adversarial attacks of radar-based CNNs that have been designed to recognize human gestures. Through experiments with four unique threat models, we show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks. We also expose the existence of an extreme adversarial attack case, in which it is possible to change the prediction made by the radar-based CNNs by perturbing only the padding of the inputs, without touching the frames in which the action itself occurs. Moreover, we observe that gradient-based attacks do not apply perturbation randomly, but concentrate it on important features of the input data. We highlight these important features by making use of Grad-CAM, a popular neural network interpretability method, thereby showing the connection between adversarial perturbation and prediction interpretability.
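The padding-only attack mentioned in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch: a single FGSM-style gradient step whose perturbation is masked so that only the padded, inactive frames of a radar input are modified. The function name, tensor shapes, and the `padding_mask` argument below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def padding_only_fgsm(model, x, y, padding_mask, epsilon=0.03):
    """One FGSM-style gradient step restricted to padded frames.

    model        -- radar gesture classifier (hypothetical CNN)
    x            -- input batch, e.g. shape (batch, channels, frames, range_bins)
    y            -- ground-truth class indices, shape (batch,)
    padding_mask -- tensor broadcastable to x; 1 (True) only on padded frames
    epsilon      -- maximum per-element perturbation magnitude
    """
    x_adv = x.clone().detach().requires_grad_(True)

    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()

    # Signed gradient step, zeroed outside the padding so the frames
    # that contain the actual gesture are left untouched.
    perturbation = epsilon * x_adv.grad.sign() * padding_mask
    return (x_adv + perturbation).detach()
```

In a white-box threat model the gradient would be computed on the attacked network itself; in a black-box setting, a surrogate model could be used to craft the same kind of perturbation before transferring it to the target.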

