Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series

01/13/2021
by Pradeep Rathore, et al.

Deep learning based models are vulnerable to adversarial attacks. These attacks can be much more harmful in the case of targeted attacks, where an attacker tries not only to fool the deep learning model but also to misguide it into predicting a specific class. Such targeted and untargeted attacks are tailored to an individual sample and require the addition of imperceptible noise to that sample. In contrast, a universal adversarial attack computes a single imperceptible noise pattern that can be added to any sample of a given dataset so that the deep learning model is forced to predict a wrong class. To the best of our knowledge, such targeted and universal attacks on time series data have not been studied in previous work. In this work, we have performed untargeted, targeted and universal adversarial attacks on UCR time series datasets. Our results show that deep learning based time series classification models are vulnerable to these attacks. We also show that the universal adversarial attack generalizes well, as it needs only a fraction of the training data. We have also performed an adversarial-training-based defense. Our results show that models adversarially trained with the fast gradient sign method (FGSM), a single-step attack, are able to defend against both FGSM and the basic iterative method (BIM), a popular iterative attack.
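
As a rough illustration of the single-step attack mentioned in the abstract, the sketch below is not taken from the paper; it assumes a PyTorch time series classifier `model` that maps a batch of series to class logits, and the names `fgsm_perturb`, `x`, `y` and `epsilon` are hypothetical. It perturbs the input along the sign of the loss gradient, flipping the direction for the targeted variant.

```python
# Minimal FGSM-style sketch for a time series classifier (illustrative only,
# not the authors' implementation). Assumes `model` maps a tensor of shape
# (batch, channels, length) to class logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon, targeted=False):
    # Compute the gradient of the classification loss w.r.t. the input series.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    step = epsilon * x_adv.grad.sign()
    # Untargeted: move along the gradient to increase the loss on the true
    # label y. Targeted: move against it to decrease the loss on the
    # attacker-chosen target class y.
    return x_adv.detach() + (-step if targeted else step)
```

In standard formulations, BIM repeats such a step with a small step size while clipping the accumulated perturbation to the epsilon-ball, and a universal perturbation is built by aggregating per-sample perturbations over a subset of the training data; the paper's exact attack and defense procedures are described in the full text.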


research · 01/09/2023
On the Susceptibility and Robustness of Time Series Models through Adversarial Attack and Defense
Under adversarial attacks, time series regression and classification are...

research · 01/27/2023
Targeted Attacks on Timeseries Forecasting
Real-world deep learning models developed for Time Series Forecasting ar...

research · 11/15/2022
Backdoor Attacks on Time Series: A Generative Approach
Backdoor attacks have emerged as one of the major security threats to de...

research · 09/02/2022
Universal Fourier Attack for Time Series
A wide variety of adversarial attacks have been proposed and explored us...

research · 08/01/2019
Robustifying deep networks for image segmentation
Purpose: The purpose of this study is to investigate the robustness of a...

research · 10/15/2019
Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation
Exploring adversarial attack vectors and studying their effects on machi...

research · 05/18/2020
Universalization of any adversarial attack using very few test examples
Deep learning models are known to be vulnerable not only to input-depend...
