Wasserstein Adversarial Examples on Univariant Time Series Data

03/22/2023
by Wenjie Wang, et al.

Adversarial examples are crafted by adding indistinguishable perturbations to normal examples in order to fool a well-trained deep learning model into misclassifying them. In computer vision, this notion of indistinguishability is typically bounded by the L_∞ or other norms. However, these norms are not appropriate for measuring indistinguishability in time series data. In this work, we propose adversarial examples in the Wasserstein space for time series data for the first time, using the Wasserstein distance to bound the perturbation between normal examples and adversarial examples. We introduce Wasserstein projected gradient descent (WPGD), an adversarial attack method for perturbing univariant time series data. We leverage the closed-form solution of the Wasserstein distance in 1D space to compute the projection step of WPGD efficiently via gradient descent. We further propose a two-step projection so that the search for adversarial examples in the Wasserstein space is guided and constrained by Euclidean norms, yielding more effective and imperceptible perturbations. We empirically evaluate the proposed attack on several time series datasets in the healthcare domain. Extensive results demonstrate that the Wasserstein attack is powerful and can successfully attack most of the target classifiers with a high attack success rate. To better study the nature of Wasserstein adversarial examples, we evaluate Wasserstein smoothing, a strong defense mechanism, for potential certified robustness. Although this defense can achieve some accuracy gain, it still has limitations in many cases, leaving room for developing stronger certified robustness methods against Wasserstein adversarial examples on univariant time series data.
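To make the two ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a WPGD-style loop, not the authors' implementation. The `model`, the cross-entropy loss, the normalization that turns a series into a distribution, and all hyperparameters (`eps_w`, `eps_inf`, `alpha`, `proj_steps`, `proj_lr`) are illustrative assumptions. It shows the closed-form 1D Wasserstein distance (the L1 distance between CDFs of two distributions on a shared support) and a two-step projection that first clips to a Euclidean ball and then descends on the Wasserstein excess.

```python
import torch

def wasserstein_1d(p, q):
    # Closed-form 1-Wasserstein distance between two discrete 1D
    # distributions on a shared support: the L1 distance of their CDFs.
    return (torch.cumsum(p, dim=-1) - torch.cumsum(q, dim=-1)).abs().sum(dim=-1)

def normalize(x):
    # Map a series to a probability vector so the closed form applies.
    # This shift-and-rescale construction is our assumption, not
    # necessarily the paper's exact choice.
    x = x - x.min(dim=-1, keepdim=True).values
    return x / (x.sum(dim=-1, keepdim=True) + 1e-12)

def wpgd_attack(model, x, y, eps_w=0.05, eps_inf=0.1, alpha=0.01,
                iters=40, proj_steps=20, proj_lr=0.05):
    """Sketch of a WPGD-style attack on a univariate series of shape
    (T,) or (B, T), with a two-step projection: (1) clip into a
    Euclidean (L_inf) box to keep the search guided and imperceptible,
    then (2) pull the iterate back toward the Wasserstein ball by
    gradient descent on the differentiable 1D closed form."""
    x = x.detach()
    x_adv = x.clone()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()      # ascent step
        # Step 1: Euclidean projection (clip to the L_inf box around x).
        x_adv = x + (x_adv - x).clamp(-eps_inf, eps_inf)
        # Step 2: approximate Wasserstein projection by gradient descent,
        # shrinking any excess of W_1 over the budget eps_w.
        delta = (x_adv - x).detach().requires_grad_(True)
        opt = torch.optim.SGD([delta], lr=proj_lr)
        for _ in range(proj_steps):
            w = wasserstein_1d(normalize(x + delta), normalize(x))
            excess = torch.relu(w - eps_w).sum()          # zero inside the ball
            if excess.item() == 0:
                break
            opt.zero_grad()
            excess.backward()
            opt.step()
        x_adv = x + delta.detach()
    return x_adv
```

The inner gradient-descent loop mirrors the abstract's note that the projection step is computed with gradient descent using the 1D closed form; an exact projection onto the Wasserstein ball would require solving a constrained optimal-transport problem, so this penalty-style descent is only an approximation.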


Related research

10/13/2021 · A Framework for Verification of Wasserstein Adversarial Robustness
Machine learning image classifiers are susceptible to adversarial and co...

05/10/2021 · Adversarial examples attack based on random warm restart mechanism and improved Nesterov momentum
The deep learning algorithm has achieved great success in the field of c...

03/13/2021 · Internal Wasserstein Distance for Adversarial Attack and Defense
Deep neural networks (DNNs) are vulnerable to adversarial examples that ...

07/09/2022 · Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features
Time-series data arises in many real-world applications (e.g., mobile he...

08/06/2020 · Stronger and Faster Wasserstein Adversarial Attacks
Deep models, while being extremely flexible and accurate, are surprising...

08/30/2020 · Benchmarking adversarial attacks and defenses for time-series data
The adversarial vulnerability of deep networks has spurred the interest ...

09/01/2023 · Curating Naturally Adversarial Datasets for Trustworthy AI in Healthcare
Deep learning models have shown promising predictive accuracy for time-s...
