
TSFool: Crafting High-quality Adversarial Time Series through Multi-objective Optimization to Fool Recurrent Neural Network Classifiers

by Yanyun Wang, et al.
The University of Hong Kong
East China Normal University

Deep neural network (DNN) classifiers are vulnerable to adversarial attacks. Although existing gradient-based attacks perform well on feed-forward models and image recognition tasks, extending them to time series classification with recurrent neural networks (RNNs) remains difficult: the cyclical structure of an RNN prevents direct model differentiation, and the visual sensitivity of time series data to perturbations challenges the traditional local optimization objective of minimizing perturbation. In this paper, we propose TSFool, an efficient and widely applicable approach for crafting high-quality adversarial time series against RNN classifiers. We introduce a novel global optimization objective, the Camouflage Coefficient, which measures how well adversarial samples hide within class clusters, and accordingly redefine the high-quality adversarial attack as a multi-objective optimization problem. We also propose using intervalized weighted finite automata (IWFA) to capture deeply embedded vulnerable samples whose features deviate from the latent manifold, guiding the approximation to the optimization solution. Experiments on 22 UCR datasets confirm that TSFool is a widely effective, efficient, and high-quality approach, with a 93.22 times speedup over existing methods.
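To make the Camouflage Coefficient idea concrete, the sketch below shows one plausible cluster-relative formulation: compare an adversarial sample's distance to its original class cluster against its distance to the (misclassified) target class cluster. This is an illustrative assumption, not necessarily the paper's exact definition; the function name and the use of cluster centroids are hypothetical.

```python
import numpy as np

def camouflage_coefficient(x_adv, original_cluster, target_cluster):
    """Hypothetical illustration of a cluster-relative quality measure.

    Ratio of the adversarial sample's distance to the centroid of its
    original class cluster versus the centroid of the target
    (misclassified) class cluster. A small ratio means the sample still
    "hides" among its original class while fooling the classifier.
    """
    c_orig = original_cluster.mean(axis=0)   # centroid of original class
    c_tgt = target_cluster.mean(axis=0)      # centroid of target class
    d_orig = np.linalg.norm(x_adv - c_orig)
    d_tgt = np.linalg.norm(x_adv - c_tgt)
    return d_orig / d_tgt
```

Under this reading, an attack would jointly minimize this coefficient and the perturbation magnitude, which is what makes the problem multi-objective rather than a single local perturbation minimization.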
