
Adversarial Attacks on Machinery Fault Diagnosis

by Jiahao Chen, et al.

Despite the great progress of neural network-based (NN-based) machinery fault diagnosis methods, their robustness has been largely neglected: they can be easily fooled by adding an imperceptible perturbation to the input. In this paper, we reformulate various adversarial attacks for fault diagnosis problems and investigate them intensively under both untargeted and targeted conditions. Experimental results on six typical NN-based models show that the accuracies of the models are greatly reduced by adding small perturbations. We further propose a simple, efficient, and universal scheme to protect the victim models. This work provides an in-depth look at adversarial examples of machinery vibration signals, supporting the development of protection methods against adversarial attacks and improving the robustness of NN-based models.
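The paper does not publish its attack code here, but the kind of untargeted attack described above can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy 1-D "vibration signal". Everything below is a hypothetical stand-in: the linear softmax classifier, the synthetic signal, and the `eps` budget are assumptions for illustration, not the models or settings used in the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_untargeted(x, y, W, b, eps):
    """Untargeted FGSM sketch for a linear softmax classifier:
    step the input along the sign of the cross-entropy gradient,
    so the perturbation is bounded by eps in the L-infinity norm."""
    p = softmax(W @ x + b)            # predicted class probabilities
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)       # d(cross-entropy)/dx for softmax
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
# Toy "vibration signal": a sine wave with small measurement noise.
x = np.sin(np.linspace(0.0, 20.0, 64)) + 0.05 * rng.standard_normal(64)
W = rng.standard_normal((3, 64))      # hypothetical 3-class diagnosis model
b = np.zeros(3)
y = int(np.argmax(W @ x + b))         # use the current prediction as the label
x_adv = fgsm_untargeted(x, y, W, b, eps=0.1)
print(np.max(np.abs(x_adv - x)))      # perturbation stays within the eps budget
```

A targeted variant, as studied in the paper, would instead step *down* the gradient of the loss with respect to a chosen target class, pushing the prediction toward that class rather than merely away from the true one.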


