Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids

02/17/2021
by Jiangnan Li, et al.

False data injection attacks (FDIAs) are a critical security issue in power system state estimation. In recent years, machine learning (ML) techniques, especially deep neural networks (DNNs), have been proposed in the literature for FDIA detection. However, such work has not considered the risk of adversarial attacks, which have been shown to undermine the reliability of DNNs in a variety of ML applications. In this paper, we evaluate the vulnerability of DNNs used for FDIA detection to adversarial attacks and study defensive approaches. We analyze several representative adversarial defense mechanisms and demonstrate that they have intrinsic limitations in FDIA detection. We then design an adversarial-resilient DNN detection framework for FDIA that introduces random input padding in both the training and inference phases. Extensive simulations on an IEEE standard power system show that our framework greatly reduces the effectiveness of adversarial attacks while having little impact on the detection performance of the DNNs.
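The key defensive idea in the abstract is random input padding applied at both training and inference time. Below is a minimal sketch of how such a defense might look, assuming a PyTorch implementation; the names (pad_input, FDIADetector), the specific padding scheme (embedding the real measurement vector at a random offset among random filler values), and the network architecture are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def pad_input(x: torch.Tensor, padded_dim: int) -> torch.Tensor:
    """Pad a batch of measurement vectors to `padded_dim` by inserting the
    original values at a randomly chosen offset and filling the remaining
    positions with random noise. A fresh offset is drawn on every call, so
    the mapping from measurement to network input position is randomized
    at both training and inference time (hypothetical scheme)."""
    batch, dim = x.shape
    out = torch.rand(batch, padded_dim, device=x.device)  # random filler
    offset = torch.randint(0, padded_dim - dim + 1, (1,)).item()
    out[:, offset:offset + dim] = x  # embed the real measurements
    return out

class FDIADetector(nn.Module):
    """Toy DNN detector that classifies a measurement vector as benign
    or FDIA after random input padding (illustrative architecture)."""
    def __init__(self, input_dim: int, padded_dim: int, hidden: int = 128):
        super().__init__()
        assert padded_dim >= input_dim
        self.padded_dim = padded_dim
        self.net = nn.Sequential(
            nn.Linear(padded_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # logits: benign vs. FDIA
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(pad_input(x, self.padded_dim))

# Usage: a batch of 32 measurement vectors of length 100, padded to 128
# positions before classification.
detector = FDIADetector(input_dim=100, padded_dim=128)
logits = detector(torch.rand(32, 100))  # shape: (32, 2)
```

The intuition behind this kind of randomization is that an adversarial perturbation crafted against one padding realization has no guarantee of aligning with the realization drawn at detection time; at inference, predictions could additionally be averaged over several random paddings to reduce variance.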


Related research

01/29/2023
Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid
Deep Neural Networks have proven to be highly accurate at a variety of t...

05/05/2021
Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks
From tiny pacemaker chips to aircraft collision avoidance systems, the s...

12/20/2022
Multi-head Uncertainty Inference for Adversarial Attack Detection
Deep neural networks (DNNs) are sensitive and susceptible to tiny pertur...

08/04/2022
On False Data Injection Attack against Building Automation Systems
KNX is one of the most popular protocols for a building automation syste...

07/31/2020
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
When the training data are maliciously tampered, the predictions of the ...

08/31/2021
Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning
Recently published attacks against deep neural networks (DNNs) have stre...

08/12/2023
Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
Deep neural networks (DNNs) have gained prominence in various applicatio...
