On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks

07/14/2023
by Hafsa Bousbiat, et al.

Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as load disaggregation algorithms, are fundamental tools for effective energy management. Despite the success of deep models in load disaggregation, they face various challenges, particularly concerning privacy and security. This paper investigates the sensitivity of prominent deep NILM baselines to adversarial attacks, which have proven to be a significant threat in domains such as computer vision and speech recognition. Adversarial attacks introduce imperceptible noise into the input data with the aim of misleading the neural network into generating erroneous outputs. We employ the Fast Gradient Sign Method (FGSM), a well-known adversarial attack, to perturb the input sequences fed into two commonly used CNN-based NILM baselines: the Sequence-to-Sequence (S2S) and Sequence-to-Point (S2P) models. Our findings provide compelling evidence of the vulnerability of these models, particularly the S2P model, which exhibits an average decline of 20% in F1-score even under small amounts of noise. Such a weakness could have profound implications for energy management systems in the residential and industrial sectors that rely on NILM models.
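As a concrete illustration of the attack studied here, the sketch below applies a single FGSM step, x_adv = x + ε · sign(∇x L(f(x), y)), to a window of aggregate power readings. This is a minimal PyTorch sketch assuming a trained CNN-based NILM model (e.g., a sequence-to-point network); the function name, the MSE loss, and the tensor shapes are illustrative assumptions, not the paper's implementation.

```python
# Minimal FGSM sketch in PyTorch. Assumes `model` is a trained
# CNN-based NILM network (e.g., sequence-to-point) mapping a window of
# aggregate readings x to an appliance-level target y. Names and the
# loss choice are illustrative, not the authors' implementation.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Return x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x), y)  # regression loss, typical for NILM
    loss.backward()
    # Single gradient-sign step: nudge every input sample in the
    # direction that increases the loss, bounded in L-infinity by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Example use: x_adv = fgsm_perturb(net, window, target, epsilon=0.01)
```

Sweeping the perturbation budget epsilon and re-evaluating the model on the perturbed windows is then enough to trace how disaggregation quality (e.g., F1-score) degrades as the noise grows.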

Related research

03/22/2023  State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems
Adversarial attacks can mislead deep learning models to make false predi...

05/19/2022  Defending Against Adversarial Attacks by Energy Storage Facility
Adversarial attacks on data-driven algorithms applied in power system w...

05/27/2023  Adversarial Attack On Yolov5 For Traffic And Road Sign Detection
This paper implements and investigates popular adversarial attacks on th...

09/10/2023  Machine Translation Models Stand Strong in the Face of Adversarial Attacks
Adversarial attacks expose vulnerabilities of deep learning models by in...

08/25/2018  Analysis of adversarial attacks against CNN-based image forgery detectors
With the ubiquitous diffusion of social networks, images are becoming a ...

10/13/2021  Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning
Graph neural network (GNN) models have achieved great success on graph r...

04/06/2022  Dimensionality Expansion and Transfer Learning for Next Generation Energy Management Systems
Electrical management systems (EMS) are playing a central role in enabli...
