Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

02/07/2020
by   Moshe Kravchik, et al.

In recent years, a variety of effective neural network-based methods for anomaly and cyber attack detection in industrial control systems (ICSs) have been demonstrated in the literature. Given their successful implementation and widespread use, there is a need to study adversarial attacks on such detection methods to better protect the systems that depend upon them. The extensive research performed on adversarial attacks on image and malware classification has little relevance to the physical system state prediction domain, to which most ICS attack detection systems belong. Moreover, such detection systems are typically retrained using new data collected from the monitored system, making the threat of adversarial data poisoning significant; however, this threat has not yet been addressed by the research community. In this paper, we present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors. We propose two algorithms for generating poison samples: an interpolation-based algorithm and a back-gradient optimization-based algorithm. We evaluate both on synthetic and real-world ICS data. We demonstrate that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector; however, the ability to poison the detector is limited to a small set of attack types and magnitudes. When the poison-generating algorithms are applied to the popular SWaT dataset, we show that the autoencoder detector trained on the physical system state data is resilient to poisoning in the face of all ten of the relevant attacks in the dataset. This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than detectors in other problem domains, such as malware detection and image processing.
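
To make the setting concrete, below is a minimal, hypothetical sketch (not the authors' code) of the scenario the abstract describes: an autoencoder-style detector that flags samples with high reconstruction error, is periodically retrained online on newly collected data, and faces an interpolation-based poisoning schedule that slowly shifts the training stream toward the attack signal. The surrogate model (scikit-learn's MLPRegressor trained to reconstruct its input), the layer size, the thresholding rule, and the synthetic sensor data are all illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of online-trained autoencoder detection and
# interpolation-based poisoning ("boiling the frog"). All names, sizes,
# and thresholds are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def benign_batch(n=200, d=4):
    # Stand-in for physical sensor readings under normal operation.
    return rng.normal(loc=0.0, scale=0.1, size=(n, d))

# The target attack the adversary wants to hide: a large shift on one sensor.
attack_batch = benign_batch() + np.array([1.0, 0.0, 0.0, 0.0])

# Autoencoder surrogate: a small MLP trained to reconstruct its input.
ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=500, random_state=0)
X0 = benign_batch()
ae.fit(X0, X0)

def detection_rate(model, X, threshold):
    # Fraction of samples whose per-sample reconstruction error exceeds the threshold.
    err = np.mean((model.predict(X) - X) ** 2, axis=1)
    return float(np.mean(err > threshold))

# Threshold set from reconstruction error on clean validation data (assumed rule).
val = benign_batch()
threshold = np.percentile(np.mean((ae.predict(val) - val) ** 2, axis=1), 99)
print("attack detected before poisoning:", detection_rate(ae, attack_batch, threshold))

# Interpolation-based poisoning: over successive online retraining cycles,
# inject samples that interpolate between benign and attack data,
# increasing the interpolation coefficient a little each cycle.
for alpha in np.linspace(0.1, 1.0, 10):
    clean = benign_batch()
    poison = (1.0 - alpha) * clean + alpha * attack_batch
    X_train = np.vstack([clean, poison])
    ae.partial_fit(X_train, X_train)  # online update on the (poisoned) stream

print("attack detected after poisoning:", detection_rate(ae, attack_batch, threshold))
```

Whether the final detection rate actually drops depends on the attack type and magnitude; the paper's central finding is that this kind of poisoning succeeds only for a narrow set of attacks.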

Related research

12/23/2020
Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems
Recently, neural network (NN)-based methods, including autoencoders, hav...

05/22/2021
Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems
The threats faced by cyber-physical systems (CPSs) in critical infrastru...

10/01/2022
Adversarial Attacks on Transformers-Based Malware Detectors
Signature-based malware detectors have proven to be insufficient as even...

09/02/2020
Flow-based detection and proxy-based evasion of encrypted malware C2 traffic
State of the art deep learning techniques are known to be vulnerable to ...

06/17/2023
GlyphNet: Homoglyph domains dataset and detection using attention-based Convolutional Neural Networks
Cyber attacks deceive machines into believing something that does not ex...

12/08/2022
PKDGA: A Partial Knowledge-based Domain Generation Algorithm for Botnets
Domain generation algorithms (DGAs) can be categorized into three types:...

03/19/2022
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
Recently, the problem of robustness of pre-trained language models (PrLM...