Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data

07/29/2020
by Kai Steverson, et al.

There has been considerable and growing interest in applying machine learning to cyber defense. One promising approach is to apply natural language processing techniques to analyze log data for suspicious behavior. A natural question is how robust these systems are to adversarial attacks; defense against sophisticated attacks is of particular concern for cyber defenses. In this paper, we develop a testing framework to evaluate the adversarial robustness of machine learning cyber defenses, particularly those focused on log data. Our framework uses techniques from deep reinforcement learning and adversarial natural language processing. We validate our framework using a publicly available dataset and demonstrate that our adversarial attack does succeed against the target systems, revealing a potential vulnerability. We apply our framework to analyze the influence of different levels of dropout regularization and find that higher dropout levels increase robustness. Moreover, a dropout rate of 90% outperformed the commonly recommended level by a significant margin, which suggests unusually high dropout may be necessary to properly protect against adversarial attacks.
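The abstract's central technical finding concerns how the dropout rate used during training affects adversarial robustness. The sketch below is a minimal, hypothetical illustration, not the authors' implementation: a small PyTorch classifier over tokenized log events with a configurable dropout rate, so that variants such as the conventional 50% and the unusually high 90% discussed above can be compared. The class name, vocabulary size, and architecture are assumptions made only for illustration.

```python
# Minimal, hypothetical sketch (not the paper's code): a log-event classifier
# whose dropout rate can be varied to study its effect on adversarial robustness.
import torch
import torch.nn as nn

class LogClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128,
                 num_classes=2, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(dropout)  # regularization level under study
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded log tokens
        embedded = self.embed(token_ids)
        _, (hidden, _) = self.encoder(embedded)
        # Classify from the final hidden state, with dropout applied before the head
        return self.classifier(self.dropout(hidden[-1]))

# Two variants for comparison: the commonly recommended 50% dropout versus
# the unusually high 90% that the abstract reports as more robust.
model_standard = LogClassifier(dropout=0.5)
model_high = LogClassifier(dropout=0.9)
```

Evaluating each variant against a token-modification attack on the input logs, the kind of adversarial NLP technique the abstract references, would then surface the robustness gap the authors report; that attack loop is omitted here.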


research
11/26/2022

Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning

Recent advances in adversarial machine learning have shown that defenses...
research
06/27/2021

Who is Responsible for Adversarial Defense?

We have seen a surge in research aims toward adversarial attacks and def...
research
01/26/2021

Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers

Recently, physical domain adversarial attacks have drawn significant att...
research
12/09/2019

Hardening Random Forest Cyber Detectors Against Adversarial Attacks

Machine learning algorithms are effective in several applications, but t...
research
12/10/2020

An Empirical Review of Adversarial Defenses

From face recognition systems installed in phones to self-driving cars, ...
research
07/08/2022

Not all broken defenses are equal: The dead angles of adversarial accuracy

Robustness to adversarial attack is typically evaluated with adversarial...
research
03/01/2021

Token-Modification Adversarial Attacks for Natural Language Processing: A Survey

There are now many adversarial attacks for natural language processing s...
