Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS

Cyber attacks are increasing in volume, frequency, and complexity. In response, the security community is looking toward fully automating cyber defense systems using machine learning. However, so far the resultant effects on the coevolutionary dynamics of attackers and defenders have not been examined. In this whitepaper, we hypothesise that increased automation on both sides will accelerate the coevolutionary cycle, raising the question of whether there are any resultant fixed points, and how they are characterised. Working within the threat model of Locked Shields, Europe's largest cyber defense exercise, we study black-box adversarial attacks on network classifiers. Given existing attack capabilities, we question the utility of optimal evasion attack frameworks based on minimal evasion distances. Instead, we suggest a novel reinforcement learning setting that can be used to efficiently generate arbitrary adversarial perturbations. We then argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions, and introduce a temporally extended multi-agent reinforcement learning framework in which the resultant dynamics can be studied. We hypothesise that one plausible fixed point of AI-NIDS may be a scenario where the defense strategy relies heavily on whitelisted feature flow subspaces. Finally, we demonstrate that a continual learning approach is required to study attacker-defender dynamics in temporally extended general-sum games.
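For readers unfamiliar with the terminology, the "minimal evasion distance" framing that the abstract questions is usually posed as a constrained optimisation problem. The following formulation uses standard adversarial-ML notation and is given here for illustration, not taken from the paper:

\min_{\delta} \; \lVert \delta \rVert_p \quad \text{s.t.} \quad f(x + \delta) \neq y, \quad x + \delta \in \mathcal{F},

where f is the (black-box) network classifier, x is a malicious flow-feature vector with true label y, and \mathcal{F} is the set of protocol-valid flows.

The reinforcement learning setting alluded to above instead treats evasion as a sequential decision process: an agent repeatedly perturbs flow features and is rewarded when the black-box classifier is fooled, with no minimality constraint on the perturbation. The sketch below is a minimal Python illustration of that general idea; the environment, the toy classifier, the reward shaping, and all names (FlowEvasionEnv, toy_nids) are assumptions for exposition, not the authors' implementation.

import numpy as np

class FlowEvasionEnv:
    """Black-box evasion as an episodic decision process over flow features."""

    def __init__(self, classifier, x_malicious, step_size=0.05, max_steps=50):
        self.classifier = classifier           # black-box: only queried for labels
        self.x0 = np.asarray(x_malicious, dtype=float)
        self.step_size = step_size
        self.max_steps = max_steps

    def reset(self):
        self.x = self.x0.copy()
        self.t = 0
        return self.x.copy()

    def step(self, action):
        # Action 2*i increases feature i, action 2*i + 1 decreases it.
        idx, direction = divmod(action, 2)
        self.x[idx] += self.step_size * (1.0 if direction == 0 else -1.0)
        self.x = np.clip(self.x, 0.0, 1.0)     # keep features in a valid range
        self.t += 1
        evaded = self.classifier(self.x) == 0  # 0 = "benign" (assumed label convention)
        reward = 1.0 if evaded else -0.01      # small per-query cost, bonus on evasion
        done = evaded or self.t >= self.max_steps
        return self.x.copy(), reward, done

def toy_nids(x, threshold=0.6):
    # Stand-in for an AI-NIDS: flags flows whose mean feature value is high.
    return int(x.mean() > threshold)           # 1 = malicious, 0 = benign

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    env = FlowEvasionEnv(toy_nids, x_malicious=rng.uniform(0.6, 1.0, size=8))
    n_actions = 2 * env.x0.size

    # Random search stands in for a trained RL policy in this sketch.
    state, done = env.reset(), False
    while not done:
        state, reward, done = env.step(int(rng.integers(n_actions)))
    print("evaded:", toy_nids(state) == 0)

A learned policy (e.g. Q-learning or a policy-gradient agent) would replace the random-search loop; the point of the sketch is only the interface, in which each classifier query is one environment step and success means finding any evading perturbation rather than a minimal one.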
