Informing Autonomous Deception Systems with Cyber Expert Performance Data

08/31/2021
by Maxine Major, et al.

The performance of artificial intelligence (AI) algorithms in practice depends on the realism and correctness of the data, models, and feedback (labels or rewards) provided to the algorithm. This paper discusses methods for improving the realism and ecological validity of AI used for autonomous cyber defense. It explores the potential of Inverse Reinforcement Learning (IRL) to gain insight into attacker actions, the utilities attackers assign to those actions, and, ultimately, the decision points at which cyber deception could thwart an attack. The Tularosa study, as one example, provides experimental data on real-world techniques and tools commonly used by attackers, from which core data vectors can be leveraged to inform an autonomous cyber defense system.
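To make the IRL idea concrete, the sketch below shows one feature-expectation-matching step in the spirit of apprenticeship learning via IRL: compare discounted feature counts from logged "expert" attacker episodes against a baseline policy, and read off which features the attacker appears to value. All state names, features, and trajectories are illustrative assumptions for this sketch, not data from the Tularosa study or the paper's actual model.

```python
# Hypothetical sketch: one feature-expectation-matching step for IRL
# on toy attacker trajectories. States and features are invented here
# purely for illustration.

GAMMA = 0.9  # discount factor

# Each state maps to a feature vector:
# (touched_decoy, touched_real_host, exfiltrated_data)
FEATURES = {
    "recon":     (0.0, 1.0, 0.0),
    "decoy":     (1.0, 0.0, 0.0),
    "real_host": (0.0, 1.0, 0.0),
    "exfil":     (0.0, 0.0, 1.0),
}

def feature_expectations(trajectories):
    """Average discounted feature counts over a set of trajectories."""
    k = len(next(iter(FEATURES.values())))
    total = [0.0] * k
    for traj in trajectories:
        for t, state in enumerate(traj):
            phi = FEATURES[state]
            for i in range(k):
                total[i] += (GAMMA ** t) * phi[i]
    return [x / len(trajectories) for x in total]

# Logged "expert" attacker episodes (illustrative).
expert_trajs = [
    ["recon", "real_host", "exfil"],
    ["recon", "real_host", "real_host", "exfil"],
]
# A baseline policy that wanders into the decoy.
baseline_trajs = [
    ["recon", "decoy", "decoy"],
    ["recon", "decoy", "real_host"],
]

mu_expert = feature_expectations(expert_trajs)
mu_base = feature_expectations(baseline_trajs)

# First projection step: reward weights point from the baseline's
# behaviour toward the expert's. A positive weight marks a feature
# the attacker seems to seek; a negative weight marks one avoided.
w = [e - b for e, b in zip(mu_expert, mu_base)]
print(w)
```

In this toy run, the inferred weight on `touched_decoy` comes out negative while the weights on real hosts and exfiltration are positive, which is the kind of signal a defender could use to place deception at decision points the attacker actually values.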

