Adversarial Resilience Learning - Towards Systemic Vulnerability Analysis for Large and Complex Systems

11/15/2018
by Lars Fischer, et al.

This paper introduces Adversarial Resilience Learning (ARL), a concept to model, train, and analyze artificial neural networks as representations of competitive agents in highly complex systems. In our examples, the agents typically take the roles of attackers, which aim to worsen defined performance indicators of the system, or defenders, which aim to improve or maintain them. Our concept provides adaptive, repeatable, actor-based testing with a chance of detecting previously unknown attack vectors. We provide the constitutive nomenclature of ARL and, based on it, the description of experimental setups and results of a preliminary implementation of ARL in simulated power systems.
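The abstract only sketches the competitive-agent setup in prose, but the idea can be illustrated with a minimal, self-contained toy sketch. Everything in it is a hypothetical stand-in chosen for illustration, not the authors' implementation: `ToyGridEnv` replaces the simulated power system with a single scalar performance indicator, and simple epsilon-greedy value learners replace the neural-network agents. One agent acts as the attacker trying to degrade the indicator, the other as the defender trying to keep it high.

```python
import random


class ToyGridEnv:
    """Hypothetical stand-in for a simulated power system.

    A single state variable represents a performance indicator
    (e.g. a normalized voltage-band compliance score in [0, 1]).
    """

    def __init__(self):
        self.performance = 1.0

    def step(self, attack_action, defence_action):
        # The attacker degrades the indicator, the defender restores it;
        # the noise term stands in for the remaining system dynamics.
        self.performance += defence_action - attack_action + random.gauss(0, 0.01)
        self.performance = max(0.0, min(1.0, self.performance))
        return self.performance


class EpsilonGreedyAgent:
    """Placeholder for the neural-network agents described in the paper."""

    def __init__(self, actions):
        self.actions = actions                      # discrete control magnitudes
        self.values = {a: 0.0 for a in actions}     # running value estimate per action

    def act(self, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(self.actions)      # explore
        return max(self.values, key=self.values.get)  # exploit best-known action

    def learn(self, action, reward, lr=0.1):
        # Incremental update of the action-value estimate.
        self.values[action] += lr * (reward - self.values[action])


env = ToyGridEnv()
attacker = EpsilonGreedyAgent([0.0, 0.05, 0.1])
defender = EpsilonGreedyAgent([0.0, 0.05, 0.1])

for episode in range(200):
    a_act, d_act = attacker.act(), defender.act()
    performance = env.step(a_act, d_act)
    # Opposing objectives: the defender is rewarded by a high indicator,
    # the attacker by its degradation.
    attacker.learn(a_act, -performance)
    defender.learn(d_act, performance)

print(f"final performance indicator: {env.performance:.3f}")
```

In the paper's setting the environment would be a full power-system simulation and the agents would be trained neural networks, but the loop structure shown here (alternating attacker and defender actions on a shared system, scored against the same performance indicator with opposite signs) captures the competitive arrangement the abstract describes.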
