System Component-Level Self-Adaptations for Security via Bayesian Games

by   Mingyue Zhang, et al.
Peking University

Security attacks present unique challenges to self-adaptive system design due to the adversarial nature of the environment. However, modeling the system as a single player, as done in prior work in the security domain, is insufficient for systems under partial compromise and for the design of fine-grained defensive strategies in which the uncompromised, autonomous parts of the system cooperate to mitigate the impact of attacks. To deal with such issues, we propose a new self-adaptive framework incorporating Bayesian game theory that models the defender (i.e., the system) at the granularity of components in the system architecture. The system architecture model is translated into a Bayesian multi-player game, where each component is modeled as an independent player while security attacks are encoded as variant types for the components. The defensive strategy for the system is dynamically computed by solving for a pure equilibrium that achieves the best possible system utility, improving the resiliency of the system against security attacks.






I Introduction

A self-adaptive system is designed to be capable of modifying its structure and behavior at run time in response to changes in its environment and in the system itself [6, 7]. Achieving security in the presence of uncertainty is particularly challenging due to the adversarial nature of the environment [10, 8]. Various game-theoretic approaches have been explored in the security domain for modeling the interactions between the system and attackers as a game between a group of players (i.e., the system and multiple attackers, each as one player) and for computing Nash equilibrium strategies that minimize the impact of possible attacks [23, 9, 11, 14]. These methods can be used to (1) model adversarial behaviors by malicious attackers [11, 19], and (2) design reliable defenses for the system by using underlying incentive mechanisms to balance perceived risks in a mathematically grounded manner [9, 21]. Prior works in security relying on game theory [23, 9, 11, 14] have treated the system as a single independent player (i.e., the defender). Abstracting the entire system in this way (i.e., monolithic modeling) suits the design of defense strategies at the system level. However, one or several attackers usually attack the system by exploiting vulnerabilities spread over different parts of it. Monolithic modeling is therefore insufficient for capturing situations where only part of the system is compromised while the remaining parts, with their autonomy and capabilities, can mitigate the impact of on-going attacks and compensate for security losses.

In this work, we argue that modeling the defender under security attacks at the granularity of components is more expressive than a coarse one-player abstraction of the complex system, in that it allows the design of fine-grained defensive strategies for a system under partial compromise. Component-level modeling is a trade-off between the level of detail and the level of abstraction that appropriately portrays the aforementioned attack situations. Furthermore, instead of modeling the attackers themselves as separate players, we advocate focusing on the system model and encoding the on-going attacks on a component as deviations in that component's behavior.

To this end, we pioneer a new self-adaptive framework that leverages Bayesian games at the granularity of components at the system architecture level. Specifically, each essential component is modeled as a separate player. Under attack, one or more components with vulnerabilities might be exploited, with some probability, by the attackers to deliberately perform harmful actions (i.e., turning into a malicious type). The various security attacks these components might be subject to are encoded as different types for the players, the Bayesian way of expressing uncertainty. The remaining components can form a coalition to fight against the potentially uncooperative components. The architecture model of the system and the security attacks on its components are translated into a Bayesian game structure. The adaptive defensive strategy for the system is then dynamically computed by solving for a pure equilibrium, achieving the best possible system utility under all assignments of components to their possible types (i.e., in the presence of security attacks).

II Bayesian Game Extended MAPE-K Loop

We propose a new self-adaptive framework incorporating Bayesian games. Adaptation behaviors build on the Nash equilibrium responses to unexpected attacks and are achieved by elaborating the widely adopted MAPE-K (Monitoring, Analysis, Planning, Execution, Knowledge) loop [13, 24, 2], shown in Figure 1. Concretely, the Knowledge Base stores the information necessary for self-adaptation, including (1) the component-and-connector model of the managed subsystem and the action space of each component, (2) the system objectives, usually defined as quality attributes quantified by a utility, and (3) the component vulnerabilities, with potential behavior deviations, that can be exploited by attacks. The Monitor gathers and synthesizes information about on-going attacks through sensors and saves it in the Knowledge Base. The Analyzer, based on the knowledge about component vulnerabilities and system objectives, checks whether certain components are under attack and with what probabilities, identifies the potential deviated malicious actions, and estimates the rewards for the attacks.

The Planner generates one or a set of adaptation actions by automatically solving the Bayesian game constructed from the potential attacks reported by the Analyzer, the architectural model of the managed subsystem, and the system objectives from the Knowledge Base. The adaptations prescribed by the equilibrium are then enacted by the Executor on the managed subsystem through actuators.

Fig. 1: Self-Adaptive Framework.
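The loop described above can be sketched in code. This is a minimal illustration, not the paper's implementation: all names (`KnowledgeBase`, `monitor`, `analyze`, `plan`, `execute`) and the placeholder compromise probability are assumptions introduced for exposition.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeBase:
    components: list       # component-and-connector model of the managed subsystem
    actions: dict          # action space per component
    vulnerabilities: dict  # component -> potential behavior deviations
    utility: object = None # system objectives quantified as a utility

def monitor(sensors, kb):
    """Gather on-going attack information through sensors and store it."""
    kb.observations = [sense() for sense in sensors]
    return kb.observations

def analyze(kb, observations):
    """For each vulnerable component, estimate the compromise probability
    and the deviated malicious actions (probability is a fixed placeholder
    here; a real analyzer would derive it from the observations)."""
    report = {}
    for c in kb.components:
        if c in kb.vulnerabilities:
            report[c] = {"p_compromised": 0.3,
                         "malicious_actions": kb.vulnerabilities[c]}
    return report

def plan(kb, report, solve_bayesian_game):
    """Translate architecture + attack report into a Bayesian game and
    return the equilibrium strategy for the normal-type components."""
    return solve_bayesian_game(kb, report)

def execute(actuators, strategy):
    """Enact the chosen adaptations on the managed subsystem."""
    for component, action in strategy.items():
        actuators[component](action)
```

The four functions correspond to the Monitor, Analyzer, Planner, and Executor stages, all sharing the Knowledge Base.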

III Bayesian Game via Model Transformation

We define the system under attack and transform the system architecture, together with the on-going attacks, into a component-based Bayesian game. Solving the game for an equilibrium yields the adaptation strategy.

Component-based System: A system component is an independent and replaceable part of a system that fulfills a clear function in the context of a well-defined architecture. Components form architectural structures that affect different quality attributes. A system is defined as a tuple (C, A, Q), where C is a set of components; A is the set of joint actions, with A_i the actions available to component c_i ∈ C; and Q is a set of quality attributes the system is interested in. Each component tries to react so as to maximize the system utility. Naturally, a system under normal operation can be viewed as a cooperative game dealing with how coalitions interact [4, 3].

Modeling Utility as Payoff: The payoff among the players is allocated from the utility over the quality attributes. It is straightforward for developers to design a system-level utility. However, due to the different roles of the components and the complex relationships between them, manually designing an appropriate component-level payoff function is complicated and sometimes intractable. The Shapley value, a solution for fairly distributing both gains and costs to several players working in coalition, proportional to their marginal contributions [22, 20, 15], is used to automatically decompose the system-level utility into component-level payoffs.
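The decomposition can be illustrated with the textbook definition of the Shapley value: each component's payoff is its average marginal contribution to the system utility over all orderings of the coalition. The example utility below (a load balancer plus two servers) is an assumption for illustration, not a utility from the paper.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley value: each player's average marginal contribution
    over all orderings in which the grand coalition can be assembled.
    `utility` maps a coalition (supporting membership tests) to the
    system-level utility it achieves."""
    values = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            values[p] += utility(coalition | {p}) - utility(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: v / n_fact for p, v in values.items()}

# Illustrative system utility (an assumption): servers "s1" and "s2" each
# contribute one unit of utility, but only if the load balancer "lb" is
# present to route traffic to them.
def system_utility(coalition):
    return (1 if "lb" in coalition else 0) * \
           sum(1 for c in coalition if c in ("s1", "s2"))

payoffs = shapley_values(["lb", "s1", "s2"], system_utility)
```

The allocation is efficient by construction: the component-level payoffs always sum to the system-level utility of the grand coalition, and symmetric components (here the two servers) receive equal shares.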

Component-based Attacks: Instead of modeling one or several attackers with possibly complex behaviors over different parts of the system, we model the on-going attacks the system endures at the component level, since the vulnerabilities of components, as well as their potential behavior deviations, are comparatively easy to observe and analyze. The security attacks on the system are formally defined as a tuple (C_m, A_m, P, R), where C_m ⊆ C is the set of components affected by the attacks; A_m denotes the set of joint actions controlled by the attacks on compromised components; P is a set of probabilities, with p_i ∈ P the probability of component c_i being successfully compromised; and R is the reward for the attacks.
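A straightforward encoding of this tuple is a record with a validity check relating the compromised components to their compromise probabilities. The class and field names below are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackModel:
    """Attack tuple (C_m, A_m, P, R) at the component level."""
    compromised: tuple        # C_m: components affected by the attacks
    malicious_actions: dict   # A_m: component -> attacker-controlled actions
    p_compromise: dict        # P: component -> P(successfully compromised)
    reward: dict              # R: harmful action -> attacker reward

    def validate(self):
        # every compromised component needs a compromise probability in [0, 1]
        assert set(self.compromised) == set(self.p_compromise)
        assert all(0.0 <= p <= 1.0 for p in self.p_compromise.values())
        return True
```

For example, a server whose vulnerability lets an attacker drop requests with probability 0.3 would be modeled as one compromised component with a single malicious action and its associated reward.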

Translation into a Bayesian game: With the system defined at the component level and the attacks defined above, a system under security attacks is converted into a Bayesian game (N, A, Θ, u, p), where N is the set of players; A is the set of actions; Θ_i ∈ Θ is the set of types for each player i; u_i ∈ u is the payoff function of each player i, determined by the types of all players and the actions they choose; and p is a probability distribution over types.

The game translation follows five steps: 1) each component is modeled as a separate, independent player; 2) components potentially affected by attacks are associated with two types (i.e., normal and malicious), while the remaining components have only the normal type; 3) the probability distribution of a player over its two types follows the probability of that component being successfully compromised; 4) the action space of a player under security attacks is the union of its normal actions and the malicious actions controlled by the attacks; 5) players of normal type are allocated payoff from the system utility by the Shapley value method, while components of malicious type performing harmful actions are assigned the utility the on-going attacks obtain by achieving their own goals. The constructed game is fed into a game solver (i.e., Gambit [18]) to find Nash equilibria, which, in essence, are the best reactions, serving as the system's adaptation responses to potential attacks.
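The equilibrium computation can be sketched for a toy instance. This is a brute-force enumeration of pure-strategy Bayesian Nash equilibria, not the paper's Gambit-based solver, and the two-component example and all payoff numbers are assumptions: a load balancer "lb" (always normal) and a server "s" that is malicious with probability 0.5.

```python
from itertools import product

def pure_bayesian_equilibria(actions, types, type_prob, payoff):
    """Enumerate pure-strategy Bayesian Nash equilibria. A pure strategy
    assigns one action to each type of a player; a profile is an
    equilibrium if no player can raise its expected payoff by switching
    to another type-contingent strategy."""
    players = list(actions)
    strat_space = {p: list(product(actions[p], repeat=len(types[p])))
                   for p in players}

    def expected(player, profile):
        total = 0.0
        for idx in product(*(range(len(types[q])) for q in players)):
            prob = 1.0
            for q, ti in zip(players, idx):
                prob *= type_prob[q][ti]
            tvec = {q: types[q][ti] for q, ti in zip(players, idx)}
            avec = {q: profile[q][ti] for q, ti in zip(players, idx)}
            total += prob * payoff(player, tvec, avec)
        return total

    equilibria = []
    for combo in product(*(strat_space[p] for p in players)):
        profile = dict(zip(players, combo))
        if all(expected(p, profile) >= expected(p, {**profile, p: alt}) - 1e-12
               for p in players for alt in strat_space[p]):
            equilibria.append(profile)
    return equilibria

# Illustrative instance (all payoffs are assumptions).
actions   = {"lb": ["route", "isolate"], "s": ["serve", "attack"]}
types     = {"lb": ["normal"], "s": ["normal", "malicious"]}
type_prob = {"lb": [1.0], "s": [0.5, 0.5]}

def payoff(player, tvec, avec):
    routed = avec["lb"] == "route"
    if player == "lb":                      # system-side (defender) payoff
        if not routed:
            return 0.0
        return -2.0 if avec["s"] == "attack" else 1.0
    if avec["s"] == "attack":               # server-side payoff
        if tvec["s"] == "normal":
            return -1.0                     # a normal server never benefits
        return 2.0 if routed else 0.0       # attack reward requires traffic
    return 1.0 if (routed and tvec["s"] == "normal") else 0.0

eqs = pure_bayesian_equilibria(actions, types, type_prob, payoff)
```

With a 50% compromise probability, the unique pure equilibrium has the load balancer isolating the server: the expected loss from routing to a possibly malicious server outweighs the gain from serving traffic, which is exactly the kind of component-level defensive reaction the framework computes.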

IV Conclusion and Future Work

In this paper, we have proposed a new framework for self-adaptive systems that adopts Bayesian game theory and models the system under security attacks as a multi-player game at the system architecture level. Its applicability and superiority have been demonstrated in a security web scenario with load balancing and in a case study on an inter-domain routing application. In the future, we plan to evaluate our framework on a realistic industrial control system with adaptive behaviors [16, 17, 5, 1], constructing the game in an automated way and supporting an architecture description interchange language such as Acme [12].


This work is supported by the National Natural Science Foundation of China under Grants No. 61620106007 and No. 61751210.


  • [1] S. Adepu, F. Brasser, L. Garcia, M. Rodler, L. Davi, A. Sadeghi, and S. Zonouz (2020) Control behavior integrity for distributed cyber-physical systems. In 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS), pp. 30–40. Cited by: §IV.
  • [2] V. Braberman, N. D’Ippolito, J. Kramer, D. Sykes, and S. Uchitel (2015) Morph: a reference architecture for configuration and behaviour self-adaptation. In Proceedings of the 1st International Workshop on Control Theory for Software Engineering, pp. 9–16. Cited by: §II.
  • [3] J. Cámara, D. Garlan, G. A. Moreno, and B. R. Schmerl (2013) Analyzing self-adaptation via model checking of stochastic games. In Software Engineering for Self-Adaptive Systems III. Assurances - International Seminar, Revised Selected and Invited Papers, Lecture Notes in Computer Science, Vol. 9640, pp. 154–187. Cited by: §III.
  • [4] J. Cámara, G. A. Moreno, D. Garlan, and B. R. Schmerl (2016) Analyzing latency-aware self-adaptation using stochastic games and simulations. ACM Trans. Auton. Adapt. Syst. 10 (4), pp. 23:1–23:28. Cited by: §III.
  • [5] Y. Chen, C. M. Poskitt, J. Sun, S. Adepu, and F. Zhang (2019) Learning-guided network fuzzing for testing cyber-physical system defences. In 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 962–973. Cited by: §IV.
  • [6] B. H. C. Cheng and et al. (2009) Software engineering for self-adaptive systems: A research roadmap. In Software Engineering for Self-Adaptive Systems [outcome of a Dagstuhl Seminar], pp. 1–26. Cited by: §I.
  • [7] R. de Lemos and et al. (2010) Software engineering for self-adaptive systems: A second research roadmap. In Software Engineering for Self-Adaptive Systems II - International Seminar, Dagstuhl Castle, Germany, October 24-29, 2010 Revised Selected and Invited Papers, pp. 1–32. Cited by: §I.
  • [8] P. T. Devanbu and S. G. Stubblebine (2000) Software engineering for security: a roadmap. In 22nd International Conference on on Software Engineering, Future of Software Engineering Track, ICSE 2000, Limerick Ireland, June 4-11, 2000, pp. 227–239. Cited by: §I.
  • [9] C. T. Do, N. H. Tran, C. S. Hong, C. A. Kamhoua, K. A. Kwiat, E. Blasch, S. Ren, N. Pissinou, and S. S. Iyengar (2017) Game theory for cyber security and privacy. ACM Comput. Surv. 50 (2), pp. 30:1–30:37. External Links: Link, Document Cited by: §I.
  • [10] A. M. Elkhodary and J. Whittle (2007) A survey of approaches to adaptive application security. In 2007 ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems, SEAMS 2007, Minneapolis Minnesota, USA, May 20-26, 2007, pp. 16. Cited by: §I.
  • [11] S. Farhang and J. Grossklags (2016) FlipLeakage: A game-theoretic approach to protect against stealthy attackers in the presence of information leakage. In Decision and Game Theory for Security - 7th International Conference, GameSec 2016, New York, NY, USA, November 2-4, 2016, Proceedings, pp. 195–214. External Links: Link, Document Cited by: §I.
  • [12] D. Garlan, R. T. Monroe, and D. Wile (1997) Acme: an architecture description interchange language. In Proceedings of the 1997 conference of the Centre for Advanced Studies on Collaborative Research, November 10-13, 1997, Toronto, Ontario, Canada, pp. 7. External Links: Link Cited by: §IV.
  • [13] J. O. Kephart and D. M. Chess (2003) The vision of autonomic computing. IEEE Computer 36 (1), pp. 41–50. External Links: Link, Document Cited by: §II.
  • [14] C. Kinneer, R. Wagner, F. Fang, C. Le Goues, and D. Garlan (2019) Modeling observability in adaptive systems to defend against advanced persistent threats. In Proceedings of the 17th ACM-IEEE International Conference on Formal Methods and Models for System Design, MEMOCODE 2019, La Jolla, CA, USA, October 9-11, 2019, pp. 10:1–10:11. Cited by: §I.
  • [15] C. Levinger, N. Hazon, and A. Azaria (2020) Computing the shapley value for ride-sharing and routing games. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1895–1897. Cited by: §III.
  • [16] A. P. Mathur and N. O. Tippenhauer (2016-04) SWaT: A water treatment testbed for research and training on ICS security. In 2016 International Workshop on Cyber-physical Systems for Smart Water Networks (CySWater), pp. 31–36. Cited by: §IV.
  • [17] A. Maw, S. Adepu, and A. Mathur (2019) ICS-BlockOpS: blockchain for operational data security in industrial control system. Pervasive and Mobile Computing 59, pp. 101048. Cited by: §IV.
  • [18] R. D. McKelvey, A. M. McLennan, and T. L. Turocy (2018-02) Gambit: software tools for game theory, version 16.0.1. Cited by: §III.
  • [19] S. Moothedath, D. Sahabandu, J. Allen, A. Clark, L. Bushnell, W. Lee, and R. Poovendran (2020) A game-theoretic approach for dynamic information flow tracking to detect multi-stage advanced persistent threats. IEEE Transactions on Automatic Control. Cited by: §I.
  • [20] M. J. Osborne and A. Rubinstein (1994) A course in game theory. MIT Press Books 1. Cited by: §III.
  • [21] J. Pawlick, E. Colbert, and Q. Zhu (2019) A game-theoretic taxonomy and survey of defensive deception for cybersecurity and privacy. ACM Comput. Surv. 52 (4), pp. 82:1–82:28. External Links: Document Cited by: §I.
  • [22] L. S. Shapley (1953) A value for n-person games. In Contributions to the Theory of Games vol. 2. Cited by: §III.
  • [23] M. Tambe (2012) Security and game theory - algorithms, deployed systems, lessons learned. Cambridge University Press. External Links: ISBN 978-1-10-709642-4 Cited by: §I.
  • [24] D. Weyns, M. U. Iftikhar, and J. Söderlund (2013) Do external feedback loops improve the design of self-adaptive systems? a controlled experiment. In Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS 2013, San Francisco, CA, USA, May 20-21, 2013, pp. 3–12. Cited by: §II.