Reinforcement Learning in FlipIt

by Laura Greige et al.
Boston University

Reinforcement learning has shown much success in games such as chess, backgammon and Go. However, in most of these games, agents have full knowledge of the environment at all times. In this paper, we describe a deep learning model that successfully optimizes its score using reinforcement learning in a game with incomplete and imperfect information. We apply our model to FlipIt, a two-player game in which both players, the attacker and the defender, compete for ownership of a shared resource and only receive information on the current state (such as the current owner of the resource or the time since the opponent last moved) upon making a move. Our model is a deep neural network combined with Q-learning, trained to maximize the defender's time of ownership of the resource. Despite the imperfect observations, our model successfully learns an optimal cost-effective counter-strategy and demonstrates the advantages of using deep reinforcement learning in game-theoretic scenarios. Our results show that it outperforms the Greedy strategy against distributions such as the periodic and exponential distributions without any prior knowledge of the opponent's strategy, and we generalize the model to n-player games.





1 Introduction

Game theory has been commonly used for modeling and solving security problems. When payoff matrices are known to all parties, one can solve the game by computing its Nash equilibria and playing one of the corresponding mixed strategies to maximize one's gain (or, symmetrically, minimize one's loss). However, the assumption that the payoff is fully known to all players is often too strong to effectively model the situations that arise in practice. It is therefore useful to consider the case of incomplete information and to apply reinforcement learning methods, which are better suited to these settings. In particular, we examine the two-player game FlipIt [24], where an attacker and a defender compete over a shared resource and agents deal with incomplete observability. In this work, we train our model to estimate the opponent's strategy and to learn the best response to that strategy. Since these estimates depend heavily on the information the model receives throughout the game, the challenge comes from the incomplete and imperfect information received on the state of the game. Our goal is for the adaptive agents to adjust their strategies based on observations of their opponents' moves. Good FlipIt strategies will help each player implement their optimal cost-effective schedule.

In this paper, we begin by presenting the motivation behind FlipIt and the previous studies related to the game. We provide a description of the game framework, as well as its variants, and describe our approach to solving the game. We present the results obtained by our model against basic renewal strategies and compare its performance to the Greedy strategy. Then, we generalize the game to an n-player game and analyze the results. The last section of this paper is an overview of the future work we would like to explore.

2 Motivation

The principal motivation for the game FlipIt is the rise of Advanced Persistent Threats (APTs) [4, 18]. APTs are stealthy and continuous computer hacking processes which can compromise a system or a network's security and remain undetected for an extended period of time. Such threats include host takeover and compromised security keys. In host takeover, the goal of the attacker is to compromise the device, while the goal of the defender is to keep the device clean through software re-installation or other defensive precautions. Our goal is to find and implement a cost-effective schedule through reinforcement learning; in other words, we would like to learn how often the defender should clean the machines and when the attacker will launch its next attack. For compromised security keys, the problem can similarly be formulated as finding a cost-effective schedule. These applications can be modeled by the two-player game FlipIt, in which the players, an attacker and a defender, vie for control of a shared resource that both wish to control. The resource could be a computing device or a password, for example, depending on which APT is being modeled.

3 Related Work

Although game-theoretic models have been widely applied to cybersecurity problems [9, 1, 6], studies have mainly focused on one-shot attacks of known types. FlipIt, first introduced by van Dijk et al. [24], is the first model that characterizes the persistent and stealthy properties of APTs. In their paper, they analyze multiple instances of the game with non-adaptive strategies and show the dominance of certain distributions against stealthy opponents. They also show that the Greedy strategy dominates distributions such as the periodic and exponential distributions, but is not necessarily optimal. Different variants and extensions of the game have also been analyzed; these include games with additional "insider" players trading information to the attacker for monetary gains [7, 5], games with multiple resources [10] and games with different move types [17]. In all these variants, only non-adaptive strategies have been considered, which limits the analysis of the game framework. Laszka et al. [11, 12] were the only ones to study adaptive strategies in FlipIt, but they did so in a variant of the game where the defender's moves are non-stealthy and non-instantaneous.

Machine Learning (ML) has been commonly applied to cybersecurity problems such as fraud and malware detection [2, 13], data-privacy protection [27] and cyber-physical attacks [3]. It has enabled attacking strategies that can overcome defensive ones and, vice versa, the development of better and more robust defensive strategies that prevent or minimize the impact of such attacks. Reinforcement Learning (RL) is a branch of ML in which an agent interacts with an environment and learns from its own past experience through exploration and exploitation, with limited or no prior knowledge of the environment. RL and the development of deep learning have led to the introduction of Deep Q-Networks (DQNs) to solve larger and more complex games. DQNs were first introduced by Mnih et al. [14] and have since been commonly used for solving games such as backgammon, Go and Atari games [21, 19, 15]. They combine deep learning and Q-learning [22, 26] and are trained to learn the best action to perform in a particular state, in terms of producing the maximum future cumulative reward. Hence, with the ability to model autonomous agents capable of making optimal sequential decisions, DQNs are a natural fit for an adversarial environment such as FlipIt. Our paper extends the research on stealthy security games with the introduction of adaptive DQN-based strategies allowing agents to learn, in real time, a cost-effective schedule for defensive precautions in n-player FlipIt.

4 Game Environment

4.1 Framework

We first introduce the players (i.e. agents) of the game. The agents take control of the resource by moving, or flipping. Flipping is the only move available, and each agent can flip at any time throughout the game. Agents pay a move cost for each flip and are rewarded for the time they possess the resource. An interesting aspect of FlipIt is that, contrary to games like backgammon and Go, agents do not take turns moving. A move can be made at any time throughout the game, so an agent's score highly depends on its opponent's moves. In addition, players have incomplete information about the game, as they only find out about its current state (i.e. previous moves and previous owners of the resource) when they flip. Each agent has several attributes, such as the strategy it adopts, the value rewarded per iteration and the cost per flip. For our purposes we use the same reward and flip cost for all agents, but our environment is easily generalized to different rewards and costs for the attacker and the defender. Agents also have some memory of the previous moves made by the players in the game. Depending on the agent, different types of feedback may be obtained during the game. Non-adaptive agents obtain no feedback when they move. Adaptive agents either find out about their opponent's last move (LM) or about the full history (FH) of flips made so far by all players.

The game evolves in either discrete or continuous time. In discrete time, an agent decides at each iteration whether or not to flip, depending on its strategy. In continuous time, an agent instead decides on the next time to flip. Ties are more likely in discrete mode; they are broken by giving priority to the defender. In continuous mode ties are less likely, but when they do happen the same rule applies. Different stopping conditions have also been implemented. In finite games, there is a predetermined end time, which may be known by both players. If the end time is known, the game can be solved by backward induction, which suggests that the optimal play is to never flip, in the hope of starting the game in control of the resource. Another game type of interest is the infinite game, or game with unknown stopping time. To simulate infinite games in discrete time, we select an end time from a geometric distribution with a user-defined parameter p; in continuous time, we use the exponential distribution instead, as it is the continuous analog of the geometric distribution. In both cases, the sampled end time is used as a stopping time, which allows us to simulate infinite games throughout the experimental process. As p approaches 0, the randomly stopped game comes closer to simulating an infinite game while maintaining its time invariance (memorylessness).

In the following sections, we consider the defender to be the initial owner of the resource, as is usually the case with security keys and other devices. All results presented were obtained in discrete and finite mode.
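The discrete-time setting described above can be sketched as a short simulation loop. This is only an illustrative reimplementation of the rules stated in this section (geometric stopping with parameter p, defender tie priority, defender as initial owner); the function name and the strategy-callable interface are our own, not the authors' code.

```python
import random

def play_flipit(defender, attacker, reward=1.0, flip_cost=4.0, p_stop=0.01, seed=0):
    """Simulate one discrete-time FlipIt game between two strategy callables.

    Each strategy is called with the iteration number and returns True to flip.
    The end time is drawn from a geometric distribution with parameter p_stop,
    which approximates an infinite (memoryless) game as p_stop -> 0.
    """
    rng = random.Random(seed)
    end = 1
    while rng.random() > p_stop:       # Geometric(p_stop) sample
        end += 1
    owner = 0                          # 0 = defender, 1 = attacker
    scores = [0.0, 0.0]                # the defender starts as owner
    for t in range(end):
        scores[owner] += reward        # reward the current owner this iteration
        d_flip, a_flip = defender(t), attacker(t)
        if d_flip:
            scores[0] -= flip_cost
        if a_flip:
            scores[1] -= flip_cost
        # Ties are broken in favour of the defender.
        if d_flip:
            owner = 0
        elif a_flip:
            owner = 1
    return scores
```

With both players flipping on the same period, the defender's tie priority means the attacker never gains ownership and only pays flip costs.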

4.2 Extensions

So far, we have treated FlipIt as a two-player game. However, we extended it to an n-player game by considering multiple attackers, with or without a defender. A few other extensions of the game have also been discussed. In the current version of the game, the only possible action is to flip; an interesting variation would be the addition of a lower-cost action, such as checking the state of the game instead of flipping. This would allow agents to obtain useful feedback on the current state of the game without moving and to plan their future moves accordingly. Another possibility would be to analyze the game with probabilistic moves; for example, an action could succeed only with some pre-defined probability p, and fail with probability 1 − p. Finally, in the current version of the game, agents can flip as many times as they wish. Adding an upper bound on the budget and limiting the number of flips allowed per player would force players to flip efficiently throughout the game.

5 Renewal Games

5.1 Game Strategies

Now that we have defined our framework, we formulate a few strategies agents can use to maximize their expected rewards. There are at least two properties we desire from a good strategy. First, we expect the strategy to have some degree of unpredictability: a predictable strategy is susceptible to exploitation, in that a duplicitous opponent can strategically time its flips around the agent's predictable ones. Second, we expect a good strategy to space its flips efficiently. Intuitively, near-simultaneous flips waste valuable resources without providing proportionate rewards; such a strategy exhibits poor spacing efficiency. Unfortunately, these two properties are generally opposed to each other, and improving one usually requires a loss in the other. In the following sections we describe the strategies studied and the properties they exhibit.

5.2 Renewal Strategies

A renewal process is a process which selects a renewal time from a probability distribution and restarts at each renewal. For our purposes, a renewal is a flip, and our renewal process is 1-dimensional, with only a time dimension. Strictly speaking, renewal intervals should be independent of previous selections; however, for more advanced strategies we can extend the concept to probability distributions conditioned on other flip times or other game factors. With this extended formulation of renewal strategies, we can represent any strategy.

Now we reexamine our desirable properties in the context of renewal strategies. A renewal strategy with the desired property of unpredictability should have high variance, while a strategy with efficient spacing should have low variance. However, high or low variance alone is not sufficient to achieve either property, since high-variance strategies can still be quite predictable (a bimodal strategy, for example). These variance requirements nonetheless demonstrate the necessary and unavoidable trade-off between the two desirable properties. There are two basic strategy types, the random exponential strategy and the periodic strategy, that we examine in this paper, as they present different degrees of predictability and spacing efficiency. Table 1 shows each strategy's parameters, renewal distribution, predictability, spacing and expected time between two flips.

                       Exponential                            Periodic
Parameters             rate λ                                 period δ
Renewal distribution   f(t) = λe^{−λt}                        fixed interval δ
Predictability         memoryless, perfectly unpredictable    deterministic, perfectly predictable
Spacing                random and inefficient                 perfectly efficient
Expected time between flips   1/λ                             δ
Table 1: Basic game strategies and their properties
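The two basic renewal strategies of Table 1 can be written as small flip-time generators. This is a sketch under the table's definitions; the function names, the `horizon` cut-off and the `phase` argument are illustrative choices of ours.

```python
import random

def periodic_flips(period, horizon, phase=0.0):
    """Periodic strategy: deterministic and perfectly predictable,
    with perfectly efficient (constant) spacing."""
    times, t = [], phase
    while t < horizon:
        times.append(t)
        t += period
    return times

def exponential_flips(rate, horizon, rng=None):
    """Exponential renewal strategy: memoryless and perfectly unpredictable,
    with random (inefficient) spacing and mean gap 1/rate."""
    rng = rng or random.Random(0)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t >= horizon:
            return times
        times.append(t)
```

A periodic agent with period 10 flips exactly once per 10 time units, while an exponential agent with rate 0.1 flips about once per 10 time units on average, with highly variable gaps.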

6 Methodology

6.1 Markov Decision Process

Our environment is defined as a Markov Decision Process (MDP). At each iteration, agents select an action from the set of possible actions. In FlipIt, the action space is restricted to two actions: to flip and not to flip. As previously mentioned, agents do not always have a correct perception of the current state of the game. Figure 1 describes a case where an agent plays with only partial observability. Consider two LM players, a blue agent and a red agent. When the blue agent flips at iteration 1, it receives some information on its opponent's last move, which was at iteration 0. However, it will never know of the red agent's flip at iteration 2, since the only feedback it receives afterwards is at iteration 6, and that feedback concerns only the opponent's last move, at iteration 4. Hence, between iterations 1 and 6, the blue agent has only partial observation of the game state and imperfect information on the time it controls the resource.



Figure 1: Example of incomplete or imperfect observability by a player on the state of the game.
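The LM feedback rule of Figure 1 can be made concrete with a small helper. This is our own illustrative function, not part of the paper's model: it computes, for each of an agent's flips, the single opponent flip time it learns about.

```python
def last_move_feedback(my_flips, opponent_flips):
    """For an LM agent: at each of its own flip times, the feedback is the
    opponent's most recent flip at or before that time (None if the opponent
    has not moved yet). Opponent flips in between are never revealed."""
    feedback = {}
    for t in my_flips:
        seen = [u for u in opponent_flips if u <= t]
        feedback[t] = max(seen) if seen else None
    return feedback
```

In the scenario of Figure 1 (blue flips at iterations 1 and 6, red flips at 0, 2 and 4), blue learns about the flips at 0 and 4, and the red flip at iteration 2 stays hidden forever.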

We define the immediate reward at each iteration based on the action executed as well as the owner of the resource at the previous iteration. Let r_t be the immediate reward received by an agent at time-step t. We have

r_t = R · o_{t−1} − C · a_t,

where R is the reward given for owning the resource, C is the flip cost, o_{t−1} is 1 if the agent is the resource owner at time-step t − 1 and 0 otherwise, and a_t is 1 if the action at time-step t is to flip and 0 otherwise. Let γ be the discount factor. The discount factor determines the importance of future rewards, and in our environment a correct action at some time-step t is not necessarily immediately rewarded. In fact, by having a flip cost higher than the flip reward, an agent is penalized for flipping at the correct moment but is rewarded in future time-steps. This is why we set our discount factor γ to be as large as possible, giving more importance to future rewards and making our agent aim for long-term high rewards instead of short-term ones.
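The reward definition and the role of the discount factor can be sketched directly. The function names are ours; the defaults match the experimental values used later in the paper (reward 1, flip cost 4).

```python
def immediate_reward(owned_previous, flipped, reward=1.0, flip_cost=4.0):
    """r_t = R * o_{t-1} - C * a_t: earn R for owning the resource at the
    previous iteration, pay C for flipping at the current one."""
    return reward * owned_previous - flip_cost * flipped

def discounted_return(rewards, gamma=0.99):
    """Cumulative discounted reward G = r_0 + gamma*r_1 + gamma^2*r_2 + ..."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

Note that a well-timed flip is immediately penalized (reward 1 minus cost 4 gives −3) and only pays off through the discounted ownership rewards that follow, which is why a large γ is needed.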

6.2 Model Architecture

Q-learning is a reinforcement learning algorithm in which an agent or a group of agents learn the optimal policy from past experiences and interactions with an environment. These experiences are sequences of state-action-reward tuples. In its simplest form, Q-learning is a table with one value per state (row) and action (column) possible in the environment. Given the current state, the algorithm estimates the value in each table cell, corresponding to how good it is to take that action in that particular state, and repeatedly refines these estimates at each iteration. This process continues until the agent reaches a terminal state in the environment. This approach becomes quite inefficient when the environment, like FlipIt, has a large or unknown number of states. In these situations, larger and more complex implementations of Q-learning have been introduced, in particular Deep Q-Networks (DQN). Deep Q-Networks were first introduced by Mnih et al. (2013) and have since been commonly used for solving games. DQNs map state-action pairs to rewards and are trained to learn the best action to perform in a particular state, in terms of producing the maximum future cumulative reward. Our objective is to train our agent such that its policy converges to the theoretical optimal policy that maximizes the future discounted rewards. In other words, given a state s we want to find the optimal policy π* that selects the action a such that

π*(s) = argmax_a Q(s, a),

where Q(s, a) is the Q-value that corresponds to the overall expected reward, given the state-action pair (s, a). It is defined by

Q(s, a) = E[ Σ_{k=t}^{T} γ^{k−t} r_k | s_t = s, a_t = a ],

where T is the length of the game. The Q-values are updated for each state and action using the following Bellman equation:

Q'(s, a) = Q(s, a) + α ( r + γ max_{a'} Q(s', a') − Q(s, a) ),

where Q'(s, a) and Q(s, a) are the new and current Q-values for the state-action pair (s, a) respectively, r is the reward received for taking action a at state s, max_{a'} Q(s', a') is the maximum expected future reward given the new state s' and all possible actions from s', α is the learning rate and γ the discount factor.
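The tabular Bellman update described above can be written in a few lines. This is the standard Q-learning rule, sketched with a plain dictionary as the Q-table; the paper replaces this table with a neural network, and the function signature here is our own.

```python
def q_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Bellman update:
    Q'(s,a) = Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Unvisited state-action pairs default to 0."""
    best_next = max(q.get((s_next, a2), 0.0) for a2 in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q[(s, a)]
```

Each call moves Q(s, a) a fraction α of the way toward the bootstrapped target r + γ max_{a'} Q(s', a').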

Our model architecture consists of 3 fully connected layers with the ReLU (rectified linear unit) activation function at each layer. It is trained with Q-learning using the PyTorch framework [16] and optimized with the Adam optimizer [8]. We use an experience replay memory [25] to store the history of state transitions and rewards (i.e. experiences) and sample mini-batches from it to calculate the Q-values and update our model. The state of the game given as input to the neural network corresponds to the agent's current knowledge of the game, i.e. the time passed since its last move and the time passed since its opponent's last known move. The output corresponds to the Q-values calculated for each action. We value exploration over exploitation and use an ε-greedy algorithm such that, at each step, a random action is selected with probability ε and the action corresponding to the highest Q-value is selected with probability 1 − ε. ε is initially set to 0.6 and is gradually reduced at each time step as the agent becomes more confident at estimating Q-values.
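The ε-greedy action selection with decaying ε can be sketched as follows. The starting value 0.6 is from the paper; the linear decay rate and the floor are arbitrary illustrative choices, since the paper does not specify its decay schedule here.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=None):
    """Select a random action index with probability epsilon, otherwise
    the index of the highest Q-value (exploration vs. exploitation)."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

def decayed_epsilon(step, start=0.6, decay=1e-4, floor=0.05):
    """Epsilon starts at 0.6 as in the paper and is reduced at each step;
    the decay rate and floor are assumptions for illustration."""
    return max(floor, start - decay * step)
```

With ε = 0 the rule is purely greedy, which is the regime the agent approaches late in training.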

7 Experimental Results

7.1 Basic Strategies

We trained our neural network to learn the opponent’s strategy and to optimally adapt to it. In the following experiments, the reward received for being in control of the resource is set to 1 whereas the cost for flipping is set to 4. The flip cost is purposely set to a higher value than the reward in order to discourage the DQN from flipping at each iteration. Finally, our DQN agent is considered to be LM.

We begin by opposing our DQN to a periodic agent (PA). The final scores obtained at each game by the defender (DQN) and the attacker (playing with a period of 50), as well as the moving average score, are plotted in Figure 2(a). In general, the optimal strategy against any periodic strategy can easily be found. Since the defender has priority when both players flip, the optimal strategy is to adopt the same periodic strategy as the opponent, as this maximizes the final score whenever the period is greater than the flip cost. As shown in Figure 2(a), the DQN's score converges to its maximum possible score. In Figure 2(b), we plot the average scores of both players after the convergence of their scores with regard to the opponent's strategy period. The opponent strategy periods range from 10 to 100 with increments of 10. Clearly, the shorter the period between two flips, the lower the average reward, since the optimal strategy suggests flipping more often. Moreover, our model outperforms any periodic opponent and successfully optimizes its score throughout the game.

(a) DQNA against a Periodic Agent (with period = 50)
(b) Average rewards after convergence of DQNA against Periodic Agents
Figure 2: Results obtained when opposing DQNA against a Periodic Agent
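The maximum achievable average reward against a periodic opponent follows from the rules stated above. With tie priority, a defender mirroring a period-δ opponent owns the resource every iteration and pays one flip cost per period. The function below is our own worked example using the experimental values (reward 1, flip cost 4).

```python
def matched_period_average_reward(period, reward=1.0, flip_cost=4.0):
    """Per-iteration reward for a defender mirroring a periodic opponent:
    it keeps ownership every iteration (ties favour the defender) and pays
    one flip cost per period, i.e. (period * R - C) / period."""
    return (period * reward - flip_cost) / period
```

Against the period-50 opponent of Figure 2(a), this bound is (50 · 1 − 4)/50 = 0.92 per iteration; it is positive exactly when the period exceeds the flip cost, matching the condition stated above.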

We run the same experiments against different agents, in particular the periodic agent with a random phase (PAwRP) and the exponential agent (EA). The results are plotted in Figures 3(a) and 3(b). Once more, we plot the average score after convergence of both players with regard to the opponent's strategy parameters, to showcase the performance of the DQN against a range of strategies. It is important to note that an EA's flip times are harder to predict, since the spacing between two consecutive flips is random, and yet our model develops a dominant strategy able to outperform its opponent. In this case, the opponent's flips do not affect the strategy of the DQN agent; the state of the game is therefore reduced to the time since it last flipped and the time left in the game. In all these experiments, the DQN's final score increases significantly over time and converges towards the maximum possible score, suggesting that the model learns the optimal counter-strategy.

(a) DQNA against Periodic Agents (with a random phase)
(b) DQNA against Exponential Agents
Figure 3: Average rewards after convergence of DQNA

The ultimate goal in FlipIt is to control the resource for the maximum possible time while optimizing the score. Without the second constraint, a "good" strategy would be to flip at each iteration to ensure control of the resource throughout the game. So far, we have looked at strategies whose expected interval between two actions is greater than the flip cost, and we have seen that the DQN is able to learn an optimal strategy in these cases. However, when playing against an opponent that we name an active agent (AA), whose expected interval is smaller than the flip cost, we might expect our adaptive agent to adopt the same behavior as its opponent to prevent it from taking control of the resource. This is not the case, as illustrated in Figure 4: when opposing the DQN to PAs whose periods are smaller than the flip cost, the DQN's optimal strategy is to never flip, as it is the only strategy that results in the maximum score in this scenario. The DQN's score increases over time until it reaches a final score of 0, and the excessive flips from the opponent force the DQN to "drop out" of the game. This is an interesting behavior, as it shows a way of forcing an opponent to drop out of the game.

Figure 4: Examples of a “drop out” behavior

We would like to further explore this behavior and the ways to integrate it in a game where probabilistic moves or changes of strategy at any point throughout the game would be allowed.

7.2 Greedy Strategy

We have implemented the Greedy strategy as described by van Dijk et al. [24]. The Greedy strategy opts to maximize the agent's local reward. More specifically, an agent adopting the Greedy strategy is LM and receives information on the time passed since its opponent's last move. With this time and the probability density of the opponent's strategy given as inputs, the Greedy strategy outputs the time interval until its next move that maximizes its local benefit.

(a) Greedy Agent against Periodic Agents
(b) Greedy Agent against Exponential Agents
Figure 5: Average scores obtained by the Greedy Agent against Periodic and Exponential Agents over 100 games.

We ran simulations of 2-player games opposing the Greedy strategy to periodic and exponential renewal strategies. The final scores obtained at each game (divided by the game length) are plotted in Figure 5. Despite not having any information on its opponent's strategy, the DQN outperforms the Greedy strategy against these renewal strategies and converges to a higher final score. On average, Greedy dominates the renewal strategies. However, it is far from optimal play, and when opposed to unpredictable agents such as EAs, final scores vary greatly from one game to another.

7.3 n-Player Games

Figure 6: DQNA against two Periodic Agents

As previously stated, we extended the game to an n-player game opposing different attackers. The only difference in n-player games is the state of the game observed by the DQN agent. It corresponds to the agent's knowledge of the game, and therefore to all of its opponents' last known moves, which are learnt upon flipping. Hence, the state size increases with n and is given as input to the neural network. We begin by testing our model in a rather "easy" game setting: we oppose our DQN to two periodic agents whose periods keep the opponents' flips somewhat well spaced. Hypothetically, this scenario would be equivalent to playing against a single agent combining these two strategies, as the two overlap on most iterations. The results are shown in Figure 6.

We run multiple simulations of 2, 3 and 4-player FlipIt to analyze the variance of the reward; the DQN is opposed to agents (PAs, PAwRPs, EAs) with strategies chosen at random. The opponents' parameters are chosen such that the combination of their strategies has an average move rate ranging from 0.01 to 0.1. We plot the average rewards obtained by the DQN in Figure 7(a), showing how the average reward varies with the number of players in the game and the opponents' move rates. Clearly, the average reward decreases as the move rate and the number of players increase. This can be explained by the fact that, when playing against multiple agents with high move rates, the interval between two consecutive flips becomes smaller and pushes the DQN to flip more often. It is also due to the fact that short intervals between flips increase the risk of flipping at an incorrect iteration and penalize the DQN; as a matter of fact, the worst-case scenario (flipping one iteration before the opponents) is only one shift away from the optimal strategy (flipping at the same time as the opponents). Nevertheless, in all the game simulations, not only does the DQN achieve a higher score than all of its opponents, but it also drives its opponents to negative scores, penalizing them at each action decision, as shown in Figure 7(b). For clarity, only the maximum of the opponents' scores is shown when the number of opponents is greater than 1.

(a) Average rewards achieved by the DQN depending on
the number of players and opponent’s move rates
(b) Maximum average reward achieved by an opponent
depending on the number of players and opponent’s move rates
Figure 7: Results obtained when opposing DQNA against a different number of agents of different types

8 Conclusion

Cyber and real-world security threats often present incomplete or imperfect state information. We believe our framework is well equipped to handle the noisy information, learn the optimal counter-strategy and adapt to different classes of opponents in the game FlipIt, even under partial observability. Such strategies can be applied to optimally schedule key changes in network security, among many other potential applications. We intend to pursue this project and analyze the use of reinforcement learning in different variants of the game, including games with larger action spaces and games with probabilistic moves. We would also like to adapt our model to prevent the DQN from dropping out of the game when playing against any AA. Furthermore, in our current n-player setting, each agent individually competes against all other agents. Instead of individual players, we would like to include teams of players and use our model to find and learn strategies that allow agents from the same team to cooperate and coordinate in order to outplay the opposing player or team.


  • [1] T. Alpcan and M. T. Basar (2010) Network security: a decision and game-theoretic approach. Cambridge University Press, United States. External Links: Document, ISBN 9780521119320 Cited by: §3.
  • [2] A. L. Buczak and E. Guven (2016) A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Communications Surveys & Tutorials 18 (2), pp. 1153–1176.
  • [3] D. Ding, Q. Han, Y. Xiang, X. Ge, and X. Zhang (2018) A survey on security control and attack detection for industrial cyber-physical systems. Neurocomputing 275, pp. 1674–1683.
  • [4] N. Falliere, L. O. Murchu, and E. Chien (2011) W32.Stuxnet dossier. Symantec White Paper.
  • [5] X. Feng, Z. Zheng, P. Hu, D. Cansever, and P. Mohapatra (2015) Stealthy attacks meets insider threats: a three-player game model. In MILCOM 2015 - IEEE Military Communications Conference, pp. 25–30.
  • [6] A. Gueye, V. Marbukh, and J. C. Walrand (2012) Towards a metric for communication network vulnerability to attacks: a game theoretic approach. In GAMENETS 2012.
  • [7] P. Hu, H. Li, H. Fu, D. Cansever, and P. Mohapatra (2015) Dynamic defense strategy against advanced persistent threat with insiders. In IEEE Conference on Computer Communications (INFOCOM 2015), pp. 747–755.
  • [8] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [9] H. Kunreuther and G. Heal (2003) Interdependent security. Journal of Risk and Uncertainty 26 (2), pp. 231–249.
  • [10] A. Laszka, G. Horvath, M. Felegyhazi, and L. Buttyán (2014) FlipThem: modeling targeted attacks with FlipIt for multiple resources. In Decision and Game Theory for Security (GameSec 2014), pp. 175–194.
  • [11] A. Laszka, B. Johnson, and J. Grossklags (2013) Mitigating covert compromises. In Proceedings of the 9th International Conference on Web and Internet Economics (WINE 2013), pp. 319–332.
  • [12] A. Laszka, B. Johnson, and J. Grossklags (2013) Mitigation of targeted and non-targeted covert attacks as a timing game. In 4th International Conference on Decision and Game Theory for Security (GameSec 2013), pp. 175–191.
  • [13] N. Milosevic, A. Dehghantanha, and K.-K. R. Choo (2017) Machine learning aided Android malware classification. Computers & Electrical Engineering 61, pp. 266–274.
  • [14] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller (2013) Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
  • [15] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533.
  • [16] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. In NIPS Autodiff Workshop.
  • [17] V. Pham and C. Cid (2012) Are we compromised? Modelling security assessment games. In Decision and Game Theory for Security (GameSec 2012), pp. 234–247.
  • [18] N. D. Schwartz and C. Drew (2011) RSA faces angry users after breach. New York Times, June 8, 2011, p. B1.
  • [19] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484–489.
  • [20] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis (2018) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362 (6419), pp. 1140–1144.
  • [21] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis (2017) Mastering the game of Go without human knowledge. Nature 550 (7676), pp. 354–359.
  • [22] R. S. Sutton and A. G. Barto (2018) Reinforcement Learning: An Introduction. Second edition, The MIT Press.
  • [23] G. Tesauro (1995) Temporal difference learning and TD-Gammon. Communications of the ACM 38 (3), pp. 58–68.
  • [24] M. van Dijk, A. Juels, A. Oprea, and R. L. Rivest (2013) FlipIt: the game of "stealthy takeover". Journal of Cryptology 26 (4), pp. 655–713.
  • [25] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas (2016) Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224.
  • [26] C. J. C. H. Watkins and P. Dayan (1992) Q-learning. Machine Learning 8 (3), pp. 279–292.
  • [27] L. Xiao, X. Wan, X. Lu, Y. Zhang, and D. Wu (2018) IoT security techniques based on machine learning. arXiv preprint arXiv:1801.06275.