Advanced Persistent Threats (APTs) are a significant concern for enterprises. In such scenarios, advanced adversaries take slow and deliberate steps over months or even years to compromise critical resources (e.g., workstations and servers) in a network. A key step in the kill chain of APTs is reconnaissance. Historically, reconnaissance has largely been active, for example using network port scanning to identify which hosts are running which services. In response, many enterprises closely monitor their networks for scanning attacks.
Simultaneously, Software Defined Networking (SDN) technology is emerging as a powerful primitive for enterprise network security. SDN offers a global perspective on network communications between hosts. It can be used as an enhanced tool to identify network scanning, provide flexible access control to mitigate attackers bypassing defenses such as firewalls, and even prevent spoofing. However, the increased functionality within network elements (e.g., switches) makes them a target for attack. A compromised SDN switch is particularly dangerous, because it can perform reconnaissance passively. As a result, defenders may have little to no signal that an APT is in progress.
We propose Snaz ("snag and zap") to address the threat of passive network reconnaissance. Snaz uses honey traffic: fake flows deceptively crafted to make a passive attacker think specific resources (e.g., workstations and servers) exist and have specific unpatched, vulnerable software. Snaz assumes the adversary knows honey traffic may be present and uses game theory to determine how best to send it. In doing so, we demonstrate how a defender can either successfully deflect or deplete an adversary by sending an optimal amount of honey traffic.
Snaz models this defender-attacker interaction as a two-player non-zero-sum Stackelberg game. In this game, the defender sends honey traffic to confound the adversary’s knowledge. However, if the defender sends too much honey traffic, the network may become overloaded. In contrast, the adversary wishes to act on information obtained using passive reconnaissance (e.g., a banner string indicating a server is running a vulnerable version of Apache). However, if the adversary acts on information in the honey traffic, it will unknowingly attack an intrusion detection node and be discovered. Thus, the game presents an opportunity to design an optimal strategy for defense. We make the following primary contributions:
Table 1: Examples of reconnaissance target types, the information analyzed, and corresponding examples.

| Target Type | Analysis Space | Examples |
|---|---|---|
| OS fingerprinting | TTL, packet size, DF flag, SackOk, NOP flag, timestamp | Windows 2003 and XP |
| Server software and version | Default banners | Apache HTTP 2.2, Windows Server 2003 |
| SDN configuration | Flow-rule update frequency, controller-switch communication | Lack of TLS adoption, modified flow rules |
| Traffic snooping | Server-client traffic header and data | HTTP traffic, HTTPS traffic with weak TLS/SSL |
- We propose Snaz, a technique that uses honey traffic to mitigate the threat of passive network reconnaissance. This honey traffic has the potential to dissuade an adversary from acting on evidence of real vulnerabilities, or to more quickly reveal the adversary's existence within the network.
- We model Snaz's deceptive defense as a two-player non-zero-sum Stackelberg game, and present an algorithm for finding the optimal strategy for deploying honey flows that is fast enough to be used for realistic networks.
- We present an empirical evaluation of the performance of our game model solutions under different conditions, as well as the scalability of the algorithm and some useful properties of the optimal solutions.
- We emulate Snaz in Mininet [De Oliveira et al.2014] and measure the network overhead that results from honey traffic.
2 Background and Related Work
Enterprise network administrators are increasingly concerned with Advanced Persistent Threats (APTs), in which adversaries first obtain a small foothold within the network and then stealthily expand their penetration over the course of months, sometimes years. The past decade has provided numerous examples of such targeted attacks, e.g., Carbanak [Group-IB and Fox-IT2014] and Operation Aurora [Matthews2019]. Such attacks require significant planning. Initially, adversaries identify attack vectors including 1) vulnerable servers or hosts, 2) poorly configured security protocols, 3) unprotected credentials, and 4) vulnerable network configurations. To do so, they leverage network protocol banner grabbing, active port scanning, and passive monitoring [Kondo and Mselle2014, Bartlett, Heidemann, and Papadopoulos2007]. Examples of different types of desired information and corresponding attacks are shown in Table 1.
Software Defined Networking (SDN) has the potential to address operational and security challenges in large enterprise networks [Levin et al.2014]. It provides the flexibility to programmatically and dynamically re-configure traffic forwarding within a network [McKeown et al.2008] and creates opportunities for granular policy enforcement [Kim and Feamster2013]. However, these more functional network switches form a large target for attackers, as they can provide a foothold for data plane attacks involving advanced reconnaissance, data manipulation, and redirection [Antikainen, Aura, and Särelä2014]. Using one or more compromised switches, an adversary can learn critical information for mounting attacks, including the network topology and software and hardware vulnerabilities [Benton, Camp, and Small2013, Jero et al.2017].
Deception is an important tactic against adversary reconnaissance, and a variety of approaches apply game-theoretic analysis to cyber deception [Pawlick, Colbert, and Zhu2019]. Many previous works have focused on how to effectively use honeypots (fake systems) as part of a network defense [Carroll and Grosu2011, Wagener et al.2009, Píbil et al.2012, Kiekintveld, Lisỳ, and Píbil2015]. This includes work on signaling games in which the goal is to make real and fake systems hard to distinguish [Miah et al.2020]. Work on security games (including games modeling both physical and cybersecurity) uses deception to manipulate the beliefs of an attacker [Yin et al.2013, An et al.2011, Horák, Zhu, and Bošanskỳ2017, Thakoor et al.2019]. Other work proposes a game model of deception in the reconnaissance phase of an attack [Schlenker et al.2018], though it does not consider honey flows. Stackelberg game models have also been used to find optimal strategies for cyber-physical systems [Feng et al.2017].
3 Snaz Overview
This paper proposes Snaz, a deception system designed to mislead or delay an adversary's passive reconnaissance. Snaz provides this deception using honey traffic that is precisely controlled by the defender. We now give a high-level overview of the system and the threat model.
3.1 Honey Traffic
Snaz uses honey traffic to mislead adversaries performing passive network reconnaissance. This deception consists of network flows with fake information, which we call honey flows. Traditionally, a network flow is defined as a 5-tuple: source IP, source port, destination IP, destination port, and protocol (e.g., TCP). For simplicity, we assume honey flows include network flows in both directions to simulate real network communication.
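As a minimal sketch of this flow abstraction (the names `Flow` and `honey_flow_pair` are illustrative, not part of Snaz's implementation), a honey flow can be represented as a forward 5-tuple plus its reversed direction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """A network flow identified by the classic 5-tuple."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    proto: str  # e.g., "TCP" or "UDP"

    def reverse(self) -> "Flow":
        # The reply direction of the same conversation.
        return Flow(self.dst_ip, self.dst_port,
                    self.src_ip, self.src_port, self.proto)

def honey_flow_pair(fwd: Flow) -> tuple:
    """Honey flows are emitted in both directions to mimic a real exchange."""
    return fwd, fwd.reverse()
```

Here `reverse()` swaps the endpoint pairs, so the pair of flows looks to a passive observer like a complete request/response conversation.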
Honey flows can fake information in network flow identifiers. For example, a honey flow can attempt to make the adversary believe a non-existent host has a specific IP address, or that a host is running a server on a specific port. Due to the flexible packet forwarding capabilities of SDN, the defender can route honey flows through any path it chooses, e.g., to tempt an adversary that has compromised a switch on a non-standard path. Honey flows can also fake information in the packet payload itself. For example, network servers often respond with a banner string indicating the version of the software, and sometimes even the OS version of the host. Attackers often use this banner information to identify unpatched vulnerabilities on the network. Honey flows can simulate servers with known vulnerabilities, making it appear as if there are easy targets. If at any point the adversary acts on this information (e.g., connects to a fake IP address), Snaz will redirect the traffic to an intrusion detection node. Since the intrusion detection node does not normally receive network connections, the existence of any traffic directed towards it indicates the presence of an adversary on the network.
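For instance, a honey flow payload can carry a fabricated banner. The sketch below is illustrative (the server string and header layout are assumptions, not taken from Snaz): it builds an HTTP response header that a passive banner-grabber would read as an unpatched Apache server.

```python
def fake_http_banner(server_string: str = "Apache/2.2.14 (Unix)") -> bytes:
    """Build an HTTP response header advertising a (fake) vulnerable server.

    A passive observer that grabs banners would read `server_string`
    and conclude the host runs that software version.
    """
    lines = [
        "HTTP/1.1 200 OK",
        f"Server: {server_string}",
        "Content-Length: 0",
        "Connection: close",
    ]
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")
```

Injecting such payloads into honey flows makes a fake host indistinguishable, to a passive sniffer, from a real server with that banner.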
Figure 1 shows a simplified example of honey flows causing an adversary to update its beliefs. The figure shows two real hosts, each with a different vulnerability type. The adversary has compromised a switch and observes all packets passing through it. Without honey traffic, the adversary can easily identify the vulnerabilities on the hosts (e.g., via banner strings) and attack them. In the figure, Snaz simulates the existence of two fake hosts using honey traffic. If the adversary is unaware of Snaz, it will probabilistically attack one of the fake hosts and be quickly detected. However, if the adversary is aware of Snaz (the scenario we consider in this paper), it must keep track of what it believes is real versus fake information. How the defender and attacker act is the crux of our game-theoretic model in Section 4.
3.2 Threat Model and Assumptions
The adversary’s goal is to compromise networked resources, e.g., workstations and servers, without detection. The adversary does not know what hosts are on the network or which hosts have vulnerabilities; it must discover vulnerable hosts using network reconnaissance. The adversary assumes the defender has deployed state-of-the-art intrusion detection systems that can identify active network reconnaissance such as network port scanning. However, we assume the adversary has gained a foothold on one or more network switches (an upper bound on which is defined by the model). Using this vantage point, the adversary can inspect all packets that flow through the compromised switches. In doing so, it can learn (1) the network topology and which ports servers are listening on, by observing network flow identifiers, and (2) the installed software versions, by observing server and client banner strings. We assume the adversary can map banner strings to known vulnerabilities and their corresponding exploits. We conservatively assume this mapping can occur on the switch, or can be done without the knowledge of the defender. The adversary also has the capability to initiate new network flows from the switch while forging the source IP address, since response traffic will flow back through the compromised switch and can be consumed there as if it had been delivered to the real host. Finally, we assume the adversary is rational and is aware of the existence of Snaz and that honey traffic may be sent to fake hosts. However, the adversary does not know the specific configuration of Snaz, such as the distribution of honey traffic.
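The banner-to-vulnerability mapping the adversary is assumed to hold can be sketched as a simple lookup table. The patterns and labels below are hypothetical stand-ins for a real CVE database:

```python
import re

# Hypothetical banner-to-vulnerability table; a real attacker would consult
# a CVE database. Patterns and labels here are illustrative only.
BANNER_VULNS = {
    r"Apache/2\.2\.\d+": "outdated Apache 2.2.x",
    r"OpenSSH_5\.\d+": "outdated OpenSSH 5.x",
}

def match_vulnerability(banner: str):
    """Return the first vulnerability label whose pattern matches the banner,
    or None if the banner suggests no known-vulnerable software."""
    for pattern, label in BANNER_VULNS.items():
        if re.search(pattern, banner):
            return label
    return None
```

This lookup is cheap enough to run on a compromised switch, which is why we conservatively assume the defender cannot observe it happening.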
We assume the defender’s network contains real hosts with exploitable vulnerabilities. The defender is aware of some, but not all, of these vulnerabilities. For example, the defender’s inventory system may indicate the existence of an unpatched and vulnerable server that, due to production requirements, is not yet patched. We further assume the defender can identify valuations of each network asset and approximate the value of each asset to the attacker (e.g., domain controllers that authenticate users are valuable targets). We assume that the SDN controller and the applications running on it are part of the trusted computing base (TCB), and that communication between the SDN controller and uncompromised switches is protected and not observable to the adversary (e.g., via SSL or an out-of-band control network). As a result, the adversary cannot alter the controller configuration or the forwarding logic of uncompromised switches. Finally, we assume that network utilization is not near maximum capacity during normal operation; however, excessive honey traffic may cause congestion and degrade network performance.
4 Game Model
An important question we must answer to deploy Snaz effectively is how to optimize the honey traffic created by the system, including how much traffic to create of each type. This decision must balance many factors, including the severity of different types of vulnerabilities, their prevalence on the network, and the costs of generating different types of honey flows (e.g., the added network congestion). In addition, a sophisticated APT attacker may be aware of the possible use of this deception technique, so the decisions should be robust to optimal responses to honey traffic by such attackers. Finally, many aspects of the environment can change frequently; for example, new zero-day vulnerabilities may be discovered that require an immediate response, or the characteristics of the real network traffic may change. We therefore require a method for making fast autonomous decisions that can be adjusted quickly.
We propose a game-theoretic model to optimize the honey flow strategy for Snaz. Our model captures several of the important factors that determine how flows should be deployed against a sophisticated adversary, but it remains simple enough that we can solve it for realistic problems in seconds (see Section 5), allowing us to rapidly adapt to changing conditions. Specifically, we model the interaction as a two-player non-zero-sum Stackelberg game between the defender (leader) and an attacker (follower), where the defender (Snaz) plays a mixed strategy and the attacker plays a pure strategy. This builds on a large body of previous work that uses Stackelberg models for security [Tambe2011], including cyber deception using honeypots [Píbil et al.2012].
Table 2: Notation used in the game model.

| Symbol | Description |
|---|---|
| $T$ | set of types of vulnerabilities in the network |
| $r_i$ | number of real flows indicating type $i$ |
| $m_i$ | upper bound on the number of honey flows indicating type $i$ |
| $h_i^k$ | action of selecting $k$ honey flows for type $i$ |
| $x$ | defender's mixed strategy, given as the marginal probabilities $x_i^k$ over actions $h_i^k$ |
| $c_i$ | cost of creating each honey flow that indicates type $i$ |
| $\nu_i^{a,r}$, $\nu_i^{a,h}$ | the value the attacker gains from attacking a real or fake flow of type $i$ |
| $\nu_i^{d,r}$, $\nu_i^{d,h}$ | the value the defender loses from an attack against a real or fake flow of type $i$ |
| $a_i$ | the action of attacking a flow of type $i$, where $a_\emptyset$ is the no-attack action, yielding 0 payoff |
We now formally define the strategies and utilities of the players using the notation listed in Table 2. We assume that the defender is using Snaz as a mitigation for a specific set of vulnerability types that we label $T$. Every flow on the network indicates the presence of at most one of these types of vulnerabilities in a specific host. The real network traffic is characterized by the number $r_i$ of real flows that indicate vulnerability type $i$. The pure strategies for Snaz are vectors that represent the number of honey flows created to indicate each type of vulnerability; we write $h_i^k$ to represent the marginal pure action of creating $k$ flows of type $i$. These fake flows do not need to interact with real hosts; they can advertise the existence of fake network assets (i.e., honeypots). The defender can play a mixed strategy that randomizes the number of flows of each type that are created, which we denote by $x$, with $x_i^k$ the probability of playing $h_i^k$. To keep the game finite we define the maximum number of flows that can be created of each type as $m_i$. The attacker's pure strategy $a_i$ represents choosing to attack a flow of type $i$; $a_\emptyset$ denotes not attacking. We assume the attacker cannot reliably distinguish real flows from honey flows, so an attack on a specific type corresponds to drawing a random flow from the set of all real and fake flows of that type.
The utilities for the players depend on which vulnerability type the attacker chooses, as well as on how many real and honey flows of that type are on the network. An attack on a real flow yields a higher value for the attacker than an attack on a honey flow of the same type, and vice versa for the defender. Specifically, if the attacker attacks a real flow of type $i$, it gains a utility $\nu_i^{a,r}$, which is greater than or equal to the value $\nu_i^{a,h}$ for attacking a honey flow of the same type (which may be negative or 0). We assume that this component of the utility function is zero-sum, so the defender's values are $\nu_i^{d,r} = -\nu_i^{a,r}$ and $\nu_i^{d,h} = -\nu_i^{a,h}$. The defender's utility function also includes a cost term that models the marginal cost $c_i$ of adding each additional flow of type $i$ (for example, the additional network congestion, which can vary depending on the type of flow). If the defender plays strategy $x$ and the attacker attacks type $i$, the defender's expected utility is defined as follows:

$$U_d(x, a_i) = \sum_{k=0}^{m_i} x_i^k \left( \rho_i^k \, \nu_i^{d,r} + (1 - \rho_i^k) \, \nu_i^{d,h} \right) - C(x)$$

Here, $\rho_i^k$ denotes the probability that an attack on a flow of type $i$ hits a real flow when $k$ honey flows are deployed, which can be calculated as follows:

$$\rho_i^k = \frac{r_i}{r_i + k}$$

The overall cost for playing $x$ is given by Equation 1:

$$C(x) = \sum_{j \in T} \sum_{k=0}^{m_j} x_j^k \, k \, c_j \qquad (1)$$

Analogously, for the attacker the expected utility is given by:

$$U_a(x, a_i) = \sum_{k=0}^{m_i} x_i^k \left( \rho_i^k \, \nu_i^{a,r} + (1 - \rho_i^k) \, \nu_i^{a,h} \right) \qquad (2)$$

with $U_a(x, a_\emptyset) = 0$.
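These expected-utility computations can be sketched directly in Python, assuming the notation above ($r_i$ real flows, marginal probabilities over $k$ honey flows, attacker values for real and honey flows); all function and variable names are illustrative:

```python
def rho(r_i: int, k: int) -> float:
    """Probability that a uniformly drawn flow of type i is real,
    given r_i real flows and k honey flows."""
    return r_i / (r_i + k) if (r_i + k) > 0 else 0.0

def attacker_utility(x_i, r_i, va_real, va_honey) -> float:
    """E[U_a] for attacking type i; x_i[k] is the probability of k honey flows."""
    return sum(p * (rho(r_i, k) * va_real + (1 - rho(r_i, k)) * va_honey)
               for k, p in enumerate(x_i))

def honey_cost(x, c) -> float:
    """C(x): expected creation cost summed over all types."""
    return sum(sum(p * k * c[j] for k, p in enumerate(x[j]))
               for j in range(len(x)))

def defender_utility(x, t, r, va_real, va_honey, c) -> float:
    """E[U_d] when the attacker attacks type t: the zero-sum attack
    component minus the honey-flow creation cost."""
    return -attacker_utility(x[t], r[t], va_real[t], va_honey[t]) - honey_cost(x, c)
```

For example, with one real flow, one honey flow always created, equal and opposite values of 10, and a per-flow cost of 0.5, the attack component cancels and the defender pays only the cost.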
4.1 Snaz Game Example
Consider a network with two types of vulnerabilities, where the attacker values an attack on a real flow more than an attack on a honey flow of the same type, and the defender pays a small cost for each honey flow it creates. Suppose the upper bounds on honey flows are $m_1 = 2$ and $m_2 = 3$, so at most two honey flows of vulnerability type 1 and three honey flows of vulnerability type 2 can be created. Now, consider a defender mixed strategy $x$ that randomizes between creating one and two honey flows indicating the type 1 vulnerability, and always creates three honey flows indicating the type 2 vulnerability. Given $x$, the attacker computes its expected utility for attacking each type, drawing a random flow from among the real and honey flows of that type. In this example, the attacker's best response is to attack vulnerability type 2, and the defender's utility follows from the zero-sum attack component minus the cost of the honey flows.
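A small script in the spirit of this example shows how the attacker's best response is computed from a given mixed strategy. The concrete payoff numbers below are assumed for illustration and are not the paper's original values:

```python
def exp_attack_value(x_i, r_i, v_real, v_honey):
    """Attacker's expected value for attacking type i: a uniformly drawn
    flow is real with probability r_i/(r_i+k) when k honey flows exist."""
    return sum(p * (r_i * v_real + k * v_honey) / (r_i + k)
               for k, p in enumerate(x_i))

# Illustrative parameters: two types with upper bounds m1=2, m2=3.
r = [10, 10]              # real flows per type
v_real = [8.0, 6.0]       # attacker gain on a real flow
v_honey = [-8.0, -6.0]    # attacker loss on a honey flow (detection)
# Defender mixes between 1 and 2 honey flows for type 1, always 3 for type 2.
x = [[0.0, 0.5, 0.5], [0.0, 0.0, 0.0, 1.0]]

values = [exp_attack_value(x[i], r[i], v_real[i], v_honey[i]) for i in range(2)]
# Best response: attack the highest-valued type, or abstain if all are negative.
best = max(range(2), key=lambda i: values[i])
if values[best] < 0:
    best = None
```

With these assumed numbers, type 2 is diluted more heavily by honey flows, but type 1 still carries the higher expected value; which type the attacker picks depends entirely on the chosen parameters.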
4.2 Optimal Defender’s Linear Program
Our objective is to compute a Stackelberg equilibrium that maximizes the defender's expected utility, assuming that the attacker plays a best response. To determine the equilibrium of the game, we solve a linear program (LP) for each attacker pure strategy, treating the attacked type as fixed within each LP. We create a variable $x_i^k$ for each defender marginal pure strategy $h_i^k$, the action of creating $k$ honey flows for type $i$. The following LP computes the defender's optimal mixed strategy under the constraint that attacking type $t$ is a best response:

$$\max_{x} \; U_d(x, a_t) \qquad (3)$$

$$\text{s.t.} \quad U_a(x, a_t) \ge U_a(x, a_j) \quad \forall j \in T \cup \{\emptyset\} \qquad (4)$$

$$\sum_{k=0}^{m_i} x_i^k = 1 \quad \forall i \in T, \qquad x_i^k \ge 0 \qquad (5)$$

In this formulation the unknown variables are the defender's marginal probabilities $x_i^k$; the equilibrium is obtained by solving the LP for every attacker action $a_t$ and keeping the solution with the highest defender utility. Equation 3 is the objective function that maximizes the defender's expected utility. The inequality in Equation 4 ensures that the attacker plays a best response, and Equation 5 forces the defender strategy to be a valid probability distribution.
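A sketch of this solution procedure, assuming SciPy's `linprog`, a zero-sum attack component, and the notation used above (one LP per candidate attacked type, keeping the best feasible solution; the no-attack case could be handled by an analogous LP):

```python
import numpy as np
from scipy.optimize import linprog

def solve_snaz(r, m, c, va_real, va_honey):
    """Approximate Stackelberg equilibrium via one LP per attacked type.

    For each candidate type t, maximize the defender's expected utility
    subject to: x is a distribution per type, and attacking t is a best
    response (including versus the no-attack option, whose payoff is 0).
    """
    n = len(r)
    idx, nvar = {}, 0
    for i in range(n):
        for k in range(m[i] + 1):
            idx[(i, k)] = nvar
            nvar += 1

    def att_coeff(i, k):  # attacker's expected value per unit of x[i][k]
        return (r[i] * va_real[i] + k * va_honey[i]) / (r[i] + k)

    best = (None, -np.inf, None)
    for t in range(n):
        obj = np.zeros(nvar)               # linprog minimizes, so negate U_d
        for (i, k), v in idx.items():
            obj[v] = k * c[i]              # cost term of -U_d
            if i == t:
                obj[v] += att_coeff(i, k)  # zero-sum: defender loses U_a
        A_eq = np.zeros((n, nvar))
        for (i, k), v in idx.items():
            A_eq[i, v] = 1.0               # each type's marginals sum to 1
        # Best-response rows: U_a(a_j) - U_a(a_t) <= 0, and -U_a(a_t) <= 0
        # (the latter makes attacking t at least as good as not attacking).
        A_ub, b_ub = [], []
        for j in list(range(n)) + [None]:
            if j == t:
                continue
            row = np.zeros(nvar)
            for (i, k), v in idx.items():
                if i == j:
                    row[v] += att_coeff(i, k)
                if i == t:
                    row[v] -= att_coeff(i, k)
            A_ub.append(row)
            b_ub.append(0.0)
        res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      A_eq=A_eq, b_eq=np.ones(n),
                      bounds=[(0, 1)] * nvar, method="highs")
        if res.success and -res.fun > best[1]:
            best = (t, -res.fun, res.x)
    return best  # (attacked type, defender utility, flattened strategy)
```

On a tiny symmetric instance, deterring the attack with a single honey flow per type leaves the defender paying only the creation cost, far better than the no-deception loss.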
5 Simulations and Model Analysis
We now present some results of simulations based on our game-theoretic model, as well as with an initial implementation of honey flow generation in an emulated network environment. We show that the game theory model can produce solutions that improve over simple baselines, and can be calculated fast enough to provide solutions for realistic networks. We also examine how the optimal solutions change based on the parameters of the model to better understand the structure of the solutions and the sensitivity to key parameters of the decision problem.
5.1 Preliminary Testbed Evaluation
We have constructed a preliminary honey flow system in the emulated environment of Mininet [Mininet2018]. Our goals are to show that plausible honey traffic can be generated in an emulated network and to evaluate the effects of the deception in a more realistic context. We use the small topology shown in Figure 2, which contains four real and two fake hosts. Some additional parameters:
- Client 1 connects with Server 1, while Client 2 connects with Server 2. In the simulation, each of these two clients sends 500 packets to its server and receives corresponding replies.
- The fake clients are connected to the system and can send packets advertising fake vulnerabilities to each other.
- All links in our simulation have 1 Mbit/s bandwidth, 10 ms delay, and a small probability of packet loss.
- The servers and real clients are assigned fixed values, identical for the attacker and the defender.
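Outside of Mininet, the observational effect of this setup can be sketched in plain Python: a compromised switch sees an undifferentiated mix of real and honey flows, so the chance that an attack hits a decoy is simply the honey fraction for that type. Names below are illustrative:

```python
import random

def observed_flows(real_counts, honey_counts, seed=0):
    """Build the multiset of flows a compromised switch would observe:
    real_counts[i] real and honey_counts[i] honey flows per type."""
    flows = []
    for i, (nr, nh) in enumerate(zip(real_counts, honey_counts)):
        flows += [("real", i)] * nr + [("honey", i)] * nh
    random.Random(seed).shuffle(flows)  # the observer cannot tell them apart
    return flows

def detection_probability(flows, vtype):
    """Chance that attacking a uniformly chosen flow of vtype hits a decoy."""
    of_type = [f for f in flows if f[1] == vtype]
    return sum(1 for f in of_type if f[0] == "honey") / len(of_type)
```

With 500 real and 500 honey flows of a type, as at the right edge of Figure 3, half of the adversary's candidate targets are decoys.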
We visualize the simulation results with two types of vulnerabilities in Figure 3. In this test, we increase the number of honey flows from the fake clients from 0 to 500 packets. The honey flows from fake client 1 carry vulnerability type 1, while fake client 2 sends honey flows with vulnerability type 2. We also experiment with different costs of honey flow generation: the dashed lines in Figure 3 show the expected utilities when each honey flow costs 0.001, while the solid lines correspond to a per-flow cost of 0.0001.
This simulation does not use the game-theoretic optimization. Even so, it shows that increasing the number of honey flows substantially reduces the effectiveness of adversarial efforts, and that the cost of honey flow generation significantly affects the defender's utility.
5.2 Snaz Game Theory Solution Quality
Our next set of experiments evaluates the solution quality of the proposed Stackelberg game model for optimizing Snaz honey flows against two plausible baselines: 1) not generating honey flows at all, and 2) using a uniform random policy for generating honey flows. We average the results over 100 randomly generated games, each with 5 types of vulnerabilities. We set the number of real flows for each type to 500, and the upper bound on the number of honey flows for each type is drawn uniformly at random from [500, 1000]. Values are described in the caption, and we vary the costs of creating flows as shown in Figure 4.
The results in Figure 4 show that the game-theoretic solution significantly outperforms the two baselines in most settings, demonstrating the value of optimizing honey flow generation for the specific scenario. The cost has a significant impact on the overall result: with a high cost, the game-theoretic solution is similar to not generating flows at all (since flows are not cost effective), while random honey flow generation can be detrimental for the defender. With a low cost, the performance of the game-theoretic solution is similar to the uniform random policy; since flows are so cheap, it is effective to create a very large number of them without much regard to strategy. With intermediate costs, which is the most likely scenario in real applications, the value of strategic optimization is highest.
In our second experiment we consider vulnerabilities with different values and examine how the optimal solution varies with the number of real flows. We use five vulnerability types with distinct real-system values, a common value for attacking any fake system, and an identical cost for all types. The results in Figure 5 show that the defender's strategy is to create more honey flows for the highly valued vulnerabilities. As the number of real flows increases, the cost of maintaining a high honey-to-real ratio becomes substantial, and the overall number of honey flows created drops for all types.
5.3 Solution Analysis
We now analyze how the optimal solution changes as we vary the number of real flows. In Figure 6, the network setup consists of four vulnerability types with real-flow values of (10, 20, 30, 40) and fake-flow values of (9, 18, 27, 32). The cost of generating each honey flow is 0.1. We show the defender's expected utility as we increase the ratio of honey flows to real flows; each line represents a different number of real flows in the original game. The defender utility increases as we add honey flows, but only up to a point: when the marginal value of another honey flow is less than its cost, the optimal solution is to stop adding flows, which appears as the flattening of the curves.
We note that the point where these curves flatten out is similar across all of the different numbers of real flows, which can also be seen more clearly in Figure 7. This suggests that the optimal strategy is relatively insensitive to the number of real flows when expressed as the ratio of honey flows to real flows. Our solutions are therefore robust to changes in the number of real flows, and we can approximate a solution very quickly without recomputing the full solution as the number of real flows changes.
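This diminishing-returns behavior can be reproduced with a one-type sketch. The utility form and parameters below are an assumed simplification of the model (a real-flow loss of 40, a detection gain of 32, and a per-flow cost of 0.1):

```python
def utility_vs_k(k, r, v_real=40.0, v_honey=32.0, cost=0.1):
    """Defender's expected utility against an attack on one type: the
    attacker hits a real flow with probability r/(r+k), costing the
    defender v_real; otherwise it hits a decoy, and detection is worth
    v_honey. Each honey flow costs `cost` to create."""
    return (-r * v_real + k * v_honey) / (r + k) - cost * k

def best_k(r, k_max=2000):
    """Number of honey flows maximizing the defender's utility."""
    return max(range(k_max + 1), key=lambda k: utility_vs_k(k, r))
```

The utility rises while the marginal value of another decoy exceeds its cost and then falls, giving the interior optimum that appears as the flattening point in the curves.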
5.4 Scalability Evaluation
In a practical application of Snaz we need to calculate the optimal strategy quickly, since the network may change frequently, leading to different game parameters. For example, the number of real flows will change over time, as will the hosts in the network; the values of traffic and vulnerabilities, and the specific vulnerabilities of interest, can also change (e.g., due to the discovery of new vulnerabilities). We evaluate the scalability of the basic LP solution as we increase the size of the game in two key dimensions: 1) the number of vulnerability types, and 2) the number of flows.
We randomly generate games holding the other parameters constant to evaluate the solution time. The results are shown in Figure 8. Though the solution time increases significantly with the complexity of the game, we were able to solve realistically sized games, with a large number of flows and vulnerability types of interest, within a couple of seconds using this algorithm. This indicates that we can optimize honey flows in realistic networks with a fast response rate; with further optimization, we expect that scalability could be improved significantly beyond this basic algorithm.
6 Conclusion
We introduced Snaz, a technique that uses deceptively crafted honey traffic to confound the knowledge gained by an adversary through passive network reconnaissance. We defined a Stackelberg game model for optimizing one of the key elements of Snaz: the quantity and type of honey flows to create. This model balances cost and value trade-offs in the presence of a sophisticated attacker, yet can be solved fast enough to be used in a dynamic network environment. We evaluated this model both in a preliminary emulation and in simulations that explore the properties of the game-theoretic solutions.
There are a number of ways that this model could be improved, including incorporating network structure and variable host values into the analysis, and allowing for more overlap between vulnerabilities and flows (e.g., flows with more than one vulnerability). We can consider additional types of honey traffic, such as modifying real flows in deceptive ways. However, these may come with significantly increased computational costs for finding solutions, so we will need to develop faster algorithms to make solving these more complex models practical for a real implementation.
Acknowledgments
This work was supported by the Army Research Office under award W911NF-17-1-0370.
References
- [An et al.2011] An, B.; Tambe, M.; Ordonez, F.; Shieh, E.; and Kiekintveld, C. 2011. Refinement of strong Stackelberg equilibria in security games. In Twenty-Fifth AAAI Conference on Artificial Intelligence.
- [Antikainen, Aura, and Särelä2014] Antikainen, M.; Aura, T.; and Särelä, M. 2014. Spook in your network: Attacking an sdn with a compromised openflow switch. In Bernsmed, K., and Fischer-Hübner, S., eds., Secure IT Systems, 229–244. Cham: Springer International Publishing.
- [Bartlett, Heidemann, and Papadopoulos2007] Bartlett, G.; Heidemann, J.; and Papadopoulos, C. 2007. Understanding passive and active service discovery. In Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement (IMC), 57–70. ACM.
- [Benton, Camp, and Small2013] Benton, K.; Camp, L. J.; and Small, C. 2013. Openflow vulnerability assessment. In Proceedings of the Second ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking, HotSDN ’13, 151–152. New York, NY, USA: ACM.
- [Carroll and Grosu2011] Carroll, T. E., and Grosu, D. 2011. A game theoretic investigation of deception in network security. Security and Communication Networks 4(10):1162–1172.
- [De Oliveira et al.2014] De Oliveira, R. L. S.; Schweitzer, C. M.; Shinoda, A. A.; and Prete, L. R. 2014. Using mininet for emulation and prototyping software-defined networks. In 2014 IEEE Colombian Conference on Communications and Computing (COLCOM), 1–6. IEEE.
- [Feng et al.2017] Feng, X.; Zheng, Z.; Mohapatra, P.; and Cansever, D. 2017. A Stackelberg game and Markov modeling of moving target defense. In International Conference on Decision and Game Theory for Security, 315–335. Springer.
- [Group-IB and Fox-IT2014] Group-IB, and Fox-IT. 2014. Anunak: APT against financial institutions. https://www.group-ib.com/resources/threat-research/Anunak_APT_against_financial_institutions.pdf.
- [Horák, Zhu, and Bošanskỳ2017] Horák, K.; Zhu, Q.; and Bošanskỳ, B. 2017. Manipulating adversary’s belief: A dynamic game approach to deception by design for proactive network security. In International Conference on Decision and Game Theory for Security, 273–294. Springer.
- [Jero et al.2017] Jero, S.; Bu, X.; Nita-Rotaru, C.; Okhravi, H.; Skowyra, R.; and Fahmy, S. 2017. BEADS: Automated attack discovery in OpenFlow-based SDN systems. In Proceedings of the International Symposium on Research in Attacks, Intrusions, and Defenses (RAID), volume 10453 of LNCS, 311–333.
- [Kiekintveld, Lisỳ, and Píbil2015] Kiekintveld, C.; Lisỳ, V.; and Píbil, R. 2015. Game-theoretic foundations for the strategic use of honeypots in network security. In Cyber Warfare. Springer. 81–101.
- [Kim and Feamster2013] Kim, H., and Feamster, N. 2013. Improving network management with software defined networking. IEEE Communications Magazine 51(2):114–119.
- [Kondo and Mselle2014] Kondo, T. S., and Mselle, L. J. 2014. Penetration testing with banner grabbers and packet sniffers. Journal of Emerging Trends in computing and information sciences 5(4).
- [Levin et al.2014] Levin, D.; Canini, M.; Schmid, S.; Schaffert, F.; and Feldmann, A. 2014. Panopticon: Reaping the benefits of incremental SDN deployment in enterprise networks. In 2014 USENIX Annual Technical Conference (USENIX ATC 14), 333–345. Philadelphia, PA: USENIX Association.
- [Matthews2019] Matthews, T. 2019. Operation Aurora – 2010’s major breach by Chinese hackers. https://www.exabeam.com/information-security/operation-aurora/.
- [McKeown et al.2008] McKeown, N.; Anderson, T.; Balakrishnan, H.; Parulkar, G.; Peterson, L.; Rexford, J.; Shenker, S.; and Turner, J. 2008. Openflow: Enabling innovation in campus networks. SIGCOMM Comput. Commun. Rev. 38(2):69–74.
- [Miah et al.2020] Miah, M. S.; Gutierrez, M.; Veliz, O.; Thakoor, O.; and Kiekintveld, C. 2020. Concealing cyber-decoys using two-sided feature deception games. In Proceedings of the 53rd Hawaii International Conference on System Sciences.
- [Mininet2018] Mininet. 2018. Mininet an instant virtual network on your laptop (or other pc). http://mininet.org/.
- [Pawlick, Colbert, and Zhu2019] Pawlick, J.; Colbert, E.; and Zhu, Q. 2019. A game-theoretic taxonomy and survey of defensive deception for cybersecurity and privacy. ACM Computing Surveys (CSUR) 52(4):82.
- [Píbil et al.2012] Píbil, R.; Lisỳ, V.; Kiekintveld, C.; Bošanskỳ, B.; and Pěchouček, M. 2012. Game theoretic model of strategic honeypot selection in computer networks. In International Conference on Decision and Game Theory for Security, 201–220. Springer.
- [Schlenker et al.2018] Schlenker, A.; Thakoor, O.; Xu, H.; Fang, F.; Tambe, M.; Tran-Thanh, L.; Vayanos, P.; and Vorobeychik, Y. 2018. Deceiving cyber adversaries: A game theoretic approach. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 892–900. International Foundation for Autonomous Agents and Multiagent Systems.
- [Tambe2011] Tambe, M. 2011. Security and game theory: algorithms, deployed systems, lessons learned. Cambridge university press.
- [Thakoor et al.2019] Thakoor, O.; Tambe, M.; Vayanos, P.; Xu, H.; and Kiekintveld, C. 2019. General-sum cyber deception games under partial attacker valuation information. In AAMAS, 2215–2217.
- [Wagener et al.2009] Wagener, G.; Dulaunoy, A.; Engel, T.; et al. 2009. Self adaptive high interaction honeypots driven by game theory. In Symposium on Self-Stabilizing Systems, 741–755. Springer.
- [Yin et al.2013] Yin, Y.; An, B.; Vorobeychik, Y.; and Zhuang, J. 2013. Optimal deceptive strategies in security games: A preliminary study. In Proc. of AAAI.