Towards Realistic Threat Modeling: Attack Commodification, Irrelevant Vulnerabilities, and Unrealistic Assumptions

01/14/2018
by Luca Allodi, et al.
TU Eindhoven

Current threat models typically consider all possible ways an attacker can penetrate a system and assign probabilities to each path according to some metric (e.g. time-to-compromise). In this paper we discuss how this view limits the realism of both technical (e.g. attack graphs) and strategic (e.g. game-theoretic) approaches to current threat modeling, and propose to move past it by looking more carefully at attack characteristics and the attacker's environment. We use a toy threat model for ICS attacks to show how a realistic view of attack instances can emerge from a simple analysis of attack phases and attacker limitations.


1. Threat modeling: introduction and background

The importance of threat modeling in the assessment of one's security posture is underlined by the numerous academic (Wang et al., 2008; Manadhata and Wing, 2011; Dolev and Yao, 1983; Roy et al., 2010), commercial (McMillen, 2017), and standardized (Stoneburner et al., 2002; Brenner, 2007; Fredriksen et al., 2002) approaches reported in the literature. In general, two broad categories of threat modeling can be identified:

1) Vulnerability and technical exposure

System vulnerabilities can be exploited by attackers to obtain unauthorized access to a system. The notion of 'attack surface' (Manadhata and Wing, 2011) captures the attacker's entry points, from which a metric of 'exposure' to attacks is computed. The paths that an attacker can follow to reach a specific system or resource can be modeled through 'attack graphs', where each node is a vulnerability on a system, and edges represent the attacker's possibility to 'escalate' to the next system (Wang et al., 2008). These graphs tend to explode as network size grows, making the identification of realistic paths to compromise difficult to achieve (Obes et al., 2013). In all approaches, the vulnerability plays a central role in the threat modeling: for example, the probability of traversing an attack graph along a certain path is typically computed as a function of some measurement of each vulnerability.
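As an illustration of this last point, the following minimal sketch enumerates the paths of a small, entirely hypothetical attack graph and scores each path as the product of per-edge exploit probabilities; the node names and probabilities are invented for the example, and real graphs quickly grow too large for such exhaustive enumeration.

```python
# Illustrative sketch: scoring attack-graph paths by per-vulnerability
# exploit probabilities (all nodes, edges, and probabilities are hypothetical).

# Adjacency list: node -> [(next_node, p_exploit)], where p_exploit is some
# vulnerability-derived measure (e.g. a normalized exploitability score).
attack_graph = {
    "internet":     [("web_server", 0.7), ("mail_gateway", 0.4)],
    "web_server":   [("db_server", 0.3)],
    "mail_gateway": [("db_server", 0.5)],
    "db_server":    [],
}

def path_probabilities(graph, source, target, p=1.0, path=None):
    """Enumerate all paths source -> target; the probability of traversing
    a path is modeled as the product of per-edge exploit probabilities."""
    path = (path or []) + [source]
    if source == target:
        yield path, p
        return
    for nxt, p_edge in graph[source]:
        if nxt not in path:  # avoid cycles
            yield from path_probabilities(graph, nxt, target, p * p_edge, path)

for path, prob in sorted(path_probabilities(attack_graph, "internet", "db_server"),
                         key=lambda x: -x[1]):
    print(" -> ".join(path), f"p = {prob:.2f}")
```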

2) Strategic approaches

A different perspective is adopted by models that account for attacker strategies, often in response to a defensive posture by their target (Roy et al., 2010). These models are typically specified using classical game theory and solved by looking for specific equilibria: for example, Nash solutions assume that each player (attacker and defender) aims at maximizing his/her own outcome irrespective of the choice of the other player (while accounting for the possible decisions he/she may make); Stackelberg security games assume instead that the attacker's decision depends on the observation of the defender's strategy. The model solutions oftentimes assume that 'all attackers look alike' (Abbasi et al., 2016) or that players act under perfect information (Fudenberg and Tirole, 1991). These assumptions also underlie recent linear-programming solutions to the defender's problem (Serra et al., 2015), where the attacker is implicitly assumed to be all-knowing with respect to the vulnerabilities on the system.
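To make the contrast between the two solution concepts concrete, here is a minimal sketch on a toy two-strategy security game; all payoffs are hypothetical, chosen only to make the contrast visible, and are not drawn from any cited model.

```python
# Minimal sketch contrasting Nash and Stackelberg solutions on a toy 2x2
# security game; all payoffs are hypothetical.
import numpy as np

# Rows: defender strategies (patch A, patch B); columns: attacker strategies
# (attack A, attack B).
def_payoff = np.array([[ 2, -3],
                       [-4,  1]])
atk_payoff = np.array([[-2,  3],
                       [ 4, -1]])

# Pure-strategy Nash check: a cell is an equilibrium iff each player's choice
# is a best response to the other's. (This toy game has none: as in matching
# pennies, equilibrium requires mixed, i.e. randomized, strategies.)
nash = [(i, j) for i in range(2) for j in range(2)
        if def_payoff[i, j] == def_payoff[:, j].max()
        and atk_payoff[i, j] == atk_payoff[i, :].max()]
print("pure Nash equilibria:", nash or "none (mixed strategies required)")

# Stackelberg: the defender commits first; the attacker observes the
# commitment and best-responds, so the defender picks the row maximizing
# her payoff *given* the attacker's best response to it.
commit = max(range(2), key=lambda i: def_payoff[i, atk_payoff[i].argmax()])
print("Stackelberg: defender commits to", commit,
      "| attacker best-responds with", int(atk_payoff[commit].argmax()))
```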

Despite the diversity in the adopted perspective or application scenario, all current approaches share an underlying assumption:

the attacker can arbitrarily choose whichever attack vector or sequence he/she thinks will maximize the function the model assigns to him/her.

For example, attack graphs enumerate all possible (known) vulnerabilities that the attacker may exploit; game-theoretic approaches assume the attacker can choose whichever attack strategy he/she believes best, irrespective of the solution strategy; NIST 800-30 directives require the assessors to look at all possible attack paths toward an asset. This of course puts the defender under great strain, as he/she needs to defend against all vulnerabilities (ranging in the thousands (NVD, 2015)), whereas the attacker needs only one or a few, and can choose any.

2. Empirical views on attacks

Recent empirical findings, replicated independently across a number of studies, present results in sharp contrast with the general assumption that attackers may exploit any available vulnerability. Whereas under this threat model one would expect exploits to appear in attack signatures and network telemetry at comparable rates (i.e. ideally, uniformly), the empirical observation is widely different: out of tens of thousands of possible vulnerabilities, only a fraction is actively exploited in the wild (even after controlling for observational biases) (Allodi and Massacci, 2014). This effect is even stronger when looking at the actual volumes of attacks driven by each (exploited) vulnerability, for which we observe heavy-tailed distributions of attacks (Nayak et al., 2014; Allodi, 2015). This effect also emerges for so-called 0-day vulnerabilities, i.e. vulnerabilities known to attackers before vendors have an opportunity to fix them (Bilge and Dumitras, 2012): over a historical record of attacks worldwide, Bilge and Dumitras find fewer than twenty 0-days, of which two cumulatively carry two million attacks, while the remaining ones constitute negligible background noise five orders of magnitude lower.

The figures remain similar when looking at the exploits attackers trade in the black markets: exploit kits drive millions of attacks (Grier et al., 2012; Provos et al., 2008; Symantec, 2011), yet each embeds at most five to ten exploits (Kotov and Massacci, 2013), a figure that has settled around 3-4 exploits per kit in recent years (Allodi, 2017; Ablon et al., 2014). At the same time, the vast majority of attacks in the wild are carried by only this handful of vulnerabilities (Allodi and Massacci, 2014; Allodi et al., 2013), and the refresh time of attacks in the wild is as slow as 600 days (i.e. the same exploits are re-used in the wild for almost two years before they are substituted at scale with a new attack) (Allodi et al., 2017). Similarly, attacks against Industrial Control Systems like SCADA, Distributed Control Systems, and PLCs, which form the skeleton of the critical infrastructure, seem to be relatively rare. While the severity of these attacks, especially when perpetrated at a national or infrastructural level, is certainly high, ICS incidents seem to affect a very limited fraction of the overall vulnerable infrastructure (McMillen, 2017; Cisco, 2017).

The present empirical view on attack distributions clearly cannot justify the assumption that attackers will choose any vulnerability among those available to deliver their attack. Even if systems remained vulnerable at non-uniform rates (which appears not to be the case (Nappa et al., 2015)), one would expect the overall distribution of attacks to 'spike' for vulnerabilities preferred by attackers (e.g. those that get patched at a slower rate than average), not for all other vulnerabilities to collapse several orders of magnitude below the preferred set, which is the phenomenon all available data clearly shows (Allodi and Massacci, 2017a). This mismatch is untenable for current threat models because the defender needs to decide which vulnerabilities to fix first, and it therefore becomes critical to identify which vulnerabilities are most likely to drive orders of magnitude more attacks than the rest.
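The kind of concentration these studies report is easy to visualize; the sketch below uses synthetic power-law data (no real attack volumes, and all numbers are made up) to show how a handful of vulnerabilities can dwarf thousands of others.

```python
# Illustrative only: synthetic heavy-tailed attack volumes showing the kind
# of concentration the cited studies report; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n_vulns = 10_000
# Draw per-vulnerability attack volumes from a Pareto (power-law) distribution.
volumes = rng.pareto(a=1.1, size=n_vulns) + 1.0
volumes = np.sort(volumes)[::-1]

share_top10 = volumes[:10].sum() / volumes.sum()
print(f"top 10 of {n_vulns} vulnerabilities carry {share_top10:.1%} of attacks")
print(f"remaining {n_vulns - 10} carry {1 - share_top10:.1%}")
```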

3. Attack generation

It therefore remains an open question to identify the 'attack generation process' that, out of thousands of vulnerabilities, selects only the small fraction that will deliver the vast majority of attacks. This is reflected in recent work, carried out independently by other authors, questioning the soundness of current attacker assumptions (Shortridge, 2017), in particular the assumptions, underlying game-theoretic solutions, that attackers operate under complete information and act rationally in their decision process. In related work in the cognitive sciences, Veksler showed that, depending on the payoff matrix, game-theoretic solutions to the attacker's mixed strategy may lead the attacker to lose in 50% of the scenarios, i.e. the attacker is indifferent to the choice and randomly picks an attack strategy (Veksler, 2016). Along the same lines, Abbasi et al. conclude that a 'homogeneous' distribution of attackers cannot be assumed when solving for equilibrium solutions to a game-theoretic model (Abbasi et al., 2016). Overall, one can broadly distinguish between 'commodified' and 'tailored' attacks as threat sources:

1) Commodified attacks are generated by some pre-existing attack mechanism that is available to the attacker through the environment in which he or she operates. These account for the vast majority of attacks (Provos et al., 2008; Nayak et al., 2014; Allodi et al., 2013). For example, an attacker may buy an exploit kit from the underground cybercrime markets (Grier et al., 2012; Allodi et al., 2015; Allodi, 2017), or a member of the self-proclaimed hacktivist organization Anonymous may use a pre-engineered tool to launch a distributed denial of service attack. Importantly, the update rate of these attacks is realistically driven not by the state of any specific target, but rather by the state of the population of possible targets: for example, an exploit kit gets updated when the overall efficacy of the exploits it embeds drops below a certain (commercially viable) threshold (Allodi et al., 2017), corresponding to the well-known economic process of 'inaction' formalized by Stokey (Stokey, 2008); a minimal sketch of this threshold-driven dynamic follows after this list;

2) Tailored attacks are attacks that are carried out against a specific target or a specific set of targets. These require some level of engineering or technical sophistication on the attacker's side, on top of the mere deployment of the attack. For example, an attacker may need to modify an exploit to fit the specific environment of the target system, or engineer a new attack altogether after finding that none of the attacks in his/her possession matches an existing vulnerability (Allodi and Massacci, 2017b). Importantly, in these scenarios the attacker often acts under severe uncertainty about the topology, structure, and characteristics of the attacked network, which makes the identification of a 'one-size-fits-all' attack strategy impossible (Sarraute et al., 2012).
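The sketch forward-referenced in point 1 above follows; the dynamics are assumed for illustration, not taken from the cited works: the kit's efficacy against the population of targets decays as targets patch, and the operator pays the fixed update cost only when efficacy falls below a commercially viable threshold.

```python
# Minimal sketch (assumed dynamics) of the 'inaction' update rule for a
# commodified attack: efficacy against the target population decays as
# targets patch; the kit is refreshed only below a viability threshold.
def simulate_kit_updates(days=1200, efficacy=0.30, decay=0.002,
                         threshold=0.10, refresh_to=0.30):
    """Return the days on which the kit operator swaps in a new exploit."""
    updates = []
    e = efficacy
    for day in range(days):
        e *= (1.0 - decay)      # targets patch; efficacy decays
        if e < threshold:       # no longer commercially viable
            updates.append(day)
            e = refresh_to      # pay the fixed cost: ship a new exploit
    return updates

print(simulate_kit_updates())   # long gaps between updates, cf. the ~600-day
                                # refresh time observed in the wild
```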

The former class of attacks has been recently covered in a number of papers highlighting the mismatch between empirical observations and attacker assumptions (Shortridge, 2017; Allodi et al., 2017; Nayak et al., 2014; Allodi and Massacci, 2017b). 'Tailored' attacks are however less investigated, on the one hand because they are much less common than 'commodified' attacks (Ten et al., 2008), and on the other because a clear rationalization of attacker characteristics remains elusive (Grossklags et al., 2010). For example, multi-stage attacks, where the attacker needs to overcome a set of perimeter defenses to breach a network and then tailor an attack once the 'internal' systems are revealed, may require significant effort in terms of reconnaissance and exploit engineering. This process may take several weeks or months to complete: whereas time-to-compromise is often proposed as a metric to evaluate 'likelihood' of attack (Holm, 2014; Nzoukou et al., 2013), it remains unclear at what level of the attacker strategy this has an effect. For example, a targeted ('tailored') attack against a governmental facility perpetrated by political activists requires specific timing to maximize media coverage (e.g. close to the approval of a controversial law); similarly, espionage requires an attack to be completed before the target secret loses tangible value for the attacker (e.g. classified information affecting the stock value of an organization); and a small criminal organization may not have the resources to dedicate months to finalizing an attack against an ICS system.

These effects should be accounted for in the same fashion as K. Shortridge (Shortridge, 2017) does in her work: by considering the attacker's ability to learn during the attack, the set of information he/she possesses before launching the attack, and his/her risk aversion to, for example, failing the attack (much like adversarial modeling in terrorism contexts prescribes (Ezell et al., 2010)).

3.1. An example based on a typical ICS system

Figure 1. Typical two-level ICS configuration. Picture taken from NIST Special Publication 800-82 (Stouffer et al., 2011).

Figure 1, drawn from NIST Special Publication 800-82 on ICS security (Stouffer et al., 2011), shows a typical multi-level ICS network in which a control center communicates, over one or more network layers, with field sites where PLCs, specialized controllers, and sensors are typically located. In this structure, the systems in the control center are typically 'regular Windows systems' (Shodan) enabling SCADA control functionalities; these systems are ideally 'air-gapped' (Stouffer et al., 2011) (i.e. they have no physical connection with other networks), but in practice they are often exposed to the open Internet to ease network management; the operation network is, in the best case, a VPN connecting the control systems to the peripheral devices. The systems in the field sites are ultimately the real target of an attack, as they are the ones that cause the impact in the physical world (e.g. accelerating a turbine, shutting off a power generator, etc.) (Ten et al., 2008).

Following indications from the ICS-CERT (https://ics-cert.us-cert.gov/content/overview-cyber-vulnerabilities), this setup forces the attacker to structure his/her attack in three stages: (1) penetrate the first control center layer; (2) analyze the internals of the target system; (3) strike the target system(s).
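This staged structure can be made concrete with a small sketch; the zone names, reachability edges, and hop-counting function below are illustrative assumptions, not taken from NIST SP 800-82 or the ICS-CERT material.

```python
# Hypothetical encoding of the ICS-CERT three-stage structure against the
# two-level topology of Fig. 1; zone names and edges are illustrative.
reachable_from = {
    "internet":       ["control_center"],  # stage 1: phishing / watering hole
    "control_center": ["operations_net"],  # stage 2: internal reconnaissance
    "operations_net": ["field_devices"],   # stage 3: strike PLCs / controllers
}

def stages_to(target, start="internet"):
    """Count the network hops (attack stages) separating start from target."""
    frontier, hops = [start], 0
    while target not in frontier:
        frontier = [n for z in frontier for n in reachable_from.get(z, [])]
        hops += 1
        if not frontier:
            return None  # unreachable, e.g. a truly air-gapped zone
    return hops

print(stages_to("field_devices"))  # -> 3
```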

A typical attack scenario involves a watering hole attack, or a targeted phishing email, to penetrate the first layers of the control center. In this stage the attacker can use a 'commodified' set of attacks to gain control over what is, from a vulnerability perspective, a regular (hopefully hardened) Windows system. The actual control systems may require additional effort to be reached once inside (e.g. because of network segregation) (Conway et al., 2016). From here, extensive levels of reconnaissance are typically needed to (1) identify the network and system configuration visible from the first attack stage, and (2) understand which attacks are needed to move to the next phase. For example, Kim Zetter's coverage of the Ukrainian power grid attack (Conway et al., 2016) summarizes (https://www.wired.com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid/):

The initial intrusion got the attackers only as far as the corporate networks. But they still had to get to the SCADA networks that controlled the grid [..] so the attackers were left with two options: either find vulnerabilities that would let them punch through the firewalls or find another way to get in. [..] Over many months they conducted extensive reconnaissance, exploring and mapping the networks and getting access to the Windows Domain Controllers [..] Once they got into the SCADA networks, they slowly set the stage for their attack.

3.1.1. Toy model of a ‘tailored’ attack

Let us then consider an attacker that values the potential overall havoc that can be inflicted on the SCADA infrastructure as $W$. A certain fraction $w$ of $W$ can be extracted from the first-phase attack alone (for example by spreading ransomware or by defacing a public server). As the first stage can be aided by commodified attacks, we assume the attacker succeeds with probability 1 (Kotov and Massacci, 2013; Allodi et al., 2013). The payoff for the first phase is:

\pi_1 = w\,W - c_1 \qquad (1)

where $c_1$ is the cost of performing the attack (e.g. to buy the exploit).

From the newly acquired position the attacker may start performing some level of reconnaissance on the internal network (stage 2), including the identification of other subnets, of bridge systems to the SCADA control network, and/or of the topology of the ICS network (see Fig. 1). Let us denote with $p$ the overall probability that the attacker successfully accomplishes this phase. Then, with probability $p$ the attacker extracts the remaining fraction of $W$, i.e. $(1-w)\,W$, and pays an upfront cost $c_2$ for completing phase two. As discussed above, an attacker may wish to finalize the attack within a specific timeframe $t$, e.g. because of limited resources to commit to prolonged attacks; the overall value of phase two is therefore discounted by a factor $e^{-\delta t}$, where $\delta$ is the attacker's discount rate. We therefore set the two-phase attack profit as:

\pi_2 = e^{-\delta t}\, p\,(1-w)\,W - c_2 \qquad (2)

Figure 2. Numerical simulation of phase-one and phase-two attack payoffs.

Figure 2 reports a numerical simulation of the two models. Negative payoffs indicate areas where the attacker will not act. As one can see, the value of a two-phase attack quickly decays as $t$ increases (as $e^{-\delta t} \to 0$), and the rate of decrease increases as $\delta$ grows. This reflects the intuition that a two-phase attack quickly loses appeal as time passes, and as the value that can be extracted from a phase-one attack increases.
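The qualitative behavior of the two payoffs can be reproduced with a short numerical sketch; the parameter values below are illustrative and do not correspond to the settings behind Figure 2.

```python
# Minimal numerical sketch of the two-phase payoff model above; all
# parameter values are illustrative.
import numpy as np

W = 100.0            # total havoc value of the target infrastructure
c1, c2 = 5.0, 30.0   # upfront costs of the phase-one / phase-two attacks
p = 0.5              # probability of completing reconnaissance and strike

def pi1(w):
    """Phase-one payoff: fraction w of W, extracted with probability 1."""
    return w * W - c1

def pi2(w, t, delta):
    """Phase-two payoff: the remaining (1-w)W, discounted over t days."""
    return np.exp(-delta * t) * p * (1 - w) * W - c2

# Phase two is profitable only while exp(-delta*t) * p * (1-w) * W > c2,
# i.e. for t below a break-even horizon that shrinks as w or delta grows.
for w in (0.2, 0.5):
    for delta in (0.002, 0.02):   # low delta ~ well-resourced attacker
        gross = p * (1 - w) * W
        horizon = np.log(gross / c2) / delta if gross > c2 else 0.0
        print(f"w={w}, delta={delta}: phase two profitable for "
              f"t < {horizon:.0f} days")
```

Under these illustrative numbers, a patient attacker (small $\delta$) retains a months-long window in which phase two pays off, while for a resource-constrained attacker the window collapses to days or vanishes altogether, matching the decision regions of Fig. 2.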

3.1.2. Take-aways

From the example analysis above it emerges that attackers in the second phase will only materialize in the indicated area under the curve in Fig. 2: for example, hacktivists and other low-resource attackers will largely disregard phase-two attacks (which quickly lose appeal, as too many resources and too much time would likely need to be dedicated to completing the second phase) and focus instead on standard internet-facing systems, which appears indeed to be the case for most attacks of this type (Team, 2016; Bronk and Tikk-Ringas, 2013). For example, infecting an organization's network with ransomware may be enough to cause the desired level of havoc, or a website defacement enough to reach the desired audience (e.g. the organization's customers) (Team, 2016). The appeal of stage-two attacks decreases the lower their overall impact on the targeted infrastructure is (i.e. as $w \to 1$). By contrast, nation-state or otherwise powerful attackers may have an adequate resource set to undertake the second-stage attack, e.g. because they have a set of ready-to-use SCADA exploits, or because their discount rate $\delta$ is close to zero (i.e. they have resources to commit to the attack for long periods of time).

Under this model, the appearance of a new vulnerability in an ICS system carries little weight in the overall risk scenario, as most attackers would not be able to exploit it (unless commodified). Reasoning about the attacker's ability to learn at every stage of the attack can help in setting parameters of the model to realistic values and, critically, in understanding the correct shape of the attacker's learning function. The presence of multiple attackers, or the collusion of attackers with insiders, may be considered in the model as well, in the same fashion as terrorist cells are modeled in the risk analysis literature (Ezell et al., 2010).

Importantly, in this model we are assuming that the value $W$ of the attacked infrastructure is determined exogenously; in reality, attackers may adjust $W$ depending on their own specific interests. In this simple model, $w$ represents the mechanism that associates the value assigned by the attacker to the first-stage attack. For example, a human rights hacktivist may be reluctant to exfiltrate private customer data, and assign a high value to the defacement of the target's public services; a market competitor may have the opposite view. In reality, attacker payoffs may be influenced by several endogenous factors that determine his or her true preference; for example, an attacker may value success of attack more than impact of attack, i.e. prefer a strategy that yields the highest chances of success despite low consequences over a strategy with low probability of success but catastrophic impact (Ezell et al., 2010). These considerations are often reported in the risk analysis literature, but seldom considered in the cybersecurity field, where the 'technical' perspective oftentimes overshadows considerations on attacker skills, goals, and capabilities.
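As a toy illustration of this last preference, the functional form and numbers below are ours, purely hypothetical, and not drawn from the cited risk analysis literature.

```python
# Toy preference model (hypothetical): a risk-averse attacker inflates the
# weight of success probability relative to impact (risk_aversion > 1),
# whereas a risk-neutral attacker (risk_aversion = 1) maximizes expected impact.
def attacker_utility(p_success, impact, risk_aversion=2.0):
    return (p_success ** risk_aversion) * impact

reliable     = (0.9, 10.0)    # near-certain attack with modest consequences
catastrophic = (0.1, 500.0)   # unlikely attack with catastrophic consequences

for ra in (1.0, 2.0):
    u_rel = attacker_utility(*reliable, risk_aversion=ra)
    u_cat = attacker_utility(*catastrophic, risk_aversion=ra)
    pick = "reliable" if u_rel > u_cat else "catastrophic"
    print(f"risk_aversion={ra}: {u_rel:.1f} vs {u_cat:.1f} -> picks {pick}")
```

With risk neutrality the catastrophic long shot wins on expected impact; even mild risk aversion flips the preference to the modest but reliable attack.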

References

  • NVD (2015) 2015. NIST National Vulnerability Database (NVD). http://nvd.nist.gov
  • Abbasi et al. (2016) Yasaman D Abbasi, Noam Ben-Asher, Cleotilde Gonzalez, Debarun Kar, Don Morrison, Nicole Sintov, and Milind Tambe. 2016. Know Your Adversary: Insights for a Better Adversarial Behavioral Model. In Proceeding of the Conference of Cognitive Science Society.
  • Ablon et al. (2014) Lillian Ablon, Martin C Libicki, and Andrea A Golay. 2014. Markets for cybercrime tools and stolen data: Hackers’ bazaar. Rand Corporation.
  • Allodi (2015) Luca Allodi. 2015. The Heavy Tails of Vulnerability Exploitation. In Proc. of ESSoS’15.
  • Allodi (2017) Luca Allodi. 2017. Economic Factors of Vulnerability Trade and Exploitation: Empirical Evidence from a Prominent Russian Cybercrime Market. In Proceedings of ACM CCS’17. ACM.
  • Allodi et al. (2015) Luca Allodi, Marco Corradin, and Fabio Massacci. 2015. Then and Now: On the Maturity of the Cybercrime Markets. IEEE Transactions on Emerging Topics in Computing (2015).
  • Allodi et al. (2013) Luca Allodi, Vadim Kotov, and Fabio Massacci. 2013. MalwareLab: Experimentation with Cybercrime attack tools. In Proc. of CSET’13.
  • Allodi and Massacci (2014) Luca Allodi and Fabio Massacci. 2014. Comparing vulnerability severity and exploits using case-control studies. ACM Transactions on Information and System Security (TISSEC) 17, 1 (Aug. 2014).
  • Allodi and Massacci (2017a) Luca Allodi and Fabio Massacci. 2017a. Attack Potential in Impact and Complexity. In Proceedings of ARES.
  • Allodi and Massacci (2017b) Luca Allodi and Fabio Massacci. 2017b. Security Events and Vulnerability Data for Cybersecurity Risk Estimation. Risk Analysis, to appear (2017).
  • Allodi et al. (2017) Luca Allodi, Fabio Massacci, and Julian Williams. 2017. The Work-Averse Cyber Attacker Model: Evidence from two million attack signatures. In WEIS 2017. Available at https://ssrn.com/abstract=2862299.
  • Allodi et al. (2013) Luca Allodi, Shim Woohyun, and Fabio Massacci. 2013. Quantitative assessment of risk reduction with cybercrime black market monitoring. In Proc. of IWCC'13.
  • Bilge and Dumitras (2012) Leyla Bilge and Tudor Dumitras. 2012. Before we knew it: an empirical study of zero-day attacks in the real world. In Proc. of CCS’12. ACM, 833–844.
  • Brenner (2007) Joel Brenner. 2007. ISO 27001: Risk management and compliance. Risk management 54, 1 (2007), 24.
  • Bronk and Tikk-Ringas (2013) Christopher Bronk and Eneken Tikk-Ringas. 2013. The cyber attack on Saudi Aramco. Survival 55, 2 (2013), 81–96.
  • Cisco (2017) Cisco. 2017. Annual Cybersecurity Report. Technical Report. Cisco.
  • Conway et al. (2016) Tim Conway, Robert M. Lee, and Michael J. Assante. 2016. Analysis of the cyber attack on the Ukrainian power grid. Defense use case. Technical Report. SANS ICS.
  • Dolev and Yao (1983) D. Dolev and A. Yao. 1983. On the security of public key protocols. IEEE Trans. on Inf. Th. 29, 2 (Mar. 1983), 198–208.
  • Ezell et al. (2010) Barry Charles Ezell, Steven P Bennett, Detlof Von Winterfeldt, John Sokolowski, and Andrew J Collins. 2010. Probabilistic risk analysis and terrorism risk. Risk analysis 30, 4 (2010), 575–589.
  • Fredriksen et al. (2002) Rune Fredriksen, Monica Kristiansen, Bjørn Axel Gran, Ketil Stølen, Tom Arthur Opperud, and Theodosis Dimitrakos. 2002. The CORAS framework for a model-based risk management process. In Safecomp, Vol. 2. Springer, 94–105.
  • Fudenberg and Tirole (1991) D. Fudenberg and J. Tirole. 1991. Game theory. MIT Press.
  • Grier et al. (2012) Chris Grier, Lucas Ballard, Juan Caballero, Neha Chachra, Christian J. Dietrich, Kirill Levchenko, Panayiotis Mavrommatis, Damon McCoy, Antonio Nappa, Andreas Pitsillidis, Niels Provos, M. Zubair Rafique, Moheeb Abu Rajab, Christian Rossow, Kurt Thomas, Vern Paxson, Stefan Savage, and Geoffrey M. Voelker. 2012. Manufacturing compromise: the emergence of exploit-as-a-service. In Proc. of CCS’12. ACM, 821–832.
  • Grossklags et al. (2010) Jens Grossklags, Benjamin Johnson, and Nicolas Christin. 2010. The price of uncertainty in security games. Economics of Information Security and Privacy (2010), 9–32.
  • Holm (2014) Hannes Holm. 2014. A Large-Scale Study of the Time Required to Compromise a Computer System. TDSC 11, 1 (2014), 2–15. https://doi.org/10.1109/TDSC.2013.21
  • Kotov and Massacci (2013) Vadim Kotov and Fabio Massacci. 2013. Anatomy of Exploit Kits. Preliminary Analysis of Exploit Kits as Software Artefacts. In Proc. of ESSoS 2013.
  • Manadhata and Wing (2011) Pratyusa K. Manadhata and Jeannette M. Wing. 2011. An Attack Surface Metric. TSE 37 (2011), 371–386. https://doi.org/10.1109/TSE.2010.60
  • McMillen (2017) David McMillen. 2017. Security attacks on industrial control systems. Technical Report. IBM.
  • Nappa et al. (2015) A. Nappa, R. Johnson, L. Bilge, J. Caballero, and T. Dumitras. 2015. The Attack of the Clones: A Study of the Impact of Shared Code on Vulnerability Patching. In Proc. of the 36th IEEE Symp. on Sec. & Privacy. 692–708.
  • Nayak et al. (2014) Kartik Nayak, Daniel Marino, Petros Efstathopoulos, and Tudor Dumitraş. 2014. Some Vulnerabilities Are Different Than Others. In Proc. of RAID’14. Springer, 426–446.
  • Nzoukou et al. (2013) William Nzoukou, Lingyu Wang, Sushil Jajodia, and Anoop Singhal. 2013. A unified framework for measuring a network’s mean time-to-compromise. In Reliable Distributed Systems (SRDS), 2013 IEEE 32nd International Symposium on. IEEE, 215–224.
  • Obes et al. (2013) Jorge Lucangeli Obes, Carlos Sarraute, and Gerardo Richarte. 2013. Attack planning in the real world. arXiv preprint arXiv:1306.4044 (2013).
  • Provos et al. (2008) Niels Provos, Panayiotis Mavrommatis, Moheeb Abu Rajab, and Fabian Monrose. 2008. All your iFRAMEs point to Us. In Proc. of USENIX’08. 1–15.
  • Roy et al. (2010) Sankardas Roy, Charles Ellis, Sajjan Shiva, Dipankar Dasgupta, Vivek Shandilya, and Qishi Wu. 2010. A survey of game theory as applied to network security. In System Sciences (HICSS), 2010 43rd Hawaii International Conference on. IEEE, 1–10.
  • Sarraute et al. (2012) Carlos Sarraute, Olivier Buffet, Jörg Hoffmann, et al. 2012. POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing.. In AAAI.
  • Serra et al. (2015) Edoardo Serra, Sushil Jajodia, Andrea Pugliese, Antonino Rullo, and V. S. Subrahmanian. 2015. Pareto-Optimal Adversarial Defense of Enterprise Systems. ACM Trans. Inf. Syst. Secur. 17, 3, Article 11 (March 2015), 39 pages. https://doi.org/10.1145/2699907
  • Shodan (n.d.) Shodan. ICS report. https://www.shodan.io/report/HAEpJHKy
  • Shortridge (2017) Kelly Shortridge. 2017. Big Game Theory Hunting: The Peculiarities of Human Behavior in the Infosec Game. In Black Hat 2017.
  • Stokey (2008) Nancy L Stokey. 2008. The Economics of Inaction: Stochastic Control models with fixed costs. Princeton University Press.
  • Stoneburner et al. (2002) Gary Stoneburner, Alice Y Goguen, and Alexis Feringa. 2002. Sp 800-30. risk management guide for information technology systems. (2002).
  • Stouffer et al. (2011) Keith Stouffer, Joe Falco, and Karen Scarfone. 2011. Guide to industrial control systems (ICS) security. NIST special publication 800, 82 (2011), 16–16.
  • Symantec (2011) Symantec. 2011. Analysis of Malicious Web Activity by Attack Toolkits (online ed.). Symantec. Available at http://www.symantec.com/threatreport/topic.jsp?id=threat_activity_trends&aid=analysis_of_malicious_web_activity. Accessed June 2012.
  • Team (2016) Verizon RISK Team. 2016. Data breach digest. (2016).
  • Ten et al. (2008) Chee-Wooi Ten, Chen-Ching Liu, and Govindarasu Manimaran. 2008. Vulnerability assessment of cybersecurity for SCADA systems. IEEE Transactions on Power Systems 23, 4 (2008), 1836–1846.
  • Veksler (2016) Vladislav D. Veksler. 2016. Know Your Enemy: Applying Cognitive Modeling in Security Domain. In CogSci.
  • Wang et al. (2008) Lingyu Wang, Tania Islam, Tao Long, Anoop Singhal, and Sushil Jajodia. 2008. An Attack Graph-Based Probabilistic Security Metric. In Proc. of DAS’08. LNCS, Vol. 5094. Springer, 283–296.