Models we Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation

We advocate the development of a discipline of interacting with and extracting information from models, both mathematical (e.g. game-theoretic) and computational (e.g. agent-based). We outline some directions for the development of such a discipline:
- the development of logical frameworks for the systematic formal specification of stylized facts and social mechanisms in (mathematical and computational) social science. Such frameworks would bring to attention new issues, such as phase transitions, i.e. dramatic changes in the validity of stylized facts beyond some critical values in parameter space. We argue that such statements are useful for logical frameworks describing properties of ABM.
- the adaptation of tools from the theory of reactive systems (such as bisimulation) to obtain practically relevant notions of two systems ”having the same behavior”.
- the systematic development of an adversarial theory of model perturbations, which investigates the robustness of conclusions derived from models of social behavior to variations in several features of the social dynamics, including activation order, the underlying social network, and individual agent behavior.

1 Introduction

Despite the recent surge in experimental studies of human behavior induced by the availability of (mostly online) social data, a large percentage of work in the mathematical and computational social sciences is still devoted to theorizing, that is, building models of social phenomena rather than analyzing social data. Whether mathematical or computational (e.g. agent-based), a dizzying variety of models is proposed and analyzed in the scientific literature.

And yet, controversy (if not outright dissatisfaction) about the status and true meaning of such models, and of the modeling process itself, is prevalent throughout the social sciences. An example is the vivid debate on the role of (mathematical) models in economics [1]. Economic models can be interpreted as ”credible worlds” [2], analogies [3], (thought) experiments [4], parables/fables [5], intermediate byproducts of robustness analysis [6], or ludic devices similar to children’s toys [7]. In any case, the discussion about the robustness of scientific models, currently taking place in the Philosophy of Science literature [8], is highly relevant.

A similar debate takes place in the social simulation literature. The critical research problem is that of verifying and validating agent-based models (ABM) [9, 10] (in short, the V & V problem). Theoretical frameworks have been proposed that attempt to deal with this issue, such as the generative approach to social simulations [11, 12], model calibration, docking/alignment [13, 14], replication [15], model-to-model analysis [16], etc. But there is no consensus on what verification and validation mean (see also [17, 18, 19, 20]).

There are multiple reasons that make the V&V problem important and difficult. A first reason is scale: whereas Schelling [21] could conceive his celebrated segregation model using pen and paper only, recent simulation models and projects aim to reach global dimensions [22, 23, 24, 25, 26]. A second reason has to do with the potential social consequences: social simulations increasingly serve as consultants to (and implicitly affect) public policy [27, 28, 29]. A dramatic illustration of this fact in the context of the global pandemic of 2020 has been the controversy around the recommendations of the Imperial College epidemiological model [30]. This has led to significant discussion in the social simulation community, illustrated e.g. by the programmatic article [31] and the subsequent comments (e.g. [32, 33, 34, 35, 36, 37]). A final reason that makes the V&V question difficult is the very nature of simulation models: incomplete abstractions of reality, subject to complex behavior [38] that often involves multiple types of emergence [39].

It has been noted [40] that the proposals put forward in the ABM literature often have an ad-hoc nature, and that a more systematic theory is needed. The goal of this paper is to advocate the use of logic and formal methods as useful tools for the systematic development of such theories. We discuss a number of ways in which this may happen, and outline several research challenges associated with our proposals. The distinctive feature of the kind of frameworks we advocate is that they require a highly unusual combination of two areas with very different languages: logic and formal methods [41] on the one hand, and sociological theory [42] on the other. Importantly, the logical frameworks we envision should actively seek to avoid becoming what Edmonds [43] called the “philosophical approach” to logic. Instead, they should attempt to formalize genuine aspects of social theory (e.g. organizational logic, see e.g. [44, 45]), help with addressing issues related to V&V, and serve as ”middleware” for agent-based simulations, helping to advance conclusions that are robust and believable.

2 Formal tools for social theory

Is there any role for logical formalizations in describing and analyzing social dynamics, in ABM in particular? This question seems to have been asked so many times, with so many different interpretations in mind, that a complete survey of this literature would not be particularly enlightening. Early on, Elster [46] argued that “logical theory can be applied not only in the formalization of knowledge already obtained by other means, but that logic can enter in the creative and constructive phase of scientific work” (op. cit. pp. 1). He explored the role of quantified modal logic in describing social reality, with a particular focus on developing his method as an alternative to Hegelian dialectics. Closer to the present, Hannan [47] (see also [48, 49]) proposed a rational reconstruction of social theory (organization science in particular) using techniques based on first-order predicate logic. Logical methods are, of course, well-established in economics. To give just one example, the so-called interactive epistemology program [50] is by now a classical part of theoretical economics, and a key ingredient of a recent proposal for a common foundation of all social sciences [51].

The use of logic-based methods would certainly not be controversial to a large part of the AAMAS audience: in fact, one could justifiably ask what is novel in such a proposal. After all, formal methods based on temporal logic are a particularly significant success story: techniques such as model checking [41] and runtime verification [52] lie behind eliminating errors in designing computer circuits, in writing software for technological artifacts (from remote controls and mobile devices to airplanes) and the Mars Rover [53]. Logical methods are widely used in the area of multiagent systems [54, 55, 56]. Model checking techniques are useful in the verification of software agents [57, 58, 59] and auctions [60].

Yet, the above optimism seems not to be shared by the practicing social simulation community. The mentioned advances in software agents do not necessarily translate into corresponding advances on simulating social agents [61]. The techniques developed in the former literature rely too little on existing sociological knowledge, and address to an insufficient degree the concerns of social scientists. Unsurprisingly, they have been criticized (Edmonds [43], see also [62, 63, 64]) as ”not useful given the state of MAS” and ”not […] useful in either understanding or building MAS”.

We believe that logical methods can indeed help increase the reliability of conclusions derived from social simulations. However, to be useful, such logics have to be tailored to the needs of the social scientist rather than defined as objects of intrinsic mathematical interest. In particular, they should have the following properties:

-

the logics to be developed should be expressive enough to help formalize not only game-theoretic aspects of social theories (see e.g. [65, 66, 67, 68]) but also a variety of aspects of sociological theory [42, 69]. We give in the sequel two examples of concepts that we would like to see formalized: stylized facts and social mechanisms.

-

the study of the logical frameworks we propose should be driven by considerations related to their implementation in (and applications to) ABM. Their primary goal should not be that of enabling deductive reasoning about social phenomena. Instead, they should be used to formally specify observed social facts, in a way that enables the construction of automated ”monitors” serving as ”middleware” between the social simulation and the decision support level by recognizing (and signaling) the emergence of a given fact in a given simulation run. Our proposal is naturally related to the recent call for the development of live simulations [70], i.e. continuously feeding a simulation model with real-world data. In contrast, our proposal concerns (automatically) extracting data from the simulation, and using it to understand in a more systematic fashion the unfolding of the social dynamics.

-

it is not that important whether deciding implication in the new logics is tractable (we can just run the simulation to see if a certain fact becomes true). However, the model checking problem (given a description of the state of the world, is a stylized fact true in it?) should have efficient algorithms (see also [71]).

-

one problem of significant importance is the monitoring question for a logical formula: given a sequence σ of ”states of the world” (corresponding to a simulation run) and a statement φ, how do we efficiently detect that φ becomes true at some point in σ? This is a question pertaining to runtime verification [72], so we should take inspiration from this literature; but, given the rooting of the logical frameworks in social theory, it is likely that a simple adaptation of existing logics will not be enough.

-

several new research topics, motivated by our vision of studying ”robust” stylized facts observed from simulation runs, may gain preeminence. We give an example: the study of ”continuity” properties of parameterized families of logical statements, as we vary the parameters of a given model. The opposite scenario, that of emergence of critical points (phase transitions) in the properties of social systems (and in their logical description) is also interesting.
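As an illustration of the monitoring question, the kind of automated ”monitor” discussed above can be sketched in a few lines of Python (all names here are ours, purely illustrative, not a standard library): the monitor scans a simulation trace and signals the first step at which a given stylized fact becomes true.

```python
def monitor_eventually(trace, fact):
    """Return the first step at which `fact` holds on a state, or None."""
    for t, state in enumerate(trace):
        if fact(state):
            return t
    return None

# A toy run: each "state of the world" maps agent ids to their state.
run = [
    {1: 'B', 2: 'B'},
    {1: 'A', 2: 'B'},
    {1: 'A', 2: 'A'},  # the fact "every agent holds state A" first holds here
]

all_A = lambda s: all(v == 'A' for v in s.values())
print(monitor_eventually(run, all_A))  # -> 2
```

A real monitor would of course handle richer temporal statements than plain "eventually", but the middleware role is the same: the simulation streams states of the world, and the monitor signals the emergence of the fact.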

2.1 Parameterized logics of stylized facts: ”continuous statements” and ”phase transitions”

A first application domain for the logics we envision is the formal specification of stylized facts. There is no agreement on what a stylized fact is (however, see [73], as well as [74] for some relevant philosophical work). To advance a working definition, according to the former paper, at least in microeconomics, ”stylized facts are currently understood as broad, but robust enough statistical properties pertaining to a certain economic phenomenon”.

The requirement that stylized facts are robust is crucial in deciding what is and what is not a useful stylized fact. Consider, e.g., the following trivial baseline scenario (only important as a pedagogical example): each of n agents may be in one of two states, A and B. Each agent prefers state A to state B. Agents are scheduled at random; when scheduled, each agent changes its state according to the best-response dynamics, moving to the state that gives it the highest utility. Hence, when scheduled, agents will turn to state A (and subsequently stay that way, even if scheduled again).

An obvious conclusion about the dynamics, and a candidate stylized fact, could be the following: eventually every agent will play strategy A. This is not, however, a robust stylized fact. This can be seen by parameterizing the baseline model and modifying agent behavior: we will assume a single parameter ε ∈ [0, 1]. When scheduled, an agent will choose A with probability 1 − ε and B with probability ε. The baseline model corresponds to the case ε = 0.

It is easy to see that the proposed stylized fact ceases to be true for ε > 0, i.e. as soon as we move away from the baseline model. In other words, the conclusion that every agent eventually holds state A is not robust to even the slightest variation in agents’ choice probability, hence it cannot be considered a (robust) stylized fact. A more robust formulation is one that claims that the agents’ state converges to a stationary distribution with each agent independently being in A w.p. 1 − ε and in B w.p. ε. Note that:

-

to formalize the robust version of this stylized fact we no longer deal with individual statements, but with parameterized families of logical statements. They encode a (single) social fact, expressed slightly differently across variations of the model.

-

in a very well-defined intuitive sense the baseline fact (all agents eventually adopt state A) is ”the limit”, as ε → 0, of the corresponding parameterized statements for ε > 0. Existing logical frameworks cannot, however, deal with such examples: while probabilistic/continuous logical frameworks (and their model checking) exist and might be useful in ABM [75, 76], and parametricity is important in such settings [77], at the metalevel logic is still largely a discrete framework, with no concept of ”distance between statements” or ”continuous limits of statements”.

-

in other scenarios the continuous behavior of stylized facts no longer holds. Instead, social systems display phase transitions: abrupt changes in the validity of certain stylized facts beyond some critical value of a given parameter. While the study of phase transitions is a well-established topic in Complex Systems and A.I. [78, 79], with phase transitions appearing even in settings relevant to model checking [80], the logical study of such ”phase transitions” is still a relatively underdeveloped area. An exception is the topic of ”zero-one laws” in the theory of random graphs [81]. There are many social phenomena where such concepts seem relevant. An example is the discussion about tipping points: whether one talks about natural or social phenomena [82], there is considerable interest in anticipating such tipping points [83]. In the theory of random graphs, the characterization of monotone properties that have ”phase transitions” is fairly well understood: such properties have a ”global” nature, depending crucially on the presence of most of the edges of the network [84]. In contrast, ”local” properties, e.g. the existence of a fixed subgraph, lack a phase transition [85]. The nature of the logical theories in which one formulates the stylized facts also impacts the detection of tipping points: for instance, the emergence of the giant component in a random graph cannot be ”sensed” by first-order logic [86]. Finally, similar results exist in scenarios with a dynamical flavor: start with an empty graph, add random edges, and measure the time when a certain graph property appears. It may be possible to extend such results to settings relevant to ABM:
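The baseline scenario of Section 2.1 and its ε-perturbation are simple enough to simulate directly; the following sketch (our own illustrative code, with the symbols A, B and ε as chosen above) shows the baseline stylized fact holding at ε = 0 and failing for ε > 0, where only its stationary-distribution reformulation survives.

```python
import random

def simulate(n, eps, steps, rng):
    """n agents with states 'A'/'B'; a randomly scheduled agent adopts
    A with probability 1 - eps and B with probability eps."""
    state = ['B'] * n
    for _ in range(steps):
        i = rng.randrange(n)
        state[i] = 'A' if rng.random() >= eps else 'B'
    return state

rng = random.Random(0)

# eps = 0 (the baseline): "eventually every agent holds A" holds.
print(all(s == 'A' for s in simulate(50, 0.0, 5000, rng)))  # -> True

# eps > 0: the fact fails; instead, roughly a (1 - eps) fraction of agents
# holds A at any late time, as the stationary-distribution version claims.
final = simulate(50, 0.2, 50000, rng)
print(round(final.count('A') / 50, 2))  # close to 1 - eps = 0.8
```

The point of the sketch is exactly the discontinuity discussed above: the property that holds at ε = 0 is not the limit of what one observes for small ε > 0 unless it is reformulated as a parameterized family of statements.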

Challenge 1.

Develop a theory of logical frameworks that admit ”parameterized statements”, and study ”phase transitions” in such statements. Ideally this study would yield algorithmic methods to anticipate ”tipping points” in agent-based social simulations. Having such methods would operationalize the discussion about the robustness of stylized facts: to argue whether a given stylized fact holds in reality one could ask whether the parameters of the real world lie in the region of the parameter space where the stylized fact varies continuously.
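The kind of ”tipping point” Challenge 1 refers to can be observed directly in the random-graph example mentioned above. The following self-contained sketch (our own illustrative code, no graph library assumed) contrasts the largest connected component of an Erdős–Rényi graph G(n, c/n) below and above the critical value c = 1, where the giant component emerges.

```python
import random

def largest_component(n, c, rng):
    """Largest connected component of a random graph G(n, c/n)."""
    p = c / n
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = [False] * n, 0
    for s in range(n):
        if seen[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:                      # depth-first search from s
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        best = max(best, size)
    return best

rng = random.Random(1)
sub = largest_component(2000, 0.5, rng)  # subcritical: components of size O(log n)
sup = largest_component(2000, 2.0, rng)  # supercritical: a giant component appears
print(sub, sup)
```

An algorithmic method for anticipating tipping points in ABM would, in the spirit of the challenge, detect such an abrupt change in a monitored property as a model parameter is varied.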

2.2 Formalizing social mechanisms

It is not only (stylized) facts that are in need of a logical formalization. After all, in a social simulation we are not interested in facts only, but in illuminating the causal reasons that lead to their emergence. Often (e.g. in the area of Analytical Sociology [69]) such causal explanations involve social mechanisms [87, 88, 89].

There is little consensus what a social mechanism is: Hedström ([87] pp. 25) compiles a list of seven definitions (due to Bunge, Craver, Elster, Hedström and Swedberg, Little and Stinchcombe). Of these seven, the most useful is due to Machamer ([90], also [91, 92]). As paraphrased in [87] “mechanisms can be said to consist of entities (with their properties) and the activities that these entities engage in, either by themselves or in concert with other entities. These activities bring about change […]. A social mechanism, as here defined, describes a constellation of entities and activities that are organized such that they regularly bring about a certain type of outcome. We explain a social phenomenon by referring to the social mechanism by which such phenomena are regularly brought about”.

Social mechanisms are complemented by other approaches: Hedström lists covering-law explanations [93] and statistical explanations. These alternatives are not mutually exclusive: social mechanisms can, e.g., be sometimes inferred from statistical considerations; they can have themselves stochastic/statistical ingredients.

In any case, whatever social mechanisms are, they seem to have a complex structure: they can appear in families [94], can concatenate [95] and be hierarchically nested [91]. It seems, therefore, that:

-

Verifying and validating social models (including simulation models) needs to address issues pertaining to explanation and causality. Statistical testing guidelines pertaining to replication, such as those discussed in [9], or generative explanations such as those proposed in [11, 12], are necessary but not sufficient. On the other hand, social mechanisms, being in one sense “interpretations in term of individual behavior of a model that abstractly reproduces the phenomenon that needs explaining” [94], naturally complement these methods (see also [96]).

-

The role of social mechanisms in validating social models could be informally described as follows: simulations should reproduce known social mechanisms that are part of the expert knowledge in the area of concern and, of course, perhaps suggest new ones.

-

In accord with [97], “formalizing models is a prerequisite to illuminate social mechanisms” and may help in making this notion precise. As a consequence we propose the following

Challenge 2.

Give logical formalizations of the various notions of social mechanism in Analytical Sociology, and use these formalizations for the automatic recognition and inference of concrete social mechanisms in ABM runs.

2.3 Towards a systematic theory of adversarial model perturbations

It is clear by now that some form of robustness analysis [8] is crucial to the verification and validation of social models. The concept has been heavily discussed in the Philosophy of Science literature, and can be applied to both mathematical models (e.g. the robust Volterra principle [98]) and to ABM (see [99]).

In contrast there is relatively little work on approaches to robustness with a practical potential: it is known, for instance, that scheduling order can severely impact the conclusions derived from game-theoretic and related models [100, 101]: indeed, a rich literature on this topic has developed in the cellular automata community (e.g. [102, 103, 104]). A more general direction, the adversarial scheduling approach put forward in [105] (see also [106, 107, 108]) advocates the study of mathematical and computational models under generalized models of agent activation, as a way to increase the robustness of conclusions derived from these models. Paraphrasing [105], adversarial scheduling is specified by the following principles:

  • Start with a “base case” stylized fact P, valid under a particular (scheduling) model, often a random one. Then attempt to ”break P” by creating adversarial schedulers under which P no longer holds true.

  • By analyzing these examples, identify structural properties of the scheduling order that causally impact the validity of P. Use these insights to generalize ”from below”, by identifying classes of schedulers (including the random one) under which P is valid.

  • In the process we may need to reformulate the original statement in a way that makes it hold under larger classes of schedulers, thus making it more robust.

As described above, adversarial scheduling is obviously important in increasing the robustness of conclusions drawn from mathematical models. But could something like this be systematically implemented, and be useful for (logic-based specifications of) social simulations as well? We believe that the answer is positive, and give a pedagogical example using the baseline scenario from Section 2.1. Indeed, one can describe the candidate stylized fact in temporal logic as ◇(∀i : state_i = A) (”every agent will eventually hold state A”). Is such a statement true under adversarial scheduling? The answer is clearly no: informally, an adversarial scheduler which never schedules a particular agent whose state is B will preclude the system from reaching the state ”all A”. In other words, to ensure that the baseline stylized fact remains true under adversarial scheduling we need to require the scheduler to be fair. The random scheduler is fair (at least with probability 1, as the number of steps tends to ∞).
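The fairness argument can be made concrete with a small sketch (illustrative code of ours, not a standard library): under the baseline dynamics, a fair random schedule makes ”eventually all agents hold state A” true, while an adversary that simply starves a single agent breaks it.

```python
import random

def run(n, schedule):
    """Baseline dynamics: a scheduled agent moves to its preferred state A."""
    state = ['B'] * n
    for i in schedule:
        state[i] = 'A'
    return state

n = 10
rng = random.Random(0)
fair = [rng.randrange(n) for _ in range(500)]          # fair with high probability
starving = [rng.randrange(n - 1) for _ in range(500)]  # never schedules agent n - 1

print(all(s == 'A' for s in run(n, fair)))      # True: the stylized fact holds
print(all(s == 'A' for s in run(n, starving)))  # False: agent n - 1 keeps state B
```

The adversary needs no sophistication at all here, which is the point: the baseline fact owes its validity entirely to an implicit fairness assumption about the scheduler.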

Could we have reached the above conclusion about the necessity of fairness in scheduling within a logical framework? The answer is yes. To do so we need to consider a simple logical description of the effect axioms corresponding to the baseline dynamics:

G(scheduled_i → G(state_i = A)) (”if an agent is scheduled then globally (from now on) the agent will have state A”; we formulated our axiom this way in order to avoid having to deal, in this pedagogical example, with the frame problem). Can we derive the statement ◇(∀i : state_i = A), expressing the baseline stylized fact, from the action axiom described above? The answer is negative: to do so we would also need ∀i : ◇ scheduled_i (”eventually every agent is scheduled”). However, backward chaining [109] applied to this example would identify the statement expressing scheduler fairness as a necessary condition for the validity of the stylized fact. This is evidence that adversarial scheduling might be feasible even for ABM; thus we propose the following:

Challenge 3.

Extend the theory of adversarial scheduling to more central models of social dynamics, including ABM.
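The backward-chaining step in the argument above can be sketched in miniature (all rule and goal names are ours, purely illustrative): the chainer reduces the stylized fact to its premises and reports any subgoal it cannot derive, and the fairness condition surfaces as exactly such a subgoal.

```python
# Effect axiom, propositionalized: "eventually all agents hold A" follows
# once every agent is eventually scheduled (a scheduled agent switches to
# state A and stays there).
rules = {'eventually_all_A': ['every_agent_eventually_scheduled']}
facts = set()  # nothing is assumed about the scheduler yet

def missing_premises(goal):
    """Backward chaining: return the underivable subgoals `goal` rests on."""
    if goal in facts:
        return set()
    if goal not in rules:
        return {goal}  # cannot be derived: a necessary extra assumption
    out = set()
    for premise in rules[goal]:
        out |= missing_premises(premise)
    return out

print(missing_premises('eventually_all_A'))
# -> {'every_agent_eventually_scheduled'}: fairness is the missing condition
```

A full treatment would of course chain over temporal formulas rather than propositional atoms, but the mechanism, surfacing scheduler fairness as an unproven necessary condition, is the one described in the text.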

Scheduling is not the only aspect of a mathematical or computational model that could be studied from an adversarial perspective. Many other aspects are susceptible to a similar treatment. For instance, in many game-theoretic and agent-based models the underlying dynamics takes place on a social network. One may vary this social network and attempt to understand the robustness of the baseline result to changes in it. The same can be done with (adversarial perturbations of) initial conditions. Some results in this direction have recently appeared [110].

Challenge 4.

Develop a theory of adversarial perturbation of social networks and initial conditions for models of social dynamics. Extend it and apply it to ABM.

2.4 When are two models ”the same” ?

The verification and validation problem is related to the question in the title of this section: when can we really consider two models, perhaps with different ontologies (e.g. system dynamics and ABM), ”equivalent”? Again, there is little agreement on what a right answer may be. [14] argue that it is not enough to ”eyeball” the outputs of the two models. One of the more interesting attempts at an answer is [99] (Chapter 8), where model equivalence is formalized as a ”weighted feature-matching” problem.

The theory of reactive systems [111] provides an elegant mathematical notion of system equivalence: in this setting, the equivalence of two reactive systems is formalized by the notion of bisimulation. A seminal theorem due to Hennessy and Milner [112] states that two bisimilar systems satisfy the same statements in a certain modal logic L, and conversely. That is, bisimilar systems satisfy the same set of ”stylized facts” formalizable in L. As impressive as this result is, there is a wide gulf between such theory and the realities of ABM. There are multiple reasons why bisimulation is inadequate for social simulation. The most important one is that bisimulation is too ”microscopic”: it requires that every single move of one of the systems be enabled in the corresponding state of the other system. In contrast, cross-validation of ABM is coarser and often qualitative [113]. In an ABM we do not mean to reproduce the actions of every agent: it is only the macro patterns that we care about.
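To make the notion concrete, here is a self-contained sketch of a naive bisimilarity check by partition refinement (our own illustrative code; production tools use far more efficient algorithms): two states remain equivalent as long as, for every action, they can reach the same equivalence classes.

```python
def bisimilar(trans1, start1, trans2, start2):
    """trans*: dict state -> dict action -> set of successor states."""
    # Work on the disjoint union of the two transition systems.
    trans = {('1', s): {a: {('1', t) for t in ts} for a, ts in d.items()}
             for s, d in trans1.items()}
    trans.update({('2', s): {a: {('2', t) for t in ts} for a, ts in d.items()}
                  for s, d in trans2.items()})
    states = list(trans)

    part = {s: 0 for s in states}  # start with a single equivalence class
    while True:
        # Signature of a state: for each action, the set of classes reached.
        sig = {s: frozenset((a, frozenset(part[t] for t in ts))
                            for a, ts in trans[s].items()) for s in states}
        groups = {}
        for s in states:
            groups.setdefault(sig[s], []).append(s)
        new = {s: i for i, g in enumerate(groups.values()) for s in g}
        as_blocks = lambda p: {s: frozenset(x for x in states if p[x] == p[s])
                               for s in states}
        if as_blocks(new) == as_blocks(part):  # partition is stable
            return new[('1', start1)] == new[('2', start2)]
        part = new

# a.b + a.c versus a.(b + c): trace-equivalent but classically NOT bisimilar.
sys1 = {'p': {'a': {'q1', 'q2'}}, 'q1': {'b': {'r'}}, 'q2': {'c': {'r'}}, 'r': {}}
sys2 = {'s': {'a': {'t'}}, 't': {'b': {'u'}, 'c': {'u'}}, 'u': {}}
print(bisimilar(sys1, 'p', sys2, 's'))  # -> False

# The same process with renamed states IS bisimilar.
sys3 = {'x': {'a': {'y'}}, 'y': {}}
sys4 = {'m': {'a': {'n'}}, 'n': {}}
print(bisimilar(sys3, 'x', sys4, 'm'))  # -> True
```

The first example illustrates exactly why bisimulation is ”microscopic”: the two systems produce the same traces, yet differ in which moves are enabled after the first step, which is far finer-grained than the macro-level comparisons ABM cross-validation relies on.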

There is some hope, though, that such methods are relevant to the study of ABM after all: recent results [114, 115, 116], some even from AAMAS [117] have related bisimulation to game-theoretic scenarios. It is thus reasonable to propose

Challenge 5.

Develop a theory of (bi)simulation of (social) systems aligned with (and relevant to) the practice of V&V in ABM.

3 Conclusion

We believe that logical formalization plays an important role in assessing (and increasing) the reliability of results in social simulations. We have highlighted a number of research directions that (if successful) would orient and ground the current discussion on model validity in the (computational) social sciences. We do not believe that the directions we outlined will completely solve this problem. But, besides the obvious intellectual interest of developing such concepts, they may contribute to turning simulation and modeling in social settings from largely being an art (which it still is now) into an engineering discipline.

References

  • [1] Robert Sugden. Credible worlds, capacities and mechanisms. Erkenntnis, 70(1):3–27, 2009.
  • [2] Robert Sugden. Credible worlds: the status of theoretical models in economics. Journal of economic methodology, 7(1):1–31, 2000.
  • [3] Itzhak Gilboa, Andrew Postlewaite, Larry Samuelson, and David Schmeidler. Economic models as analogies. The Economic Journal, 124(578):F513–F533, 2014.
  • [4] Uskali Mäki. Models are experiments, experiments are models. Journal of Economic Methodology, 12(2):303–315, 2005.
  • [5] Nancy Cartwright. Models: Parables v fables. In Beyond Mimesis and Convention, pages 19–31. Springer, 2010.
  • [6] Jaakko Kuorikoski, Aki Lehtinen, and Caterina Marchionni. Economic modelling as robustness analysis. The British Journal for the Philosophy of Science, 61(3):541–567, 2010.
  • [7] Adam Toon. Models as make-believe: Imagination, fiction and scientific representation. Springer, 2012.
  • [8] Michael Weisberg. Robustness analysis. Philosophy of Science, 73(5):730–742, 2006.
  • [9] R. Axelrod. Advancing the art of simulation in the social sciences. Complexity, 3(2):16–22, 1997.
  • [10] Claus Beisbart and Nicole J Saam. Computer simulation validation. Springer, 2019.
  • [11] J. Epstein. Agent-based computational models and generative social science. Complexity, 4(5):41–60, 1999.
  • [12] J. Epstein. Generative Social Science: Studies in Agent-based Computational Modeling. Princeton University Press, 2007.
  • [13] R. Axtell, R. Axelrod, J. Epstein, and M. Cohen. Aligning simulation models: a case study and results. Computational and Mathematical Organization Theory, 1:123–141, 1996.
  • [14] Bruce Edmonds and David Hales. Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4), 2003.
  • [15] Uri Wilensky and William Rand. Making models match: Replicating an agent-based model. Journal of Artificial Societies and Social Simulation, 10(4):2, 2007.
  • [16] David Hales, Juliette Rouchier, and Bruce Edmonds. Model-to-model analysis. Journal of Artificial Societies and Social Simulation, 6(4), 2003.
  • [17] Günter Küppers and Johannes Lenhard. Validation of simulation: Patterns in the social and natural sciences. Journal of Artificial Societies and Social Simulation, 8(4):3, 2005.
  • [18] Riccardo Boero and Flaminio Squazzoni. Does empirical embeddedness matter? methodological issues on agent-based models for analytical social science. Journal of Artificial Societies and Social Simulation, 8(4):6, 2005.
  • [19] Paul Windrum, Giorgio Fagiolo, and Alessio Moneta. Empirical validation of agent-based models: Alternatives and prospects. Journal of Artificial Societies and Social Simulation, 10(2):8, 2007.
  • [20] Scott Moss. Alternative approaches to the empirical validation of agent-based models. Journal of Artificial Societies and Social Simulation, 11(1):5, 2008.
  • [21] Thomas C. Schelling. Dynamic models of segregation. Journal of Mathematical Sociology, 1:143–186, 1971.
  • [22] TRANSIMS web page. http://code.google.com/p/transims/, 2011.
  • [23] S. Eubank, H. Guclu, V.S. Anil Kumar, M.V. Marathe, A. Srinivasan, Z. Toroczkai, and N. Wang. Monitoring and mitigating smallpox epidemics: Strategies drawn from a census data instantiated virtual city. Nature, 429(6988):180–184, 2004.
  • [24] S. Bishop, D. Helbing, P. Lukowicz, and R. Conte. FuturICT: FET flagship pilot project, 2011.
  • [25] Christopher L Barrett, Keith R Bisset, Stephen G Eubank, Xizhou Feng, and Madhav V Marathe. Episimdemics: an efficient algorithm for simulating the spread of infectious disease over large realistic social networks. In SC’08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, pages 1–12. IEEE, 2008.
  • [26] Christopher Barrett, Keith Bisset, Shridhar Chandan, Jiangzhuo Chen, Youngyun Chungbaek, Stephen Eubank, Yaman Evrenosoğlu, Bryan Lewis, Kristian Lum, Achla Marathe, et al. Planning and response in the aftermath of a large crisis: An agent-based informatics framework. In 2013 Winter Simulations Conference (WSC), pages 1515–1526. IEEE, 2013.
  • [27] W.L. Martinez. Modeling and simulation for defense and national security. In Statistical Methods in Counterterrorism: Game Theory, Modeling, Syndromic Surveillance, and Biometric Authentication, 2006.
  • [28] National Institute of Health. NIH study models H1N1 flu spread. Press release http://www.nigms.nih.gov/News/Results/h1n1092110.htm, 2010.
  • [29] Madhav Marathe and Anil Kumar S Vullikanti. Computational epidemiology. Communications of the ACM, 56(7):88–96, 2013.
  • [30] Neil Ferguson, Daniel Laydon, Gemma Nedjati Gilani, Natsuko Imai, Kylie Ainslie, Marc Baguelin, Sangeeta Bhatia, Adhiratha Boonyasiri, ZULMA Cucunuba Perez, Gina Cuomo-Dannenburg, et al. Report 9: Impact of non-pharmaceutical interventions (npis) to reduce covid19 mortality and healthcare demand. 2020.
  • [31] Flaminio Squazzoni, J Gareth Polhill, Bruce Edmonds, Petra Ahrweiler, Patrycja Antosz, Geeske Scholz, Émile Chappin, Melania Borit, Harko Verhagen, Francesca Giardini, et al. Computational models that matter during a global pandemic outbreak: A call to action. Journal of Artificial Societies and Social Simulation, 23(2), 2020.
  • [32] Carlos A de Matos Fernandes and Marijn A Keijzer. No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 2020.
  • [33] Patrick Steinmann, Jason R Wang, George AK van Voorn, and Jan H Kwakkel. Don’t try to predict COVID-19. If you must, use deep uncertainty methods. Review of Artificial Societies and Social Simulation, 2020.
  • [34] Bruce Edmonds. Basic modelling hygiene–keep descriptions about models and what they model clearly distinct. Review of Artificial Societies and Social Simulation, 2020.
  • [35] Edmund Chattoe-Brown. The policy context of COVID-19 agent-based modelling. Review of Artificial Societies and Social Simulation, 2020.
  • [36] Umberto Gostoli and Eric Silverman. Sound behavioural theories, not data, is what makes computational models useful. Review of Artificial Societies and Social Simulation, page 22, 2020.
  • [37] Bruce Edmonds. Good modelling takes a lot of time and many eyes. Review of Artificial Societies and Social Simulation, 2020.
  • [38] K. Sawyer. Social Emergence. Cambridge University Press, 2005.
  • [39] N. Gilbert. Varieties of emergence. In Proceedings of the Agent’02 Conference: Social agents: ecology, exchange, and evolution, Chicago, IL, 2002.
  • [40] Iris Lorscheid, Uta Berger, Volker Grimm, and Matthias Meyer. From cases to general principles: A call for theory development through agent-based modeling. Ecological Modelling, 393:153–156, 2019.
  • [41] E.M. Clarke, O. Grumberg, and D. Peled. Model checking. MIT Press, 1999.
  • [42] James S Coleman. Foundations of social theory. Harvard university press, 1994.
  • [43] B. Edmonds. How formal logic can fail to be useful for modelling or designing MAS. Regulated Agent-Based Social Systems, pages 1–15, 2004.
  • [44] Davide Grossi, Frank Dignum, Mehdi Dastani, and Lambèr Royakkers. Foundations of organizational structures in multiagent systems. In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, pages 690–697, 2005.
  • [45] Virginia Dignum and Frank Dignum. A logic of agent organizations. Logic Journal of the IGPL, 20(1):283–316, 2012.
  • [46] J. Elster. Logic and society: Contradictions and possible worlds. Wiley New York, 1978.
  • [47] M.T. Hannan, L. Pólos, and G. Carroll. Logics of organization theory: Audiences, codes, and ecologies. Princeton University Press, 2007.
  • [48] G. Péli, J. Bruggeman, M. Masuch, and B.O. Nualláin. A logical approach to formalizing organizational ecology. American Sociological Review, pages 571–593, 1994.
  • [49] G.L. Peli, L. Pólos, and M.T. Hannan. Back to inertia: Theoretical implications of alternative styles of logical formalization. Sociological Theory, 18(2):195–215, 2000.
  • [50] R.J. Aumann. Collected papers-vol. 1. M.I.T. Press, 2000.
  • [51] H. Gintis. The bounds of reason: game theory and the unification of the behavioral sciences. Princeton University Press, 2009.
  • [52] H. Barringer, A. Goldberg, K. Havelund, and K. Sen. Rule-based runtime verification. In Verification, Model Checking, and Abstract Interpretation, pages 277–306. Springer, 2004.
  • [53] G. Brat, D. Drusinsky, D. Giannakopoulou, A. Goldberg, K. Havelund, M. Lowry, C. Pasareanu, A. Venet, W. Visser, and R. Washington. Experimental evaluation of verification and validation tools on martian rover software. Formal Methods in System Design, 25(2):167–198, 2004.
  • [54] M. Wooldridge. Reasoning about rational agents. MIT Press, 2000.
  • [55] M. Wooldridge. An introduction to multiagent systems. Wiley, 2002.
  • [56] Y. Shoham and K. Leyton-Brown. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press, 2009.
  • [57] M. Wooldridge, M. Fisher, M.P. Huget, and S. Parsons. Model checking multi-agent systems with MABLE. In Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2, pages 952–959. ACM, 2002.
  • [58] A. Lomuscio and F. Raimondi. Model checking knowledge, strategies, and games in multi-agent systems. In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, pages 161–168. ACM, 2006.
  • [59] A. Lomuscio, H. Qu, and F. Raimondi. Mcmas: A model checker for the verification of multi-agent systems. In Computer Aided Verification, pages 682–688. Springer, 2009.
  • [60] Emmanuel M Tadjouddine, Frank Guerin, and Wamberto Vasconcelos. Abstractions for model-checking game-theoretic properties of auctions. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 3, pages 1613–1616, 2008.
  • [61] Virginia Dignum and Frank Dignum. Agents are dead. long live agents! In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pages 1701–1705, 2020.
  • [62] M. Fasli. Formal systems and agent-based social simulation equals null? Journal of Artificial Societies and Social Simulation, 7(4), 2004.
  • [63] F. Dignum, B. Edmonds, and L. Sonenberg. The use of logic in agent-based social simulation. Journal of Artificial Societies and Social Simulation, 7(4), 2004.
  • [64] B. Gaudou, A. Herzig, E. Lorini, and C. Sibertin-Blanc. How to do social simulation in logic: modelling the segregation game in a dynamic logic of assignments. Proceedings of MABS, pages 24–35, 2011.
  • [65] Rohit Parikh. Social software. Synthese, 132(3):187–211, 2002.
  • [66] Marc Pauly and Mike Wooldridge. Logic for mechanism design—a manifesto. In Proceedings of the 2003 Workshop on Game Theory and Decision Theory in Agent Systems (GTDT-2003), Melbourne, Australia. Citeseer, 2003.

  • [67] Johan Van Benthem. Logic in games. MIT press, 2014.
  • [68] Andrés Perea. Epistemic game theory: reasoning and choice. Cambridge University Press, 2012.
  • [69] P. Hedström and P. Bearman. The Oxford handbook of analytical sociology. Oxford University Press, USA, 2009.
  • [70] Samarth Swarup and Henning S Mortveit. Live simulations. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pages 1721–1725, 2020.
  • [71] Joseph Y Halpern and Moshe Y Vardi. Model checking vs. theorem proving: a manifesto. Artificial intelligence and mathematical theory of computation, 212:151–176, 1991.
  • [72] Ezio Bartocci, Yliès Falcone, Adrian Francalanza, and Giles Reger. Introduction to runtime verification. In Lectures on Runtime Verification, pages 1–33. Springer, 2018.
  • [73] Matthias Meyer. How to use and derive stylized facts for validating simulation models. In Computer Simulation Validation, pages 383–403. Springer, 2019.
  • [74] Luciano Floridi. The method of levels of abstraction. Minds and machines, 18(3):303–329, 2008.
  • [75] Marta Kwiatkowska, Gethin Norman, and David Parker. Prism 4.0: Verification of probabilistic real-time systems. In International conference on computer aided verification, pages 585–591. Springer, 2011.
  • [76] André Platzer. Logics of dynamical systems. In 2012 27th Annual IEEE Symposium on Logic in Computer Science, pages 13–24. IEEE, 2012.
  • [77] Davide Ancona, Angelo Ferrando, and Viviana Mascardi. Parametric runtime verification of multiagent systems. In AAMAS, volume 17, pages 1457–1459, 2017.
  • [78] P. Cheeseman, B. Kanefsky, and W. Taylor. Where the really hard problems are. In Proceedings of the 11th IJCAI, pages 331–337, 1991.
  • [79] Bart Selman. Stochastic search and phase transitions: AI meets physics. In IJCAI (1), pages 998–1002, 1995.
  • [80] C. Moore, G. Istrate, D. Demopoulos, and M. Vardi. A continuous-discontinuous second-order transition in the satisfiability of a class of Horn formulas. Random Structures and Algorithms, 31(2):173–185, 2007.
  • [81] Joel Spencer. The strange logic of random graphs, volume 22. Springer Science & Business Media, 2001.
  • [82] Damon Centola, Joshua Becker, Devon Brackbill, and Andrea Baronchelli. Experimental evidence for tipping points in social convention. Science, 360(6393):1116–1119, 2018.
  • [83] Marten Scheffer. Foreseeing tipping points. Nature, 467(7314):411–412, 2010.
  • [84] E. Friedgut. Necessary and sufficient conditions for sharp thresholds of graph properties, and the k-SAT problem. With an appendix by J. Bourgain. Journal of the A.M.S., 12:1017–1054, 1999.
  • [85] N. Alon, P. Erdős, and J. Spencer. The probabilistic method. John Wiley and Sons, second edition, 1992.
  • [86] Saharon Shelah and Joel Spencer. Can you feel the double jump? Random Structures & Algorithms, 5(1):191–204, 1994.
  • [87] P. Hedström. Dissecting the social: on the principles of analytical sociology. Cambridge University Press, 2005.
  • [88] P. Hedström and R. Swedberg, editors. Social Mechanisms: An Analytical Approach to Social Theory. Cambridge University Press, 2006.
  • [89] P. Demeulenaere. Analytical sociology and social mechanisms. Cambridge University Press, 2011.
  • [90] P. Machamer, L. Darden, and C.F. Craver. Thinking about mechanisms. Philosophy of Science, 67(1):1–25, 2000.
  • [91] C.F. Craver. Role functions, mechanisms, and hierarchy. Philosophy of Science, pages 53–74, 2001.
  • [92] C.F. Craver. When mechanistic models explain. Synthese, 153(3):355–376, 2006.
  • [93] C. Hempel. Aspects of scientific explanation. Free Press, 1965.
  • [94] T. Schelling. Social mechanisms and social dynamics. In Social mechanisms: An analytical approach to social theory, pages 32–44, 1998.
  • [95] D. Gambetta. Concatenations of mechanisms. In Social mechanisms: An analytical approach to social theory, page 102, 1998.
  • [96] Lynne Hamill. Agent-based modelling: The next 15 years. Journal of Artificial Societies and Social Simulation, 13(4):7, 2010.
  • [97] F. Squazzoni. The micro-macro link in social simulation. Sociologica, 1(2), 2008.
  • [98] Michael Weisberg and Kenneth Reisman. The robust volterra principle. Philosophy of science, 75(1):106–131, 2008.
  • [99] Michael Weisberg. Simulation and similarity: Using models to understand the world. Oxford University Press, 2012.
  • [100] B. Huberman and N. Glance. Evolutionary games and computer simulations. Proceedings of the National Academy of Science of the USA, 90:7716–7718, 1993.
  • [101] Christopher Weimer, John O Miller, Raymond Hill, and Douglas Hodson. Agent scheduling in opinion dynamics: A taxonomy and comparison using generalized models. Journal of Artificial Societies and Social Simulation, 22(4), 2019.
  • [102] Nazim A Fates and Michel Morvan. An experimental study of robustness to asynchronism for elementary cellular automata. Complex Systems, 11:1–1, 1997.
  • [103] Olivier Bouré, Nazim Fates, and Vincent Chevrier. Probing robustness of cellular automata through variations of asynchronous updating. Natural Computing, 11(4):553–564, 2012.
  • [104] Nazim Fates and Michel Morvan. Perturbing the topology of the game of life increases its robustness to asynchrony. In International Conference on Cellular Automata, pages 111–120. Springer, 2004.
  • [105] Gabriel Istrate, Madhav V. Marathe, and S.S. Ravi. Adversarial scheduling analysis of discrete models of social dynamics. Mathematical Structures in Computer Science, 22(5):788–815, 2012. (journal version of [118]).
  • [106] G. Istrate, M. Marathe, and S. Ravi. Adversarial scheduling analysis of game-theoretic models of norm diffusion. Logic and Theory of Algorithms, pages 273–282, 2008.
  • [107] Shahrzad Shirazipourazad, Brian Bogard, Harsh Vachhani, Arunabha Sen, and Paul Horn. Influence propagation in adversarial setting: how to defeat competition with least amount of investment. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 585–594, 2012.
  • [108] Gabriel Istrate. Stochastic stability in Schelling’s segregation model with Markovian asynchronous update. In International Conference on Cellular Automata, pages 416–427. Springer, 2018.
  • [109] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th edition, Pearson, 2020.
  • [110] Ahad N. Zehmakan. Majority opinion diffusion in social networks: An adversarial approach. In Proceedings of the 35th AAAI Conference (to appear), 2021. Available as arXiv preprint arXiv:2012.03143.
  • [111] R. Milner. Communication and Concurrency. Prentice-Hall, Inc., 1989.
  • [112] Matthew Hennessy and Robin Milner. Algebraic laws for nondeterminism and concurrency. Journal of the ACM (JACM), 32(1):137–161, 1985.
  • [113] Scott Moss and Bruce Edmonds. Sociology and simulation: Statistical and qualitative cross-validation. American journal of sociology, 110(4):1095–1131, 2005.
  • [114] Marc Pauly et al. Game constructions that are safe for bisimulation. JFAK-Essays Dedicated to Johan van Benthem on the Occasion of His 50th Birthday, 1999.
  • [115] Julian Gutierrez, Paul Harrenstein, Giuseppe Perelli, and Michael Wooldridge. Nash equilibrium and bisimulation invariance. In 28th International Conference on Concurrency Theory, page 17, 2017.
  • [116] Michael Wooldridge, Giuseppe Perelli, Paul Harrenstein, and Julian Gutierrez. Nash equilibrium and bisimulation invariance. Logical Methods in Computer Science, 15, 2019.
  • [117] Francesco Belardinelli, Rodica Condurache, Catalin Dima, Wojciech Jamroga, and Andrew V Jones. Bisimulations for verifying strategic abilities with an application to threeballot. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pages 1286–1295, 2017.
  • [118] Gabriel Istrate, Madhav V. Marathe, and S.S. Ravi. Adversarial models in evolutionary game dynamics. In Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’01). ACM-SIAM, 2001.