STV+AGR: Towards Practical Verification of Strategic Ability Using Assume-Guarantee Reasoning

03/02/2022
by Damian Kurpiewski, et al.

We present a substantially expanded version of our tool STV for strategy synthesis and verification of strategic abilities. The new version provides a web interface and support for assume-guarantee verification of multi-agent systems.


1 Introduction

Model checking of multi-agent systems (MAS) allows for formal (and, ideally, automated) verification of their relevant properties. Algorithms and tools for model checking of strategic abilities [1, 28, 9, 25] have been in development for over 20 years [2, 10, 6, 13, 7, 21, 8, 4, 3, 15, 20]. Unfortunately, the problem is hard, especially in the realistic case of agents with imperfect information [28, 5, 12].

In this paper, we propose a new extension of our experimental tool STV [19, 20] that facilitates compositional model checking of strategic properties in asynchronous MAS through assume-guarantee reasoning (AGR) [26, 11]. The extension is based on the preliminary results in [24], itself an adaptation of the AGR framework for liveness specifications from [22, 23].

2 Application Domain

Many important properties of MAS refer to strategic abilities of agents and teams. For example, one natural property says that the autonomous cab can drive in such a way that no one ever gets killed; another expresses that the cab and the passenger have a joint strategy to arrive at the destination, no matter what the other agents do. Another intuitive set of strategic requirements is provided by properties of secure voting systems [27, 29]. As shown by case studies [16, 14, 18], practical verification of such properties is still infeasible due to state-space and strategy-space explosion. STV+AGR addresses the specification and verification of such properties, as well as a user-friendly creation of models to be verified.
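The concrete formulas did not survive this rendering. As an illustration only, the two cab properties can be phrased in ATL-style notation as follows; the agent and proposition names (cab, psg, killed, destination) are placeholders chosen here, not taken from the tool:

```latex
% Illustrative ATL-style formulas; identifiers are placeholder names.
\langle\!\langle \mathit{cab} \rangle\!\rangle\, \mathrm{G}\, \neg\mathit{killed}
\qquad
\langle\!\langle \mathit{cab}, \mathit{psg} \rangle\!\rangle\, \mathrm{F}\, \mathit{destination}
```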

3 Simple Voting Scenario

Figure 1: Two modules: the coercer (top) and a voter (bottom)

To present the capabilities of STV+AGR, we designed an asynchronous version of the Simple Voting scenario [15]. The model consists of two types of agents, presented in Figure 1, and described below.

Voter. Every voter agent has three local variables:

  • the vote being cast;

  • the vote value presented to the coercer, where a designated value means that the voter decided not to share her vote with the coercer;

  • the punishment status (punished or not punished).

Each voter can also see the value of the coercer's punishment variable that concerns her. The voter first casts her vote, then decides whether to share its value with the coercer. Finally, she waits for the coercer's decision to punish her or to refrain from punishment.

Coercer. The coercer has one local variable for each of the voters:

  • whether the voter was punished or not.

Moreover, he can observe, for each voter, the vote value that she has presented to him. The coercer has two available actions per voter: to punish the voter or to refrain from punishment.
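For concreteness, the sketch below encodes the two module types as plain labelled automata in Python. This is not STV+AGR's input language, and the state and action names (vote, give, pun, ...) are our own placeholders:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Module:
    """A local automaton: an initial state and action-labelled transitions."""
    name: str
    init: str
    transitions: dict  # (state, action) -> successor state

def voter(i, candidates=(1, 2)):
    """Voter i: cast a vote, share its value with the coercer or refuse, await punishment."""
    t = {}
    for c in candidates:
        t[("start", f"vote{i}_{c}")] = f"voted{c}"
        t[(f"voted{c}", f"give{i}_{c}")] = "interacted"   # reveal the vote value
        t[(f"voted{c}", f"ngive{i}")] = "interacted"      # refuse to share
    t[("interacted", f"pun{i}")] = "punished"
    t[("interacted", f"npun{i}")] = "not_punished"
    return Module(f"Voter{i}", "start", t)

def coercer(n):
    """Coercer: for every voter, decide exactly once to punish or to refrain."""
    mark = {None: "?", True: "P", False: "-"}             # per-voter punishment status
    t = {}
    for status in product((None, True, False), repeat=n):
        src = "c" + "".join(mark[x] for x in status)
        for i, x in enumerate(status, start=1):
            if x is None:                                  # decision for voter i still open
                for act, val in ((f"pun{i}", True), (f"npun{i}", False)):
                    nxt = list(status)
                    nxt[i - 1] = val
                    t[(src, act)] = "c" + "".join(mark[y] for y in nxt)
    return Module("Coercer", "c" + "?" * n, t)
```

In this simplified sketch the punishment actions pun_i / npun_i are the only actions shared between the coercer and voter i; in the model described above, the coercer additionally observes the voter's shared-vote variable as an input, which the sketch omits.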

4 Formal Background

Modules. The main part of the input is given by a set of asynchronous modules inspired by [23]. Local states are labelled with valuations of state variables; transitions are labelled with valuations of input variables, which are controlled by the other modules. The multi-agent system is defined as the composition of its modules.
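A minimal, illustrative sketch of such a composition (not the tool's implementation), assuming modules are given as pairs (initial state, transition dictionary) and synchronize on shared action labels while private actions interleave:

```python
def alphabet(module):
    """All action labels occurring in a module given as (init, {(state, action): succ})."""
    _, trans = module
    return {action for (_, action) in trans}

def compose(modules):
    """Explicit-state product of asynchronous modules: an action fires iff every
    module whose alphabet contains it can take it from its current local state."""
    alphas = [alphabet(m) for m in modules]
    init = tuple(m[0] for m in modules)
    states, edges, frontier = {init}, [], [init]
    while frontier:
        current = frontier.pop()
        for action in set().union(*alphas):
            successor, enabled = list(current), True
            for i, (_, trans) in enumerate(modules):
                if action in alphas[i]:                    # module i must participate
                    if (current[i], action) in trans:
                        successor[i] = trans[(current[i], action)]
                    else:
                        enabled = False
                        break
            if enabled:
                successor = tuple(successor)
                edges.append((current, action, successor))
                if successor not in states:
                    states.add(successor)
                    frontier.append(successor)
    return init, states, edges
```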

Strategies. A strategy is a conditional plan that specifies what the agent(s) are going to do in every possible situation. Here, we consider the case of imperfect information memoryless strategies, represented by functions from the agent's local states (or, equivalently, its epistemic indistinguishability classes) to its available actions. The outcome of a strategy from a given state consists of all the infinite paths that start in that state and are consistent with the strategy.
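A sketch of that representation, using placeholder data structures rather than the tool's internals: a strategy is a dictionary from the agent's local states to actions, and its outcome is obtained by pruning, from the global transition relation, every move in which the agent deviates from the prescribed action.

```python
def outcome_edges(edges, agent_index, agent_actions, strategy):
    """Keep only the global transitions consistent with a memoryless strategy.

    edges         -- iterable of (global_state, action, successor) triples
    agent_index   -- position of the agent's local state inside a global state tuple
    agent_actions -- set of action labels controlled by the agent
    strategy      -- dict: local state -> action chosen there
    """
    kept = []
    for (src, action, dst) in edges:
        local = src[agent_index]
        if action in agent_actions and action != strategy.get(local):
            continue                     # the strategy never takes this action here
        kept.append((src, action, dst))
    return kept
```

The outcome paths from a state are then exactly the infinite paths of this pruned graph that start in that state.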

Logic. Given a model and a state in it, a formula ⟨⟨a⟩⟩γ of the strategic logic used in [17] holds in the pointed model iff there exists a strategy for agent a that makes γ true on all the outcome paths starting from any state indistinguishable from the given one. The semantics of coalitional abilities is analogous, with joint strategies of coalitions.
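A brute-force sketch of this semantics for an invariance goal of the form ⟨⟨a⟩⟩ G safe (the tool uses far smarter algorithms; everything below is illustrative): enumerate memoryless strategies and accept one if every state reachable under it, from every indistinguishable initial state, satisfies safe.

```python
from itertools import product

def check_invariant(edges, states, agent_index, agent_actions, initial_states, safe):
    """Does some memoryless strategy of the agent keep all outcome paths,
    started in any of the (mutually indistinguishable) initial states, inside `safe`?"""
    local_states = sorted({s[agent_index] for s in states})
    for choice in product(sorted(agent_actions), repeat=len(local_states)):
        strategy = dict(zip(local_states, choice))
        # prune transitions where the agent deviates from the strategy
        kept = [(s, a, t) for (s, a, t) in edges
                if a not in agent_actions or strategy[s[agent_index]] == a]
        frontier, seen = list(initial_states), set(initial_states)
        ok = all(safe(s) for s in seen)
        while ok and frontier:
            state = frontier.pop()
            for (src, _, dst) in kept:
                if src == state and dst not in seen:
                    if not safe(dst):
                        ok = False
                        break
                    seen.add(dst)
                    frontier.append(dst)
        if ok:
            return True, strategy        # witness strategy found
    return False, None
```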

Assume-guarantee reasoning. The main idea is to cope with state-space explosion by decomposing the goal of a coalition into local goals, one per coalition member, and verifying them one by one against abstractions of each agent's environment. The abstraction for an agent is obtained by defining a single module, called the assumption, which guarantees that all the paths present in the original system have their counterparts in the composition of the agent's module and its associated assumption. Moreover, we use a distance between modules, based on shared synchronization actions, so that only "close" agents are taken into account when preparing the assumption for a given module. This way, one can deduce the existence of a joint strategy achieving the global goal from the existence of individual strategies that achieve the local goals.
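A schematic sketch of the rule; the precise side conditions that make it sound are those of [24], and the callables below are placeholders:

```python
def assume_guarantee_verify(coalition, local_goals, make_assumption, verify_local):
    """For each coalition member a_i with local goal phi_i, verify <<a_i>> phi_i on
    the small composition of a_i's module with its assumption.  If every local
    check succeeds, conclude <<A>> (phi_1 and ... and phi_n) for the full system."""
    witnesses = {}
    for agent, goal in zip(coalition, local_goals):
        assumption = make_assumption(agent)          # abstraction of the agent's environment
        found, strategy = verify_local(agent, assumption, goal)
        if not found:
            return False, {}                         # inconclusive: no local witness found
        witnesses[agent] = strategy
    return True, witnesses                           # joint strategy = the local witnesses combined
```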

Automated generation of assumptions. The main difficulty in using assume-guarantee reasoning is how to define the right assumptions for the relevant modules. To this end, we propose an automated procedure that generates the assumptions, based on the subset of modules that are "close" to the given module. The abstraction is obtained by composing all the "close" modules, abstracting away their state labels and variables except for the ones that are input to the given module, and removing all their input variables that are not state variables of that module.
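A sketch of the "closeness" computation, under the assumption that two modules are neighbours when their alphabets share a synchronization action; the subsequent hiding step is only described in comments, and the module names in the usage example are placeholders:

```python
from collections import deque

def close_modules(alphabets, target, bound):
    """Names of modules within `bound` synchronization hops of `target`.

    alphabets -- dict: module name -> set of its action labels
    Two modules are at distance 1 if they share at least one action label.
    """
    dist = {target: 0}
    queue = deque([target])
    while queue:
        module = queue.popleft()
        if dist[module] == bound:
            continue
        for other, actions in alphabets.items():
            if other not in dist and actions & alphabets[module]:
                dist[other] = dist[module] + 1
                queue.append(other)
    return {m for m in dist if m != target}

# The assumption for `target` is then the composition of these close modules,
# with their state labels and variables hidden unless they are inputs of `target`,
# and with their input variables dropped unless they are state variables of `target`.

alphabets = {"Voter1": {"vote1_1", "give1_1", "pun1"},
             "Coercer": {"give1_1", "pun1", "give2_1", "pun2"},
             "Voter2": {"vote2_1", "give2_1", "pun2"}}
print(close_modules(alphabets, "Voter1", bound=1))   # -> {'Coercer'}
```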

5 Technology

STV+AGR does explicit-state model checking. That is, the global states and transitions of the model are represented explicitly in the memory of the verification process. The tool includes the following new functionalities.

User-defined input. The user can load and parse the input specification from a text file that defines the groups of modules. The modules are local automata representing the agents. The groups define the partition for the assume-guarantee verification. Each group that describes a part of the coalition must also define the formula to be verified.

Web-based graphical interface. The generated models and the verification results are visualised in an intuitive web-based graphical interface. The GUI is implemented in TypeScript and uses the Angular framework.

V   Monolithic model checking              Assume-guarantee verification
    #st      #tr       DFS     Apprx       #st      #tr       DFS    Apprx
2   529      2216      <0.1    <0.1/Yes    161      528       <0.1   <0.1/Yes
3   12167    127558    <0.1    0.8/Yes     1127     7830      <0.1   <0.1/Yes
4   2.79e5   6.73e6    memout              7889     1.08e5    <0.1   0.8/Yes
5   memout                                 5.52e4   1.45e6    <0.1   11/Yes

Table 1: Results of assume-guarantee verification of the asynchronous variant of Simple Voting (times given in seconds)

Evaluation. The assume-guarantee scheme has been evaluated on the asynchronous variant of Simple Voting. The coalition in the verified formula consisted of only one agent, which made the decomposition of the formula trivial. The results are presented in Table 1. The first column describes the configuration of the benchmark, i.e., the number of voters. Then, we report the performance of model checking algorithms that operate on the explicit model of the whole system vs. assume-guarantee verification. DFS is a straightforward implementation of depth-first strategy synthesis. Apprx refers to the method of fixpoint approximation [15]; besides the time, we also report whether the approximation was conclusive.

6 Usage

The tool is available at stv.cs-htiew.com. A video demonstration of the tool is available at youtu.be/1DrmSRK1fBA. Example specifications can be found at stv-docs.cs-htiew.com. The current version of STV+AGR allows the user to:

  • Generate and display the composition of a set of modules into the model of a multi-agent system;

  • Generate and display the automatic assumption, given a module and a distance bound;

  • Provide local specifications for modules, and compute the global specification as their conjunction;

  • Verify a formula for a given system (using the verification methods available in the STV package);

  • Verify a formula for a composition of a module and its automatic assumption (using the methods in STV);

  • Verify a formula for a composition of a module and a user-defined assumption (using the methods in STV);

  • Display the verification result, including the relevant truth values and the winning strategy (if one exists).

7 Conclusions

Much of the complexity of model checking for strategic abilities is due to the size of the model. STV+AGR addresses the challenge by implementing a compositional model checking scheme, called assume-guarantee verification. No less importantly, our tool supports user-friendly modelling of MAS and automated generation of the abstractions that are used as assumptions in the scheme.

Acknowledgement

The authors thank Witold Pazderski and Yan Kim for assistance with the web interface. The work was supported by NCBR Poland and FNR Luxembourg under the PolLux/FNR-CORE project STV (POLLUX-VII/1/2019), as well as the CHIST-ERA grant CHIST-ERA-19-XAI-010 by NCN Poland (2020/02/Y/ST6/00064).

References

  • [1] R. Alur, T.A. Henzinger, and O. Kupferman (2002) Alternating-time Temporal Logic. J. of the ACM 49, pp. 672–713.
  • [2] R. Alur, T.A. Henzinger, F.Y.C. Mang, S. Qadeer, S. Rajamani, and S. Tasiran (1998) MOCHA: modularity in model checking. In Proc. of CAV'98, LNCS, Vol. 1427, pp. 521–525.
  • [3] F. Belardinelli, A. Lomuscio, A. Murano, and S. Rubin (2017) Verification of broadcasting multi-agent systems against an epistemic strategy logic. In Proc. of IJCAI'17, pp. 91–97.
  • [4] F. Belardinelli, A. Lomuscio, A. Murano, and S. Rubin (2017) Verification of multi-agent systems with imperfect information and public actions. In Proc. of AAMAS'17, pp. 1268–1276.
  • [5] N. Bulling, J. Dix, and W. Jamroga (2010) Model checking logics of strategic ability: complexity. In Specification and Verification of Multi-Agent Systems, pp. 125–159.
  • [6] S. Busard, C. Pecheur, H. Qu, and F. Raimondi (2014) Improving the model checking of strategies under partial observability and fairness constraints. In Formal Methods and Software Engineering, LNCS, Vol. 8829, pp. 27–42.
  • [7] P. Cermák, A. Lomuscio, F. Mogavero, and A. Murano (2014) MCMAS-SLK: a model checker for the verification of strategy logic specifications. In Proc. of CAV'14, LNCS, Vol. 8559, pp. 525–532.
  • [8] P. Cermák, A. Lomuscio, and A. Murano (2015) Verifying and synthesising multi-agent systems against one-goal strategy logic specifications. In Proc. of AAAI'15, pp. 2038–2044.
  • [9] K. Chatterjee, T.A. Henzinger, and N. Piterman (2010) Strategy Logic. Inf. and Comp. 208 (6), pp. 677–693.
  • [10] T. Chen, V. Forejt, M. Kwiatkowska, D. Parker, and A. Simaitis (2013) PRISM-games: a model checker for stochastic multi-player games. In Proc. of TACAS'13, LNCS, Vol. 7795, pp. 185–191.
  • [11] E.M. Clarke, D.E. Long, and K.L. McMillan (1989) Compositional model checking. In Proc. of LICS'89, pp. 353–362.
  • [12] C. Dima and F.L. Tiplea (2011) Model-checking ATL under imperfect information and perfect recall semantics is undecidable. CoRR abs/1102.4225.
  • [13] X. Huang and R. van der Meyden (2014) Symbolic model checking epistemic strategy logic. In Proc. of AAAI'14, pp. 1426–1432.
  • [14] W. Jamroga, Y. Kim, D. Kurpiewski, and P.Y.A. Ryan (2020) Towards model checking of voting protocols in Uppaal. In Proc. of E-Vote-ID'20, LNCS, Vol. 12455, pp. 129–146.
  • [15] W. Jamroga, M. Knapik, D. Kurpiewski, and Ł. Mikulski (2019) Approximate verification of strategic abilities under imperfect information. Artif. Int. 277.
  • [16] W. Jamroga, M. Knapik, and D. Kurpiewski (2018) Model checking the SELENE e-voting protocol in multi-agent logics. In Proc. of E-Vote-ID'18, LNCS, Vol. 11143, pp. 100–116.
  • [17] W. Jamroga, W. Penczek, T. Sidoruk, P. Dembinski, and A.W. Mazurkiewicz (2020) Towards partial order reductions for strategic ability. J. Artif. Intell. Res. 68, pp. 817–850.
  • [18] W. Jamroga, D. Kurpiewski, and V. Malvone (2021) Natural strategic abilities in voting protocols. In Proc. of STAST'20. To appear.
  • [19] D. Kurpiewski, W. Jamroga, and M. Knapik (2019) STV: model checking for strategies under imperfect information. In Proc. of AAMAS'19, pp. 2372–2374.
  • [20] D. Kurpiewski, W. Pazderski, W. Jamroga, and Y. Kim (2021) STV+Reductions: towards practical verification of strategic ability using model reductions. In Proc. of AAMAS'21, pp. 1770–1772.
  • [21] A. Lomuscio, H. Qu, and F. Raimondi (2017) MCMAS: an open-source model checker for the verification of multi-agent systems. Int. J. Soft. Tools Tech. Trans. 19 (1), pp. 9–30.
  • [22] A. Lomuscio, B. Strulo, N.G. Walker, and P. Wu (2010) Assume-guarantee reasoning with local specifications. In Proc. of ICFEM'10, LNCS, Vol. 6447, pp. 204–219.
  • [23] A. Lomuscio, B. Strulo, N.G. Walker, and P. Wu (2013) Assume-guarantee reasoning with local specifications. Int. J. Found. Comput. Sci. 24 (4), pp. 419–444.
  • [24] Ł. Mikulski, W. Jamroga, and D. Kurpiewski (2022) Towards assume-guarantee verification of strategic ability. In Proc. of AAMAS'22. To appear.
  • [25] F. Mogavero, A. Murano, G. Perelli, and M.Y. Vardi (2014) Reasoning about strategies: on the model-checking problem. ACM Trans. Comp. Log. 15 (4), pp. 1–42.
  • [26] A. Pnueli (1984) In transition from global to modular temporal reasoning about programs. In Logics and Models of Concurrent Systems, NATO ASI Series, Vol. 13, pp. 123–144.
  • [27] P.Y.A. Ryan (2010) The computer ate my vote. In Formal Methods: State of the Art and New Directions, pp. 147–184.
  • [28] P.Y. Schobbens (2004) Alternating-time logic with imperfect recall. Electr. Not. Theor. Comput. Sci. 85 (2), pp. 82–93.
  • [29] M. Tabatabaei, W. Jamroga, and P.Y.A. Ryan (2016) Expressing receipt-freeness and coercion-resistance in logics of strategic ability: preliminary attempt. In Proc. of PrAISe@ECAI'16, pp. 1:1–1:8.