1 Introduction
Humans must possess beliefs in order to engage successfully with their surrounding environments, coordinate their activities, and communicate. Humans sometimes use arguments to persuade others to act or adopt a particular approach, to reach a reasonable agreement, and to collaborate in seeking the best possible solution to a given problem. In light of this, it is not surprising that many recent efforts to model artificially intelligent agents have incorporated arguments and beliefs about their environment. Argumentation-based decision-making approaches are expected to be more in line with how people reason, consider possibilities, and achieve objectives. This confers particular advantages on argumentation-based techniques, including transparent decision-making and the ability to provide a defensible rationale for outcomes.
In our recent work, we propose the use of explanations in autonomous pedagogical scenarios [Patil, 2022], i.e. how explanations should be tailored in multi-agent systems (MAS) (teacher-learner interaction), as shown in Figure 1. It is rational to assume that autonomous agents in open, dynamic, and distributed systems will conform to a linguistic system for expressing their knowledge in terms of one or more ontologies that reflect the salient domain. Agents consequently must agree on the semantics (e.g. privacy) of the terms they use to organise the information, contextualise the environment, and represent different entities in order to engage or cooperate jointly. Abstract argumentation frameworks (AFs) [Dung, 1995, Bench-Capon and Dunne, 2007] are naturally employed for modelling and effectively resolving such types of challenges. In both multi-agent [Maudet et al., 2006] and single-agent [Amgoud and Prade, 2009] decision-making situations, AFs have been extensively utilised to describe behaviours since they can innately represent and reason with opposing information. Moreover, argumentative models have been presented due to the dialectic nature of AFs so that agents can cooperatively resolve issues or arrive at decisions by communicating implicitly [Dung et al., 2009].
Present studies of AFs, however, may not be immediately applicable to multi-agent scenarios where agents could encounter unexpected circumstances in their environment. AFs are naturally used for modelling dynamic systems since, in actuality, the argumentation process is inherently dynamic [Falappa et al., 2011, Booth et al., 2013], and this comes with high computational complexity [Dunne and Wooldridge, 2009, Dunne, 2009].
To give a practical example, autonomous Intent-Based Networking (IBN) [Campanella, 2019] captures and translates business intent into network policies that can be automated and applied consistently across the network. The goal is for the network to continuously monitor and adjust its performance to assure the desired business outcome. Intent allows the agent to understand the global utility and the value of its actions. Consequently, the autonomous agents can evaluate situations and potential action strategies rather than being limited to following instructions that human developers have specified in policies. In these cases, agents may adjust their model of the environment as well as their strategy according to information provided by the environment. There are several other circumstances in which the agent may not be able to guarantee a specific status of specific arguments and would necessitate assistance from other agents. Agents may not always know the optimal strategy until they form a coalition. In such circumstances, agents cannot merely compute semantics/conclusions from the ground up, since this is not feasible. "Abstracting" AFs, i.e. mapping the original (concrete-domain) AF to an abstract AF via abstract interpretation, can help compute semantics on a much smaller AF. Such abstractions are inherently useful only if the specific properties or specifications of interest in the original AF are preserved in the abstract one.
The main contribution of this work is to investigate the semantic properties preserved when deriving an "abstract" AF from the "concrete" AF during multi-agent interactions. The term "abstraction" in this work pertains to the notion of abstraction from model checking. Abstraction of the state space may reduce the AF to a manageable size by clustering similar concrete states into abstract states, which can further facilitate verifying these abstract states. We summarise the primary research question as follows: given a MAS in an uncertain environment, each agent with a specific subjective evaluation of a given set of conflicting arguments, how can agents reach a consensus whilst preserving specific semantic properties?
2 Method
Motivation for Abstract Interpretation:
Model checking [Clarke et al., 2000] is widely accepted as a powerful automatic verification technique for the verification of finite-state systems. Halpern and Vardi proposed the use of model checking as an alternative to deduction for logics of knowledge [Halpern and Vardi, 1991]. Since then, model checking has been extended to multi-agent systems [Hoek and Wooldridge, 2002]. The state explosion issue is the main impediment to the tractability of model checking. Nevertheless, significant research has been done on this well-known issue, and a variety of approaches have been proposed to circumvent the model checking limitation, including symbolic methods with binary decision diagrams [Burch et al., 1992], SAT solvers [Biere et al., 1999], partial order reduction [Peled and Pnueli, 1994] and abstraction [Clarke et al., 2000].
In this work, we focus on abstract interpretation for computing dynamic semantics in MAS. The main point of abstract interpretation [Cousot and Cousot, 1977] is to replace the formal semantics of a system with an abstract semantics computed over a domain of abstract objects, which describe the properties of the system we are interested in. It formalises formal methods and allows one to discuss the guarantees they provide, such as soundness (the conclusions about programs are always correct under suitable, explicitly stated hypotheses), completeness (all true facts are provable), or incompleteness (showing the limits of applicability of the formal method). Abstract interpretation is mainly applied to design semantics, proof methods, and static analysis of programs. The semantics of a program formally defines all its possible executions at various levels of abstraction. Proof methods can be used to prove (manually or using theorem provers) that the semantics of a program satisfies some specification, that is, a property of executions defining what the program is supposed to do. We now provide a brief technical primer on key concepts in abstract interpretation.
Posets: A partially ordered set (poset) $\langle P, \sqsubseteq \rangle$ is a set $P$ equipped with a partial order $\sqsubseteq$ that is (1) reflexive: $x \sqsubseteq x$; (2) antisymmetric: $x \sqsubseteq y \wedge y \sqsubseteq x \implies x = y$; and (3) transitive: $x \sqsubseteq y \wedge y \sqsubseteq z \implies x \sqsubseteq z$. Let $S$ be a subset of the poset $\langle P, \sqsubseteq \rangle$; then the least upper bound (lub/join) of $S$ (if any) is denoted $\sqcup S$ such that $\forall x \in S : x \sqsubseteq \sqcup S$ and $\forall y \in P : (\forall x \in S : x \sqsubseteq y) \implies \sqcup S \sqsubseteq y$, and the greatest lower bound (glb/meet) of $S$ (if any) is denoted $\sqcap S$ such that $\forall x \in S : \sqcap S \sqsubseteq x$ and $\forall y \in P : (\forall x \in S : y \sqsubseteq x) \implies y \sqsubseteq \sqcap S$. The poset has a supremum (or top) $\top$ if and only if $\top = \sqcup P$, and has an infimum (or bottom) $\bot$ iff $\bot = \sqcap P$.
Lattice and Complete Partial Order (CPO): A CPO is a poset $\langle P, \sqsubseteq, \bot \rangle$ with infimum $\bot$ such that any denumerable ascending chain $x_0 \sqsubseteq x_1 \sqsubseteq \cdots$ has a least upper bound $\sqcup_{i \geq 0}\, x_i$. A lattice $\langle P, \sqsubseteq, \sqcup, \sqcap \rangle$ is a poset such that every pair of elements $x, y \in P$ has a lub $x \sqcup y$ and a glb $x \sqcap y$ in $P$; thus every finite non-empty subset of $P$ has a lub and a glb. A complete lattice $\langle P, \sqsubseteq, \sqcup, \sqcap, \top, \bot \rangle$ is a lattice in which every arbitrary subset $S \subseteq P$ has a lub $\sqcup S$; hence a complete lattice has a supremum $\top = \sqcup P$ and an infimum $\bot = \sqcap P$.
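To make these order-theoretic definitions concrete, the following Python sketch (the helper names are illustrative, not part of the cited formalism) instantiates the powerset of a three-element set as a complete lattice, with set union as join, set intersection as meet, the full universe as top and the empty set as bottom:

```python
from itertools import chain, combinations

UNIVERSE = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s: the carrier of the powerset lattice."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def lub(subsets):
    """Least upper bound (join) of a family of subsets: their union."""
    return frozenset().union(*subsets) if subsets else frozenset()

def glb(subsets):
    """Greatest lower bound (meet) of a family of subsets: their intersection."""
    return frozenset.intersection(*subsets) if subsets else UNIVERSE

P = powerset(UNIVERSE)
# Every family of subsets has a lub and a glb, so this is a complete lattice:
top = lub(P)      # supremum: the whole universe
bottom = glb(P)   # infimum: the empty set
assert top == UNIVERSE and bottom == frozenset()
```

Note that the lub of the empty family is the bottom element and its glb is the top element, matching $\sqcup \emptyset = \bot$ and $\sqcap \emptyset = \top$ in a complete lattice.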
Preorder and Equivalence Relation: A preorder $\preceq$ on a set $S$ is a binary relation that is reflexive and transitive, but not necessarily antisymmetric. Then $x \equiv y \iff x \preceq y \wedge y \preceq x$ is an equivalence relation, i.e. it is reflexive, symmetric, and transitive. For any equivalence relation $\equiv$, the equivalence class of $x \in S$ is defined as $[x]_{\equiv} = \{ y \in S \mid x \equiv y \}$. The quotient set $S/{\equiv}$ of $S$ by the equivalence relation $\equiv$ is the partition of $S$ into a set of equivalence classes, i.e. $S/{\equiv} = \{ [x]_{\equiv} \mid x \in S \}$. Furthermore, the preorder $\preceq$ on $S$ can be extended to a relation $\leq$ on the quotient set such that $[x]_{\equiv} \leq [y]_{\equiv} \iff x \preceq y$. Hence, if $\preceq$ is a preorder on $S$, then $\leq$ is a partial order on the corresponding quotient set $S/{\equiv}$.
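The construction of a quotient set from a preorder can be sketched as follows (a hypothetical helper assuming a finite carrier; divisibility of absolute values serves as the example preorder, under which $x$ and $-x$ are equivalent but distinct):

```python
def quotient(elements, preceq):
    """Partition `elements` into equivalence classes of the induced
    equivalence relation: x ~ y iff x <= y and y <= x under `preceq`."""
    classes = []
    for x in elements:
        for cls in classes:
            rep = next(iter(cls))  # any representative of the class
            if preceq(x, rep) and preceq(rep, x):
                cls.add(x)
                break
        else:
            classes.append({x})
    return classes

# Example preorder on nonzero integers: x <= y iff |x| divides |y|.
# It is reflexive and transitive but not antisymmetric (1 <= -1 and -1 <= 1).
elems = [1, -1, 2, -2, 4]
preceq = lambda x, y: abs(y) % abs(x) == 0
classes = quotient(elems, preceq)
# x and -x collapse into the same class: {1, -1}, {2, -2}, {4}
assert sorted(map(len, classes)) == [1, 2, 2]
```

On the resulting quotient set, the induced relation (does some representative of one class divide a representative of the other?) is a genuine partial order, since the antisymmetry failures have been quotiented away.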
Abstraction and Galois connection: In the framework of abstract interpretation, Galois connections are used to formalise the correspondence between concrete properties (like sets of traces) and abstract properties (like sets of reachable states), in the case where any concrete property always has a most precise abstract over-approximation. Given two posets $\langle C, \sqsubseteq \rangle$ (the concrete domain) and $\langle A, \preceq \rangle$ (the abstract domain), the pair of functions $\alpha : C \to A$ (known as the abstraction function) and $\gamma : A \to C$ (known as the concretisation function) forms a Galois connection iff $\forall c \in C, \forall a \in A : \alpha(c) \preceq a \iff c \sqsubseteq \gamma(a)$, which is equivalently characterised by: (1) $\alpha$ and $\gamma$ are monotonic; (2) $\gamma \circ \alpha$ is extensive (i.e. $\forall c \in C : c \sqsubseteq \gamma(\alpha(c))$); (3) $\alpha \circ \gamma$ is reductive (i.e. $\forall a \in A : \alpha(\gamma(a)) \preceq a$).
The rationale underpinning Galois connections is that the concrete properties in $C$ are approximated by abstract properties in $A$: $\alpha(c)$ is the most precise sound over-approximation of $c$ in the abstract domain, and $\gamma(a)$ is the least precise element of $C$ that can be over-approximated by $a$. The abstraction of a concrete property $c$ is said to be exact whenever $\gamma(\alpha(c)) = c$; in other words, the abstraction of $c$ loses no information at all. Furthermore, we can say $a$ is a sound approximation of $c$ iff $c \sqsubseteq \gamma(a)$.
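As a minimal, self-contained sketch of a Galois connection (a toy sign domain of our own; the names `alpha`, `gamma` and `leq` are illustrative, not from any specific tool), the following Python code connects subsets of a small integer universe, ordered by inclusion, to a five-element sign lattice, and checks the adjunction exhaustively:

```python
from itertools import chain, combinations

# Concrete domain: subsets of a small integer universe, ordered by inclusion.
UNIVERSE = frozenset(range(-2, 3))

# Abstract domain: a toy sign lattice with BOT <= NEG, ZERO, POS <= TOP.
SIGNS = ['BOT', 'NEG', 'ZERO', 'POS', 'TOP']
ORDER = ({(a, a) for a in SIGNS}
         | {('BOT', a) for a in SIGNS}
         | {(a, 'TOP') for a in SIGNS})
leq = lambda a, b: (a, b) in ORDER  # the abstract partial order

def gamma(a):
    """Concretisation: the largest concrete set described by the sign a."""
    return {'BOT': frozenset(),
            'NEG': frozenset(x for x in UNIVERSE if x < 0),
            'ZERO': frozenset({0}),
            'POS': frozenset(x for x in UNIVERSE if x > 0),
            'TOP': UNIVERSE}[a]

def alpha(s):
    """Abstraction: the most precise sign over-approximating the set s."""
    return min((a for a in SIGNS if s <= gamma(a)),
               key=lambda a: len(gamma(a)))

def powerset(s):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# The adjunction characterising a Galois connection:
# alpha(c) <= a  iff  c is a subset of gamma(a), for every c and a.
for c in powerset(UNIVERSE):
    for a in SIGNS:
        assert leq(alpha(c), a) == (c <= gamma(a))
```

Here the abstraction of $\{0\}$ is exact ($\gamma(\alpha(\{0\})) = \{0\}$), whereas $\{-1, 1\}$ is over-approximated to TOP, i.e. a sound but information-losing abstraction.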
3 Discussion
In this section, we illustrate the nature of abstraction in AFs and leverage the accrual of arguments whilst preserving the semantic information between them.
Example 3.1.
Consider the example provided in [Nielsen and Parsons, 2006], consisting of the following abstract arguments:

A1: Joe does not like Jack;
A2: There is a nail in Jack’s antique coffee table;
A3: Joe hammered a nail into Jack’s antique coffee table;
A4: Joe plays golf, so Joe has full use of his arms;
A5: Joe has no arms, so Joe cannot use a hammer, so Joe did not hammer a nail into Jack’s antique coffee table.
As we can see in Figure 2, the argument A5 attacks the argument A3, whereas the arguments A1 and A2 jointly attack and defeat the argument A5.
In our work, employing the abstract interpretation technique, the semantic relationship between the arguments A1 and A2 can be strengthened into the accrued argument {A1, A2}, as shown in Figure 3. Through this simple example, we can reduce the representational complexity of large AFs, which further reduces computational cost. This abstraction in multi-agent dynamic AFs can be extended to many realms of argumentation, where auxiliary information (apart from simply winning or losing the argument) comes into consideration. One such consideration involves hiding certain information from an opponent, e.g. agents abstracting away sensitive and confidential information.
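A minimal sketch of this reduction follows (the string encoding of arguments and the concrete attack relation are illustrative assumptions, with any joint attack flattened to individual attacks for simplicity). It clusters the accrued arguments into a single abstract node and keeps an abstract attack whenever some concrete member attacks across clusters, i.e. an existential abstraction in the model-checking sense:

```python
# Concrete AF (assumed for illustration): A5 attacks A3, A4 attacks A5,
# and A1, A2 each attack A5 (flattening their accrued joint attack).
concrete_args = {'A1', 'A2', 'A3', 'A4', 'A5'}
concrete_attacks = {('A5', 'A3'), ('A4', 'A5'), ('A1', 'A5'), ('A2', 'A5')}

# Cluster map: A1 and A2 accrue into one abstract argument.
cluster = {'A1': '{A1,A2}', 'A2': '{A1,A2}',
           'A3': 'A3', 'A4': 'A4', 'A5': 'A5'}

# Existential abstraction: an abstract attack exists iff some pair of
# concrete members attacks across distinct clusters.
abstract_args = set(cluster.values())
abstract_attacks = {(cluster[a], cluster[b]) for (a, b) in concrete_attacks
                    if cluster[a] != cluster[b]}

# The abstract AF has fewer arguments and fewer attacks than the concrete one.
assert len(abstract_args) < len(concrete_args)
assert abstract_attacks == {('{A1,A2}', 'A5'), ('A5', 'A3'), ('A4', 'A5')}
```

Since every concrete attack between clusters is represented abstractly, the abstract AF over-approximates the concrete conflict structure, in line with the soundness requirement of the Galois-connection view above.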
4 Conclusion
In this work, we introduced the notion of reducing the complexity of an abstract argumentation framework in a multi-agent setting using abstraction principles from model checking, reducing the representational as well as computational cost that is usually caused by an increased number of arguments in the framework. Furthermore, due to the abstraction of the AF, it would be possible to develop succinct explanations for humans or other agents in the system.
Acknowledgements
The author thanks Timotheus Kampik for guidance and valuable insights in this project and the anonymous reviewers for their suggestions and feedback. This work was partially funded by the Knut and Alice Wallenberg Foundation.
References
 [Amgoud and Prade, 2009] Amgoud, L. and Prade, H. (2009). Using arguments for making and explaining decisions. Artificial Intelligence, 173(3-4):413–436.
 [Bench-Capon and Dunne, 2007] Bench-Capon, T. J. and Dunne, P. E. (2007). Argumentation in artificial intelligence. Artificial Intelligence, 171(10-15):619–641.
 [Biere et al., 1999] Biere, A., Cimatti, A., Clarke, E., and Zhu, Y. (1999). Symbolic model checking without BDDs. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 193–207. Springer.
 [Booth et al., 2013] Booth, R., Kaci, S., Rienstra, T., and Torre, L. v. d. (2013). A logical theory about dynamics in abstract argumentation. In International Conference on Scalable Uncertainty Management, pages 148–161. Springer.
 [Burch et al., 1992] Burch, J. R., Clarke, E. M., McMillan, K. L., Dill, D. L., and Hwang, L.-J. (1992). Symbolic model checking: 10^20 states and beyond. Information and Computation, 98(2):142–170.
 [Campanella, 2019] Campanella, A. (2019). Intent based network operations. In 2019 Optical Fiber Communications Conference and Exhibition (OFC), pages 1–3. IEEE.
 [Clarke et al., 2000] Clarke, E., Grumberg, O., and Peled, D. (2000). Model Checking. MIT Press, Cambridge, MA.
 [Cousot and Cousot, 1977] Cousot, P. and Cousot, R. (1977). Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, pages 238–252.

 [Dung, 1995] Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357.
 [Dung et al., 2009] Dung, P. M., Kowalski, R. A., and Toni, F. (2009). Assumption-based argumentation. In Argumentation in Artificial Intelligence, pages 199–218. Springer.
 [Dunne, 2009] Dunne, P. E. (2009). The computational complexity of ideal semantics. Artificial Intelligence, 173(18):1559–1591.
 [Dunne and Wooldridge, 2009] Dunne, P. E. and Wooldridge, M. (2009). Complexity of abstract argumentation. In Argumentation in artificial intelligence, pages 85–104. Springer.

 [Falappa et al., 2011] Falappa, M. A., Garcia, A. J., Kern-Isberner, G., and Simari, G. R. (2011). On the evolving relation between belief revision and argumentation. The Knowledge Engineering Review, 26(1):35–43.
 [Halpern and Vardi, 1991] Halpern, J. Y. and Vardi, M. Y. (1991). Model checking vs. theorem proving: a manifesto. Artificial Intelligence and Mathematical Theory of Computation, 212:151–176.
 [Hoek and Wooldridge, 2002] Hoek, W. v. d. and Wooldridge, M. (2002). Model checking knowledge and time. In International SPIN Workshop on Model Checking of Software, pages 95–111. Springer.
 [Maudet et al., 2006] Maudet, N., Parsons, S., and Rahwan, I. (2006). Argumentation in multi-agent systems: Context and recent developments. In International Workshop on Argumentation in Multi-Agent Systems, pages 1–16. Springer.
 [Nielsen and Parsons, 2006] Nielsen, S. H. and Parsons, S. (2006). A generalization of Dung’s abstract framework for argumentation: Arguing with sets of attacking arguments. In International Workshop on Argumentation in Multi-Agent Systems, pages 54–73. Springer.
 [Patil, 2022] Patil, M. S. (2022). Explainability in autonomous pedagogically structured scenarios. In 36th AAAI 2022 Workshop on Explainable Agency in Artificial Intelligence.
 [Peled and Pnueli, 1994] Peled, D. and Pnueli, A. (1994). Proving partial order properties. Theoretical Computer Science, 126(2):143–182.