Representation, Justification and Explanation in a Value Driven Agent: An Argumentation-Based Approach

12/13/2018
by Beishui Liao, et al.

For an autonomous system, the ability to justify and explain its decision making is crucial to improve its transparency and trustworthiness. This paper proposes an argumentation-based approach to represent, justify and explain the decision making of a value driven agent (VDA). By using a newly defined formal language, some implicit knowledge of a VDA is made explicit. The selection of an action in each situation is justified by constructing and comparing arguments supporting different actions. On the basis of the constructed argumentation framework and its extensions, the reasons explaining an action are defined in terms of the arguments for or against the action, exploiting their defeat relations as well as their premises and conclusions.


Introduction

In the field of artificial intelligence, to improve the transparency and trustworthiness of autonomous systems, different approaches have been proposed to provide explanations for such systems; see, e.g., [Cocarascu, Čyras, and Toni2018, Shih, Choi, and Darwiche2018, Madumal et al.2018]. Working along these lines, we introduce an approach for representing, justifying and explaining the decision making of a value driven agent, or VDA in brief [Anderson, Anderson, and Berenz2017].

We define a VDA as an autonomous agent that decides its next action using an ethical preference relation over actions, termed a principle, that is abstracted from a set of cases using inductive logic programming (ILP) techniques. For this purpose, each action of a VDA is associated with a set of values representing the levels of satisfaction or violation of prima facie duties that the action exhibits. These duties maximize or minimize ethically relevant features such as honoring commitments, maintaining readiness, and harm to the patient. A case relates two actions: it is represented as a tuple of the differentials of the corresponding duty satisfaction/violation degrees of the two actions, together with a determination of which action is ethically preferable. Given a set of such cases, ILP is used to abstract from them a principle specifying acceptable lower bounds on the differentials between corresponding duties of any two actions. Given such a principle, the decision-making process of a VDA is as follows: sense the state of the world and abstract it into a set of Boolean perceptions; determine the vector of duty satisfaction/violation values of each action with respect to this state using a decision tree learned from examples with ID3; and sort the actions by ethical preference according to the principle, so that the first action in the sorted list is the most ethically preferable one.
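
The decision-making loop just described can be sketched in a few lines of Python. The fragment below only illustrates the control flow; the helper functions perceive, duty_vectors and ethically_preferable are hypothetical stand-ins for the sensing component, the learned decision tree and the learned principle, respectively, and none of these names comes from the original system.

```python
from functools import cmp_to_key

def decide(world, actions, perceive, duty_vectors, ethically_preferable):
    """One VDA decision step (all helper functions are hypothetical).

    perceive(world)            -> set of Boolean perceptions (a situation)
    duty_vectors(situation)    -> {action: vector of duty satisfaction/violation values}
    ethically_preferable(v, w) -> True if the action with vector v is ethically
                                  preferable (or equal) to the action with vector w
    """
    situation = perceive(world)           # 1. sense the world, abstract into perceptions
    matrix = duty_vectors(situation)      # 2. decision tree yields the action matrix

    def compare(a, b):                    # 3. order actions by the principle
        if ethically_preferable(matrix[a], matrix[b]):
            return -1
        if ethically_preferable(matrix[b], matrix[a]):
            return 1
        return 0

    ranked = sorted(actions, key=cmp_to_key(compare))
    return ranked[0]                      # the most ethically preferable action
```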

It is interesting to note that in the VDA there are several kinds of knowledge that can be used for justification and explanation, including the relation between perceptions and actions determined by the decision tree, the ethical consequences of an action represented by its vector of duty satisfaction/violation values, the disjuncts (clauses) of the principle that are used to order two actions, and the cases from which these disjuncts are abstracted. However, this knowledge has not been explicitly represented or used for justifying actions or providing explanations. In this paper, we address this problem by exploiting formal argumentation. The structure of this paper is as follows. Section 2 introduces a formalism for representing the knowledge of a value driven agent. In Section 3, we present an argumentation-based justification and explanation approach. In Section 4, we discuss further research problems concerning the justification and explanation of more sophisticated autonomous agents. Finally, we conclude the paper in Section 5.

Representing a value driven agent

In this section, we first introduce a formal language and then use it to represent the knowledge and the model of a VDA.

The language of a VDA is composed of literals, perceptions, actions and duties.

Definition 1 (Language)

Let L = (Lit, P, A, D) be a language consisting of a set of literals Lit, a set of perceptions P, a set of actions A and a set of duties D, where each literal is a propositional atom or the negation of an atom, and P, A and D are pairwise disjoint. For a literal l, we write -l for its complement, i.e., -l = ¬p just in case l = p, and -l = p just in case l = ¬p, where p is an atom.

Example 1 (Language)

In [Anderson, Anderson, and Berenz2019], there are 10 perception atoms: low battery (lb), medication reminder time (mrt), reminded (r), refused medication (rm), fully charged (fc), no interaction (ni), warned (w), persistent immobility (pi), engaged (e) and ignored warning (iw); 6 actions: charge, remind, engage, warn, notify and seek task; and 7 duties: maximize honor commitments, maximize maintain readiness, minimize harm to patient, maximize good to patient, minimize non-interaction, maximize respect autonomy and maximize prevent persistent immobility.

Following [Anderson, Anderson, and Berenz2019], the state of the world is represented by a set of perceptions, called a situation in this paper.

Definition 2 (Situation)

A situation s is a subset of P, denoting a state of the world. The (possibly infinite) set of situations is denoted as S.

Example 2 (Situation)

An example of a state of the world is the situation s1 used throughout the following examples, which includes, among others, the perception low battery (lb).

In each situation, the duty satisfaction/violation values for each action are determined by a decision tree using the perceptions of the situation as input. A set of vectors of duty satisfaction/violation values of all actions in a situation is called an action matrix.

Definition 3 (Action matrix of a situation)

A duty satisfaction value is a positive integer, while a duty violation value is a negative integer. In addition, if a duty is neither satisfied nor violated by an action, then the value is zero. Given an action a in A and a situation s in S, the vector of duty satisfaction/violation values for a, denoted v_a(s), is a vector (v_1, ..., v_n) where v_i is the satisfaction/violation value of the i-th duty. Then, the action matrix of a situation s is defined as M(s) = {v_a(s) | a in A}. The set of action matrices of all situations is denoted as M.

In this definition, a vector of duty satisfaction/violation values for each action will be used to define the ethical preference over actions, by considering the ethical consequences of the actions in a given situation.

For brevity, when the order of duties is clear, v_a(s) is also written simply as the tuple (v_1, ..., v_n).

Example 3 (Action matrix of a situation)

Given the state of the world s1 and the decision tree of [Anderson, Anderson, and Berenz2019], the action matrix of s1 assigns a vector of duty satisfaction/violation values to each of the six actions charge, remind, engage, warn, notify and seek task; the duties in each vector are ordered as in Example 1, and the specific values are those reported in [Anderson, Anderson, and Berenz2019].
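
Concretely, an action matrix can be held as a mapping from actions to duty vectors in the order of Example 1. The values below are invented placeholders for illustration only; the real vectors for s1 are those produced by the decision tree of [Anderson, Anderson, and Berenz2019].

```python
# Duty order as in Example 1: honor commitments, maintain readiness,
# harm to patient, good to patient, non-interaction, respect autonomy,
# prevent persistent immobility.
# Positive value = degree of satisfaction, negative = violation, 0 = neither.
# All numbers below are hypothetical placeholders.
action_matrix_s1 = {
    "charge":    (0,  2,  0, 0,  0,  0, 0),
    "remind":    (1,  0,  0, 1,  0,  0, 0),
    "engage":    (0,  0,  0, 1, -1,  0, 0),
    "warn":      (0,  0, -1, 0,  0,  1, 0),
    "notify":    (0,  0,  0, 0,  0, -1, 1),
    "seek_task": (0,  1,  0, 0,  0,  0, 0),
}
```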

Given a situation and its corresponding action matrix, actions are sorted by a principle. This principle is discovered by applying inductive logic programming (ILP) techniques to a set of cases. Clauses of the principle specify lower bounds of the differentials between corresponding duties of any two actions that must be met or exceeded to satisfy the clause.

Definition 4 (Principle)

A principle is defined as a set of clauses Delta = {Delta_1, ..., Delta_m}, where each clause Delta_j = (delta_1, ..., delta_n) assigns to each duty an acceptable lower bound delta_i on the differential between the corresponding duty values of any two actions in A.

For brevity, when the order of duties is clear, a clause of a principle is also written simply as the tuple of lower bounds (delta_1, ..., delta_n).

Example 4 (Principle)

According to [Anderson, Anderson, and Berenz2019], the principle Delta consists of 10 clauses, corresponding to the 10 disjuncts of the principle learned in that work; the specific lower bounds are given there.
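
A principle can be stored in the same spirit: a list of clauses, each clause a tuple of lower bounds on the duty differentials, in the same duty order as above. The bounds shown are invented for illustration; the ten clauses actually learned by ILP are those reported in [Anderson, Anderson, and Berenz2019].

```python
# Hypothetical principle: each clause gives, per duty, the acceptable lower
# bound on the differential between the duty values of two actions.
# (Three placeholder clauses; the learned principle has ten.)
principle = [
    ( 0,  1, -1, -1, -1, -1, -1),
    ( 1, -1,  0, -1, -1, -1, -1),
    (-1, -1, -1,  1,  0, -1, -1),
]
```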

Given a principle and two vectors of duty satisfaction/violation values, we may define a notion of ethical preference over actions. Let v_a(s) = (v_1, ..., v_n) and v_b(s) = (w_1, ..., w_n) be the vectors of actions a and b. In the following definition, we write Delta_j = (delta_1, ..., delta_n) for a clause of the principle Delta.

Definition 5 (Ethical preference over actions)

Given a principle Delta and the vectors of duty satisfaction/violation values v_a(s) = (v_1, ..., v_n) and v_b(s) = (w_1, ..., w_n) of actions a and b respectively, we say that a is ethically preferable (or equal) to b with respect to some clause Delta_j = (delta_1, ..., delta_n) of Delta, denoted a >=_j b, if and only if for each i in {1, ..., n}, it holds that v_i - w_i >= delta_i.

In this definition, we make explicit the clause (disjunct) of the principle that is used to order two actions.
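
Definition 5 amounts to a component-wise check of the differential vector against the lower bounds of a single clause. A minimal sketch, using the vector and clause representations assumed above:

```python
def satisfies_clause(v, w, clause):
    """True iff the action with duty vector v is ethically preferable (or equal)
    to the action with vector w with respect to this clause, i.e. every
    differential v[i] - w[i] meets or exceeds the clause's lower bound."""
    return all(vi - wi >= lb for vi, wi, lb in zip(v, w, clause))

# e.g., with three duties for brevity:
# satisfies_clause((0, 2, 0), (1, 0, 0), (-1, 1, 0)) evaluates to True
```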

Given two actions a and b, there might be more than one clause of Delta, say Delta_j and Delta_k, such that a >=_j b and b >=_k a. In order to compare preference relations with respect to different clauses of a principle, we introduce a notion of the relevance of a clause to an action, defined in terms of the distance between a clause of the principle and the vector of duty satisfaction/violation values of the action. The intuition is that the larger the distance between them, the lesser the relevance between them.

Definition 6 (Relevance of principle clause)

Let v_a(s) be the vector of duty satisfaction/violation values of an action a and Delta_j a clause of Delta. The relevance of Delta_j to a is defined in terms of the distance between the vectors v_a(s) and Delta_j, written d(v_a(s), Delta_j): the smaller the distance, the more relevant the clause is to the action.

Example 5 (Relevance of principle clause)

Consider the distances, computed according to Definition 6, between the vector of an action in s1 and three different clauses of the principle; the three distances differ, so exactly one of the clauses is the most relevant to that action.

Definition 7 (Ethical preference over actions - cont.)

We say that a is ethically preferable (or equal) to b with respect to its most relevant clause Delta_j if and only if a >=_j b and there exists no clause Delta_k such that a >=_k b and d(v_a(s), Delta_k) < d(v_a(s), Delta_j).

Example 6 (Ethical preference over actions - cont.)

Consider two actions a and b in s1. There are several clauses Delta_j of the principle such that a >=_j b; among them, the clause closest to v_a(s1) is the most relevant clause to the action a.
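
Putting Definitions 5-7 together: among the clauses that order a over b, the one closest to a's duty vector is decisive. The distance metric in the sketch below (Euclidean distance between the duty vector and the clause's bounds) is an assumption made purely for illustration; Definition 6 only requires some fixed distance between a vector and a clause.

```python
import math

def satisfies_clause(v, w, clause):
    return all(vi - wi >= lb for vi, wi, lb in zip(v, w, clause))

def distance(v, clause):
    # Assumed metric (Euclidean); any fixed distance between a duty vector
    # and a clause would fit Definition 6.
    return math.sqrt(sum((vi - lb) ** 2 for vi, lb in zip(v, clause)))

def most_relevant_supporting_clause(v, w, principle):
    """Return the clause of the principle with respect to which the action with
    vector v is ethically preferable (or equal) to the one with vector w and
    which is closest to v (Definition 7), or None if no such clause exists."""
    supporting = [c for c in principle if satisfies_clause(v, w, c)]
    if not supporting:
        return None
    return min(supporting, key=lambda c: distance(v, c))
```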

Based on the above notions, a value driven agent (VDA) is formally defined as follows.

Definition 8 (Value driven agent)

A value driven agent is a tuple VDA = (L, M, Delta), where L is a language as in Definition 1, M is the set of action matrices of all situations as in Definition 3, and Delta is a principle as in Definition 4.

In a VDA, given a situation and an action matrix, a set of solutions can be defined as follows.

Definition 9 (Solution)

Let VDA = (L, M, Delta) be a value driven agent. Given a situation s and an action matrix M(s), an action a in A is a solution of VDA with respect to s if and only if there is an ordering over A with respect to the ethical preference such that a is the first action in the sorted list. The set of all solutions of VDA with respect to s is denoted sol(s) = {a in A | a is a solution of VDA w.r.t. s}.

According to Definition 9, we directly have the following proposition.

Proposition 1 (The number of solutions)

Given a VDA, a situation s and an action matrix M(s), there are k solutions of the VDA with respect to s if and only if there are orderings of A with respect to the ethical preference that have k different actions at the head of the sorted lists.
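
The solutions of Definition 9 can then be sketched as the actions that can head an ordering. The sketch below reads this as "no other action is strictly preferable", which is a simplifying assumption about how ties and incomparabilities are resolved, and it reuses most_relevant_supporting_clause from the sketch above.

```python
def solutions(action_matrix, principle):
    """Actions that can be first in an ethical-preference ordering, read here
    (simplifying assumption) as the actions to which no other action is
    strictly preferable."""
    def strictly_preferable(v, w):
        return (most_relevant_supporting_clause(v, w, principle) is not None
                and most_relevant_supporting_clause(w, v, principle) is None)

    return [a for a, va in action_matrix.items()
            if not any(strictly_preferable(vb, va)
                       for b, vb in action_matrix.items() if b != a)]
```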

Now, let us make explicit other knowledge that is implicit in a VDA, i.e., the relation between a situation and an action together with its duty satisfaction/violation values. This relation is implied by the decision tree. In this paper, it is represented as a defeasible rule, called an action rule.

Definition 10 (Action rule)

Let VDA = (L, M, Delta) be a value driven agent. An action rule of VDA under a situation s is r: p_1, ..., p_k => (a, v_a(s)), where r is the label of the rule, p_1, ..., p_k in s are perceptions, a in A is an action, and v_a(s) is the vector of duty satisfaction/violation values of a in the situation s.

An action rule can be read as: if p_1, ..., p_k hold, then performing action a will presumably bring the ethical consequence v_a(s). The set of all action rules of VDA under situation s is denoted R(s). Action rules can be automatically and dynamically generated and updated from the data of a VDA.

For convenience, the vector v_a(s) in a rule is also written as the tuple (v_1, ..., v_n) when the order of duties is clear.

Example 7 (Action rule)

Continuing Example 3, given s1 there are six defeasible action rules, one for each of the actions charge, remind, engage, warn, notify and seek task; each rule has the perceptions of s1 as its premises and the action together with its vector from the action matrix as its conclusion.

Note that in this example it is possible for an action not to satisfy any duty, and removing the rules associated with such actions might not affect the outcome of decision making. However, we have not ruled out the possibility that in some situations no action satisfies any duty, in which case the most preferable action would be the one that violates duties the least. We therefore remove nothing and keep all rules constructed in terms of Definition 10.
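
Action rules are easy to generate mechanically from a situation and its action matrix. A minimal sketch, with a simple dataclass standing in for the rule language (all names are hypothetical):

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class ActionRule:
    label: str                    # label of the rule
    premises: FrozenSet[str]      # perceptions of the situation
    action: str                   # action in the conclusion
    duty_vector: Tuple[int, ...]  # duty satisfaction/violation values

def action_rules(situation, action_matrix):
    """One defeasible rule per action: if the perceptions of the situation
    hold, performing the action presumably brings its duty vector."""
    return [ActionRule(label=f"r_{a}",
                       premises=frozenset(situation),
                       action=a,
                       duty_vector=tuple(v))
            for a, v in action_matrix.items()]
```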

To summarize this section: the formal model of a VDA and the set of action rules capture the underlying knowledge of the VDA. In the next section, we introduce an argumentation-based approach for the justification and explanation of the decision making of a VDA.

Argumentation-based justification and explanation

Argumentation in artificial intelligence is a formalism for representing and reasoning with inconsistent and incomplete information [Dung1995]. It also provides various ways for explaining why a claim or a decision is made, in terms of justification, dialogue, and dispute trees [Čyras, Satoh, and Toni2016], etc. In this section, we show how to justify and explain decision making of a VDA by using argumentation.

In terms of structured argumentation (e.g., ASPIC+ [Modgil and Prakken2014]), there are three basic notions: arguments, relations between arguments, and argumentation semantics. We define them in the setting of this paper. First, an argument is a set of reasons supporting a claim or an action. In what follows, for a given argument, the function Conc(.) returns its conclusion and Sub(.) returns all its sub-arguments.

Definition 11 (Argument)

Let VDA = (L, M, Delta) be a value driven agent. An argument A in a situation s is:

  • p, if p is a perception in s, with Conc(A) = p and Sub(A) = {A};

  • A_1, ..., A_k => (a, v_a(s)), if there exists an action rule p_1, ..., p_k => (a, v_a(s)) in R(s) such that Conc(A_i) = p_i for each i; in this case Conc(A) = (a, v_a(s)) and Sub(A) = Sub(A_1) ∪ ... ∪ Sub(A_k) ∪ {A}.

The set of arguments of VDA in a situation s is denoted Arg(s). An argument of the second form is called an action argument.

Second, the relations between arguments include the subargument relation, the attack relation and the defeat relation. We say that an argument A is a subargument of an argument B if and only if A is in Sub(B). For the attack relation, we have the following definition.

Definition 12 (Attack relation between action arguments)

An action argument A attacks another action argument B in a situation s if and only if Conc(A) = (a, v_a(s)) and Conc(B) = (b, v_b(s)) for some actions a and b with a ≠ b.

When an argument A attacks another argument B, if the priority of A is at least as high as that of B, then we say that A defeats B. The notion of priority over action arguments is as follows.

Definition 13 (Priority over action arguments)

Given a principle Delta, an action argument A is at least as preferred as another action argument B in a situation s with respect to some clause Delta_j, denoted A >=_j B, if and only if Conc(A) = (a, v_a(s)) and Conc(B) = (b, v_b(s)) for some actions a and b, and a >=_j b, where Delta_j is the most relevant clause with respect to a.

According to Definition 13, we have the following proposition.

Proposition 2 (Transitivity of priority relation)

The priority relation over action arguments is transitive.

Definition 14 (Defeat relation between action arguments)

Given a principle Delta and a clause Delta_j of Delta, an argument A defeats an argument B with respect to Delta_j if and only if A attacks B and A >=_j B. The set of defeats between the arguments of VDA in a situation s is denoted Def(s).
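
Definitions 12-14 can be operationalized directly: any two action arguments for different actions attack each other, and an attack becomes a defeat when the attacker's action is ethically preferable (or equal) with respect to its most relevant clause. A sketch reusing the helpers above and identifying an action argument with the action it supports:

```python
def defeat_relation(action_matrix, principle):
    """Set of pairs (a, b) such that the action argument for a defeats the one
    for b: a attacks b (a != b) and a's vector is ethically preferable (or
    equal) to b's with respect to a's most relevant clause."""
    return {(a, b)
            for a, va in action_matrix.items()
            for b, vb in action_matrix.items()
            if a != b and most_relevant_supporting_clause(va, vb, principle) is not None}
```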

Combining a set of arguments with the defeat relation, we obtain an abstract argumentation framework (or briefly, AAF), a notion originally proposed in [Dung1995].

Definition 15 (AAF of a VDA in a situation)

An AAF of a value driven agent VDA in a situation s is F(s) = (Arg(s), Def(s)).

Example 8 (AAF of a VDA in a situation)

Continuing Example 7, we have 16 arguments in Arg(s1) (visualized in Fig. 1): the perception arguments built from the elements of s1 and the six action arguments built from the action rules of Example 7. Note, for instance, that any two action arguments supporting different actions attack each other. Moreover, when one of the two actions is ethically preferable (or equal) to the other with respect to its most relevant clause, the corresponding argument is at least as preferred and therefore defeats the other argument. In this way, we may identify the complete defeat relation Def(s1).

Figure 1: An example of an AAF in situation s1; the defeat relations between defeated arguments are omitted.

Third, given an AAF, the notion of argumentation semantics in [Dung1995] can be used to evaluate the status of arguments. There are a number of argumentation semantics capturing different intuitions and constraints for evaluating the status of arguments in an AAF, including complete, preferred, grounded and stable semantics. A complete extension is defined in terms of the notions of conflict-freeness and defense. Given an AAF F = (Arg, Def), we say that a subset E of Arg is conflict-free if and only if there exist no A, B in E such that A defeats B; E defends an argument A if and only if, for every argument B, if B defeats A then there exists C in E such that C defeats B. A set E is admissible if and only if it is conflict-free and defends each argument in E.

Then, we say that E is a complete extension if and only if E is admissible and each argument defended by E is in E; E is a preferred extension if and only if E is a maximal complete extension with respect to set inclusion; E is the grounded extension if and only if E is the minimal complete extension with respect to set inclusion; and E is a stable extension if and only if E is conflict-free and defeats each argument not in E. It has been verified that each AAF has a unique (possibly empty) grounded extension, while an AAF may have multiple extensions under the other semantics, which may be used to capture the intuition that there can be multiple solutions in some decision-making scenarios. When an AAF is acyclic, it has only one extension under all of these semantics. We say that an argument of an AAF is skeptically justified under a given semantics if it is in every extension of the AAF, and credulously justified if it is in at least one but not all extensions of the AAF. For convenience, we use σ to denote an argumentation semantics, which can be complete, grounded, stable or preferred. For more information about argumentation semantics, please refer to [Baroni, Caminada, and Giacomin2011].
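
For reference, the grounded extension of any finite AAF can be computed by iterating the defense (characteristic) function from the empty set; this is standard [Dung1995] and independent of the VDA-specific constructions above.

```python
def grounded_extension(arguments, defeat):
    """Grounded extension of the AAF (arguments, defeat), where defeat is a set
    of pairs (attacker, attacked): the least fixed point of the defense function."""
    def defended_by(ext):
        # Arguments all of whose defeaters are in turn defeated by some member of ext.
        return {a for a in arguments
                if all(any((c, b) in defeat for c in ext)
                       for (b, target) in defeat if target == a)}
    ext = set()
    while True:
        nxt = defended_by(ext)
        if nxt == ext:
            return ext
        ext = nxt
```

For an acyclic AAF such as the one in Figure 1, this unique extension coincides with the preferred and stable extensions.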

Example 9 (Argumentation semantics)

Consider the AAF in Figure 1. It is acyclic and therefore has only one extension E under any argumentation semantics σ. In this example, all arguments in E are skeptically justified.

Given a set of justified arguments, we may define the set of justified conclusions as follows.

Definition 16 (Justified conclusion)

Let F(s) = (Arg(s), Def(s)) be an AAF and A a skeptically (credulously) justified argument under an argumentation semantics σ. Then Conc(A) is called a skeptically (credulously) justified conclusion. We say that Conc(A) is a skeptically (credulously) justified action if and only if A is an action argument.

Example 10 (Justified conclusions)

According to Example 9, the conclusions of all arguments in E are justified conclusions, among which charge is a justified action.

Now, let us verify that the argumentation-based representation is sound and complete under stable semantics.

Proposition 3 (Soundness and completeness of the representation)

Let VDA = (L, M, Delta) be a value driven agent and a in A an action. Given a situation s and an action matrix M(s), it holds that a is a solution of VDA with respect to s if and only if a is a justified action in F(s) under stable semantics.

Proof 1

On the one hand, if a is a solution of VDA with respect to s, then there exists an ordering over A with respect to the ethical preference such that a is the first action of the sorted list. It follows that for each other action b, either a is ethically preferable (or equal) to b, or a and b are not comparable. According to Definition 11, there exist action arguments A and B such that Conc(A) = (a, v_a(s)) and Conc(B) = (b, v_b(s)). It follows that A defeats B, or A and B defeat each other. In either case, any argument defeating A is defeated by A. It follows that the set consisting of A together with the perception arguments is a stable extension. So, a is a justified action in F(s) under stable semantics. On the other hand, if a is a justified action in F(s) under stable semantics, then there exists a stable extension E and an action argument A in E with Conc(A) = (a, v_a(s)). Since in situation s any two different action arguments attack each other, no two action arguments can be in the same extension, so A is the only action argument in E. According to the definition of a stable extension, E defeats each action argument not in E. In other words, for every action argument B with Conc(B) = (b, v_b(s)) and b ≠ a, A defeats B with respect to some clause of the principle. So, one may construct an ordering of actions such that a is the first action in the sorted list. Therefore, a is a solution of VDA with respect to s.

Based on the arguments, the defeat relation between them, and the set of justified conclusions, the explanation of choosing an action includes the following perspectives: the state of the world (the premises of the accepted action argument), the satisfied duties (in the conclusion of the accepted action argument), and the overturned actions together with the underlying reasons (the conclusions of the defeated arguments and the differentials of duty satisfaction/violation in the clause of the principle associated with each defeat). We use the following example to illustrate an explanation; the formal definition is omitted.
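
These ingredients can be collected mechanically from the chosen action, its duty vector, and the clauses justifying each defeat. The sketch below reuses the helpers above; its output format is only an illustration, since the formal definition of an explanation is omitted in the text as well.

```python
def explain(chosen, situation, action_matrix, principle):
    """Ingredients of an explanation for the chosen action: the perceptions of
    the situation, the duty consequences of the action, and for every rejected
    alternative the clause with respect to which its argument is defeated."""
    return {
        "premises": sorted(situation),
        "duty_vector": action_matrix[chosen],
        "rejected": {
            b: most_relevant_supporting_clause(action_matrix[chosen], vb, principle)
            for b, vb in action_matrix.items() if b != chosen
        },
    }
```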

Example 11 (Explanation)

According to Examples 8 and 10, the explanation of the justified action charge is as follows.

Action charge is selected, because:

1. The supporting argument for charge is justified, with

   - the perception battery low (lb) in its premise,

   - the action charge in its conclusion, and

   - maximal duty satisfaction in its conclusion.

2. All conflicting action arguments are rejected, with charge more ethically preferable than each of the other actions (remind, engage, warn, notify and seek task), in that the argument for each of these actions is defeated by the argument for charge with respect to the corresponding most relevant clause of the principle.

Discussion: Justification and explanation in more sophisticated autonomous agents

In the previous two sections, we have introduced an argumentation-based formalism for representation, justification and explanation in a VDA. The current version of the VDA [Anderson, Anderson, and Berenz2019] does not take into consideration some more complicated scenarios. For instance, a VDA may not only perform practical reasoning about which action should be selected, but also epistemic reasoning about the state of the world. In this new scenario, besides a set of perceptions, we may add epistemic rules, which can be strict or defeasible, to represent knowledge of the world.

Definition 17 (Epistemic rule)

Let VDA = (L, M, Delta) be a value driven agent. An epistemic rule of VDA is either a strict rule l_1, ..., l_k -> l or a defeasible rule l_1, ..., l_k => l, where l_1, ..., l_k and l are literals.

Example 12 (Epistemic rule)

Assume that the information about whether the battery is low is obtained by observation. In this sense, the elements of a situation are not all facts; they can be assumptions. For instance, we now consider lb as an assumption, denoting that the battery is presumably low according to observation. Suppose further that the battery signal is found to be abnormal (call this perception ab). In this case, we may infer that the battery is probably not low. We use a defeasible rule to represent this piece of knowledge: ab => ¬lb.

Due to incomplete and uncertain information, there may be several possible situations. According to Example 12, there are two possible situations: s1, in which the battery is low, and s2, in which it is not. To justify which situation holds, one may construct an AAF at the epistemic level, visualized in the left part of Figure 2, in which two additional arguments appear: an argument for the observation ab, and an argument applying the rule ab => ¬lb to conclude ¬lb. We assume that the latter argument is superior to the assumption lb.
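
As a small illustration of the epistemic level, the three arguments just described can be fed to the grounded-extension routine sketched earlier; the argument names are invented for illustration.

```python
# Epistemic-level AAF of Example 12 (illustrative argument names):
#   A_lb : the assumption lb ("the battery is presumably low")
#   A_ab : the observation ab ("the battery signal is abnormal")
#   A_nl : A_ab combined with the rule ab => not lb, concluding that lb does not hold
arguments = {"A_lb", "A_ab", "A_nl"}
# A_nl attacks the assumption A_lb and, being superior to it, defeats it.
defeat = {("A_nl", "A_lb")}

extension = grounded_extension(arguments, defeat)   # contains A_ab and A_nl
# The justified conclusion "not lb" selects situation s2 for practical reasoning.
```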

By using the AAF in the left part of Figure 2, we may justify that the situation s2 holds. Under this situation, the decision tree assigns a new vector of duty satisfaction/violation values to each of the six actions, yielding a new action matrix.

Similar to the previous examples, an AAF for practical reasoning under situation s2 is visualized in the right part of Figure 2; as before, the defeat relations between defeated arguments are omitted.

Figure 2: AAFs for epistemic and practical reasoning

In this updated scenario, the justified action is the one supported by the justified action argument in the right part of Figure 2. Let us now present an updated explanation for this new justified action.

Example 13 (Updated explanation)

The explanation of the new justified action is as follows.

The action is selected, because:

1. The argument supporting situation s2 is justified, with ¬lb in its conclusion.

2. The supporting action argument is justified, with battery not low (¬lb) in its premise, the selected action in its conclusion, and maximal duty satisfaction in its conclusion.

3. All conflicting action arguments are rejected, with the selected action more ethically preferable than each of the other actions, in that the argument for each of these actions is defeated by the supporting argument with respect to the corresponding most relevant clause of the principle.

From the above examples, we may observe that for more sophisticated autonomous agents, there may be more than one type of reasoning, and different components of a system may be entangled. One example in this direction is the BOID architecture introduced in [Broersen et al.2001]. Furthermore, in a multi-agent system, reasoning about actions of one agent is dependent both on the individual values of the agent concerned and on what others choose to do [Atkinson and Bench-Capon2018].

Conclusions

In this paper, we have proposed an argumentation-based approach for representation, justification and explanation of a VDA. The contributions are three-fold. First, we provide a formalism to represent a VDA, making explicit some implicit knowledge. This lays a foundation for the justification and explanation of reasoning and decision making in a VDA. Second, we adapt existing structured and abstract argumentation theories to the setting of decision making in a VDA, such that the priority and defeat relations over arguments are linked to the ethical consequences of actions as reflected by the clauses of a principle, and the justification and explanation of an action can be defined accordingly. Third, unlike existing argumentation systems where formal rules are designed in advance, in our approach rules are generated and updated at run time by automatically translating a situation and an action matrix into a set of rules, while the priority relation between rules is also dynamically evaluated in terms of a principle. Thanks to the graph-based nature of an AAF, when the system becomes more complex there exist efficient approaches to handle its dynamics, e.g., [Liao, Jin, and Koons2011, Liao2014].

Concerning future work, first, we have not identified nor formally represented the relation between a principle and the set of cases from which the principle is learned. Doing so is likely to provide further information that explains why an action is chosen in a given situation. Second, in the existing version of the VDA [Anderson, Anderson, and Berenz2019], neither epistemic reasoning nor multi-agent interaction [Broersen et al.2001, Atkinson and Bench-Capon2018, Chopra et al.2018] has been considered. Adding such extensions will serve to extend the VDA's capabilities.

The work reported in this paper shares some similarity with the symbolic approach introduced in [Shih, Choi, and Darwiche2018], in the sense that some implicit functions of the system are made explicit by using a symbolic representation. However, rather than translating the function between a set of features and a classification, we translate several types of implicit knowledge of a VDA into a logical formalism. Other related works are those based on argumentation, e.g., [Cocarascu, Čyras, and Toni2018], in which an AAF is constructed in terms of the highest-ranked features. To the best of our knowledge, no existing work exploits argumentation to address the research problems considered in the present paper.

Clearly, formal justification and explanation of the behavior of autonomous systems enhances the transparency of such systems. Further, we contend that autonomous systems that can argue formally for their actions are more likely to engender trust in their users than systems without such a capability. That principle-based systems such as the one detailed in this paper and others (e.g. [Vanderelst and Winfield2018, Sarathy, Scheutz, and Malle2017]) seem to lend themselves readily to explanatory mechanisms adds further support for their adoption as a formalism to ensure the ethical behavior of autonomous systems.

References

  • [Anderson, Anderson, and Berenz2017] Anderson, M.; Anderson, S. L.; and Berenz, V. 2017. A value driven agent: Instantiation of a case-supported principle-based behavior paradigm. In AI, Ethics, and Society, Papers from the 2017 AAAI Workshop, San Francisco, California, USA, February 4, 2017.
  • [Anderson, Anderson, and Berenz2019] Anderson, M.; Anderson, S. L.; and Berenz, V. 2019. A value-driven eldercare robot: Virtual and physical instantiations of a case-supported principle-based behavior paradigm. Proceedings of the IEEE, DOI: 10.1109/JPROC.2018.2840045, 1–15.
  • [Atkinson and Bench-Capon2018] Atkinson, K., and Bench-Capon, T. J. M. 2018. Taking account of the actions of others in value-based reasoning. Artif. Intell. 254:1–20.
  • [Baroni, Caminada, and Giacomin2011] Baroni, P.; Caminada, M.; and Giacomin, M. 2011. An introduction to argumentation semantics. Knowledge Eng. Review 26(4):365–410.
  • [Broersen et al.2001] Broersen, J. M.; Dastani, M.; Hulstijn, J.; Huang, Z.; and van der Torre, L. W. N. 2001. The BOID architecture: conflicts between beliefs, obligations, intentions and desires. In Proceedings of the Fifth International Conference on Autonomous Agents, AGENTS 2001, Montreal, Canada, May 28 - June 1, 2001, 9–16.
  • [Chopra et al.2018] Chopra, A.; van der Torre, L.; Verhagen, H.; and Villata, S. 2018. Handbook of normative multiagent systems. College Publications.
  • [Cocarascu, Čyras, and Toni2018] Cocarascu, O.; Čyras, K.; and Toni, F. 2018. Explanatory predictions with artificial neural networks and argumentation. In Proceedings of the IJCAI/ECAI Workshop on Explainable Artificial Intelligence (XAI 2018), 26–32.
  • [Dung1995] Dung, P. M. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77(2):321–358.
  • [Liao, Jin, and Koons2011] Liao, B. S.; Jin, L.; and Koons, R. C. 2011. Dynamics of argumentation systems: A division-based method. Artif. Intell. 175(11):1790–1814.
  • [Liao2014] Liao, B. 2014. Efficient Computation of Argumentation Semantics. Intelligent systems series. Academic Press.
  • [Madumal et al.2018] Madumal, P.; Miller, T.; Vetere, F.; and Sonenberg, L. 2018. Towards a grounded dialog model for explainable artificial intelligence. CoRR abs/1806.08055.
  • [Modgil and Prakken2014] Modgil, S., and Prakken, H. 2014. The ASPIC+ framework for structured argumentation: a tutorial. Argument & Computation 5(1):31–62.
  • [Sarathy, Scheutz, and Malle2017] Sarathy, V.; Scheutz, M.; and Malle, B. 2017. Learning behavioral norms in uncertain and changing contexts. In Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom).
  • [Shih, Choi, and Darwiche2018] Shih, A.; Choi, A.; and Darwiche, A. 2018. A symbolic approach to explaining Bayesian network classifiers. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, 5103–5111.
  • [Vanderelst and Winfield2018] Vanderelst, D., and Winfield, A. 2018. An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research 48:56–66. Cognitive Architectures for Artificial Minds.
  • [Čyras, Satoh, and Toni2016] Čyras, K.; Satoh, K.; and Toni, F. 2016. Explanation for case-based reasoning via abstract argumentation. In Computational Models of Argument - Proceedings of COMMA 2016, Potsdam, Germany, 12-16 September, 2016., 243–254.