Artificial Intelligence (AI) systems are increasingly required to interact with other agents (be they human or artificial), but are still lagging in their ability to empathize with them when reasoning about their behavior. In the context of AI planning (the problem of selecting a goal-leading plan based on a high-level description of the world), an empathetic agent should be able to construct a plan that is harmonious with the goals, beliefs, values, affective state, and overall perspective of a fellow agent. Further, we are interested in facilitating the creation of empathetic agents holding a wide spectrum of roles - from passive observers, to virtual agents that offer relevant advice or act on someone’s behalf, all the way to embodied agents who can physically interact with the environment and other agents in it. To illustrate, consider Alice who lives with panic disorder and agoraphobia. Alice fears crowded places that might trigger a panic attack, and avoids busy restaurants, malls, and buses. Thus, Alice would never use public transit to get to work, despite it being the fastest way to get there. Instead, the optimal plan for her to get to work (that she can come up with on her own) addresses her fear, and would typically require her to walk instead of taking public transit, regardless of how suboptimal it might seem to an AI planning system bent on minimizing cost and time. A plan involving a crowded bus would simply not be executable by Alice. An empathetic AI that knows of an empty bus going along a similar route (which Alice believes is always crowded or simply does not know about), could recommend to Alice that she take it instead of walking, which would help her save time and would not place her on a crowded bus.
To construct an empathetic AI that is able to empathize with Alice and plan to assist her, we must provide it with a means of adopting her beliefs and affective state. To build towards this goal, our work brings together the notions of empathy and epistemic planning, which is an emerging field of research that combines AI planning and reasoning in the presence of knowledge and belief. We formalize the notion of Empathetic Planning (EmP) which builds on these concepts. EmP requires an empathizer to empathize with an empathizee in order to construct plans that are faithful to the empathizee’s view of the world. Specifically, we posit that in order to empathize with another, one must have at her disposal a sufficiently rich representation and understanding of the beliefs and affective state of the agent with whom she is empathizing. Thus, some of the settings we address, involving reasoning about belief and affect, cannot directly be modelled as classical planning problems. We therefore appeal to a rich epistemic logic framework to represent the agent’s beliefs and affective state. Lastly, we propose an epistemic planning-based computational approach to solving the EmP problem, thereby enabling the use of off-the-shelf epistemic planning tools. Our approach enables a sufficiently empathetic agent to generate a plan that is at least as ‘good’ as the best plan the empathizee can generate by herself, using her own beliefs and capabilities. Finally, it is important to consider that a human’s behavior does not always expose their intentions due to misconceptions or computational limitations. We submit that empathetic agents are well-suited for distinguishing between the underlying intent of the behavior and the actual performed behavior.
Empathy is often thought of as the ability to understand and share the thoughts and feelings of another and has an extremely rich history, beginning with its philosophical foundations and leading to research in fields such as psychology, ethics, and neuroscience (e.g., [Coplan and Goldie2011, Davis2018]). Empathy has been found to have two components, an affective, low-level component, and a cognitive, high-level component, with the two being interconnected [Shamay-Tsoory2011]. The affective component allows one to share the emotional experiences of another via affective reactions to their affective states. The cognitive component utilizes cognitive and affective Theory of Mind (ToM) - the ability to represent the mental states of others - and allows one to take the perspective of another, thereby facilitating reasoning over their mental or affective state. We focus on the cognitive component of empathy, and work towards building empathetic agents that can reason about the mental and affective states of other agents. We submit that pro-social AI agents should be equipped with a means of reasoning about the affective state of humans. This type of reasoning will lead to more socially acceptable behavior, as highlighted by recent work [McDuff and Czerwinski2018]. While we do not focus on affect in this work, our framework can be flexibly extended with various models of affect. Lastly, we note that while we aim to build assistive empathetic agents that are benevolent, empathy can also facilitate malicious (or simply self-serving) motivations through manipulation. As such, the introduction of EmP encourages further exploration and discussion of these important areas.
There exists a large body of work on integrating empathy and ToM within intelligent agent systems in, e.g., psychological therapy, and intelligent tutoring [McQuiggan and Lester2009]. [Pynadath and Marsella2005] created decision-theoretic ToM agents that can reason about the beliefs and affective states of other agents. However, this work has not appealed to the computational machinery of epistemic planning. Epistemic planning is an emerging field of research which is rapidly developing (e.g., [Baral et al.2017]). For example, [Engesser et al.2017] utilized epistemic planning and perspective taking to facilitate implicit coordination between agents. While the motivations driving their work and ours overlap, their work differs from ours both computationally and conceptually. Finally, the rich body of work on Belief-Desire-Intention (BDI) has studied affect and planning in the past (e.g., [Steunebrink et al.2007]). However, BDI approaches have typically required agent plans to be specified in advance and we instead appeal to the flexibility of generative epistemic planning techniques to generate plans. Our approach is enabled by our combined knowledge of these fields of research and their decades-long development.
The main contributions of this paper are: (1) a formalization of EmP; (2) a computational realization of EmP that enables the use of existing epistemic planning tools; (3) a study which demonstrates the potential benefits of EmP in a diversity of domains.
In this section, we provide epistemic logic background and define the Multi-agent Epistemic Planning (MEP) problem. We first present the multi-agent modal logic KD45$_n$ [Fagin et al.2004] which we appeal to in our specification of EmP. Let $AG$ and $P$ be finite sets of agents and atoms, respectively. The language $\mathcal{L}$ of multi-agent modal logic is generated by the following BNF:

$\varphi ::= p \mid \neg\varphi \mid (\varphi \wedge \varphi) \mid B_i\varphi$
where $p \in P$, $i \in AG$, and $B_i\varphi$ should be interpreted as “agent $i$ believes $\varphi$.” We choose to represent the belief modality here so that we can model the false beliefs of agents. Using the equivalence $K_i\varphi \equiv B_i\varphi \wedge \varphi$, recent work on MEP has been able to capture both knowledge and beliefs. The semantics for formulae in $\mathcal{L}$ is given by Kripke models [Fagin et al.2004], which are triplets, $M = \langle W, R, V \rangle$, containing a set of worlds $W$, accessibility relations between the worlds for each of the agents ($R_i \subseteq W \times W$ for each $i \in AG$), and a valuation map, $V : P \to 2^W$. When an agent is at world $w$, $M$ determines, given the accessibility relations in $R$ pertaining to the agent, what worlds the agent considers possible. A formula $\varphi$ is true in a world $w$ of a Kripke model $M$, written $(M, w) \models \varphi$, under these, inductively-defined conditions: $(M, w) \models p$ for an atom $p$ iff $w \in V(p)$; $(M, w) \models \neg\varphi$ iff $(M, w) \not\models \varphi$; $(M, w) \models \varphi \wedge \psi$ iff both $(M, w) \models \varphi$ and $(M, w) \models \psi$; and $(M, w) \models B_i\varphi$ iff $(M, v) \models \varphi$ for all $v$ s.t. $(w, v) \in R_i$. We say that $\varphi$ is satisfiable if there is a Kripke model $M$ and a world $w$ of $M$ s.t. $(M, w) \models \varphi$. Further, we say that $\varphi$ entails $\psi$, written $\varphi \models \psi$, if for any Kripke model $M$ and world $w$, $(M, w) \models \varphi$ entails $(M, w) \models \psi$. Next, we assume some constraints on the Kripke model, associated with particular properties of belief, as discussed in [Fagin et al.2004]. Namely, we assume that the Kripke model is serial ($\forall w \exists v. (w, v) \in R_i$), transitive ($(w, v) \in R_i$ and $(v, u) \in R_i$ imply $(w, u) \in R_i$) and Euclidean ($(w, v) \in R_i$ and $(w, u) \in R_i$ imply $(v, u) \in R_i$), with the resulting properties of belief: i. $B_i(\varphi \rightarrow \psi) \rightarrow (B_i\varphi \rightarrow B_i\psi)$ (K - Distribution); ii. $B_i\varphi \rightarrow \neg B_i\neg\varphi$ (D - Consistency); iii. $B_i\varphi \rightarrow B_iB_i\varphi$ (4 - Positive Introspection); and iv. $\neg B_i\varphi \rightarrow B_i\neg B_i\varphi$ (5 - Negative Introspection). These axioms, together, form the KD45$_n$ system, where $n$ signifies multiple agents in the environment. Note that the formal mechanisms for planning are described in the respective papers of the various off-the-shelf epistemic planners (e.g., [Liu and Liu2018]). We provide a logical specification of the multi-agent epistemic planning problem that is not tied to a particular planner and does not embody any of the syntactic restrictions that have been adopted by various planners.
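The truth conditions above can be made concrete with a small model checker. The following sketch is purely illustrative (the tuple-based formula encoding and the world and agent names are ours, not part of the formalism): it evaluates formulae over a two-world Kripke model in which Alice holds a false belief about the bus being crowded.

```python
# Minimal Kripke-model checker for the multi-agent belief logic sketched above.
# Formulae are nested tuples: ('atom', p), ('not', f), ('and', f, g), ('B', i, f).

def holds(model, w, formula):
    """Return True iff formula is true at world w of model."""
    op = formula[0]
    if op == 'atom':
        return formula[1] in model['val'][w]          # atom true at w
    if op == 'not':
        return not holds(model, w, formula[1])
    if op == 'and':
        return holds(model, w, formula[1]) and holds(model, w, formula[2])
    if op == 'B':                                      # B_i f: f holds in all worlds i considers possible
        i, f = formula[1], formula[2]
        return all(holds(model, v, f) for v in model['rel'][i].get(w, ()))
    raise ValueError(op)

# Two worlds: in u the bus is crowded, in v it is not. Alice's (serial,
# transitive, Euclidean) accessibility relation points only to u, so she
# believes 'crowded' even at world v where the bus is actually empty.
M = {
    'val': {'u': {'crowded'}, 'v': set()},
    'rel': {'alice': {'u': {'u'}, 'v': {'u'}}},
}

assert holds(M, 'u', ('atom', 'crowded'))
assert not holds(M, 'v', ('atom', 'crowded'))
assert holds(M, 'v', ('B', 'alice', ('atom', 'crowded')))   # a false belief
```

The accessibility relation here satisfies the KD45 frame conditions, which is why Alice's belief set is consistent and introspective despite being false.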
As mentioned, we appeal to epistemic planning, which combines AI planning and reasoning over the beliefs and knowledge of agents, to formally specify EmP. We appeal to a syntactic approach to epistemic planning (as opposed to a semantic one such as [Bolander and Andersen2011]) and represent the initial Knowledge Base (KB) and other elements of the problem as arbitrary epistemic logic formulae. Further, we appeal to a multi-agent setting in order to represent the beliefs of the empathizer, the empathizee, and possibly other agents in the environment.
Definition 1 (Mep)
A multi-agent epistemic planning problem is a tuple $\langle \mathcal{D}, \mathcal{I}, \mathcal{G} \rangle$ where $\mathcal{D} = \langle P, A, AG \rangle$ is the MEP domain comprising sets of atoms $P$, actions $A$, and agents $AG$, together with the problem instance description comprising the initial KB, $\mathcal{I}$, and the goal condition $\mathcal{G}$.
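For concreteness, the tuple structure of Definition 1 can be transcribed directly into code. This is a sketch rather than a normative encoding: the field names are ours, the atom and action names are drawn loosely from the bus example, and the formula representation is left abstract.

```python
# Sketch of the MEP problem tuple from Definition 1.
from dataclasses import dataclass
from typing import Any, FrozenSet

@dataclass(frozen=True)
class MEPDomain:
    atoms: FrozenSet[str]       # the set of atoms
    actions: FrozenSet[str]     # deterministic and sensing actions
    agents: FrozenSet[str]      # the set of agents

@dataclass(frozen=True)
class MEPProblem:
    domain: MEPDomain
    initial_kb: Any             # an epistemic formula (the initial KB)
    goal: Any                   # an epistemic formula (the goal condition)

# Illustrative instance for the bus example.
bus_domain = MEPDomain(
    atoms=frozenset({'atWork', 'busEmpty'}),
    actions=frozenset({'takeBus', 'walk', 'inform'}),
    agents=frozenset({'Obs', 'Act'}),
)
problem = MEPProblem(bus_domain, initial_kb=None, goal='atWork')
```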
To define how an action updates the state of the world in the epistemic planning framework, we follow [Liu and Liu2018], who manipulate the KB with belief revision and update operators (e.g., [Alchourrón et al.1985]), $\circ$ and $\diamond$, respectively. A deterministic action $a$ is a pair $\langle \mathit{pre}(a), \mathit{eff}(a) \rangle$, where $\mathit{pre}(a)$ is called the precondition of $a$ and $\mathit{eff}(a)$ is a set of conditional effects $\gamma \triangleright \epsilon$, where $\gamma$ is the condition of a conditional effect, and $\epsilon$ is called the effect of a conditional effect. A sensing action $a$ is a triplet $\langle \mathit{pre}(a), \mathit{pos}(a), \mathit{neg}(a) \rangle$, where these are the precondition, the positive result, and the negative result, respectively. An action $a$ is executable wrt a KB $\phi$ if $\phi \models \mathit{pre}(a)$. Suppose some deterministic action $a$ is executable wrt a formula $\phi$. Huang et al. [2018] formally define the progression of $\phi$ wrt a deterministic action $a$ as follows: let $\gamma_1, \ldots, \gamma_k$ be all the conditions of conditional effects of $a$ s.t. $\phi \models \gamma_j$. Then $\phi'$, denoted by $\mathit{prog}(\phi, a)$, is a progression of $\phi$ wrt $a$ if $\phi' = \phi \diamond (\epsilon_1 \wedge \ldots \wedge \epsilon_k)$, where $\epsilon_j$ is the effect corresponding to the condition $\gamma_j$. The progression of $\phi$ wrt a sensing action $a$ and positive result (resp. negative) is defined as $\phi \circ \mathit{pos}(a)$ (resp. $\phi \circ \mathit{neg}(a)$). Let $\phi$ be a KB and $\sigma$ a sequence of actions. The progression of $\phi$ wrt $\sigma$ (with sensing results for sensing actions) is inductively defined as follows: $\mathit{prog}(\phi, \langle\rangle) = \phi$; $\mathit{prog}(\phi, a \cdot \sigma) = \mathit{prog}(\mathit{prog}(\phi, a), \sigma)$ if $\mathit{prog}(\phi, a)$ is defined, and undefined otherwise. $\langle\rangle$ is an empty sequence of actions. A solution to an MEP problem is an action tree branching on sensing results, such that the progression of the initial KB, $\mathcal{I}$, wrt each branch in the tree entails the goal $\mathcal{G}$.
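The progression machinery above can be illustrated under a drastic simplification of our own: if KBs are restricted to sets of propositional literals, the update operator reduces to overwriting the affected literals. The sketch below ignores epistemic formulae and the revision operator for sensing; full MEP progression requires genuine belief update and revision over arbitrary formulae.

```python
# Illustrative progression of a KB under a deterministic action, heavily
# simplified: a KB is a set of propositional literals (e.g. {'p', '-q'}),
# so entailment is membership and "update" overwrites the affected literals.

def entails(kb, lit):
    return lit in kb

def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def progress(kb, action):
    """action = (precondition literal or None, [(condition, effect), ...])."""
    pre, cond_effects = action
    if pre is not None and not entails(kb, pre):
        return None                      # action not executable wrt kb
    # collect the effects whose conditions the KB entails
    fired = [eff for cond, eff in cond_effects
             if cond is None or entails(kb, cond)]
    new_kb = set(kb)
    for eff in fired:                    # update: overwrite the literal
        new_kb.discard(negate(eff))
        new_kb.add(eff)
    return new_kb

# board: precondition -crowded; unconditional effect atWork.
board = ('-crowded', [(None, 'atWork')])
assert progress({'-crowded', '-atWork'}, board) == {'-crowded', 'atWork'}
assert progress({'crowded'}, board) is None      # not executable
```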
In this section, we discuss the notion of empathy, contrast it with sympathy, and discuss possible characteristics of assistive empathetic agents. As mentioned in Section 1, inspired by the rich history of empathy and its philosophical foundations, we define empathy as the ability to understand and share the thoughts and feelings of another. When taken to the extreme, where an empathizer knows every thought and feeling of an empathizee, the above definition embodies an idealized notion of empathy which necessitates omniscience. We later discuss more pragmatic notions of empathy, which only require an empathizer to be empathetic ‘enough’ and are defined wrt a specific task. To contrast, consider the Golden Rule, which asks us to treat others as we would like to be treated. The rule assumes similarity, implying that others would like to be treated the same way we would and, therefore, does not allow for the existence of multiple perspectives, thus leading to sympathetic behavior [Bennett1979]. Empathy, on the other hand, allows an empathetic agent to experience the world from the perspective of the empathizee. In Section 6, we conduct a study and compare an empathetic agent and a sympathetic agent.
As mentioned, cognitive empathy includes both cognitive and affective components. If Emily empathizes with Alice, she should be able to reason about Alice’s affective state. There is an extremely rich body of work on theories of affect (e.g., [Lazarus1966]) and on the incorporation of these theories into computational models of affect (e.g., [Gratch2000]). Further, previous work has formalized logics of emotion for intelligent agents. E.g., [Steunebrink et al.2007] defined a formal logic within a BDI framework, while [Lorini and Schwarzentruber2011] use a fragment of STIT logic to formalize counterfactual emotions such as regret and disappointment. Both of these logics appeal to a notion of epistemics to formalize complex emotions which are predicated on the beliefs of agents. Previous work has also quantified emotional intensity using fuzzy logic [El-Nasr et al.2000]. Thus, there exists a multitude of ways in which to integrate affect into our framework, and as part of future work we will experiment with various extensions of our approach.
Assistive Empathetic Agents
Before formalizing the notion of EmP, we discuss some of the properties we believe should characterize an assistive empathetic agent. More accurately, these are agents who wish to empathize with another, as empathy is often thought of as an ongoing process (or mountain climb [Ickes1997]). i. Need for Uncertainty We posit that an agent who wishes to empathize with another should be uncertain about the empathizee’s values (e.g., [Hadfield-Menell et al.2017]) and, importantly, about her true beliefs about the world. Since the quality of plans generated by an assistive empathetic agent is predicated on the veracity of the empathetic agent’s model of the empathizee, the empathetic agent should unceasingly strive to align itself better with her. ii. Benevolence An assistive empathetic agent should take it upon itself to benefit the empathizee in a way that is aligned with the latter’s values. iii. Non-Machiavellianism As mentioned, empathy can facilitate malicious intent. Thus, an assistive empathetic agent should not be Machiavellian, i.e., use deception, manipulation, and exploitation to benefit its interests. Interestingly, empathy and Machiavellianism have been found to be negatively correlated [Barnett and Thompson1985] - i.e., the will to manipulate is present in Machiavellians, but the means by which to do so are often not. Relatedly, a study conducted by [Chakraborti and Kambhampati2019] showed that human participants were, in general, positive towards an AI agent lying, if it was done for the ‘greater good’. Such questions should be explored further, e.g., when interacting with humans who wish themselves (or others) harm.
4 Empathetic Planning
In this section, we define the EmP problem and its solution. We will make use of the notation Obs (observer) for the empathizer and Act (actor) for the empathizee throughout the paper; both are assumed to be in the set of agents in all definitions. To be helpful, an assistive empathetic agent should be able to reason about the preferences
of the empathizee. In general, agent preferences can be specified in various ways, including a reward function over states, as is done in a Markov Decision Process (MDP), or by encoding preferences as part of a planning problem (e.g., [Baier and McIlraith2008]). Preferences (or rewards) can also be augmented with an emotional component which can encode an aversion to negative emotions, while optimizing for positive ones [Moerland et al.2018]. To simplify the exposition, we focus on plan costs as a proxy for agent preferences. We assume in this work that we have an accurate approximation of Act’s preferences over plans (in this work, the lower the cost, the better the plan), and do not focus on the problem of acquiring a faithful approximation of an agent’s preferences.
Definition 2 (EmP)
An empathetic planning problem is a tuple $\langle \mathcal{D}, \mathcal{I}, \mathcal{G}_{Act} \rangle$, where $\mathcal{D} = \langle P, A, AG \rangle$ is an MEP domain, $P$, $A$, and $AG$ are sets of atoms, actions, and agents, respectively, $\mathcal{I}$ is the initial KB, and $\mathcal{G}_{Act}$ is Act’s estimated goal.
To illustrate, we partially model the example from Section 1 as an EmP problem (with Alice as Act); e.g., Obs’s initial KB includes the atom travelsBetween(alternativeBus, home, work), encoding Obs’s belief that the alternative bus travels between Alice’s home and her work.
Given an EmP problem $\mathcal{E} = \langle \mathcal{D}, \mathcal{I}, \mathcal{G}_{Act} \rangle$, $\pi$ is an assistive solution to $\mathcal{E}$ iff $\pi$ solves the MEP problem $\langle \mathcal{D}, \mathcal{I}, \mathcal{G}_{Act} \rangle$ and $\pi \in \Pi^*$, where $\Pi^*$ is the set of optimal solutions for $\langle \mathcal{D}, \mathcal{I}, \mathcal{G}_{Act} \rangle$.
The solution to the EmP problem may vary if Obs is not trying to help Act achieve her goal (e.g., adversarial interaction as in [Freedman and Zilberstein2017]). Throughout the paper, we focus on finding assistive solutions.
We now formally define a pragmatic notion of empathy, defined wrt the EmP problem. To this end, we first discuss the notion of projection, whereby an agent can reason from another agent’s perspective. Let $\mathcal{D}^{Obs}$ and $\mathcal{I}^{Obs}$ be Obs’s MEP domain and initial KB. $\mathcal{D}^{Obs}$ and $\mathcal{I}^{Obs}$ encode Obs’s beliefs about the world, including, importantly, its beliefs about Act’s beliefs about the world. To enable Obs to reason from Act’s perspective, we wish to project $\mathcal{D}^{Obs}$ and $\mathcal{I}^{Obs}$ wrt Act. We define the projection of a formula $\varphi$ with respect to an agent $i$, denoted $\varphi_{\downarrow i}$. Given $\varphi$, and assuming $\varphi$ and $\psi$ are in NNF form, $\varphi_{\downarrow i} = \psi$ when $\varphi = B_i\psi$, and is undefined otherwise. Both [Muise et al.2015] and [Engesser et al.2017] similarly define projection operators, with Muise et al.’s syntactic approach being similar to ours, while Engesser et al. define a semantic equivalent. We project $\mathcal{D}^{Obs}$ and the closure of $\mathcal{I}^{Obs}$ (defined as the set of formulae entailed by $\mathcal{I}^{Obs}$) wrt Act. Note that the closure will be infinite but for any practical computation we will limit generation of the closure to the relevant subset. We project every formula in the closure of $\mathcal{I}^{Obs}$ and, for every action $a \in A$, we project every precondition, and for every conditional effect of $a$ we project the condition and effect (we project the positive and negative results for sensing actions). We refer to the results of the projection operation wrt $\mathcal{D}^{Obs}$ and $\mathcal{I}^{Obs}$ as $\mathcal{D}^{Obs}_{\downarrow Act}$ and $\mathcal{I}^{Obs}_{\downarrow Act}$, respectively.
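A minimal sketch of such a projection operator, with formulae encoded as nested Python tuples such as ('B', 'Act', ('atom', 'crowded')), is given below. The encoding is ours, and pushing the projection through conjunctions is an assumption of this sketch; formulae not rooted in the projected agent's belief modality are treated as undefined and dropped.

```python
# Sketch of projecting formulae wrt an agent i: strip the outermost B_i
# modality so the formula reads as i herself would hold it.

def project(formula, i):
    """Projection wrt agent i; returns None where undefined."""
    op = formula[0]
    if op == 'B' and formula[1] == i:
        return formula[2]                       # B_i f projected wrt i is f
    if op == 'and':                             # push through conjunction (our assumption)
        l, r = project(formula[1], i), project(formula[2], i)
        return ('and', l, r) if l is not None and r is not None else None
    return None                                 # undefined otherwise

def project_kb(kb, i):
    """Project every formula in a KB wrt i, discarding undefined ones."""
    return [g for g in (project(f, i) for f in kb) if g is not None]

kb_obs = [
    ('B', 'Act', ('atom', 'crowded')),               # Obs believes Act believes the bus is crowded
    ('atom', 'busEmpty'),                            # Obs's own objective belief: dropped
    ('B', 'Act', ('B', 'Act', ('atom', 'atHome'))),  # nested belief: outermost modality stripped
]
assert project_kb(kb_obs, 'Act') == [
    ('atom', 'crowded'),
    ('B', 'Act', ('atom', 'atHome')),
]
```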
Let $\mathcal{E}$ be an EmP problem. Let $\Pi^*_{Obs}$ and $\Pi^*_{Act}$ be the sets of optimal solutions among $\Pi_{Obs}$ and $\Pi_{Act}$, respectively, where $\Pi_{Obs}$ and $\Pi_{Act}$ are the sets of all solutions for $\langle \mathcal{D}^{Obs}_{\downarrow Act}, \mathcal{I}^{Obs}_{\downarrow Act}, \mathcal{G}_{Act} \rangle$ and $\langle \mathcal{D}^{Act}, \mathcal{I}^{Act}, \mathcal{G}_{Act} \rangle$, respectively. $\mathcal{D}^{Act}$ and $\mathcal{I}^{Act}$ are Act’s true MEP domain and initial KB, which are typically not accessible to Obs. We say that Obs is selectively task-empathetic wrt Act and $\mathcal{G}_{Act}$ iff $\Pi^*_{Act} \subseteq \Pi^*_{Obs}$. That is, Obs needs to be empathetic ‘enough’ in order to generate, when projecting to reason as Act, the optimal solutions that achieve $\mathcal{G}_{Act}$ and which Act can generate on her own (using only her beliefs and capabilities). Importantly, if Obs is selectively task-empathetic wrt $\mathcal{G}_{Act}$, she will generate a plan that is at least as ‘good’ as the best solution Act can generate by herself, using her own beliefs and capabilities.
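When the relevant plan sets are finite and enumerated, checking selective task-empathy amounts to a set-inclusion test over the optimal plan sets. A sketch, with purely illustrative plan names and costs:

```python
# Selective task-empathy as a set-inclusion check over optimal plan sets.

def optimal(plans):
    """Plans are (name, cost) pairs; return the set of minimum-cost plan names."""
    best = min(cost for _, cost in plans)
    return {name for name, cost in plans if cost == best}

plans_act = [('walk', 5)]                        # all that Act can find on her own
plans_obs_projected = [('walk', 5), ('jog', 5)]  # found by Obs reasoning as Act

# Obs is selectively task-empathetic: every optimal plan Act can generate
# herself is also generated by Obs when projecting into Act's perspective.
assert optimal(plans_act) <= optimal(plans_obs_projected)
```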
Obs’s solution could be better than Act’s solution if Obs has, for instance, additional knowledge or capabilities that Act lacks. In this case, a solution to the EmP problem, $\pi$, may solve $\langle \mathcal{D}^{Obs}, \mathcal{I}^{Obs}, \mathcal{G}_{Act} \rangle$ but not $\langle \mathcal{D}^{Act}, \mathcal{I}^{Act}, \mathcal{G}_{Act} \rangle$. Returning to our example, recall that Act avoids crowded buses and so will not board the crowded bus that goes from her home to work. Thus, the best plan Act can come up with is walking from home to work (assuming there is no alternative mode of transportation) since she believes the alternative bus is crowded. However, since Obs knows of an alternative bus that is relatively empty, a better plan would include Act taking the alternative bus. However, to make this plan executable in Act’s model, Obs must inform Act that the bus is empty (e.g., inform(Obs, Act, busInfo)). In this case, Act’s goal of getting to work, while ontic, requires epistemic actions such as informing Act of the bus’ status, and the underlying MEP framework can facilitate reasoning about the required epistemic action(s) to achieve the goal. To contrast, consider a sympathetic Obs, who assumes that Act shares its model of the world. The optimal plan that solves the sympathetic Obs’s problem thus consists of Act taking the crowded bus, which is a plan that is not executable in her true model, due to her panic disorder.
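The reasoning in the bus example can be reproduced with a toy generative planner. The sketch below is ours: action names and costs are illustrative, and Act's belief about the bus is flattened into an ordinary fact rather than an epistemic formula. Without the inform action, only walking achieves the goal; with it, the cheaper bus plan becomes executable.

```python
import heapq

# Toy uniform-cost planner for the bus example; states are frozensets of facts.

def best_plan(init, goal, actions):
    """Return a minimum-cost plan (action list) and its cost."""
    frontier = [(0, 0, init, [])]            # (cost, tiebreak, state, plan)
    best, tie = {init: 0}, 0
    while frontier:
        cost, _, state, plan = heapq.heappop(frontier)
        if goal <= state:
            return plan, cost
        for name, c, pre, add, dele in actions:
            if pre <= state:
                nxt = frozenset((state - dele) | add)
                if cost + c < best.get(nxt, float('inf')):
                    best[nxt] = cost + c
                    tie += 1
                    heapq.heappush(frontier, (cost + c, tie, nxt, plan + [name]))
    return None, float('inf')

ACTIONS = [
    # (name, cost, preconditions, add effects, delete effects); costs illustrative
    ('walk',    3, frozenset({'atHome'}), frozenset({'atWork'}), frozenset({'atHome'})),
    ('takeBus', 1, frozenset({'atHome', 'believesBusEmpty'}),
                   frozenset({'atWork'}), frozenset({'atHome'})),
    ('inform',  1, frozenset(), frozenset({'believesBusEmpty'}), frozenset()),
]
init, goal = frozenset({'atHome'}), frozenset({'atWork'})

# Act, reasoning alone, cannot make takeBus executable and must walk.
assert best_plan(init, goal, [a for a in ACTIONS if a[0] != 'inform']) == (['walk'], 3)
# Obs can first inform Act that the bus is empty, enabling the cheaper plan.
assert best_plan(init, goal, ACTIONS) == (['inform', 'takeBus'], 2)
```

The design point mirrors the text: the goal is ontic, but the optimal plan contains an epistemic (inform) action that changes what is executable in Act's model.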
In this section, we describe how to compute a solution to the EmP problem. In our computation and evaluation, we assume that Obs is omniscient. In this case, a solution to an MEP problem ‘collapses’ to a single path, as Obs has knowledge of the results of the sensing actions. We call the sequence of actions that defines the path a plan and say that its cost is the number of actions in the sequence.
Computing a Solution to the EmP Problem
A solution to the EmP problem, $\pi$, is a plan that belongs to the set of optimal plans, $\Pi^*$, which achieve $\mathcal{G}_{Act}$. While $\mathcal{G}_{Act}$ can be obtained in different ways (e.g., Act could explicitly tell Obs their goal or communicate it to a fellow agent), inferring $\mathcal{G}_{Act}$ via plan recognition (i.e., the problem of inferring an actor’s plan and goal given observations about its behavior) is of most interest to us. At a high level, by solving the recognition problem we can set $\mathcal{G}_{Act}$ to be the goal most likely being pursued by Act, given a sequence of observations. While we do not focus on the recognition component in this work, we have formalized the notion of empathetic plan recognition in an unpublished manuscript and proposed an integrative approach to empathetic planning and plan recognition. Further investigation is left to future work. To obtain a solution to the EmP problem, we solve the underlying MEP problem using an optimal off-the-shelf epistemic planner.
Complexity Epistemic planning has been shown to be computationally expensive (e.g., [Aucher and Bolander2013]). E.g., the encoding process in RP-MEP, the epistemic planner proposed by [Muise et al.2015] and used in this work, generates an exponential number of fluents when transforming the problem into a classical planning problem.
In our preliminary evaluation, we set out to (1) expose the diversity of tasks that can be captured by EmP; (2) demonstrate that existing epistemic planners can straightforwardly be used to solve EmP problems; and (3) evaluate the benefits of our approach as assessed by humans. To this end, we constructed and encoded a diversity of domains, ran them using an off-the-shelf epistemic planner, and conducted a study with human participants. For all of our experiments, we used the latest version of the epistemic planner RP-MEP [Muise et al.2015] with the Fast Downward planner [Helmert2006] with an admissible heuristic. Note that various epistemic planners impose various restrictions on domain modeling and plan generation, and we will experiment with different planners in the future. The various scenarios used in our simulations and study represent a diversity of everyday situations which illustrate the potential benefits of our approach. Further, the scenarios involve Alice (as Act) and her virtual assistant (as Obs), including suggestions (automatically generated solutions to an EmP problem) given to Alice by Obs. We then present the results of the study. We encode all scenarios as EmP problems, including the beliefs of Obs and Act and Act’s goal (see example in Section 4). We run RP-MEP once to compute a solution to each EmP problem.
Experimental Setup The study aims to evaluate both perceptions of the agent’s empathetic abilities and perceptions of the agent’s assistive capabilities, as assessed by humans. To test this, participants were presented with 12 planner-generated textual scenarios (some of which are presented below) of either empathetic or sympathetic agents and were asked to rate the following two claims pertaining to each scenario on a 5-point scale ranging from strongly disagree to strongly agree: (1) “The virtual assistant was able to successfully take Alice’s perspective” (reflecting our measure of participants’ perceptions of the agent’s empathetic abilities); and (2) “If I were Alice, I would find this virtual assistant helpful” (reflecting our measure of participants’ perceptions of the agent’s assistive capabilities). Scenarios with empathetic agents and scenarios with sympathetic agents were identical, except for the EmP solution generated by the agent. The sympathetic agent assumes that Alice shares its model of the world when computing a solution to the EmP problem. We had a total of 40 individuals (28 female) participate, ranging from 18 to 65 years old. Participants were recruited and completed the questionnaire via an online platform, and had no prior knowledge about the study.
Scenario 1 Alice is on a bus headed uptown but believes the bus is headed downtown. Obs suggests that Alice get off the bus and get on the correct one. As claimed in Section 1, empathetic agents are well-suited for distinguishing between the underlying intent of the behavior and the actual performed behavior. In this scenario, Obs can infer that Alice’s current plan (riding the wrong bus) will not achieve her goal of getting downtown, the underlying intent of her behavior.

Scenario 2 Alice is visiting her 91-year-old grandmother, Rose. Rose cannot hear very well but feels shame when asking people to speak up. Obs detects that Alice is speaking softly and sends her a discreet message suggesting she speak louder. Alice’s goal is for her grandmother to hear her. This goal could be achieved by Rose asking Alice to speak louder. However, this plan will have a higher cost (due to Rose’s aversion to asking people to speak louder) than the plan involving Obs sending a discreet message to Alice.

Scenario 3 This scenario models the example from Section 1, where Alice is trying to get to work and avoids crowded buses. Obs knows about a relatively empty bus, which would save her time, and suggests that Alice take it to work (in the sympathetic case, Obs suggests the crowded bus).

Scenario 4 Alice is trying to get to her friend’s house and there are two ways leading there - one is well-lit and populated (but slower) while the other is dark and deserted. Alice prefers to feel safe when walking outside after dark. Obs suggests Alice take the well-lit route (compared to the dark route in the sympathetic case).
Results Across the 9 scenarios containing empathetic agents, 87% of participants either strongly agreed or agreed with statement (1) and 82% either strongly agreed or agreed with statement (2). Results comparing participants’ perceptions of the agent’s empathetic abilities demonstrated a statistically significant difference in ratings of scenarios containing empathetic agents (M=4.21, SD=1.01), compared to sympathetic agents (M=1.72, SD=0.73), t(39) = 12.48, p < 0.01.
7 Discussion and Summary
In this work, we have introduced the notion of EmP which we formally specified by appealing to a rich epistemic logic framework and building upon epistemic planning paradigms. We proposed a computational realization of EmP as epistemic planning that enables the use of existing epistemic planners. We conducted a study which demonstrated the potential benefits of assistive empathetic agents in a diversity of scenarios as well as participants’ favorable perceptions of the empathetic agent’s assistive capabilities. It has been claimed recently that richer representational frameworks are needed to bridge some of the gaps in current virtual assistants - AI systems that frequently interact with human users [Cohen2018]. The scenarios we have presented illustrate precisely this need in the context of reasoning over the beliefs and goals of agents and our approach is well-suited to address such settings.
There is a diverse body of research related to the ideas presented here. Empathy has been incorporated into intelligent agent systems in various settings (e.g., [Aylett et al.2005]) but has not appealed to generative epistemic planning techniques. Work on BDI has explored notions of epistemic reasoning [Sindlar et al.2008] and affect [Steunebrink et al.2007], but has typically not appealed to the flexibility of AI planning to generate plans. There exist many avenues for future work such as the integration of affect into our framework. Future work could also appeal to related work which attempts to reconcile the human’s model [Chakraborti et al.2017] when providing the human with explanations (e.g., informing them of the bus’ status). Lastly, partially observable settings will be addressed, where the empathizer may have uncertainty regarding the mental state of the empathizee, as well as the environment in general.
The authors gratefully acknowledge funding from NSERC and thank their colleague Toryn Q. Klassen for helpful discussions.
- [Alchourrón et al.1985] Carlos E Alchourrón, Peter Gärdenfors, and David Makinson. On the logic of theory change: Partial meet contraction and revision functions. The journal of symbolic logic, 50(2):510–530, 1985.
- [Aucher and Bolander2013] Guillaume Aucher and Thomas Bolander. Undecidability in epistemic planning. In IJCAI, 2013.
- [Aylett et al.2005] Ruth S Aylett, Sandy Louchart, Joao Dias, Ana Paiva, and Marco Vala. FearNot!–An Experiment in Emergent Narrative. In International Workshop on Intelligent Virtual Agents, pages 305–316. Springer, 2005.
- [Baier and McIlraith2008] Jorge A. Baier and Sheila A. McIlraith. Planning with Preferences. AI Magazine, 29(4):25–36, 2008.
- [Baral et al.2017] Chitta Baral, Thomas Bolander, Hans van Ditmarsch, and Sheila McIlraith. Epistemic planning (dagstuhl seminar 17231). In Dagstuhl Reports, volume 7. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017.
- [Barnett and Thompson1985] Mark A Barnett and Shannon Thompson. The Role of Perspective Taking and Empathy in Children’s Machiavellianism, Prosocial Behavior, and Motive for Helping. The journal of genetic psychology, 146(3):295–305, 1985.
- [Bennett1979] Milton J Bennett. Overcoming the Golden Rule: Sympathy and Empathy. Annals of the International Communication Association, 3(1):407–422, 1979.
- [Bolander and Andersen2011] Thomas Bolander and Mikkel Birkegaard Andersen. Epistemic planning for single-and multi-agent systems. Journal of Applied Non-Classical Logics, 21(1):9–34, 2011.
- [Chakraborti and Kambhampati2019] Tathagata Chakraborti and Subbarao Kambhampati. (When) Can AI Bots Lie? In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. ACM, 2019.
- [Chakraborti et al.2017] Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, and Subbarao Kambhampati. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. arXiv preprint arXiv:1701.08317, 2017.
- [Cohen2018] Philip R Cohen. Back to the future for dialogue research: A position paper. arXiv preprint arXiv:1812.01144, 2018.
- [Coplan and Goldie2011] Amy Coplan and Peter Goldie. Empathy: Philosophical and Psychological Perspectives. Oxford University Press, 2011.
- [Davis2018] Mark H Davis. Empathy: A Social Psychological Approach. Routledge, 2018.
- [El-Nasr et al.2000] Magy Seif El-Nasr, John Yen, and Thomas R Ioerger. Flame—fuzzy Logic Adaptive Model of Emotions. Autonomous Agents and Multi-agent systems, 3(3):219–257, 2000.
- [Engesser et al.2017] Thorsten Engesser, Thomas Bolander, Robert Mattmüller, and Bernhard Nebel. Cooperative epistemic multi-agent planning for implicit coordination. arXiv preprint arXiv:1703.02196, 2017.
- [Fagin et al.2004] Ronald Fagin, Joseph Y Halpern, Yoram Moses, and Moshe Vardi. Reasoning about knowledge. MIT press, 2004.
- [Freedman and Zilberstein2017] Richard G Freedman and Shlomo Zilberstein. Integration of Planning with Recognition for Responsive Interaction Using Classical Planners. In AAAI, pages 4581–4588, 2017.
- [Gratch2000] Jonathan Gratch. Emile: Marshalling passions in training and education. In Proceedings of the fourth international conference on Autonomous agents, pages 325–332. ACM, 2000.
- [Hadfield-Menell et al.2017] Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. The Off-Switch Game. In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence, 2017.
- [Helmert2006] Malte Helmert. The Fast Downward Planning System. Journal of Artificial Intelligence Research, 26:191–246, 2006.
- [Ickes1997] William John Ickes. Empathic Accuracy. Guilford Press, 1997.
- [Lazarus1966] Richard S Lazarus. Psychological stress and the coping process. McGraw-Hill, 1966.
- [Liu and Liu2018] Qiang Liu and Yongmei Liu. Multi-agent Epistemic Planning with Common Knowledge. In IJCAI, pages 1912–1920, 2018.
- [Lorini and Schwarzentruber2011] Emiliano Lorini and François Schwarzentruber. A Logic for Reasoning about Counterfactual Emotions. Artificial Intelligence, 175(3-4):814–847, 2011.
- [McDuff and Czerwinski2018] Daniel McDuff and Mary Czerwinski. Designing Emotionally Sentient Agents. Communications of the ACM, 61(12):74–83, 2018.
- [McQuiggan and Lester2009] Scott W McQuiggan and James C Lester. Modelling affect expression and recognition in an interactive learning environment. International Journal of Learning Technology, 4(3-4):216–233, 2009.
- [Moerland et al.2018] Thomas M Moerland, Joost Broekens, and Catholijn M Jonker. Emotion in Reinforcement Learning Agents and Robots: A Survey. Machine Learning, 107(2):443–480, 2018.
- [Muise et al.2015] Christian J. Muise, Vaishak Belle, Paolo Felli, Sheila A. McIlraith, Tim Miller, Adrian R. Pearce, and Liz Sonenberg. Planning Over Multi-Agent Epistemic States: A Classical Planning Approach. In Proc. of the 29th National Conference on Artificial Intelligence (AAAI), pages 3327–3334, 2015.
- [Pynadath and Marsella2005] David V Pynadath and Stacy C Marsella. Psychsim: Modeling theory of mind with decision-theoretic agents. In IJCAI, volume 5, pages 1181–1186, 2005.
- [Shamay-Tsoory2011] Simone G Shamay-Tsoory. The Neural Bases for Empathy. The Neuroscientist, 17(1):18–24, 2011.
- [Sindlar et al.2008] Michal P Sindlar, Mehdi M Dastani, Frank Dignum, and John-Jules Ch Meyer. Mental state abduction of BDI-based agents. In International Workshop on Declarative Agent Languages and Technologies, pages 161–178. Springer, 2008.
- [Steunebrink et al.2007] Bas R Steunebrink, Mehdi Dastani, John-Jules Ch Meyer, et al. A logic of emotions for intelligent agents. In Proceedings of the National Conference on Artificial Intelligence, volume 22, page 142. AAAI Press, 2007.