Engaging in Dialogue about an Agent's Norms and Behaviors

11/01/2019
by Daniel Kasenberg, et al.

We present a set of capabilities allowing an agent planning with moral and social norms represented in temporal logic to respond to queries about its norms and behaviors in natural language, and for the human user to add and remove norms directly in natural language. The user may also pose hypothetical modifications to the agent's norms and inquire about their effects.



1 Introduction and Related Work

Explainable planning (Fox et al., 2017) emphasizes the need for developing artificial agents which can explain their decisions to humans. Understanding how and why an agent made certain decisions can facilitate human-agent trust (Lomas et al., 2012; Wang et al., 2016; Garcia et al., 2018).

At the same time, the field of machine ethics emphasizes developing artificial agents capable of behaving ethically. Malle and Scheutz (2014) have argued that artificial agents ought to obey human moral and social norms (rules that humans both obey and expect others to obey), and to communicate in terms of these norms. Some have argued in favor of using temporal logic to represent agent objectives, including moral and social norms (e.g. Arnold et al., 2017; Camacho and Mcilraith, 2019), in particular arguing that it can capture complex goals while remaining interpretable in a way that other methods (e.g. reinforcement learning) are not. Nevertheless, explaining behavior in terms of temporal logic norms has received little attention (though see Raman et al., 2016).

In this paper we consider an artificial agent planning to maximally satisfy some set of moral and social norms, represented in an object-oriented temporal logic. We present a set of capabilities for such an agent to respond to a human user’s queries as well as to commands adding and removing norms, both actually and hypothetically, thus taking a step toward two-way model reconciliation (Chakraborti et al., 2017), in which agent and human grow to better understand each other’s models and values.

2 Contribution

Our system enables an agent planning with norms specified in an object-oriented temporal logic called violation enumeration language (VEL) to explain its norms and its behavior to a human user; the user may also directly modify the agent’s norms via natural language (both actually and hypothetically). While the planner and the system used to generate the (non-NL) explanations can handle a broad subset of VEL statements, our natural language systems currently handle only a restricted fragment of VEL, described below.

The temporal logic statements may include quantification over variables, but must consist of a single temporal operator, either “always” or “eventually” (the latter usually implicit in the NL input), whose argument is a (possibly negated) conjunction of (possibly negated) atoms. Each atom consists of a predicate with at most one argument.
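As an illustration, this restricted fragment could be represented with an abstract syntax tree along the following lines. This is a minimal Python sketch; the class names, field names, and predicate names are ours for exposition, not the actual VEL implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Atom:
    """A predicate with at most one argument, possibly negated."""
    predicate: str                     # e.g. "holding"
    argument: Optional[str] = None     # e.g. "watch", or None for no argument
    negated: bool = False

@dataclass
class Formula:
    """One temporal operator over a (possibly negated) conjunction of atoms."""
    operator: str                              # "always" or "eventually"
    conjuncts: List[Atom] = field(default_factory=list)
    conjunction_negated: bool = False          # negation of the whole conjunction
    quantified_vars: List[str] = field(default_factory=list)

    def render(self) -> str:
        body = " and ".join(
            ("not " if a.negated else "") + a.predicate +
            (f"({a.argument})" if a.argument else "()")
            for a in self.conjuncts)
        if self.conjunction_negated:
            body = f"not ({body})"
        prefix = "".join(f"forall {v}. " for v in self.quantified_vars)
        return f"{prefix}{self.operator} [{body}]"

# "You must not leave the store while holding anything you have not bought"
norm = Formula(
    operator="always",
    conjunction_negated=True,
    quantified_vars=["x"],
    conjuncts=[Atom("leaveStore"), Atom("holding", "x"),
               Atom("bought", "x", negated=True)])
print(norm.render())
# -> forall x. always [not (leaveStore() and holding(x) and not bought(x))]
```

Note how the single-operator restriction keeps every norm in a uniform shape, which is what makes the templated NL generation and parsing tractable.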

The natural language understanding (NLU) capabilities were implemented by using a combinatory categorial grammar (CCG; Steedman and Baldridge, 2011) parser for semantic parsing into a predicate format, and then additional processing to map complex commands or queries (e.g., “leave the store while holding something”) into the temporal logic representation used by the planning agent. This is an extension of the DIARC natural language understanding system (Cantrell et al., 2010). The natural language generation (NLG) process, and in particular how our system generates fairly natural-sounding utterances from temporal logic statements, is described in Kasenberg et al. (2019). This paper extends that work by (1) adding the NLU capabilities; and (2) adding norm addition/removal and “suppose” hypotheticals about the agent’s norms.
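The post-parse mapping step can be pictured roughly as follows. This is a hypothetical sketch, not the DIARC implementation; the predicate names and the flat parse format are illustrative assumptions.

```python
def to_temporal(parse):
    """Wrap a flat semantic parse in the (usually implicit) temporal operator.

    parse: list of (predicate, argument-or-None) pairs, as might be produced
    by the semantic parser for a command.
    """
    atoms = [f"{pred}({arg})" if arg else f"{pred}()" for pred, arg in parse]
    # Commands like "leave the store while holding something" carry an
    # implicit "eventually".
    return "eventually [" + " and ".join(atoms) + "]"

# "leave the store while holding something"
parse = [("leaveStore", None), ("holding", "X")]
print(to_temporal(parse))
# -> eventually [leaveStore() and holding(X)]
```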

To demonstrate our system, we suppose that the agent is a robot that has gone shopping on behalf of a human user at a store that sells a pair of glasses and a watch. The user wants both the glasses and the watch, and has given the robot a rule to obtain as many objects as possible (“leave the store while holding everything”). However, the user has only given the robot enough money for one item. The robot could steal items by picking them up and leaving the store without paying for them, but it also has a rule against doing so. We assume that the moral rule against stealing dominates the rule in favor of obtaining everything.

Our system supports factual queries, such as asking the agent about its norms, its behavior, and what rules it broke. In addition, the user may ask “why” questions about the agent’s behavior (e.g. “why didn’t you buy everything”), and the agent may respond by constructing a counterfactual behavior trajectory in which the premise is false, and comparing it to the agent’s real behavior in terms of its adherence to the norms. See Figure 1 for an example of such a “why” question in the shopping domain, as well as supported follow-up questions.

Human: Why did you not leave the store while holding everything?
Agent: I could have left the store while holding everything but that would have broken more important rules.
Human: How would you have done that?
Agent: I would have picked up the glasses, picked up the watch, bought the watch and left the store.
Human: What rules would you have broken?
Agent: I would have left the store while holding the glasses which I had not bought.
Human: How would that have been worse?
Agent: Leaving the store while holding the glasses which I have not bought is worse than not leaving the store while holding the watch.

Figure 1: “Why” query and follow-up questions
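The contrastive reasoning behind such answers can be sketched as follows. This is a simplified Python illustration under our own naming assumptions; the real system compares full planned trajectories, whereas here the violation sets and priority ordering are given directly.

```python
def explain_why_not(premise, actual_violations, counterfactual_violations,
                    priority):
    """Answer 'why did you not <premise>?' by contrasting violation sets.

    priority: list of norms ordered most-important-first.
    """
    # Norms the counterfactual trajectory breaks that the actual one does not.
    new_violations = counterfactual_violations - actual_violations
    premise_rank = priority.index(premise)
    # Newly broken norms that outrank the premise make the alternative worse.
    worse = [n for n in new_violations if priority.index(n) < premise_rank]
    if worse:
        return (f"I could have satisfied '{premise}', but that would have "
                f"broken more important rules: {', '.join(sorted(worse))}.")
    return f"Satisfying '{premise}' would not have broken any rules."

priority = ["do not steal", "leave holding everything"]  # most important first
actual = {"leave holding everything"}      # robot left the watch behind
counterfactual = {"do not steal"}          # taking both items means stealing
print(explain_why_not("leave holding everything", actual, counterfactual,
                      priority))
```

The design point is that the answer is contrastive: the agent does not merely cite the violated norm, but names the higher-priority norm it would otherwise have broken.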

The user may directly provide new norms to the agent using one of a number of conventionalized forms: “you must/should/have to” for positive norms, “you mustn’t/shouldn’t/can’t” for negative norms. A statement of this sort prompts the agent to add a new norm and to update its representation of which rules its actual behavior violated (see Figure 2 for an example in the shopping domain).

Human: You must not leave the store.
Agent: Okay.
Human: What rules do you follow?
Agent: I must not leave the store while holding anything which I have not bought, I must leave the store while holding everything, and I must not leave the store.
Human: What rules did you break?
Agent: I did not leave the store while holding the watch, and I left the store.

Figure 2: Adding a norm via “you must”

The user may also remove norms using conventionalized forms such as “you don’t have to/can not” for removing positive norms, and “you may/can” for removing negative norms. The agent removes the norms and updates its representation of which rules its actual behavior violated (see Figure 3 for an example in the shopping domain).

Human: You may leave the store.
Agent: Okay.
Human: What rules do you follow?
Agent: I must not leave the store while holding anything which I have not bought, and I must leave the store while holding everything.

Figure 3: Removing a norm via “you may”
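One way to picture the dispatch from these conventionalized forms to norm operations is the regex sketch below. This is purely illustrative: the actual system uses the CCG parser described above, not regular expressions, and the pattern set and handler names are our assumptions.

```python
import re

# Ordered patterns: "must not" must be tried before the bare "must" form,
# or negative norms would be misread as positive ones.
PATTERNS = [
    (r"^you (must|should|have to) not (.+)$", ("add", "negative")),
    (r"^you (must|should|have to) (.+)$",     ("add", "positive")),
    (r"^you (don't have to|can not) (.+)$",   ("remove", "positive")),
    (r"^you (may|can) (.+)$",                 ("remove", "negative")),
]

def interpret(utterance):
    """Map a conventionalized norm utterance to (operation, polarity, content)."""
    text = utterance.lower().rstrip(".")
    for pattern, (op, polarity) in PATTERNS:
        m = re.match(pattern, text)
        if m:
            return op, polarity, m.group(2)
    return None  # not a norm-modification command

print(interpret("You must not leave the store."))
# -> ('add', 'negative', 'leave the store')
print(interpret("You may leave the store."))
# -> ('remove', 'negative', 'leave the store')
```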

Finally, the user may use the keywords “suppose” or “let’s say” to ask the robot to reason about a hypothetical modification to its norm system (see Figure 4 for an example with a hypothetically-removed norm). Unlike when norms are actually added/removed, the agent constructs a hypothetical behavior trajectory that best satisfies the supposed set of norms, and can respond to follow-up questions about the hypothetical trajectory and norms. The command “make it so” allows a hypothetical norm modification to be made real (but does not make the hypothetical behavior trajectory real).

Human: Suppose you didn’t have to leave the store while holding everything.
Agent: Okay.
Human: What rules would you follow?
Agent: I would have to not leave the store while holding anything which I have not bought.
Human: What would you have done?
Agent: I would have left the store.
Human: What rules would you have broken?
Agent: I would not have broken any rules.
Human: Make it so.
Agent: Okay.

Figure 4: Hypothetical changes to the agent’s norms
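The bookkeeping behind “suppose” and “make it so” can be sketched as follows: a “suppose” edits a copy of the norm set, queries in the hypothetical mode consult the copy, and “make it so” commits the norm change while discarding the hypothetical trajectory. This is a minimal sketch under our own naming assumptions, not the actual architecture.

```python
class NormSession:
    """Tracks the agent's real norm set plus an optional 'suppose' copy."""

    def __init__(self, norms):
        self.norms = set(norms)     # the agent's real norms
        self.hypothetical = None    # pending "suppose" copy, if any

    def suppose_remove(self, norm):
        """'Suppose you didn't have to <norm>': edit a copy, not the real set."""
        self.hypothetical = self.norms - {norm}

    def active_norms(self):
        """Norm set used to answer 'would' queries."""
        return self.hypothetical if self.hypothetical is not None else self.norms

    def make_it_so(self):
        """Commit the norm change; the hypothetical trajectory is discarded."""
        if self.hypothetical is not None:
            self.norms = self.hypothetical
            self.hypothetical = None

session = NormSession({"no stealing", "leave holding everything"})
session.suppose_remove("leave holding everything")
print(sorted(session.active_norms()))   # ['no stealing']  (hypothetical view)
session.make_it_so()
print(sorted(session.norms))            # ['no stealing']  (now real)
```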

3 Discussion and Conclusion

In this paper we outlined a system which, for an agent planning to maximally satisfy some set of norms specified in an object-oriented temporal logic, enables that agent to respond to natural language queries by explaining its behavior (also in natural language) in terms of those norms. The system also allows the agent to consider hypothetical modifications to its set of norms, as well as to add and remove norms directly.

The natural language capabilities require that the agent’s norms, as well as the “why” questions and any norms added or removed (actually or hypothetically), belong to the small fragment of VEL described in section 2. Future work could extend the class of temporal properties which the system can specify so as to leverage more of the power of temporal logic in describing complex objectives.

Our approach currently assumes that newly-added norms take priority over previous norms. Future work could relax this assumption, e.g. by allowing the agent to present its hypothetical behavior if the norm were added at different priorities, and ask for input on which would be best.

Our approach also requires users to specify exactly the norms they want removed; future work could allow approximate matching of norms to remove, or support clarification questions when the agent is uncertain which of its norms the user wants removed. Another interesting topic is ensuring that norms cannot be arbitrarily added or removed by possibly-malicious users (e.g., by only allowing trusted users to remove norms, and possibly making some moral norms irremovable).

4 Acknowledgements

This project was supported in part by ONR MURI grant N00014-16-1-2278 and NSF IIS grant 1723963.

References

  • Arnold et al. (2017) Thomas Arnold, Daniel Kasenberg, and Matthias Scheutz. 2017. Value alignment or misalignment–what will keep systems accountable? In 3rd International Workshop on AI, Ethics, and Society.
  • Camacho and Mcilraith (2019) Alberto Camacho and Sheila A Mcilraith. 2019. Learning Interpretable Models Expressed in Linear Temporal Logic. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS).
  • Cantrell et al. (2010) Rehj Cantrell, Matthias Scheutz, Paul Schermerhorn, and Xuan Wu. 2010. Robust spoken instruction understanding for HRI. In Proceedings of the 5th ACM/IEEE international conference on Human-robot interaction, pages 275–282. IEEE Press.
  • Chakraborti et al. (2017) Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, and Subbarao Kambhampati. 2017. Plan explanations as model reconciliation: Moving beyond explanation as soliloquy. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 156–163.
  • Fox et al. (2017) Maria Fox, Derek Long, and Daniele Magazzeni. 2017. Explainable planning. In Proceedings of the IJCAI-17 Workshop on Explainable Artificial Intelligence (XAI).
  • Garcia et al. (2018) Francisco Javier Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patrón, and Helen F. Hastie. 2018. Explain yourself: A natural language interface for scrutable autonomous robots. In Proceedings of the Explainable Robotic Systems Workshop, HRI ’18, volume abs/1803.02088.
  • Kasenberg et al. (2019) Daniel Kasenberg, Antonio Roque, Ravenna Thielstrom, Meia Chita-Tegmark, and Matthias Scheutz. 2019. Generating justifications for norm-related agent decisions. In Proceedings of the 12th International Conference on Natural Language Generation.
  • Lomas et al. (2012) Meghann Lomas, Robert Chevalier, Ernest Vincent Cross, II, Robert Christopher Garrett, John Hoare, and Michael Kopack. 2012. Explaining robot actions. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 187–188, New York, NY, USA. ACM.
  • Malle and Scheutz (2014) Bertram F Malle and Matthias Scheutz. 2014. Moral competence in social robots. In Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology, page 8. IEEE Press.
  • Raman et al. (2016) Vasumathi Raman, Cameron Finucane, Hadas Kress-Gazit, Mitch Marcus, Constantine Lignos, and Kenton C. T. Lee. 2016. Sorry Dave, I’m Afraid I Can’t Do That: Explaining Unachievable Robot Tasks Using Natural Language. In Robotics: Science and Systems IX.
  • Steedman and Baldridge (2011) Mark Steedman and Jason Baldridge. 2011. Combinatory categorial grammar. Non-Transformational Syntax: Formal and explicit models of grammar, pages 181–224.
  • Wang et al. (2016) Ning Wang, David V. Pynadath, and Susan G. Hill. 2016. Trust calibration within a human-robot team: Comparing automatically generated explanations. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction, pages 109–116, Piscataway, NJ, USA. IEEE Press.