JEDAI Explains Decision-Making AI

10/31/2021
by Trevor Angle, et al.
Arizona State University

This paper presents JEDAI, an AI system designed for outreach and educational efforts aimed at non-AI experts. JEDAI features a novel synthesis of research ideas from integrated task and motion planning and explainable AI. JEDAI helps users create high-level, intuitive plans while ensuring that they will be executable by the robot. It also provides users customized explanations about errors and helps improve their understanding of AI planning as well as the limits and capabilities of the underlying robot system.


1 Introduction

AI systems are increasingly common in everyday life, where they can be used by laypersons who may not understand how autonomous systems work or what they can and cannot do. This problem is particularly salient in the case of taskable AI systems, whose functionality can change based on the tasks they are performing. In this work, we present JEDAI, an AI system that can be used in outreach and educational efforts to help laypersons learn how to provide AI systems with new tasks, debug such systems, and understand their capabilities.

Three key technical challenges are addressed by the research ideas brought together in JEDAI: (i) abstracting a robot’s functionalities into high-level actions (capabilities) that the user can more easily understand; (ii) converting the user-understandable capabilities into low-level motion plans that a robot can execute; and (iii) explaining errors in a manner sensitive to the user’s current level of knowledge so as to make the robot’s capabilities and limitations clear.

Figure 1: JEDAI system with a Blockly-based plan creator on the left and a simulator window on the right.

JEDAI utilizes recent work in explainable AI and integrated task and motion planning to address these challenges, and provides a simple interface to support accessibility. Users first select a domain and an associated task, after which they can create a plan consisting of high-level actions (Fig. 1 left) to complete the task. The user puts together a plan in a drag-and-drop workspace built with the Blockly visual programming library Google (2017). JEDAI validates this plan using the Hierarchical Expertise Level Modeling (HELM) algorithm Sreedharan et al. (2018, 2021). If the plan contains any errors, HELM computes a user-specific explanation of why the plan would fail. JEDAI converts such explanations to natural language, thus helping to identify and fix any gaps in the user's understanding. On the other hand, if the user's plan is a correct solution to the current task, JEDAI uses the task and motion planner ATM-MDP Shah et al. (2020); Shah and Srivastava (2021) to convert the high-level plan that the user understands into a low-level motion plan that the robot can execute. The robot's execution of this low-level motion plan is shown to the user in a simulated environment (Fig. 1 right).
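To make this pipeline concrete, the following minimal sketch shows the top-level control flow. All function and attribute names here (validate, helm_explain, atm_mdp_refine, simulate, task.symbolic_model) are hypothetical stand-ins, not the actual APIs of the JEDAI codebase:

    def handle_submitted_plan(plan, task, user_model):
        """Validate a user's high-level plan, then explain or execute it."""
        # Check the plan against the task's symbolic model for unmet
        # goal conditions or violated action preconditions.
        errors = validate(plan, task.symbolic_model)
        if errors:
            # Ask HELM for a user-specific contrastive explanation and
            # render it as natural language in the interface.
            explanation = helm_explain(errors, plan, user_model)
            return render_natural_language(explanation)
        # Otherwise, refine the abstract plan into low-level motions with
        # ATM-MDP and play the result back in the simulator.
        motion_plan = atm_mdp_refine(plan, task.low_level_env)
        return simulate(motion_plan)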

Prior work on the topic includes approaches that solve the three technical challenges mentioned earlier in isolation. This includes tools for: providing visualizations or animations of standard planning domains Magnaguagno et al. (2017); Chen et al. (2019); Aguinaldo and Regli (2021); Dvorak et al. (2021); De Pellegrin and Petrick (2021); Roberts et al. (2021); making it easier for non-expert users to program robots with low-level actions Krishnamoorthy and Kapila (2016); Weintrop et al. (2018); Huang et al. (2020); Winterer et al. (2020); and generating explanations for plans provided by the users Grover et al. (2020); Karthik et al. (2021); Brandao et al. (2021). However, none of these works simultaneously makes instructions easier for the user, automatically computes user-aligned explanations, and works with real robots (or their simulators). JEDAI addresses all three challenges in tandem by using 3D simulations for domains with real robots and their actual constraints, and by providing personalized explanations that inform users of any mistakes they make while using the system.

A video demonstrating JEDAI in action is available at: https://youtu.be/mAkd2afZMJg. We now describe JEDAI's architecture.

2 Architecture

Figure 2: Architecture of JEDAI showing interaction between the four core components.

Fig. 2 shows the four core components of the JEDAI framework: (i) user interface, (ii) task and motion planner, (iii) personalized explanation generator, and (iv) natural language templates. We now describe each component in detail.

User interface   JEDAI's user interface (Fig. 1) is designed to be unintimidating and low-friction, which the Blockly visual programming interface facilitates. JEDAI generates a separate interconnecting block for each high-level action, and action parameters are picked from drop-down selection fields that display type-consistent options for each parameter. Users can drag and drop these actions and select different arguments to create a high-level plan.
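As an illustration, such blocks can be generated programmatically from an action's parameter signature. The sketch below emits a standard Blockly JSON block definition with one type-consistent drop-down per parameter; the generation function and the parameter schema are illustrative, not JEDAI's actual code:

    import json

    def blockly_block_for_action(name, params):
        """Build a Blockly JSON block definition for one high-level action.

        params maps parameter names to lists of type-consistent objects,
        e.g. {"obj": ["plank_1", "plank_2"],
              "gripper": ["gripper_left", "gripper_right"]}.
        """
        block = {
            "type": name,
            # e.g. "pickup %1 %2": one input slot per parameter.
            "message0": name + " " + " ".join(
                f"%{i + 1}" for i in range(len(params))),
            "args0": [
                {"type": "field_dropdown",
                 "name": p,
                 # Blockly drop-down options are [label, value] pairs.
                 "options": [[obj, obj] for obj in objs]}
                for p, objs in params.items()
            ],
            # Blocks snap together vertically to form a plan.
            "previousStatement": None,
            "nextStatement": None,
        }
        return json.dumps(block)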

Personalized explanation generator   Users will sometimes make mistakes when planning, either failing to achieve goal conditions or applying actions before their preconditions are satisfied. For inexperienced users in particular, these mistakes may stem from an incomplete understanding of the task's requirements or the robot's capabilities. JEDAI helps users grasp these details by providing explanations personalized to each user.

Explanations in the context of this work are of two types: (i) non-achieved goal conditions, and (ii) violated action preconditions. JEDAI validates the plan submitted by the user to check whether it achieves all goal conditions; if any goal condition is not achieved, the user is informed about it. To explain any unmet precondition of an action in the user's plan, JEDAI uses HELM to compute user-specific contrastive explanations. HELM does this by using the submitted plan to estimate the user's understanding of the robot's model, and then uses the estimated model to compute the personalized explanations. If the user's plan contains multiple errors, HELM generates an explanation for only one of them, since explaining more than one error at a time may be unnecessary and, in the worst case, may leave the user feeling overwhelmed Miller (2019). HELM selects which error to explain by optimizing a cost function that reflects the relative difficulty of understanding the concepts involved. This cost function can be changed to reflect different users' background knowledge.
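A minimal sketch of this selection step is shown below, assuming each error has already been associated with the set of concepts needed to explain it. The names and the cost representation are illustrative; HELM's actual computation operates over the estimated user model:

    def select_error_to_explain(errors, concept_cost):
        """Pick the single plan error that is cheapest to explain.

        errors: list of (failed_condition, concepts_involved) pairs.
        concept_cost: maps each concept to a difficulty score; tuning
        this mapping adapts the choice to a user's background knowledge.
        """
        def explanation_cost(error):
            _, concepts = error
            return sum(concept_cost.get(c, 1.0) for c in concepts)

        return min(errors, key=explanation_cost)

    # e.g. select_error_to_explain(
    #     [("holding(plank_1)", {"holding"}), ("clear(plank_2)", {"clear"})],
    #     {"holding": 2.0, "clear": 1.0})
    # returns ("clear(plank_2)", {"clear"})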

Natural language templates   Even with a user-friendly interface and personalized explanations for errors in abstract plans, the domain model syntax used for interaction with ATM-MDP presents a significant barrier to a non-expert user trying to understand the state of an environment and the capabilities of a robot. To alleviate this, JEDAI uses a simple strategy for generating natural language to make the presentation of goals, actions, and explanations more user-friendly.

This strategy rests on the idea that when the structure of the planning formalism is known, any action or proposition can be expressed in natural language by filling in a generic syntactic template. E.g., the action “pickup (plank_i gripper_left)” can be mapped to the natural language form “pick up plank_i with the left gripper”. These templates become more complex for conjunctions of atomic propositions, but the idea remains the same. Unfortunately, each new domain requires some amount of hand-written natural language. However, this is likely unavoidable in the absence of an AI system intelligent enough to autonomously form accurate but informal sentences from ATM-MDP's syntax.
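For illustration, such a template scheme can be as simple as the following sketch; the template strings and argument mappings are hypothetical and would be hand-written per domain, as noted above:

    # Hand-written templates, one per high-level action (illustrative).
    TEMPLATES = {
        "pickup": "pick up {0} with the {1}",
        "place": "place {0} on {1}",
    }

    # Friendlier names for low-level identifiers.
    ARG_NAMES = {"gripper_left": "left gripper",
                 "gripper_right": "right gripper"}

    def verbalize(action, args):
        """Map ('pickup', ['plank_i', 'gripper_left']) to
        'pick up plank_i with the left gripper'."""
        pretty = [ARG_NAMES.get(a, a) for a in args]
        return TEMPLATES[action].format(*pretty)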

Task and motion planner   JEDAI uses ATM-MDP to convert the high-level plan submitted by the user into sequences of low-level primitive actions that a robot can execute.

ATM-MDP uses sampling-based motion planners to provide a probabilistically complete approach to hierarchical planning. High-level plans are refined by computing feasible motion plans for each high-level action. If an action admits no valid refinement due to discrepancies between the symbolic state and the low-level environment, ATM-MDP reports the failure back to JEDAI. If all actions in the high-level plan are refined successfully, the plan's execution is shown using the OpenRAVE simulator Diankov and Kuffner (2008).
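Schematically, this refinement loop can be sketched as below; sample_motion_plan and apply_effects are hypothetical hooks standing in for ATM-MDP's actual interleaved search-and-refinement procedure:

    def refine_plan(high_level_plan, env, max_attempts=100):
        """Refine each abstract action into an executable trajectory.

        Returns (trajectories, None) on success, or (None, action) for
        the first action that could not be refined.
        """
        trajectories = []
        for action in high_level_plan:
            for _ in range(max_attempts):
                # Sampling-based motion planning for this action in the
                # current low-level environment.
                candidate = sample_motion_plan(action, env)
                if candidate is not None:
                    break
            else:
                # No valid refinement found, e.g. due to a discrepancy
                # between the symbolic state and the low-level environment;
                # report the failing action back to JEDAI.
                return None, action
            trajectories.append(candidate)
            env = apply_effects(env, action)  # advance the low-level state
        return trajectories, None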

Implementation   Any custom domain can be set up with JEDAI. We provide five built-in domains, each using either the YuMi ABB (2015) or the Fetch Wise et al. (2016) robot. Each domain contains a set of problems that users can attempt to solve, along with low-level environments corresponding to these problems. Source code for the framework, a preconfigured virtual machine, and documentation are available at: https://github.com/aair-lab/AAIR-JEDAI.

3 Conclusions and Future Work

We demonstrated JEDAI, a novel AI tool that helps people understand the capabilities of an arbitrary AI system and enables them to work with such systems. JEDAI converts the user's input plan into a low-level motion plan executable by the robot if the plan is correct, or explains any errors in the plan to the user if it is incorrect. JEDAI works with off-the-shelf task and motion planners and explanation generators. This structure allows it to scale automatically with improvements in either of these active research areas.

In the future, JEDAI can be extended to work as an interface that makes AI systems compliant with Level II assistive AI – systems that make it easy for operators to learn how to use them safely Srivastava (2021). Extending this tool to work in non-stationary settings, and generating natural language descriptions of predicates and actions autonomously, are a few promising directions for future work.

Acknowledgements

We thank Kiran Prasad and Kyle Atkinson for help with the implementation, Sarath Sreedharan for help with setting up HELM, and Sydney Wallace for feedback on user interface design. We also thank Chirav Dave, Rushang Karia, Judith Rosenke, and Amruta Tapadiya for their work on an earlier version of the system. This work was supported in part by the NSF grants IIS 1909370, IIS 1942856, IIS 1844325, and OIA 1936997.

References

  • ABB (2015) ABB YuMi - IRB 14000. https://new.abb.com/products/robotics/collaborative-robots/irb-14000-yumi.
  • A. Aguinaldo and W. Regli (2021) A Graphical Model-Based Representation for Classical AI Plans using Category Theory. In ICAPS 2021 Workshop on Explainable AI Planning.
  • M. Brandao, G. Canal, S. Krivić, and D. Magazzeni (2021) Towards Providing Explanations for Robot Motion Planning. In Proc. ICRA.
  • G. Chen, Y. Ding, H. Edwards, C. H. Chau, S. Hou, G. Johnson, M. Sharukh Syed, H. Tang, Y. Wu, Y. Yan, T. Gil, and L. Nir (2019) Planimation. In ICAPS 2019 System Demonstrations.
  • E. De Pellegrin and R. P. A. Petrick (2021) PDSim: Simulating Classical Planning Domains with the Unity Game Engine. In ICAPS 2021 System Demonstrations.
  • R. Diankov and J. Kuffner (2008) OpenRAVE: A Planning Architecture for Autonomous Robotics. Technical Report CMU-RI-TR-08-34, Carnegie Mellon University, Pittsburgh, PA, USA.
  • F. Dvorak, A. Agarwal, and N. Baklanov (2021) Visual Planning Domain Design for PDDL using Blockly. In ICAPS 2021 System Demonstrations.
  • Google (2017) Blockly. GitHub. https://github.com/google/blockly.
  • S. Grover, S. Sengupta, T. Chakraborti, A. P. Mishra, and S. Kambhampati (2020) RADAR: Automated Task Planning for Proactive Decision Support. Human–Computer Interaction 35 (5-6), pp. 387–412.
  • G. Huang, P. S. Rao, M. Wu, X. Qian, S. Y. Nof, K. Ramani, and A. J. Quinn (2020) Vipo: Spatial-Visual Programming with Functions for Robot-IoT Workflows. In Proc. CHI.
  • V. Karthik, S. Sreedharan, S. Sengupta, and S. Kambhampati (2021) RADAR-X: An Interactive Interface Pairing Contrastive Explanations with Revised Plan Suggestions. In Proc. AAAI (Demonstrations Track).
  • S. P. Krishnamoorthy and V. Kapila (2016) Using a Visual Programming Environment and Custom Robots to Learn C Programming and K-12 STEM Concepts. In Proceedings of the 6th Annual Conference on Creativity and Fabrication in Education.
  • M. C. Magnaguagno, R. Fraga Pereira, M. D. Móre, and F. R. Meneguzzi (2017) WEB PLANNER: A Tool to Develop Classical Planning Domains and Visualize Heuristic State-Space Search. In ICAPS 2017 Workshop on User Interfaces and Scheduling and Planning.
  • T. Miller (2019) Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence 267, pp. 1–38.
  • J. O. Roberts, G. Mastorakis, B. Lazaruk, S. Franco, A. A. Stokes, and S. Bernardini (2021) vPlanSim: An Open Source Graphical Interface for the Visualisation and Simulation of AI Systems. In Proc. ICAPS.
  • N. Shah, D. Kala Vasudevan, K. Kumar, P. Kamojjhala, and S. Srivastava (2020) Anytime Integrated Task and Motion Policies for Stochastic Environments. In Proc. ICRA.
  • N. Shah and S. Srivastava (2021) Anytime Stochastic Task and Motion Policies. arXiv preprint arXiv:2108.12537.
  • S. Sreedharan, S. Srivastava, and S. Kambhampati (2018) Hierarchical Expertise Level Modeling for User-Specific Contrastive Explanations. In Proc. IJCAI.
  • S. Sreedharan, S. Srivastava, and S. Kambhampati (2021) Using State Abstractions to Compute Personalized Contrastive Explanations for AI Agent Behavior. Artificial Intelligence 301, pp. 103570.
  • S. Srivastava (2021) Unifying Principles and Metrics for Safe and Assistive AI. In Proc. AAAI.
  • D. Weintrop, A. Afzal, J. Salac, P. Francis, B. Li, D. C. Shepherd, and D. Franklin (2018) Evaluating CoBlox: A Comparative Study of Robotics Programming Environments for Adult Novices. In Proc. CHI.
  • M. Winterer, C. Salomon, J. Köberle, R. Ramler, and M. Schittengruber (2020) An Expert Review on the Applicability of Blockly for Industrial Robot Programming. In Proceedings of the 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA).
  • M. Wise, M. Ferguson, D. King, E. Diehr, and D. Dymesich (2016) Fetch and Freight: Standard Platforms for Service Robot Applications. In IJCAI 2016 Workshop on Autonomous Mobile Service Robots.