How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games

11/19/2017
by Jonathan Dodge, et al.

How should an AI-based explanation system explain an agent's complex behavior to ordinary end users who have no background in AI? Answering this question is an active research area, for if an AI-based explanation system could effectively explain intelligent agents' behavior, it could enable end users to understand, assess, and appropriately trust (or distrust) the agents attempting to help them. To provide insights into this question, we turned to human expert explainers in the real-time strategy domain, "shoutcasters," to understand (1) how they foraged in an evolving strategy game in real time, (2) how they assessed the players' behaviors, and (3) how they constructed pertinent and timely explanations out of their insights and delivered them to their audience. The results provided insights into shoutcasters' foraging strategies for gleaning the information necessary to assess and explain the players; a characterization of the types of implicit questions shoutcasters answered; and implications for creating explanations by using the patterns these expert explainers employed.


