Tim Miller

  • Explanation in Artificial Intelligence: Insights from the Social Sciences

    There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that, if these techniques are to succeed, the explanations they generate should have a structure that humans accept. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings and discusses ways that these can be infused into work on explainable artificial intelligence.

    06/22/2017 ∙ by Tim Miller, et al.

  • Social planning for social HRI

    Making a computational agent 'social' has implications for how it perceives itself and the environment in which it is situated, including the ability to recognise the behaviours of others. We point to recent work on social planning, i.e. planning in settings where the social context is relevant in the assessment of the beliefs and capabilities of others, and in making appropriate choices of what to do next.

    02/21/2016 ∙ by Liz Sonenberg, et al.

  • Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences

    In his seminal book 'The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity' [2004, Sams Indianapolis, IN, USA], Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge of design decisions, rather than interaction designers. As a result, programmers design software for themselves, rather than for their target audience, a phenomenon he refers to as the 'inmates running the asylum'. This paper argues that explainable AI risks a similar fate. While the re-emergence of explainable AI is positive, this paper argues that most of us, as AI researchers, are building explanatory agents for ourselves, rather than for the intended users. But explainable AI is more likely to succeed if researchers and practitioners understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science, and if evaluation of these models is focused more on people than on technology. From a light scan of the literature, we demonstrate that there is considerable scope to infuse more results from the social and behavioural sciences into explainable AI, and present some key results from these fields that are relevant to explainable AI.

    12/02/2017 ∙ by Tim Miller, et al.

  • Towards a Grounded Dialog Model for Explainable Artificial Intelligence

    To build trust with their users, Explainable Artificial Intelligence (XAI) systems need to include an explanation model that can communicate the internal decisions, behaviours and actions to the interacting humans. Successful explanation involves both cognitive and social processes. In this paper we focus on the challenge of meaningful interaction between an explainer and an explainee, and investigate the structural aspects of an explanation in order to propose a human explanation dialog model. We follow a bottom-up approach to derive the model by analysing transcripts of 398 explanation dialogs of different types. We use grounded theory to code the transcripts and identify the key components of which an explanation dialog consists. We carry out further analysis to identify the relationships between components, as well as the sequences and cycles that occur in a dialog. We present a generalized state model obtained by the analysis and compare it with an existing conceptual dialog model of explanation. (An illustrative state-model sketch follows this entry.)

    06/21/2018 ∙ by Prashan Madumal, et al.
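
    As a rough illustration of what a generalized state model of explanation dialogs can look like in code, here is a minimal Python sketch. The state names and transitions below are invented for illustration; they are not the components identified in the paper, and any real model would be derived from the transcript analysis the abstract describes.

        # A toy finite-state model of an explanation dialog.  State names and
        # transitions are illustrative assumptions, not the paper's components.
        TRANSITIONS = {
            "QUESTION":           {"EXPLANATION"},
            "EXPLANATION":        {"FOLLOW_UP_QUESTION", "ACKNOWLEDGEMENT"},
            "FOLLOW_UP_QUESTION": {"EXPLANATION"},        # cycle back to explanation
            "ACKNOWLEDGEMENT":    {"QUESTION", "END"},    # new topic, or close
        }

        def is_valid_dialog(moves):
            """Check that a sequence of dialog moves follows the toy model."""
            if not moves or moves[0] != "QUESTION":
                return False
            for current, nxt in zip(moves, moves[1:]):
                if nxt not in TRANSITIONS.get(current, set()):
                    return False
            return moves[-1] in ("ACKNOWLEDGEMENT", "END")

        trace = ["QUESTION", "EXPLANATION", "FOLLOW_UP_QUESTION",
                 "EXPLANATION", "ACKNOWLEDGEMENT", "END"]
        print(is_valid_dialog(trace))  # True under this toy model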

  • Contrastive Explanation: A Structural-Model Approach

    The topic of causal explanation in artificial intelligence has gathered interest in recent years as researchers and practitioners aim to increase trust and understanding of intelligent decision-making and action. While different sub-fields have looked into this problem with a sub-field-specific view, there are few models that aim to capture explanation in AI more generally. One general model is based on structural causal models. It defines an explanation as a fact that, if found to be true, would constitute an actual cause of a specific event. However, research in philosophy and the social sciences shows that explanations are contrastive: when people ask for an explanation of an event (the fact), they are, sometimes implicitly, asking for an explanation relative to some contrast case; that is, "Why P rather than Q?". In this paper, we extend the structural causal model approach to define two complementary notions of contrastive explanation, and demonstrate them on two classical AI problems: classification and planning. We believe that this model can be used to define contrastive explanation of other sub-field-specific AI models. (An illustrative sketch of the contrastive question follows this entry.)

    11/07/2018 ∙ by Tim Miller, et al.
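
    To make the contrastive question concrete in the classification setting mentioned above, here is a minimal Python sketch. It does not reproduce the paper's structural-model definitions: it simply searches for a smallest set of feature changes under which a toy, invented classifier would output the foil Q instead of the fact P, and reports the differing features as the answer to "Why P rather than Q?".

        from itertools import combinations, product

        # Toy loan classifier; the rules, features, and values are invented.
        def classify(x):
            if x["income"] == "high" and x["credit"] == "good":
                return "approve"
            return "reject"

        def contrastive_explanation(x, foil, domains):
            """Find a minimal set of feature changes that yields the foil."""
            features = list(domains)
            for size in range(1, len(features) + 1):
                for subset in combinations(features, size):
                    for values in product(*(domains[f] for f in subset)):
                        candidate = dict(x, **dict(zip(subset, values)))
                        if classify(candidate) == foil:
                            return {f: (x[f], v) for f, v in zip(subset, values)
                                    if x[f] != v}
            return None

        x = {"income": "low", "credit": "good"}
        domains = {"income": ["low", "high"], "credit": ["bad", "good"]}
        # "Why reject rather than approve?" -> {'income': ('low', 'high')}
        print(contrastive_explanation(x, foil="approve", domains=domains))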

  • Emotionalism within People-Oriented Software Design

    In designing most software applications, much effort is placed upon the functional goals, which make a software system useful. However, the failure to consider emotional goals, which make a software system pleasurable to use, can result in disappointment and system rejection even if utilitarian goals are well implemented. Although several studies have emphasized the importance of people's emotional goals in developing software, there is little advice on how to address these goals in the software system development process. This paper proposes a theoretically sound and practical method by combining the theories and techniques of software engineering, requirements engineering, and decision making. The outcome of this study is the Emotional Goal Systematic Analysis Technique (EG-SAT), which facilitates the process of finding software system capabilities to address emotional goals in software design. EG-SAT is an easy-to-learn and easy-to-use technique that helps analysts gain insights into how to address people's emotional goals. To demonstrate the method in use, a two-part evaluation is conducted. First, EG-SAT is used to analyse the emotional goals of potential users of a mobile learning application that provides information about low carbon living for tradespeople and professionals in the building industry in Australia. The results of using EG-SAT in this case study are compared with a professionally developed baseline. Second, we ran a semi-controlled experiment in which 12 participants were asked to apply EG-SAT and another technique to part of our case study. The outcomes show that EG-SAT helped participants both analyse emotional goals and gain valuable insights into the functional and non-functional goals for addressing people's emotional goals.

    10/30/2018 ∙ by Mohammadhossein Sherkat, et al.

  • What you get is what you see: Decomposing Epistemic Planning using Functional STRIPS

    Epistemic planning (planning with knowledge and belief) is essential in many multi-agent and human-agent interaction domains. Most state-of-the-art epistemic planners solve this problem by compiling to propositional classical planning, for example, by generating all possible knowledge atoms or compiling epistemic formulae to normal forms. However, these methods become computationally infeasible as problems grow. In this paper, we decompose epistemic planning by delegating reasoning about epistemic formulae to an external solver. We do this by modelling the problem using functional STRIPS, which is more expressive than standard STRIPS and supports the use of external, black-box functions within action models. Exploiting recent work that demonstrates the relationship between what an agent 'sees' and what it knows, we allow modellers to provide new implementations of external functions. These define what agents see in their environment, allowing new epistemic logics to be defined without changing the planner. As a result, this increases the capability and flexibility of the epistemic model itself and avoids the exponential pre-compilation step. We ran evaluations on well-known epistemic planning benchmarks to compare with an existing state-of-the-art planner, and on new scenarios based on different external functions. The results show that our planner scales significantly better than the state-of-the-art planner against which we compared, and can express problems more succinctly. (An illustrative 'seeing implies knowing' sketch follows this entry.)

    03/28/2019 ∙ by Guang Hu, et al.
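
    To illustrate the 'an agent knows what it sees' idea that the external functions encode, here is a minimal Python sketch. The toy corridor world, the sight range, and the function signatures are invented for illustration; they are not the planner's actual external-function interface, which the abstract describes as plugging into functional STRIPS action models.

        # Toy world: agents and items sit at integer positions in a corridor,
        # and an agent sees anything within a fixed range.
        SIGHT_RANGE = 2

        def sees(agent_pos, item_pos):
            """External 'sees' function: true if the item is within range."""
            return abs(agent_pos - item_pos) <= SIGHT_RANGE

        def knowledge_atoms(agents, items):
            """Derive knows(agent, item) atoms from the sees relation."""
            return {(a, i) for a, ap in agents.items()
                           for i, ip in items.items() if sees(ap, ip)}

        agents = {"alice": 0, "bob": 5}
        items = {"key": 1, "coin": 4}
        print(sorted(knowledge_atoms(agents, items)))
        # [('alice', 'key'), ('bob', 'coin')]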

  • A Grounded Interaction Protocol for Explainable Artificial Intelligence

    Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate the internal decisions, behaviours and actions to the interacting humans. Successful explanation involves both cognitive and social processes. In this paper we focus on the challenge of meaningful interaction between an explainer and an explainee, and investigate the structural aspects of an interactive explanation in order to propose an interaction protocol. We follow a bottom-up approach to derive the model by analysing 398 explanation dialogues across different dialogue types. We use grounded theory to code the transcripts and identify the key components of an explanation dialogue. We formalize the model using the agent dialogue framework (ADF) as a new dialogue type and then evaluate it in a human-agent interaction study comprising 101 dialogues from 14 participants. Our results show that the proposed model can closely follow the explanation dialogues of human-agent conversations.

    03/05/2019 ∙ by Prashan Madumal, et al.

  • Explainable Reinforcement Learning Through a Causal Lens

    Prevalent theories in cognitive science propose that humans understand and represent knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events and use these to explain why new events happen. In this paper, we use causal models to derive causal explanations of the behaviour of reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We report on a study with 120 participants who observe agents playing a real-time strategy game (StarCraft II) and then receive explanations of the agents' behaviour. We investigated: 1) participants' understanding gained from explanations, measured through task prediction; 2) explanation satisfaction; and 3) trust. Our results show that causal-model explanations perform better on these measures than two other baseline explanation models. (An illustrative counterfactual-explanation sketch follows this entry.)

    05/27/2019 ∙ by Prashan Madumal, et al.
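
    As a rough illustration of counterfactual analysis over a causal model of agent behaviour, here is a minimal Python sketch. The variables and structural equations below are hand-written and invented for illustration; in the approach the abstract describes, the causal model is learned during reinforcement learning rather than specified by hand.

        # Toy structural causal model relating an agent's action choice to
        # downstream variables.  All variables and equations are invented.
        def simulate(build_workers):
            """Structural equations: action -> resources -> army -> win chance."""
            resources = 10 + (5 if build_workers else 0)
            army = resources // 4
            win_prob = min(1.0, 0.2 + 0.1 * army)
            return {"resources": resources, "army": army, "win_prob": win_prob}

        def why_explanation(action):
            """Contrast the factual outcome with the counterfactual one."""
            factual = simulate(action)
            counterfactual = simulate(not action)
            return (f"Building workers={action} leads to an army of {factual['army']} "
                    f"and a win probability of {factual['win_prob']:.1f}; had the agent "
                    f"chosen otherwise, the army would be {counterfactual['army']} and "
                    f"the win probability {counterfactual['win_prob']:.1f}.")

        print(why_explanation(True))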