Explainable Agents Through Social Cues: A Review

03/11/2020 · by Sebastian Wallkotter, et al.

How to provide explanations has experienced a surge of interest in Human-Robot Interaction (HRI) over the last three years. In HRI this is known as explainability, expressivity, transparency, or sometimes legibility, and embodied agents are unique in that they offer a wide array of modalities to communicate this information thanks to their embodiment. Responding to this surge of interest, we review the existing literature on explainability and organize it by (1) providing an overview of existing definitions, (2) showing how explainability is implemented and how it exploits different modalities, and (3) showing how the impact of explainability is measured. Additionally, we present a list of open questions and challenges that highlight areas requiring further investigation by the community. This provides the interested scholar with an overview of the current state of the art.




1. Introduction

Mr. Smith is the father of two children and works as a sales associate in a large retail store. After five years of driving his car to work every day, he decides to upgrade to a more recent and more efficient model. So he makes an appointment and goes to his bank to get pre-approved for a car loan. To his big surprise - and to that of the bank advisor - he is denied the loan. The advisor's explanation: "I'm sorry Sir, the system denied your application. There is, unfortunately, nothing I can do. I am very sorry."

Situations similar to this fictional scenario have caught a lot of media attention recently under the topic of bias in AI. While part of the problem can be attributed to the unintentional presence of non-causal correlations in the training data, another part is the amount of information given to the users of such AI systems. The above situation could be softened if the system could provide an explanation for why it decided to reject the car loan. The currently emerging area of explainable AI (XAI) seeks to investigate such explanations, and asks how we can understand decision-making systems. The importance of this field is also acknowledged by the EU, which stresses that explainability - sometimes called transparency - is an essential component of trustworthy AI (High-Level Expert Group on Artificial Intelligence, 2019).


The idea of explaining a system's actions (justification), however, is not new. Starting in the late 1970s, scholars began to investigate how expert systems (Warren, 1977; Scott et al., 1977; van Melle, 1978) or semantic nets (Wick and Thompson, 1992; Georgeff et al., 1999), which use classical search methods, encode their explanations in human-readable form. Two prominent examples of such systems are MYCIN (van Melle, 1978), a knowledge-based consultation program for infectious disease diagnosis, and PROLOG, a logic-based programming language. MYCIN is particularly interesting because it already explored the idea of interactive design, allowing both inquiry about a decision in the form of specific questions and rule extraction based on previous decision criteria (van Melle, 1978).

The community around Human-Robot Interaction (HRI) is also interested in this ability to explain, and scholars have been very active in this domain over the past few years (see figure 1). Here the aim is to explain why an embodied agent executed an action in the past (justification), explain the robot's view of the world in the present (internal state), and explain the robot's plans for the future (intent). In HRI this is known as explainability, expressivity, or transparency, and is unique because robots offer a larger array of modalities to communicate this information thanks to their embodiment.

In our work, we want to show how these unique modalities can be exploited for explainability and highlight which aspects require further inquiry. As such, we contribute to the field of explainability and transparency in HRI with a review of the existing literature that organizes it by (1) providing an overview of existing definitions, (2) showing how explainability is implemented and how it exploits different modalities, and (3) showing how the effect of explainability is measured. This provides interested scholars with an overview of the current state of the art. Additionally, we present a list of open questions and challenges that highlight areas requiring further investigation by the community.

2. Current investigations into explainability in human-agent systems

Other authors have written about explainability in human-agent systems, stated their position, and reviewed some of the existing literature (Spagnolli et al., 2018; Theodorou et al., 2016; Lyons, 2013; Jacucci et al., 2014; Fischer et al., 2018; Felzmann et al., 2019).

Doshi-Velez and Kim (2017) and Lipton (2018) sought to refine the discourse on interpretability by identifying the desiderata and methods of interpretability research. Their research focused on the interpretation of machine learning systems from a human perspective and identified trust, causality, transferability, informativeness, and fair and ethical decision-making as key aspects.

Rosenfeld and Richardson (2019) provided a notation for defining explainability in relation to related terms such as interpretability, transparency, explicitness, and faithfulness. Their taxonomy encompasses the motivation behind the need for explainability (not helpful, beneficial, critical), the importance of identifying the target of the explanation (regular user, expert user, external entity), when to provide the explanation, how to measure its effectiveness, and which interpretation of the algorithm has been used. Track et al. (2019) presented a systematic review and clustered the results with respect to the demographics, the application scenarios, the intended purpose of an explanation, and whether the studies were grounded in a social science or psychological background. Their review summarised the methods used to present explanations to the user, the dynamics of these methods (context-aware, user-aware, both, or none), and the types of explanation modality. Alonso and De La Puente (2018) proposed a review of system transparency in a shared autonomy framework, stressing the role of transparency in flexible and efficient human-robot collaboration. Their review underlines how transparency should vary in relation to the level of system autonomy, and how transparency mechanisms can be realised through an explanation, a property of an interface, or a mechanical feature.

2.1. Methodology

For this review, we chose to use a keyword-based search of the SCOPUS (https://www.scopus.com/) database to identify relevant literature, as this method makes our search reproducible. It also allows us to take a systematic approach to corpus generation.

First, we identified a set of relevant papers in an unstructured manner based on previous knowledge of the area. From each paper, we extracted both the indexed and the author keywords, and rank-ordered each keyword by occurrence. Using this method, we identified key search terms such as human-robot interaction, transparent, interpretable, explainable, or planning.

Keyword | Description | Search Term
Human Involvement | exclude papers without human involvement, e.g. position papers or agent-agent interaction | ( "human-robot" OR "child-robot" OR "human-machine" )
Transparency | - | ( transparen* OR interpretabl* OR explainabl* )
Transparency II | - | ( obser* OR legib* OR visualiz* OR ( commun* AND "non-verbal" ) )
Autonomy | exclude papers that do not use an autonomous agent | ( learn* OR plan* OR reason* OR navigat* OR adapt* OR personalis* OR decision-making OR autonomous )
Social Cues | exclude papers without a social interaction between human and agent | ( social OR interact* OR collab* OR shared OR teamwork OR ( model* AND ( mental OR mutual ) ) )
Social Agent | exclude papers that do not use a social agent | ( agent* OR robot* OR machine* OR system* )
Recency | only consider the last 10 years | ( LIMIT-TO ( PUBYEAR , 2019 ) OR … OR LIMIT-TO ( PUBYEAR , 2009 ) )
Subject Area | only consider papers from computer science, engineering, math, psychology, social sciences, or neuroscience | ( LIMIT-TO ( SUBJAREA , "COMP" ) OR LIMIT-TO ( SUBJAREA , "ENGI" ) OR LIMIT-TO ( SUBJAREA , "MATH" ) OR LIMIT-TO ( SUBJAREA , "SOCI" ) OR LIMIT-TO ( SUBJAREA , "PSYC" ) OR LIMIT-TO ( SUBJAREA , "NEUR" ) )
Table 1. Inclusion Criteria and Search Terms

Next, we chose a set of inclusion and exclusion criteria to focus the review on interaction scenarios that involve humans and embodied social agents (see table 1). For each criterion we designed a search string, and then concatenated them all to obtain a final query. With this query, we searched SCOPUS, obtained a list of papers, and manually selected relevant papers from this list. To ensure the reliability of the selection, the two main authors rated the inclusion of each paper independently. If both agreed on the relevance, we treated the paper as relevant; similarly, if both agreed it was not relevant, we excluded it. For papers with differing opinions, we discussed their relevance and made a joint decision regarding the paper's inclusion. This left us with the papers for the final review. Visualizing the year of publication for these core papers provides additional evidence for the increasing interest in this area (figure 1).
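For illustration, the concatenation of the per-criterion search strings into a single boolean query can be sketched as follows. The criterion strings are taken from table 1; the dictionary keys and the build_query helper are our own naming, the plain AND-concatenation is our reading of the described procedure, and the LIMIT-TO clauses (recency, subject area) are omitted since SCOPUS appends those separately:

```python
# Build the final SCOPUS query by AND-concatenating the per-criterion
# search strings from table 1 (LIMIT-TO clauses omitted for brevity).
criteria = {
    "human_involvement": '( "human-robot" OR "child-robot" OR "human-machine" )',
    "transparency": "( transparen* OR interpretabl* OR explainabl* )",
    "transparency_ii": '( obser* OR legib* OR visualiz* OR ( commun* AND "non-verbal" ) )',
    "autonomy": "( learn* OR plan* OR reason* OR navigat* OR adapt* OR "
                "personalis* OR decision-making OR autonomous )",
    "social_cues": "( social OR interact* OR collab* OR shared OR teamwork "
                   "OR ( model* AND ( mental OR mutual ) ) )",
    "social_agent": "( agent* OR robot* OR machine* OR system* )",
}

def build_query(criteria):
    """AND-join all criterion strings into one boolean query string."""
    return " AND ".join(criteria.values())

print(build_query(criteria))
```

Keeping the criteria in a dictionary makes it easy to rerun the search with one criterion removed, which is useful when checking how each criterion narrows the corpus.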

Figure 1. Histogram of the core papers identified for this review by year of publication.

While these papers form the core of our search, we extended this list individually for each section of this work by looking at the references and citations of the papers. Our reasoning is that some papers linked to the core via citation provide a unique insight into one section of our review, but may not fall within our inclusion criteria. Hence, we chose to mention these works in the individual sections - providing an encompassing overview - but not to include them in the overall discussion.

To further support reproducibility, we share both the initial list of included and excluded papers (with a reason why each paper was excluded) and the final list of included papers with detailed information, in the form of two spreadsheets provided as supplementary material.

3. Definition

Before looking into how different modalities are exploited, it is important to understand what different scholars mean when they talk about explainability. Considering the recency of the topic, a standard definition has yet to be agreed upon, and definitions are mainly driven by the motivation of each experiment. The main motivations we found in the core papers are:

  • Interactive Machine/Robot Learning investigates the need of explainability in the context of robot learning. The main idea is that revealing the robot’s internal states allows the human teacher to provide more informative examples (Lütkebohle et al., 2009; Chao et al., 2010; Thomaz and Breazeal, 2008; Tabrez and Hayes, 2019).

  • Human Trust states that adding explainability increases human trust and system reliability. It emphasizes the importance of communicating the agent's uncertainty, incapability, and the existence of internally conflicting goals (Wang et al., 2016b; Roncone et al., 2017; Kwon et al., 2018; Schaefer et al., 2016).

  • Teamwork underlines the value of explainability in human-robot collaboration scenarios to build shared mental models and predict the robot’s behaviour (Huang et al., 2019; Legg et al., 2019; Chakraborti et al., 2017; Hayes and Shah, 2017; Sciutti et al., 2014).

  • Ethical Decision-Making suggests that communicating the robot's decision-making processes and capabilities, paired with situational awareness, increases a user's ability to make good decisions (Poulsen et al., 2019; Kwon et al., 2018; Akash et al., 2018a).

Author(s) Definition Research Domain

Lütkebohle et al. (2009)
Structure verbal and non-verbal dialog to guide human actions Machine Teaching, Predictability, Human-Robot Collaboration

Chao et al. (2010)
Explain the robot’s internal states in order to improve the learning experience by revealing to the teacher what is known and what is unclear Machine Teaching, Predictability

Lee et al. (2013)
Develop expectancy-setting strategies and recovery strategies to forewarn people of a robot's limitations and reduce the negative consequences of breakdowns Robot acceptance

Ososky et al. (2014)
Degree to which a system’s action, or the intention of an action, is apparent to human operators and/or observers - “able to be seen through” or “easy to notice or understand” Trust, Reliance, Human-Robot Collaboration, Operator Workload

Sciutti et al. (2014)
Convey cues about object features (e.g., weight) to the human partner using implicit communication Human-Robot Collaboration

Boyce et al. (2015)
Display transparency information (SAT model (Chen et al., 2017)) in the interface of an autonomous robot Trust

Wang et al. (2016a)
Generate explanations of the robot’s reasoning, communicate uncertainty and conflicting goals Trust, Teamwork
Perlmutter et al. (2016) Communicate robot’s internal processes with human-like verbal and non-verbal behaviors Communication, Visualization, Control

Chen et al. (2017)
Visualize intent, reasoning and predicted future states (perception, comprehension and projection) Trust, Human’s Workload

Schaefer et al. (2016)
Visualize robot’s reasoning processes and intent Human-Robot Collaboration, Trust
Hayes and Shah (2017) Share expectations and convey intentions, plans, or justifications Human-Robot Collaboration, Control, Debug

Roncone et al. (2017)
Provide appropriate measures of uncertainty for having mental models about the tasks shared between peers Trust, Proficiency, Confidence, Uncertainty, Introspection

Chakraborti et al. (2017)
Identify and reconcile the relevant differences between the humans’ and robot’s model for building optimal explanations. Reveal information regarding the future intentions of the robot at the time of task execution using projection-aware planning Communication, Human-Robot Collaboration, Impedance Mismatch

Sreedharan et al. (2017)
Produce a plan that is closer to the human’s expected plan and a plan explanation that includes correction of the belief state (goals or state information) as well as information pertaining to the action model itself Human-Aware Planning, Human-Robot Collaboration

Zhou et al. (2017)
Communicate the robot's internal state through timing Perceived Naturalness, Human's Learning (Task Understanding)

Kwon et al. (2018)
Express robot’s incapability and communicate both what the robot is trying to accomplish and why the robot is unable to accomplish it Robot’s Acceptance, Human-Robot Collaboration

Baraka et al. (2018)
Externalize hidden information of an agent. Express robot behaviors that have a specific communicative purpose. Human-Robot Collaboration, Control, Communication

Gong and Zhang (2018)
Signal intentions using natural language Teamwork, Human-Robot Collaboration

Lamb et al. (2018)
Implement behavioral dynamics models based on human decision-making dynamics Human-Robot Collaboration

Akash et al. (2018b, a)
Provide visual, context-dependent recommendations based on the level of trust and workload of the human Trust, Human’s Workload, Real-Time Human-Robot Collaboration, Recommendation, Visualization, Informed Judgements

Tabrez and Hayes (2019)
Anticipate, communicate, and explain justifications and rationale for AI driven behaviors via contextually appropriate semantics Trust, Machine Teaching

Legg et al. (2019)
Establish a two-way collaborative dialogue on data attributions between human and machine and express personal confidence in data attributions Active Learning, Human-Robot Collaboration

Poulsen et al. (2019)
Real-time graphical and audible representation for communicating the robot’s decision-making processes and reflecting on the robot’s real capabilities, intentions, goals and limitations Ethical Decision-Making, Trust

Huang et al. (2019)
Communicate information to correctly anticipate a robot’s behavior in novel situations and building an accurate mental model of the robot’s objective function Prediction, Human-Robot Coordination

Khoramshahi and Billard (2019)
Align the robot's behavior with human intentions, adapting generated motions (i.e., the desired velocity) to those intended by the human user Human-Robot Collaboration
Table 2. Definitions ordered by time

We have aggregated the individual definitions used in each paper, and the motivations behind them, in table 2. Although these definitions differ from one another, there are commonalities based on their motivation. Guidance and dialog with a human tutor turn out to be important for interactive machine/robot learning ("dialog to guide human actions" (Lütkebohle et al., 2009), "revealing to the teacher what is known and what is unclear" (Chao et al., 2010)). Providing information about the level of uncertainty and expressing robot incapability are core concepts of explainability that enhance human trust ("communicate uncertainty" (Wang et al., 2016b), "provide appropriate measures of uncertainty" (Roncone et al., 2017), "express robot incapability" (Kwon et al., 2018)). The ability to anticipate a robot's behavior and to establish a two-way collaborative dialogue by identifying relevant differences between the human's and the robot's model are shared elements of the definitions around teamwork ("anticipate robot's behavior" (Huang et al., 2019), "establish two-way collaborative dialogue" (Legg et al., 2019), "reconcile the relevant differences between the humans' and robot's model" (Chakraborti et al., 2018a), "share expectations" (Hayes and Shah, 2017)). Authors who refer to ethical decision-making identify the communication of intentions and context-dependent recommendations as crucial ("robot's real capabilities, intentions, goals and limitations" (Poulsen et al., 2019), "context-dependent recommendations based on the level of trust and workload" (Akash et al., 2018b)).

4. Social Cues

After reviewing the definitions and main motivations of the selected papers, we now focus on the second aspect of our review: how different social cues are exploited to achieve explainability.

While there are many ways to organize robot actions, in this section we are interested in their communicative value and in what they reveal about the robot's intent, internal state, and justifications. In particular, we ask which modality is used by each author, and how explainability is achieved through it.

In this context, explainability can be seen as the expression of information about mental states and plans via social cues (Unger, 2012). We cluster these cues into three categories:

  • Speech is often used to translate the robot's internal state into natural language (Wang et al., 2016a; Hayes and Shah, 2017), aiming to facilitate rapid fault diagnosis and to anticipate future actions. The statement might be task-specific (context-dependent) (Gong and Zhang, 2018) and may be personalized to the observer (Tabrez and Hayes, 2019).

  • Gesture can be used by the robot to give feedback about its intent in a learning scenario (Lütkebohle et al., 2009; Chao et al., 2010) and might be mixed with other modalities, e.g., speech or visual animations, to communicate the target or features of a manipulation task, e.g., to express uncertainty about which type of grip is required (Lütkebohle et al., 2009; Sciutti et al., 2012). Example gestures include head-shaking to communicate (dis-)agreement, or shrugging to communicate prediction confidence (Chao et al., 2010). Repeating a gesture and modulating its timing can signal perceived confidence or incapability (Kwon et al., 2018; Zhou et al., 2017). In addition, autonomous cars can manipulate their physical behavior to give information about current and alternative trajectories (Huang et al., 2019).

  • Visual Feedback is any cue that is not a gesture and that is perceived through the user's eyes. Such cues are often motivated by a noisy environment unsuited for audio cues, or are used in combination with other social cues (Roncone et al., 2017; Perlmutter et al., 2016). Blinking lights, fading animations, or progress bars are examples of this category and can communicate the robot's internal state or intent (Baraka et al., 2018). Visual feedback finds application in teleoperation or in industrial contexts, in which the design of the task or of the robot itself may lack explicit expressive channels (Boyce et al., 2015; Schaefer et al., 2016; Akash et al., 2018b).

A detailed analysis of the social cues used in each paper is shown in table 3. Unfortunately, the cue used to express transparency is not stated explicitly in some of the papers, nor are the decisions that led to the choice of a certain cue always described. Additionally, the underlying modality is sometimes also used as an input device, or to facilitate general human-robot communication.

Nevertheless, we find an even distribution across the above categories for the papers that explicitly specify the social cue used. While, at first glance, this creates the impression that research is expanding uniformly, speech is a much narrower category than gestures or visual feedback. As such, we would have expected more papers investigating social cues in these latter categories. We attribute this underutilization of non-verbal cues to the efficiency of natural language. Non-verbal feedback, while much more diverse in its nature, is often binary in its application and communicates one specific thing; a robot is either nodding in agreement, or it is not.

Still, what non-verbal cues lack in diversity, they make up for in robustness, and their positive influence on the interaction is well known (Breazeal et al., 2005; Jung et al., 2013). Among the reviewed papers, we frequently find that non-verbal cues are used in combination with other cues, creating multi-modal expressions. This can help reinforce the provided information, or disambiguate other cues for even more robust explanations; for example, the use of pointing to give meaning to the word "this". Yet, how to engineer a non-verbal cue for a specific explanation still needs further investigation (Chao et al., 2010), because the quality of a non-verbal explanation changes depending on the precise display of the cue, e.g., the way a movement is executed or which blinking pattern is used, and because the available modalities differ largely by embodiment.

Speech, on the other hand, is a very diverse cue, and is a frequent choice among researchers investigating explainability; likely because it is so ubiquitous in everyday human-human interaction. The literature reviewed here largely leverages speech by using databases of pre-authored template sentences (see for example Hayes and Shah (2017)) and fills those templates with contextual information. Which template to use is either hard-coded or selected dynamically in a rule-based fashion, e.g., based on the robot's intent or last action (Lütkebohle et al., 2009), or may depend on the robot's model of the human (Tabrez and Hayes, 2019). Yet, speech is more than just the generation of sentences. Humans respond to sentences, which leads to dialog. This dialog is context-dependent, personalised, and situated, and offers many possibilities to improve the agent's expressiveness; yet, so far, little work has investigated the use of more complex dialogue models for explainability.
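As a minimal illustration of this template-filling approach, consider the following sketch. The template strings, state fields, and selection rules are invented for illustration and are not taken from any of the cited systems:

```python
# Minimal sketch of template-based explanation generation: a rule selects
# a pre-authored template, which is then filled with contextual information.
TEMPLATES = {
    "intent": "I am going to {action} so that I can {goal}.",
    "justification": "I chose to {action} because {reason}.",
    "uncertainty": "I am only {confidence:.0%} sure about how to {action}.",
}

def explain(state):
    """Pick a template in a rule-based fashion and fill it from the state."""
    if state.get("confidence", 1.0) < 0.5:
        return TEMPLATES["uncertainty"].format(**state)
    if "reason" in state:
        return TEMPLATES["justification"].format(**state)
    return TEMPLATES["intent"].format(**state)

print(explain({"action": "grasp the cup", "goal": "clear the table"}))
print(explain({"action": "grasp the cup", "confidence": 0.3}))
```

The rule-based selection here stands in for the hard-coded or model-of-the-human-driven selection mechanisms discussed above; a dialog system would additionally track the history of such utterances.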

Category Paper
Visual feedback (Boyce et al., 2015; Perlmutter et al., 2016; Chen et al., 2017; Schaefer et al., 2016; Roncone et al., 2017; Poulsen et al., 2019; Akash et al., 2018b; Legg et al., 2019); blinking patterns, faded animations, progress bar (Baraka et al., 2018)
Speech (Lütkebohle et al., 2009; Lee et al., 2013; Wang et al., 2016a; Perlmutter et al., 2016; Hayes and Shah, 2017; Roncone et al., 2017; Gong and Zhang, 2018; Tabrez and Hayes, 2019)
Gesture (Lütkebohle et al., 2009; Kwon et al., 2018; Khoramshahi and Billard, 2019); pointing, gaze, changing the ear color, nodding, head-shaking, shrugging and combination of head and body animations (Chao et al., 2010); lifting weights (Sciutti et al., 2014); pointing, gaze (Perlmutter et al., 2016); physical behavior (Huang et al., 2019); timing of the gesture (Zhou et al., 2017); pick and place task (Lamb et al., 2018);
Table 3. Papers on Explainability by Communication Modality

5. Methods to Achieve Explainability

Next, we report and discuss the methods employed to achieve explainable behaviors in social agents, with a specific focus on robots. From an HRI perspective, introducing explainability mechanisms is challenging, as uncertainty is inherent to the whole process of interaction, from perception to decision and action. In addition, the methods used to implement explainability require explicit consideration of the human's capability to correctly infer the agent's goals, intentions, or actions from the observable cues produced by the explainability mechanism.

Looking at one specific implementation, Thomaz and Breazeal (2006) introduced the Socially Guided Machine Learning (SG-ML) framework, which seeks to augment traditional machine learning models by enabling them to interact with humans. Two interrelated questions are investigated: (i) How do people want to teach robots? (ii) How can robots be designed to learn effectively from natural interaction and instruction? This framework considers a reciprocal and tightly coupled interaction; the machine learner and human instructor cooperate to simplify the task for each other. SG-ML considers explainability as a communicative act that helps the human understand the machine's internal state, intent, or objective during the learning process.

Table 4 summarizes the papers identified in this survey. They differ in terms of (i) the computational paradigm used, such as supervised or reinforcement learning, (ii) the nature of the explainability mechanism employed, such as generation of predictable/legible behavior or querying of human teachers, and (iii) the nature of the communicative actions, such as non-verbal or linguistic features (see also the section on social cues).


Interactive situations

Most of the situations targeted in the research papers consider interactive robot learning: a human shapes the behavior of a social agent by providing instructions and/or social cues. The situations are illustrated in figure 2. Behavior shaping (Knox and Stone, 2009; Najar et al., 2019) aims to exploit instructions and/or social cues to steer robot actions towards desired behaviors. Various interaction schemes have been proposed, including instructions (Grizou et al., 2013; Paléologue et al., 2017), advice (Griffith et al., 2013), demonstrations (Argall et al., 2009), guidance (Suay and Chernova, 2011; Najar et al., 2016), and evaluative feedback (Knox et al., 2013; Najar et al., 2016). Computational models, mostly based on machine learning, are then exploited to modify agent states and actions in order to achieve a certain goal.

As mentioned by Broekens and Chetouani (2019), most computational approaches for social agents consider a primary task, e.g., learning to pick up an object, and explainability arises as a secondary task by communicating the agent's internal states, intentions, or future goals. Given the literature, it is possible to distinguish the nature of the actions performed by the agent: task-oriented actions and communication-oriented actions. Task-oriented actions are used to achieve a goal, such as sorting objects. Communication-oriented actions are used by the agent to communicate with humans, for example through queries or by pointing to objects. This follows from speech act theory (Koller and Searle, 1970), which treats communication as actions that have an intent and an effect (a change of mind in the receiver of the communication).

In such a context, explainability mechanisms are employed to reduce uncertainty during the shaping process, using communicative actions before, during, or after performing a task action, which will change the future agent state. The challenge for explainability mechanisms is then to transform agent states and task-oriented actions into communicative actions, using either natural language or non-verbal cues (see table 4). To tackle this challenge, several explainability mechanisms have been proposed for human-robot interaction.

Computational paradigms

As summarized in table 4, various computational paradigms are employed, ranging from supervised learning to reinforcement learning. In supervised learning, a machine is trained using data, e.g., different kinds of objects, that is labeled by a human supervisor. In the case of interactive robot learning, the supervisor is a human teacher who provides labeled examples based on queries and explanations given by the robot. The interactive nature of the learning limits the amount of human supervision. In addition, the level of expertise of the human is rarely questioned and is considered ground truth. To tackle these challenges, Chao et al. (2010) proposed an active learning framework for teaching robots to classify pairs of objects. Active learning is a paradigm that allows the learner to interactively query the supervisor to obtain labels for new data. By doing so, the robot improves both the learning and the explainability by communicating about its uncertainty. Often, active learning is a form of semi-supervised learning, combining human supervision and the processing of unlabelled data.
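The core idea, uncertainty sampling, can be sketched minimally as follows. The scoring function and the object pool are toy stand-ins and do not reproduce Chao et al.'s implementation; the final print is the communicative act that doubles as an explanation of what the robot is unsure about:

```python
# Minimal sketch of uncertainty sampling: the learner asks the human
# teacher to label the instance it is least certain about.
def uncertainty(prob_positive):
    """Uncertainty peaks when the predicted probability is near 0.5."""
    return 1.0 - abs(prob_positive - 0.5) * 2.0

def select_query(unlabeled):
    """Pick the unlabeled example whose prediction is most uncertain."""
    return max(unlabeled, key=lambda ex: uncertainty(ex["p"]))

pool = [
    {"name": "object A", "p": 0.95},  # confidently positive
    {"name": "object B", "p": 0.52},  # near the decision boundary
    {"name": "object C", "p": 0.10},  # confidently negative
]
chosen = select_query(pool)
print(f"Please label {chosen['name']} for me.")  # query doubles as explanation
```

In a real system the probabilities would come from the robot's current classifier and would be updated after each answered query.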

Another paradigm is reinforcement learning (RL), one of the three basic machine learning paradigms. Here, an agent acts in an environment, observing its state and receiving a reward. Learning is performed by trial-and-error through interaction with the environment, and leverages the Markov Decision Process (MDP) framework. MDPs are used to model the agent's policy and help with decision making under stochasticity. This paradigm allows the agent to represent, plan, or learn an optimal policy - a mapping from the current state to an action. Analyzing this policy gives insights about future and current states and actions.

Hayes and Shah (2017) developed a policy explanation framework based on the analysis of execution traces of an RL agent. The method generates explanations about the learned policy in a way that is understandable to humans. In RL, theoretical links between (task) learning schemes and emotional theories can also be drawn. Broekens and Chetouani (2019) investigate how temporal difference learning could be employed to develop an emotionally expressive learning robot that is capable of generating explainable behaviors via emotions.
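The flavor of such policy explanations can be conveyed with a toy sketch: given a learned tabular policy, group the states in which an action is taken and verbalize the grouping. The state and action names below are invented, and Hayes and Shah's actual framework operates on execution traces and logical predicates rather than a raw state table:

```python
# Toy sketch of verbalizing a learned tabular policy: collect the states
# in which each action is taken and phrase them as a natural-language rule.
policy = {
    ("battery_low", "near_dock"): "dock",
    ("battery_low", "far_from_dock"): "navigate_to_dock",
    ("battery_ok", "near_dock"): "patrol",
    ("battery_ok", "far_from_dock"): "patrol",
}

def explain_action(policy, action):
    """Summarize the conditions under which the policy chooses `action`."""
    states = [s for s, a in policy.items() if a == action]
    conds = " or ".join("(" + " and ".join(s) + ")" for s in states)
    return f"I {action} when {conds}.".replace("_", " ")

print(explain_action(policy, "patrol"))
```

A real system would additionally compress the listed states, e.g., noticing that "patrol" depends only on the battery being ok, regardless of the robot's position.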

To increase the human's understanding of robot intentions, the notion of legibility is often introduced in robotics. Recently, Chakraborti et al. (2018a) discussed the overlap between explicability, legibility, predictability, and transparency. In their work, they show that these notions all study what intentions an observer will ascribe to an agent by observing its behavior. In this framework, legibility and transparency are considered similar notions that aim to reduce ambiguity over the possible goals that might be achieved. One key concept for achieving legibility/transparency is to explicitly consider a model of the human observer. The methods then aim at finding plans that disambiguate the possible goals. Dragan et al. (2013) proposed a mathematical model that distinguishes between legibility, defined as the ability to anticipate the goal, and predictability, defined as the ability to predict the trajectory. The model exploits observer expectations to generate legible/transparent plans. Huang et al. (2019) propose to model how people infer objectives from observed behavior, and then select those behaviors that are maximally informative. Inverse reinforcement learning is used to model the observer's capability of inferring intentions from the observation of agent behaviors. Explainability implementations based on these methods consider that task-oriented actions and communicative actions are performed through the same channel; e.g., the movement of the robot's arm both achieves a task and communicates the goal (Sheikholeslami et al., 2018; Sciutti et al., 2014).
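A simplified sketch of such observer-side goal inference is given below; it scores candidate goals by the extra path cost implied by the observed partial trajectory and normalizes with a softmax. The geometry, the goal positions, and the beta parameter are illustrative, and this is not Dragan et al.'s exact formulation:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def goal_probabilities(start, current, goals, beta=2.0):
    """P(goal | partial trajectory): goals whose direct path passes close
    to the current position incur little extra cost and score highly."""
    scores = []
    for g in goals:
        extra = dist(start, current) + dist(current, g) - dist(start, g)
        scores.append(math.exp(-beta * extra))
    total = sum(scores)
    return [s / total for s in scores]

start = (0.0, 0.0)
goals = [(10.0, 0.0), (10.0, 5.0)]
# A detour upward is costly for the first goal but natural for the second,
# so an observer should confidently infer the second goal.
print(goal_probabilities(start, (5.0, 4.0), goals))
```

A legible planner would run such an observer model in the loop, preferring trajectories that concentrate the probability mass on the robot's true goal as early as possible.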

Explainability mechanisms

In AI, explainability concerns understanding the mechanisms by which a model works, and is usually opposed to black-box-ness (Arrieta et al., 2019). Deep learning is a typical black-box machine learning method that achieves data representation learning using multiple non-linear transformations. In contrast, a linear model is considered transparent, since the model is fully understandable and explorable by means of mathematical analysis and methods. In Arrieta et al. (2019), the authors argue that a model is explainable if it is understandable by itself, and propose various levels of model transparency: (i) simulatability: the ability of the model to be simulated or thought about strictly by a human, (ii) decomposability: the ability to explain each part of the model, and (iii) algorithmic transparency: the ability of the user to understand the process followed by the model to produce any given output from its input data.

Intrinsic transparency refers to models that are transparent by design (Figure 2). Post-hoc (external) transparency refers to transparency methods that are applied after the decision or execution of actions. Post-hoc methods are decoupled from the model and aim to enhance transparency of models that are not transparent by design (intrinsic) (Lipton, 2018). Post-hoc transparency methods such as visualization, mapping the policy to natural language, explanation are used to convert a non-transparent model into a more transparent one.
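As a toy illustration of post-hoc transparency, one can fit a simple surrogate rule to the input/output behavior of an opaque model and present the rule, not the model, to the user. The "black box" below and the single-feature threshold search are assumptions made purely for this sketch:

```python
# Sketch: post-hoc transparency via a global surrogate. An interpretable
# threshold rule is fitted to mimic an opaque scorer; the rule plus its
# fidelity (agreement with the black box) form the explanation.

def black_box(x):
    """Stand-in for an opaque scorer (e.g., a neural network)."""
    return 1 if (0.7 * x[0] + 0.3 * x[1]) > 0.5 else 0

def fit_threshold_rule(samples, feature):
    """Find the single-feature threshold that best mimics the black box."""
    labels = [black_box(x) for x in samples]
    best = None
    for t in sorted({x[feature] for x in samples}):
        preds = [1 if x[feature] > t else 0 for x in samples]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if best is None or acc > best[1]:
            best = (t, acc)
    return best  # (threshold, fidelity)

samples = [(i / 10, j / 10) for i in range(11) for j in range(11)]
threshold, agreement = fit_threshold_rule(samples, feature=0)
explanation = f"approve when feature 0 exceeds {threshold:.1f} (fidelity {agreement:.0%})"
```

Reporting fidelity alongside the rule matters: a post-hoc surrogate only approximates the underlying model, and hiding that gap would itself be a transparency failure.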

A large body of work aiming to achieve explainability in human-agent interaction does not explicitly refer to such definitions originating from machine learning; consequently, a strict categorization of this work into these categories is difficult. Explainability can either be performed by external mechanisms that are separable from the task execution (e.g., visualization) (Sciutti et al., 2014; Perlmutter et al., 2016; Zhou et al., 2017) or be intrinsically computed by the agent policy (e.g., query learning, communicative gestures) (Chao et al., 2010; Sheikholeslami et al., 2018).

Implementation of explainability can also be done at several levels. For example, the Situation-awareness-based Agent Transparency (SAT) model, which builds on a Belief, Desire, Intention (BDI) architecture, considers three levels of transparency: Level 1–Basic Information (current status/plan); Level 2–Reasoning Information; Level 3–Outcome Projections (Boyce et al., 2015; Ososky et al., 2014).
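As a minimal sketch, the three SAT levels can be thought of as incrementally richer report generators. The state fields and the wording of the messages below are our own illustrative assumptions; the SAT model prescribes only the three levels, not this encoding:

```python
# Sketch: SAT-style transparency as incremental reports. Higher levels
# add reasoning and outcome projections on top of the basic status/plan.

def sat_report(state, level):
    """Build a transparency message up to the requested SAT level."""
    lines = [f"Level 1 (status/plan): executing '{state['plan']}'"]
    if level >= 2:
        lines.append(f"Level 2 (reasoning): chosen because {state['reason']}")
    if level >= 3:
        lines.append(
            f"Level 3 (projection): expected outcome '{state['outcome']}' "
            f"with confidence {state['confidence']:.0%}"
        )
    return "\n".join(lines)

state = {
    "plan": "search east corridor",
    "reason": "it has the highest threat probability",
    "outcome": "area cleared",
    "confidence": 0.8,
}
report = sat_report(state, level=3)
```

Studies such as Boyce et al. (2015) then compare user trust across conditions that expose different levels of this hierarchy.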

Mapping the agent policy to natural language is a methodology that is increasingly employed in AI to design explainable systems (Arrieta et al., 2019). In HRI, a similar trend is observed (Akash et al., 2018b; Hayes and Shah, 2017; Tabrez and Hayes, 2019; Wang et al., 2016b). The challenge will be to map the agent policy to both verbal and non-verbal cues (see also section 4).
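A minimal sketch of such a policy-to-language mapping, loosely in the spirit of Hayes and Shah (2017): states in which the policy picks a given action are abstracted through Boolean predicates, and the predicates that hold in all such states fill a natural-language template. The toy policy, grid states, and predicate names are assumptions, not taken from the original system:

```python
# Sketch: answer "when do you do <action>?" by summarizing the states in
# which the policy selects that action via shared Boolean predicates.

policy = {  # state -> action for a toy delivery task
    (0, 0): "move_east",
    (1, 0): "move_east",
    (2, 0): "pick_up",
}

predicates = {
    "at_depot": lambda s: s == (2, 0),
    "on_ground_floor": lambda s: s[1] == 0,
}

def explain_action(action):
    """Fill a natural-language template with predicates true in all states
    where `action` is chosen."""
    states = [s for s, a in policy.items() if a == action]
    shared = [name for name, p in predicates.items() if all(p(s) for s in states)]
    return f"I {action.replace('_', ' ')} when " + " and ".join(shared)

answer = explain_action("move_east")
```

The quality of such explanations hinges on choosing predicates that are meaningful to the human; the abstraction, not the template, carries the explanatory work.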

Author(s) Task Goal Agent Algorithm Explainability Algorithm

Lütkebohle et al. (2009)
Explain to a robot affordances of objects (name and graspability) Facilitate efficient learning through mixed-initiative dialog custom system based on the XCF toolkit focusing on perceptual analysis, task generation, and dialog-oriented generation Not specified

Chao et al. (2010)
Teach Simon robot the meaning of 4 words Recognition of paired configurations of objects Supervised Learning Active Learning to query a human teacher about a demonstration within the context of a social dialogue

Lee et al. (2010)
Evaluate robot's response (forewarning + style of recovery) in a hypothetical scenario Create a better service experience for users of a snack-selling robot None Static explanation based on assigned condition

Lee et al. (2013)
Evaluate robot's response (forewarning + style of recovery) in a hypothetical scenario Discover changes in trust based on participant's understanding of a robot's decision making process None None

Sciutti et al. (2014)
Observe motion of iCub robot Investigate if robot motion can be understood in the same way as human movement Static display of robot behavior Static display of transparency cues

Boyce et al. (2015)
Observe a virtual agent moving around in a simulated environment Assess the amount of trust elicited by the system based on the degree of transparency shown Not specified Different levels of static transparency mechanisms based on experimental condition

Perlmutter et al. (2016)
Instruct a PR-2 robot to pick and place objects Assess if screen or VR headset is useful to provide users with information on how the robot works internally Simulated language understanding Static display of transparency cues

Floyd and Aha (2016, 2017)
Issue commands and monitor robot behavior in a virtual patrol scenario Use explainable behavior for adaptation Behavior adaptation using probabilistic methods Select feedback based on how closely current behavioral change matches past behavioral change where this explanation was used
Chen et al. (2017)
Study 1: simulation of an autonomous squad member providing information support; Study 2: participants rated abstract plans to complete different missions Enable users to accurately assess a situation and calibrate their trust in the system while balancing workload Not specified Static explanation based on assigned condition

Schaefer et al. (2016)
Sit in a drive simulator for an autonomous car and perform a taxi/chauffeur task Increase user trust in driverless cars Not specified Not specified

Wang et al. (2016b)
Simulated recon mission where human and robot search buildings for potential threats Dynamically generate explanations to maintain user’s trust in an unreliable system Use PsychSim to generate a belief of the world (using a POMDP) that is updated based on the user’s actions Use natural-language templates to convert POMDP state into explanation

Wang et al. (2016a)
Simulated recon mission where human and robot search buildings for potential threats Participants collaborate with autonomous robot in recon mission Reinforcement learning on POMDP Static addition of robot’s confidence in prediction

Hayes and Shah (2017)
Simulation of a grid-world delivery task, inverse pendulum, and part inspection task Delivery Task, Stabilization Task, Inspection task Mapping an action query to a policy explanation, MDPs representation, Collection of Boolean classifiers (communicable predicates) to provide meaningful abstraction over low-level information Autonomously synthesize robot policy descriptions and respond to both general and targeted queries by human collaborators

Zhou et al. (2017)
Observe motion of 6DoF actuator Create transparent robot motion Static display of robot behavior Static display of transparency cues

Roncone et al. (2017)
Joint construction of flat-pack furniture Reduce workload for human and make overall task more efficient POMDP-planner Not specified explicitly

Chakraborti et al. (2017)
Review proposed plan to solve a maze and assess the optimality of it Evaluate explanation generation algorithms None Selection of human generated explanation database by category

Sreedharan et al. (2017)
Humans observe the agent's plan presented on a map and judge its optimality Provide explanations that help humans understand why the robot's plan is optimal Tree search exploring the domain's graph MEGA algorithm

Gong and Zhang (2018)
Observe robot performing various household maintenance tasks in a simulated apartment Generate natural language explanations of robot intent Not specified Probabilistic algorithm to select utterance and timing of utterance

Kwon et al. (2018)
Observe virtual robot fail to perform a series of tasks and assess the reason for failure; tasks: lift, push, pull, pull down, push sideways Make the reason for incapability apparent to observer Standard motion planning Custom cost function and distance metrics in planner

Poulsen et al. (2019)
Evaluate robot behavior in a hypothetical interaction scenario between a robot and caregiver Discover changes in trust based on participant’s understanding of a robot’s decision making process None None

Baraka et al. (2018)
Observe LED lights on robot to infer robot's internal state Allow robot to communicate its state efficiently CoBot stack Algorithm to select expressive animation based on current robot state

Lamb et al. (2018)
Collaborative pick and place task, where human and robot perform a handover to move an object Establish smooth handover behaviors between robots and humans Model based closed loop controller Use the proposed control algorithm with specific parameters to achieve human like behavior

Sheikholeslami et al. (2018)
Study 1: humans were asked to pack items into a box; Study 2: humans observe a robot packing items into a box based on the model learned from study 1 Generate human-interpretable reaching behaviors Motion planner Setting the parameters using parameter estimation from a mocap experiment

Akash et al. (2018a)
Simulated recon mission where human and robot search buildings for potential threats Investigate influence of transparency on trust in the system None Static explanation based on assigned condition

Akash et al. (2018b)
Simulated recon mission where human and robot search buildings for potential threats Validate the proposed algorithm for choosing explanations Not specified Leverage POMDP-based trust and workload model to choose type of explanation to be provided in each consecutive scenario

Huang et al. (2019)
Observe a simulated autonomous car driving on a road Allow human observers to deduce robot’s objective function more quickly Static example behaviors Model human understanding as inverse reinforcement learning problem

Tabrez and Hayes (2019)
Collaborative puzzle game inspired by Sudoku, where both players arrange colors in a grid while adhering to constraints Assess utility of explanation when detecting errors in human behavior RARE: Reward Augmentation and Repair through Explanation, based on Partially Observable Markov Decision Processes (POMDPs) and Hidden Markov Models (HMMs) Estimate collaborator's reward function during joint task execution and use communicative action if the model differs from the robot's model

Legg et al. (2019)
Humans and machines jointly label a dataset to improve prediction accuracy of ML model Improve sample efficiency of ML algorithm Different machine learning algorithms combined with various metrics to select samples for labeling Static display of transparency cues

Khoramshahi and Billard (2019)
Scenario 1: manipulation task where the robot assists the user in processing a piece of wood; Scenario 2: robot and user are jointly carrying a heavy object Intelligently and compliantly adapt motion to the intention of the human Behavior adaptation algorithm that smoothly blends between behaviors based on human input None

Table 4. Explainability Algorithms and Tasks
Figure 2. Agent and Transparency Mechanisms. Intrinsic transparency refers to models that are transparent by design. Post-hoc (external) transparency refers to transparency methods that are applied after the decision or execution of actions. Transparency can either be achieved by external mechanisms that are separable from the task execution (e.g., visualization) or be intrinsically computed by the agent policy (e.g., query learning, communicative gestures)


6. Evaluation Methods

Existing work assesses the effects of transparency on a variety of scales including, but not limited to, self-reported understanding of the agent (Gong and Zhang, 2018), number of successful task completions (Wang et al., 2016b), number of false decisions (Wang et al., 2016b), task completion time (Chao et al., 2010), number of irredeemable mistakes (Tabrez and Hayes, 2019), or trust in automation (Boyce et al., 2015). During our review, three major categories of measurements emerged:

  • Trust measures how willing a user is to agree with a robot's decision (based on the provided justification), how confident a user is about the robot's internal workings (internal state), or whether the user agrees with the plan provided by the robot (intent). It is typically measured using a self-report scale.

  • Robustness measures the avoidance of failure during the interaction. Typically, researchers want to see if the robot's intent has been communicated correctly. It is often measured observationally, e.g., by counting how frequently a goal is successfully achieved.

  • Efficiency measures how quickly the task is completed. The common hypothesis behind this measure is that a user can adapt better to a more transparent robot and thus form a more efficient team. It is commonly measured by wall-clock time or the number of steps until the goal.
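As an illustration of how these three measure types could be aggregated from interaction logs, consider the following sketch; the log format, condition names, and numbers are invented for illustration and do not come from any of the reviewed studies:

```python
from statistics import mean

# Sketch: aggregate trust (self-report Likert items), robustness
# (success rate), and efficiency (mean steps to goal) per condition.

trials = [
    {"condition": "explainable", "trust": 6, "success": True,  "steps": 14},
    {"condition": "explainable", "trust": 5, "success": True,  "steps": 12},
    {"condition": "baseline",    "trust": 3, "success": False, "steps": 20},
    {"condition": "baseline",    "trust": 4, "success": True,  "steps": 18},
]

def summarize(trials, condition):
    rows = [t for t in trials if t["condition"] == condition]
    return {
        "trust": mean(t["trust"] for t in rows),        # self-report scale
        "robustness": mean(t["success"] for t in rows), # success rate
        "efficiency": mean(t["steps"] for t in rows),   # steps to goal
    }

explainable = summarize(trials, "explainable")
baseline = summarize(trials, "baseline")
```

In a real study these descriptive summaries would of course be followed by the appropriate statistical tests before drawing conclusions across conditions.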

Among these measures, trust has received the most attention, though mainly via online studies. While there is large variance in which scale is used (scales are often self-made), a common element across all studies is the use of self-reports.

Although the consensus is that the presence of expressivity generally increases trust (see table 5), how effective a particular social cue is in doing so has received much less attention. Comparisons that do exist often fail to find a significant difference between cues (Wang et al., 2016b; Boyce et al., 2015). Similarly, due to the large range of mechanisms tested - and the even larger array of scenarios - there is little work on how robustly a specific mechanism performs across multiple scenarios. Hence, while some form of explainability seems clearly better than none, which specific mechanism to choose for a specific situation remains an open question.

Less studied, but no less important, is the effect of explainability on the robustness of the interaction. Research on the interplay between expressivity and robustness uses tasks where mistakes are possible, and measures how often these mistakes occur (Boyce et al., 2015; Perlmutter et al., 2016). The core idea is that participants create better mental models of the robot when it uses explainability mechanisms. Better models lead to better predictions of the robot's future behavior, allowing participants to anticipate low performance of the robot and to avoid mistakes in task execution. However, experimental evidence on this hypothesis is conflicting, with the majority of studies showing support for the idea (Perlmutter et al., 2016), and other studies finding no significant difference (Boyce et al., 2015). As the majority does find a positive effect, we conclude that transparency does help improve reliability, although not in all circumstances. A more detailed account of when it does not remains a topic for future experimental work.

Finally, efficiency is a metric that some researchers have considered while manipulating expressivity. It has been operationalized by comparing wall-clock time until task completion across conditions (Chao et al., 2010), or time until human response (Chen et al., 2017). Of the three types of measures, this type has received the least attention, and the findings are quite mixed. About half of the analyzed papers find that making robots explainable makes the team more efficient, while the other half finds no difference. A clear explanation for these conflicting findings remains a topic of future work.

Type Outcome Papers
Robustness positive (Chen et al., 2017; Wang et al., 2018, 2016b; Kwon et al., 2018; Sciutti et al., 2014; Lamb et al., 2018; Huang et al., 2019; Perlmutter et al., 2016; Baraka et al., 2018)
Robustness negative
Robustness non-significant (Chao et al., 2010)
Robustness no statistical test (Legg et al., 2019; Sreedharan et al., 2017; Ososky et al., 2014; Sheikholeslami et al., 2018)
Trust positive (Chao et al., 2010; Chen et al., 2017; Lee et al., 2010; Boyce et al., 2015; Baraka et al., 2018; Zhou et al., 2017; Wang et al., 2016a, 2018; Schaefer et al., 2016)
Trust negative
Trust non-significant (Akash et al., 2018b)
Trust no statistical test (Akash et al., 2018a; Chakraborti et al., 2017; Kwon et al., 2018; Tabrez and Hayes, 2019; Poulsen et al., 2019; Floyd and Aha, 2016, 2017; Ososky et al., 2014; Gong and Zhang, 2018)
Efficiency positive (Lee et al., 2010; Wang et al., 2016b; Akash et al., 2018b)
Efficiency negative
Efficiency non-significant (Perlmutter et al., 2016; Wang et al., 2016a; Chen et al., 2017; Chao et al., 2010)
Efficiency no statistical test (Roncone et al., 2017; Chakraborti et al., 2017)
other any (Lütkebohle et al., 2009; Evans et al., 2017; Chakraborti et al., 2018b; Khoramshahi and Billard, 2019; Hayes and Shah, 2017)
Table 5. Papers on Transparency by Measure

Table 5 shows the core papers grouped by the evaluation methods discussed above, and indicates whether the effect of expressivity was positive, negative, or non-significant. One important note is that many papers introduce a measurement called accuracy; however, usage of this term differs widely between authors. For example, Chao et al. (2010) used accuracy to refer to the robot's performance after a teaching interaction, making it a measure of robustness, whereas Baraka and Veloso (2018)'s accuracy referred to people's self-rated ability to predict the robot's move correctly, a measure of trust.

In sum, there is overwhelming evidence that explainability offers a clear benefit to virtual robots in building trust, with some support for physical robots, too. Additionally, there is evidence that explainability can decrease the chance of an unsuccessful interaction (improve robustness). However, papers looking to improve efficiency of the interaction find mixed results. A possible explanation for this could be that, whilst explainability makes the interaction more robust, the time added for the robot to display, and for the human to digest the additional information nullifies the gain in efficiency. On top of this analysis, this section identified the following open questions: (1) Is a particular explainability mechanism best suited for a specific type of robot, a specific type of scenario, or both? (2) What are good objective measures with which we can measure trust in the context of explainability? (3) Why does explainability have a mixed impact on the efficiency of the interaction?

7. Discussion

In the sections above we provided a focused view on four key aspects of the field: (1) the definitions used, and the large diversity thereof, (2) how social cues are used to implement explainable agents, (3) the algorithms used to link explainability mechanisms to the robot's state or intent, and (4) the measurements used to assess the effectiveness of explainability mechanisms. What is missing is a discussion of how these aspects relate to each other when viewed from a bird's-eye perspective, and a discussion of the limitations of our work.

It is almost self-explanatory that the scenario chosen to study a certain explainability mechanism depends on the author’s research goal. As such, it is unsurprising that we can find a large diversity of tasks, starting from evaluation in pure simulation (Hayes and Shah, 2017), or discussions of hypothetical scenarios (Lee et al., 2010; Poulsen et al., 2019) all the way to joint furniture assembly (Roncone et al., 2017).

The most dominant strand of research has its origin in decision making, and mainly views the robot as a decision support system (Akash et al., 2018a, b; Chen et al., 2017; Floyd and Aha, 2016, 2017; Boyce et al., 2015; Wang et al., 2018, 2016b, 2016a). In this line of research, transparency is most commonly defined via the SAT model (Chen et al., 2018). One of the key questions is how much a person will trust the robot's suggestions, depending on how detailed the given justification for the robot's decision is. While these studies generally test a virtual agent shaped like a robot, the findings can, due to their design, easily be generalized to the field of human-computer interaction (HCI). Hence, SAT-model-based explanations can help foster trust not only in HRI, but also in the domain of expert systems and AI, and this work partially overlaps with the domain of explainable AI (XAI).

The second strand of research sets itself apart by using humans as pure observers (Chakraborti et al., 2017; Gong and Zhang, 2018; Huang et al., 2019; Sreedharan et al., 2017; Sciutti et al., 2014; Baraka et al., 2018; Zhou et al., 2017; Sheikholeslami et al., 2018; Kwon et al., 2018; Chakraborti et al., 2018b). Common scenarios focus around communicating the robot’s internal state or intent by having humans observe a physical robot (Sheikholeslami et al., 2018; Sciutti et al., 2014; Baraka et al., 2018) or video recordings/simulations of them (Zhou et al., 2017; Baraka et al., 2018; Kwon et al., 2018). Other researchers chose to show maps of plans generated by the robot and explanations thereof (Chakraborti et al., 2017; Gong and Zhang, 2018; Huang et al., 2019; Sreedharan et al., 2017); the researchers’ aim here was to communicate the robot’s intent. In all scenarios the goal is typically to improve robustness, although other measures have been tested.

Particularly well done here is the work of Baraka et al. (2018), who first describe how to equip a robot with LED lights to display its internal state, use crowdsourcing to generate expressive patterns for the LEDs, and then validate the patterns' utility in both a virtual and a physical user study. This pattern of having participants - typically from Amazon Mechanical Turk (AMT) - generate expressive patterns in a first survey, and then validating them in a follow-up study, has also been employed by Sheikholeslami et al. (2018) in a pick-and-place scenario. We think this crowdsourcing approach deserves special attention, as it will likely lead to a larger diversity of candidate patterns than an individual researcher generating them. Considering the wide availability of AMT, this is a tool future researchers should leverage.

A third strand of research investigates explainability in interaction between a human and a robot (Schaefer et al., 2016; Lamb et al., 2018; Roncone et al., 2017; Perlmutter et al., 2016; Tabrez and Hayes, 2019; Lütkebohle et al., 2009; Chao et al., 2010) or a human and an AI system (Legg et al., 2019). Studies in this strand investigate the impact of different explainability mechanisms in various interaction scenarios, and whether they remain useful when the human-robot dyad is given a concrete task. This is important because, while users can focus their full attention on the explainable behavior in observer scenarios, in interaction scenarios they have to divide their attention. Research in this strand is more heterogeneous, likely due to the increased design complexity of an interaction scenario. At the same time, the amount of research done, i.e., the number of papers identified, is smaller than for the observational designs above, likely because of this added complexity. Nevertheless, we argue that more work in this strand is needed, as we consider testing explainability mechanisms meant for HRI in an actual interaction the gold standard for determining their utility and effectiveness.

Finally, some researchers have looked at participant’s responses to hypothetical scenarios (Lee et al., 2010; Poulsen et al., 2019). The procedure in these studies is to first describe a scenario to participants in which a robot uses an explainability mechanism during an interaction with a human. Then, participants are asked to give their opinion about this interaction, which is used to determine the utility of the mechanism. This method can be very useful during the early design stages of an interaction, and can help find potential flaws in the design before spending a lot of time implementing them on a robot. At the same time, it may be a less optimal choice for the final evaluation, especially when compared to the other methods presented above.

Shifting the focus to how results are reported in research papers on explainability, we would like to address two challenges we faced while aggregating the data for this review.

The first challenge is the large diversity and inconsistency of language used in the field. Explainability, transparency, interpretability, and legibility are just a few examples of words used to describe explainability mechanisms. Authors frequently introduce their own terminology when addressing the problem of explainability. While this might allow for a very nuanced differentiation between works, it becomes challenging to properly index all the work done, not only because different authors addressing the same idea may use different terminology, but especially so because different authors addressing different ideas end up using the same terminology.

Other reviews on the topic have pointed this out as well (Chakraborti et al., 2018a; Rosenfeld and Richardson, 2019), and it became a challenge in our review, as we cannot ensure the completeness of our keyword-search-based approach. The most likely cause is that the field is seeing rapid growth and precise terminology is still developing. Instead of proposing our own definitions for these terms, and adding to the growing list of competing definitions, we refer the interested reader to the review by Chakraborti et al. (2018a), which provides a compelling set of definitions. While one core requirement for papers in their review is that the robot uses an observer model - a requirement we did not introduce - their definitions are independent of it and are useful in a more general context.

The second challenge was that many authors only define the explainability mechanism they investigate implicitly. We often had to refer to the concrete experimental design to infer which mechanism was studied. While all the important information is still present in each paper, we think that explicitly stating the used explainability mechanism under study can help discourse around transparency become much more concrete.

In addition, some authors have implemented explainability mechanisms on robotic systems that are capable of adapting their behavior or performing some kind of learning. In many cases, these learning algorithms were unique implementations, or variations of standard algorithms, e.g., reinforcement learning, which makes them very interesting. How to best incorporate an explainability mechanism into such a framework is still an open question. Unfortunately, we found that the details of the method are often underreported, and that we could not provide as detailed an account of what has been done so far as we would have liked (Track et al., 2019). We understand that this aspect is often not the core contribution of a paper, and that space is a constraint. Nevertheless, we would like to encourage future contributions to put more emphasis on how transparency mechanisms are integrated into existing learning frameworks. Technical contributions like this could prove very valuable for defining a standardized approach to achieving explainability in HRI.

8. Open Questions and Challenges

On top of providing a detailed account of the identified literature, we identified a set of open questions in the review above. For convenience we enumerate these questions, as well as current challenges, here:

  1. What could a standardized definition of explainability that unifies existing definitions look like? (Section 3)

  2. Is the choice of explainability mechanism dependent on the combination of robot and environment? Which one is best? (Section 6)

  3. What are good objective measures by which we can measure trust in the context of explainability? (Section 6)

  4. Why does explainability have a mixed impact on the efficiency of the interaction? (Section 6)

  5. Under which circumstances does explainability improve the robustness of the interaction? When does it not? (Section 6)

  6. How do you model the human expectations regarding the agent’s goals and actions? (Section 5)

  7. How do you include the human in the loop? (Section 5)

9. Conclusion

Above, we provided a systematic review of the literature on explainable agents. We used a keyword-based search to identify relevant contributions and provided a detailed analysis of them.

First, we analyzed the definitions of explainability used in each piece, highlighting their heterogeneity. In the process, we identified four main motivations that lead researchers to study explainability: (1) interactive robot/machine teaching, (2) human trust, (3) teamwork, and (4) ethical decision making. We then detailed why explainability is important for each, identified the motivations behind each of the identified papers, and provided the specific definition used. Second, we looked at the communication modalities used as vehicles to deliver the explainability mechanism. We identified the categories of (1) gestures, (2) speech, and (3) visual feedback, and described how each provides explainable behaviors. Third, we took stock of the algorithms used to select which part of the interaction should be made explainable. We found that only a small fraction of the work addresses this algorithmic part, and most often not in sufficient detail for an in-depth analysis. We hence extended the literature in this section, drawing from other related work to provide a better overview. Fourth, we asked how the impact of explainability is measured in the identified literature. We found that most literature looks at three aspects: (1) trust, (2) robustness, and (3) efficiency, of which trust and robustness receive the most attention. We looked at how these aspects are measured and formulated open questions for future work.

In the discussion three main strands of research emerged: The first focuses on using explainability as a method of justifying the robot’s decision making and to calibrate user trust. The second investigates the effect of explainability mechanisms on humans by making them observe a robot aiming to improve the robustness of the system. The third looks at actual interaction between a human and a robot and asks how explainability mechanisms can be of benefit here.

Finally, we provided a list of open questions and gaps in the literature that we identified during our analysis, in the hope that further investigation will address this fascinating new domain of research.


This work received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 765955 (ANIMATAS Innovative Training Network). This work was also supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) with reference UID/CEC/50021/2019.


  • Akash et al. (2018a) Kumar Akash, Katelyn Polson, Tahira Reid, and Neera Jain. 2018a. Improving Human-Machine Collaboration Through Transparency-based Feedback – Part I : Human Trust and Workload Model. Elsevier 51, 34 (2018), 315–321. https://doi.org/10.1016/j.ifacol.2019.01.028
  • Akash et al. (2018b) Kumar Akash, Tahira Reid, and Neera Jain. 2018b. Improving Human-Machine Collaboration Through Transparency-based Feedback – Part II: Control Design and Synthesis. Elsevier 51, 34 (2018), 322–328. https://doi.org/10.1016/j.ifacol.2019.01.026
  • Alonso and De La Puente (2018) Victoria Alonso and Paloma De La Puente. 2018. System transparency in shared autonomy: A mini review. https://doi.org/10.3389/fnbot.2018.00083
  • Argall et al. (2009) Brenna D. Argall, Sonia Chernova, Manuela M. Veloso, and Brett Browning. 2009. A Survey of Robot Learning from Demonstration. Robotics and Autonomous Systems 57, 5 (2009), 469–483.
  • Arrieta et al. (2019) Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. 2019. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. (2019). arXiv:1910.10045 http://arxiv.org/abs/1910.10045
  • Baraka et al. (2018) Kim Baraka, Stephanie Rosenthal, and Manuela M. Veloso. 2018. Enhancing human understanding of a mobile robot's state and actions using expressive lights. IEEE. https://doi.org/10.1109/ROMAN.2016.7745187
  • Baraka and Veloso (2018) Kim Baraka and Manuela M. Veloso. 2018. Mobile Service Robot State Revealing Through Expressive Lights: Formalism, Design, and Evaluation. International Journal of Social Robotics 10, 1 (jan 2018), 65–92. https://doi.org/10.1007/s12369-017-0431-x
  • Boyce et al. (2015) Michael W. Boyce, Jessie Y. C. Chen, Anthony R. Selkowitz, and Shan G. Lakhmani. 2015. Effects of Agent Transparency on Operator Trust. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts (HRI’15 Extended Abstracts), Vol. 02-05-Marc. ACM, New York, NY, USA, 179–180. https://doi.org/10.1145/2701973.2702059
  • Breazeal et al. (2005) Cynthia Breazeal, Cory D. Kidd, Andrea L. Thomaz, Guy Hoffman, and Matt Berlin. 2005. Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. 708–713. https://doi.org/10.1109/IROS.2005.1545011
  • Broekens and Chetouani (2019) Joost Broekens and Mohamed Chetouani. 2019. Towards Transparent Robot Learning through TDRL-based Emotional Expressions. IEEE Transactions on Affective Computing (jan 2019), 1–1. https://doi.org/10.1109/taffc.2019.2893348
  • Chakraborti et al. (2018a) Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, and Subbarao Kambhampati. 2018a. Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior. CoRR abs/1811.0 (2018). arXiv:1811.09722 http://arxiv.org/abs/1811.09722
  • Chakraborti et al. (2018b) Tathagata Chakraborti, Sarath Sreedharan, Anagha Kulkarni, and Subbarao Kambhampati. 2018b. Projection-Aware Task Planning and Execution for Human-in-the-Loop Operation of Robots in a Mixed-Reality Workspace. In IEEE International Conference on Intelligent Robots and Systems. 4476–4482. https://doi.org/10.1109/IROS.2018.8593830
  • Chakraborti et al. (2017) Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, and Subbarao Kambhampati. 2017. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. arXiv:cs.AI/1701.08317
  • Chao et al. (2010) Crystal Chao, Maya Cakmak, and Andrea L. Thomaz. 2010. Transparent active learning for robots. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 317–324. https://doi.org/10.1109/HRI.2010.5453178
  • Chen et al. (2017) Jessie Y. C. Chen, Michael J. Barnes, Anthony R. Selkowitz, and Kimberly Stowers. 2017. Effects of Agent Transparency on human-autonomy teaming effectiveness. In 2016 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016 - Conference Proceedings. IEEE, 1838–1843. https://doi.org/10.1109/SMC.2016.7844505
  • Chen et al. (2018) Jessie Y. C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael J. Barnes. 2018. Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science 19, 3 (2018), 259–282. https://doi.org/10.1080/1463922X.2017.1315750
  • Doshi-Velez and Kim (2017) Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. arXiv:1702.08608 http://arxiv.org/abs/1702.08608
  • Dragan et al. (2013) Anca D Dragan, Kenton C.T. Lee, and Siddhartha S Srinivasa. 2013. Legibility and Predictability of Robot Motion. In ACM/IEEE International Conference on Human-Robot Interaction. 301–308. https://doi.org/10.1109/HRI.2013.6483603
  • Evans et al. (2017) A.William Evans, Matthew Marge, Ethan Stump, Garrett Warnell, Joseph Conroy, Douglas Summers-Stay, and David Baran. 2017. The future of human robot teams in the army: Factors affecting a model of human-system dialogue towards greater team collaboration. In Advances in Intelligent Systems and Computing, Vol. 499. Springer Verlag, 197–210. https://doi.org/10.1007/978-3-319-41959-6_17
  • Felzmann et al. (2019) Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz, and Aurelia Tamo-Larrieux. 2019. Robots and Transparency: The Multiple Dimensions of Transparency in the Context of Robot Technologies. IEEE Robotics and Automation Magazine (2019), 71–78. https://doi.org/10.1109/MRA.2019.2904644
  • Fischer et al. (2018) Kerstin Fischer, Hanna Mareike Weigelin, and Leon Bodenhagen. 2018. Increasing trust in human-robot medical interactions: Effects of transparency and adaptability. Paladyn 9, 1 (2018), 95–109. https://doi.org/10.1515/pjbr-2018-0007
  • Floyd and Aha (2016) Michael W. Floyd and David W. Aha. 2016. Incorporating transparency during Trust-Guided behavior adaptation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 9969 LNAI. Springer Verlag, 124–138. https://doi.org/10.1007/978-3-319-47096-2_9
  • Floyd and Aha (2017) Michael W. Floyd and David W. Aha. 2017. Using explanations to provide transparency during trust-guided behavior adaptation 1. AI Communications 30, 3-4 (2017), 281–294. https://doi.org/10.3233/AIC-170733
  • Georgeff et al. (1999) Michael Georgeff, Barney Pell, Martha Pollack, Milind Tambe, and Michael Wooldridge. 1999. The belief-desire-intention model of agency. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 1555. Springer Verlag, 1–10. https://doi.org/10.1007/3-540-49057-4_1
  • Gong and Zhang (2018) Ze Gong and Yu Zhang. 2018. Behavior Explanation as Intention Signaling in Human-Robot Teaming. In RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication. 1005–1011. https://doi.org/10.1109/ROMAN.2018.8525675
  • Griffith et al. (2013) Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L. Isbell, and Andrea L. Thomaz. 2013. Policy Shaping: Integrating Human Feedback with Reinforcement Learning. In Advances in Neural Information Processing Systems 26, C J C Burges, L Bottou, M Welling, Z Ghahramani, and K Q Weinberger (Eds.). Curran Associates, Inc., 2625–2633. http://papers.nips.cc/paper/5187-policy-shaping-integrating-human-feedback-with-reinforcement-learning.pdf
  • Grizou et al. (2013) Jonathan Grizou, Manuel Lopes, and Pierre-Yves Oudeyer. 2013. Robot learning simultaneously a task and how to interpret human instructions. In 2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL). 1–8. https://doi.org/10.1109/DevLrn.2013.6652523
  • Hayes and Shah (2017) Bradley Hayes and Julie A. Shah. 2017. Improving Robot Controller Transparency Through Autonomous Policy Explanation. In ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA, 303–312. https://doi.org/10.1145/2909824.3020233
  • High-Level Expert Group on Artificial Intelligence (2019) High-Level Expert Group on Artificial Intelligence. 2019. Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Huang et al. (2019) Sandy H. Huang, David Held, Pieter Abbeel, and Anca D. Dragan. 2019. Enabling Robots to Communicate Their Objectives. Autonomous Robots 43, 2 (jul 2019), 309–326. https://doi.org/10.1007/s10514-018-9771-0 arXiv:1702.03465
  • Jacucci et al. (2014) Giulio Jacucci, Anna Spagnolli, Jonathan Freeman, and Luciano Gamberini. 2014. Symbiotic interaction: A critical definition and comparison to other human-computer paradigms. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8820 (2014), 3–20. https://doi.org/10.1007/978-3-319-13500-7_1
  • Jung et al. (2013) Malte F. Jung, Jin J. Lee, Nick DePalma, Sigurdur O. Adalgeirsson, Pamela J. Hinds, and Cynthia Breazeal. 2013. Engaging Robots: Easing Complex Human-robot Teamwork Using Backchanneling. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW ’13). ACM, New York, NY, USA, 1555–1566. https://doi.org/10.1145/2441776.2441954
  • Khoramshahi and Billard (2019) Mahdi Khoramshahi and Aude Billard. 2019. A dynamical system approach to task-adaptation in physical human–robot interaction. Autonomous Robots 43, 4 (apr 2019), 927–946. https://doi.org/10.1007/s10514-018-9764-z
  • Knox and Stone (2009) W. Bradley Knox and Peter Stone. 2009. Interactively Shaping Agents via Human Reinforcement: The TAMER Framework. In Proceedings of the Fifth International Conference on Knowledge Capture (K-CAP ’09). ACM, New York, NY, USA, 9–16. https://doi.org/10.1145/1597735.1597738
  • Knox et al. (2013) W. Bradley Knox, Peter Stone, and Cynthia Breazeal. 2013. Training a Robot via Human Feedback: A Case Study. Lecture Notes in Computer Science, Vol. 8239. Springer International Publishing, 460–470.
  • Koller and Searle (1970) Alice Koller and John R. Searle. 1970. Speech Acts: An Essay in the Philosophy of Language. Language (1970). https://doi.org/10.2307/412428
  • Kwon et al. (2018) Minae Kwon, Sandy H. Huang, and Anca D. Dragan. 2018. Expressing Robot Incapability. In ACM/IEEE International Conference on Human-Robot Interaction. IEEE Computer Society, 87–95. https://doi.org/10.1145/3171221.3171276
  • Lamb et al. (2018) Maurice Lamb, Riley Mayr, Tamara Lorenz, Ali A. Minai, and Michael J. Richardson. 2018. The Paths We Pick Together: A Behavioral Dynamics Algorithm for an HRI Pick-and-Place Task. In ACM/IEEE International Conference on Human-Robot Interaction. IEEE Computer Society, 165–166. https://doi.org/10.1145/3173386.3177022
  • Lee et al. (2013) Jin J. Lee, W. Bradley Knox, Jolie B. Wormwood, Cynthia Breazeal, and David DeSteno. 2013. Computationally Modeling Interpersonal Trust. Frontiers in Psychology 4 (2013).
  • Lee et al. (2010) Min Kyung Lee, Sara Kielser, Jodi Forlizzi, Siddhartha Srinivasa, and Paul Rybski. 2010. Gracefully mitigating breakdowns in robotic services. In 5th ACM/IEEE International Conference on Human-Robot Interaction, HRI 2010. 203–210. https://doi.org/10.1145/1734454.1734544
  • Legg et al. (2019) Phil Legg, Jim Smith, and Alexander Downing. 2019. Visual analytics for collaborative human-machine confidence in human-centric active learning tasks. Human-centric Computing and Information Sciences 9, 1 (dec 2019). https://doi.org/10.1186/s13673-019-0167-8
  • Lipton (2018) Zachary C. Lipton. 2018. The Mythos of Model Interpretability. Queue 16, 3 (jun 2018), 30:31—-30:57. https://doi.org/10.1145/3236386.3241340
  • Lütkebohle et al. (2009) Ingo Lütkebohle, Julia Peltason, Lars Schillingmann, Britta Wrede, Sven Wachsmuth, Christof Elbrechter, and Robert Haschke. 2009. The curious robot - Structuring interactive robot learning. In Proceedings - IEEE International Conference on Robotics and Automation. 4156–4162. https://doi.org/10.1109/ROBOT.2009.5152521
  • Lyons (2013) Joseph B. Lyons. 2013. Being Transparent about Transparency: A Model for Human-Robot Interaction. Trust and Autonomous Systems: Papers from the 2013 AAAI Spring Symposium (2013), 48–53.
  • Najar et al. (2016) Anis Najar, Olivier Sigaud, and Mohamed Chetouani. 2016. Training a robot with evaluative feedback and unlabeled guidance signals. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). 261–266. https://doi.org/10.1109/ROMAN.2016.7745140
  • Najar et al. (2019) Anis Najar, Olivier Sigaud, and Mohamed Chetouani. 2019. Interactively Shaping Robot Behaviour with Unlabeled Human Instructions. arXiv preprint arXiv:1902.0167 (2019).
  • Ososky et al. (2014) Scott Ososky, Tracy Sanders, Florian Jentsch, Peter Hancock, and Jessie Y. C. Chen. 2014. Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems. In Unmanned Systems Technology XVI, Robert E. Karlsen, Douglas W. Gage, Charles M. Shoemaker, and Grant R. Gerhart (Eds.), Vol. 9084. International Society for Optics and Photonics, 90840E. https://doi.org/10.1117/12.2050622
  • Paléologue et al. (2017) Victor Paléologue, Jocelyn Martin, Amit K. Pandey, Alexandre Coninx, and Mohamed Chetouani. 2017. Semantic-based interaction for teaching robot behavior compositions. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). 50–55. https://doi.org/10.1109/ROMAN.2017.8172279
  • Perlmutter et al. (2016) Leah Perlmutter, Eric M. Kernfeld, and Maya Cakmak. 2016. Situated Language Understanding with Human-like and Visualization-Based Transparency. In Robotics: Science and Systems, Vol. 12. https://doi.org/10.15607/rss.2016.xii.040
  • Poulsen et al. (2019) Adam Poulsen, Oliver K. Burmeister, and David Tien. 2019. Care Robot Transparency Isn’t Enough for Trust. In 2018 IEEE Region 10 Symposium, Tensymp 2018. IEEE, 293–297. https://doi.org/10.1109/TENCONSpring.2018.8692047
  • Roncone et al. (2017) Alessandro Roncone, Olivier Mangin, and Brian Scassellati. 2017. Transparent role assignment and task allocation in human robot collaboration. In Proceedings - IEEE International Conference on Robotics and Automation. IEEE, 1014–1021. https://doi.org/10.1109/ICRA.2017.7989122
  • Rosenfeld and Richardson (2019) Avi Rosenfeld and Ariella Richardson. 2019. Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems 33, 6 (nov 2019), 673–705. https://doi.org/10.1007/s10458-019-09408-y arXiv:1904.08123
  • Schaefer et al. (2016) Kristin E. Schaefer, Ralph W. Brewer, Joe Putney, Edward Mottern, Jeffrey Barghout, and Edward R. Straub. 2016. Relinquishing manual control collaboration requires the capability to understand robot intention. In Proceedings - 2016 International Conference on Collaboration Technologies and Systems, CTS 2016. IEEE, 359–366. https://doi.org/10.1109/CTS.2016.69
  • Sciutti et al. (2012) Alessandra Sciutti, Ambra Bisio, Francesco Nori, Giorgio Metta, Luciano Fadiga, Thierry Pozzo, and Giulio Sandini. 2012. Measuring Human-Robot Interaction Through Motor Resonance. International Journal of Social Robotics 4, 3 (2012), 223–234. https://doi.org/10.1007/s12369-012-0143-1
  • Sciutti et al. (2014) Alessandra Sciutti, Laura Patanè, Francesco Nori, and Giulio Sandini. 2014. Understanding object weight from human and humanoid lifting actions. IEEE Transactions on Autonomous Mental Development 6, 2 (2014), 80–92. https://doi.org/10.1109/TAMD.2014.2312399
  • Scott et al. (1977) A Carlisle Scott, William J Clancey, Randall Davis, and Edward H Shortliffe. 1977. Explanation Capabilities of Production-Based Consultation Systems. American Journal of Computational Linguistics (1977), 1–50. https://www.aclweb.org/anthology/J77-1006
  • Sheikholeslami et al. (2018) Sara Sheikholeslami, Justin W. Hart, Wesley P. Chan, Camilo P. Quintero, and Elizabeth A. Croft. 2018. Prediction and Production of Human Reaching Trajectories for Human-Robot Interaction. In ACM/IEEE International Conference on Human-Robot Interaction. IEEE Computer Society, 321–322. https://doi.org/10.1145/3173386.3176924
  • Spagnolli et al. (2018) Anna Spagnolli, Lily E. Frank, Pim Haselager, and David Kirsh. 2018. Transparency as an Ethical Safeguard. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Jaap Ham, Anna Spagnolli, Benjamin Blankertz, Luciano Gamberini, and Giulio Jacucci (Eds.), Vol. 10727 LNCS. Springer, Cham, 1–6. https://doi.org/10.1007/978-3-319-91593-7_1
  • Sreedharan et al. (2017) Sarath Sreedharan, Tathagata Chakraborti, and Subbarao Kambhampati. 2017. Balancing Explicability and Explanation in Human-Aware Planning. In AAAI Fall Symposium - Technical Report, Vol. FS-17-01 -. 61–68. https://doi.org/10.24963/ijcai.2019/185 arXiv:1708.00543
  • Suay and Chernova (2011) Halit B. Suay and Sonia Chernova. 2011. Effect of human guidance and state space size on Interactive Reinforcement Learning. In 2011 RO-MAN. 1–6. https://doi.org/10.1109/ROMAN.2011.6005223
  • Tabrez and Hayes (2019) Aaquib Tabrez and Bradley Hayes. 2019. Improving Human-Robot Interaction through Explainable Reinforcement Learning. In 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE. arXiv:1808.07261
  • Theodorou et al. (2016) Andreas Theodorou, Robert H. Wortham, and Joanna J. Bryson. 2016. Why is my Robot Behaving Like That? Designing Transparency for Real Time Inspection of Autonomous Robots. AISB Workshop on Principles of Robotics (2016). http://opus.bath.ac.uk/49713/
  • Thomaz and Breazeal (2006) Andrea L. Thomaz and Cynthia Breazeal. 2006. Transparency and Socially Guided Machine Learning. In International Conference on Development and Learning. 1–146. https://dspace.mit.edu/handle/1721.1/36160
  • Thomaz and Breazeal (2008) Andrea L. Thomaz and Cynthia Breazeal. 2008. Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence 172, 6 (2008), 716–737. https://doi.org/10.1016/j.artint.2007.09.009
  • Anjomshoae et al. (2019) Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling. 2019. Explainable Agents and Robots: Results from a Systematic Literature Review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS ’19). 1078–1088. www.ifaamas.org
  • Unger (2012) Christoph Unger. 2012. Cognitive Pragmatics. The Mental Processes of Communication: Bruno G. Bara, translated by John Douthwaite, MIT Press, Cambridge/MA, 2010, 304 pp., ISBN: 978-0-262-01411-3, GBP 28.95 (paperback). Journal of Pragmatics 44, 3 (2012), 332–334. https://doi.org/10.1016/j.pragma.2011.12.001
  • van Melle (1978) William van Melle. 1978. MYCIN: A Knowledge-Based Consultation Program for Infectious Disease Diagnosis. International Journal of Man-Machine Studies 10, 3 (1978), 313–322. https://doi.org/10.1016/S0020-7373(78)80049-2
  • Wang et al. (2018) Hui Wang, Michael Emmerich, and Aske Plaat. 2018. Monte Carlo Q-learning for General Game Playing. (feb 2018). arXiv:1802.05944 http://arxiv.org/abs/1802.05944
  • Wang et al. (2016a) Ning Wang, David V. Pynadath, and Susan G. Hill. 2016a. The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS (AAMAS ’16). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 997–1005. https://dl.acm.org/citation.cfm?id=2937071
  • Wang et al. (2016b) Ning Wang, David V. Pynadath, and Susan G. Hill. 2016b. Trust calibration within a human-robot team: Comparing automatically generated explanations. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. 2016-April. IEEE Computer Society, 109–116. https://doi.org/10.1109/HRI.2016.7451741
  • Warren (1977) David Warren. 1977. Implementing Prolog – Compiling Predicate Logic Programs.
  • Wick and Thompson (1992) Michael R. Wick and William B. Thompson. 1992. Reconstructive Expert System Explanation. Artificial Intelligence 54, 1-2 (1992), 33–70. https://doi.org/10.1016/0004-3702(92)90087-E
  • Zhou et al. (2017) Allan Zhou, Dylan Hadfield-Menell, Anusha Nagabandi, and Anca D. Dragan. 2017. Expressive Robot Motion Timing. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. Part F1271. 22–31. https://doi.org/10.1145/2909824.3020221 arXiv:1802.01536