Explainable robotic systems have become an active field of research, as they endow robots with the ability to explain their behavior to a human counterpart [Anjomshoae19, Sheh17]. One of the main benefits of explainability is increased trust in Human-Robot Interaction (HRI) scenarios [Wang15, Wang16].
To get robots into our daily-life environments, an explainable robotic system should provide clear explanations, especially aimed at non-expert end-users, that allow them to understand the robot's decisions. Often, such explanations given by robotic systems have focused on interpreting the agent's decision based on its perception of the environment [Pocius19, Lengerich17]. In contrast, little work has explained how the selected action is expected to help the agent achieve its goal. For instance, the use of the visual sensory modality attempts to understand how a deep neural network makes decisions considering a visual state representation. In general, prior work has focused on state-based explanation [Hendricks16, Li18, Iyer18], e.g., given explanations take the form of: 'I chose this action because of this feature of the state'. Nevertheless, explanations from a goal-oriented perspective have so far been less addressed and, therefore, there exists a gap between explaining the robot's behavior from state features and explaining how the behavior helps to achieve its aims.
In this work, we propose a robotic scenario where the robot has to learn a task using Reinforcement Learning (RL) [Sutton18]. The aim of RL is to endow an autonomous agent with the ability to learn a new skill by interacting with the environment. While RL has been shown to be an effective learning approach in diverse areas such as human cognition [Gershman17, Palminteri17], developmental robotics [Cangelosi15, Cruz18], and video games [Kempka16, Vinyals17], among others, an open issue is the lack of a mechanism that allows the agent to clearly communicate the reasons why it chooses certain actions given a particular state. Therefore, it is not easy for a person to entrust important tasks to an AI-based system (e.g., robots) that cannot justify its reasoning [Adadi18]. In this regard, our outcome-focused approach is concerned with the outcomes of each decision, e.g., an explanation may take the form of: 'action A gives an 85% chance of success compared to 38% for action B'.
When interacting with the environment, an RL agent will learn a policy to decide which action to take from a certain state. In value-based RL methods the policy uses Q-values to determine the course of actions to follow. The Q-values are not necessarily meaningful in the problem domain, but rather in the reward function domain. Hence, they do not allow the robot to explain its behavior in a simple manner to a person with no knowledge about RL or machine learning techniques.
In this paper, we propose three different approaches that allow a learning agent to explain, using domain language, the decision of selecting an action over the other possible ones. In these approaches, explanations are given using the probability of success, i.e., the probability of accomplishing the task following particular criteria related to the scenario. Thus, an RL agent is able to explain its behavior not only in terms of Q-values or the probability of selecting an action, but rather in terms of what is needed to complete the intended task. For instance, using a Q-value, an agent might give an explanation such as: 'I decided to go left because the action left has the highest Q-value in the current state', which is meaningful in the reward function domain but pointless for a non-expert user. Whereas, using the probability of success, an agent might provide an explanation such as: 'I chose to go left because that action has the highest probability of reaching the goal successfully', which is much more understandable for a non-expert end-user.
The proposed methods differ in their approach to determining the probability of success. Each approach trades off accuracy against space complexity or applicable problem domain. This paper illustrates empirically the benefits and shortcomings of each approach. We have tested our approach in a simulated robotic scenario. The scenario consists of a robot navigation task where we have used both deterministic and stochastic state transitions.
The first method uses a Memory-based eXplainable RL (MXRL) approach, which we previously introduced [Cruz19] to compute the probability of success in both bounded and unbounded grid-world scenarios. This approach has high space complexity but provides an accurate probability of success. The MXRL approach is used to verify the accuracy of the two estimation methods developed in this paper. The new approaches comprise a learning-based and a phenomenological-based method, both of which significantly reduce the space complexity. This reduction also makes these methods better suited for domains requiring a continuous state representation. The former allows the RL agent to learn the probability of success while the Q-values are learned; the latter uses a logarithmic transformation to compute the probability of success directly from the Q-values.
The paper is organized as follows. Section 2 presents related work divided into three subsections: the reinforcement learning subsection introduces the basics of the RL approach; the explainable reinforcement learning subsection discusses previous works developed as part of the explainable artificial intelligence (XAI) framework; and the trust and explainable robotic systems subsection presents previous work using explainability to enhance trust in HRI scenarios. Section 3 introduces the outcome-focused approaches for explaining robot behavior using the probability of success; three approaches are presented: the memory-based, the learning-based, and the phenomenological-based approach. Section 4 presents the experimental set-up, which consists of a simulated robotic scenario for a navigation task using both deterministic and stochastic transitions. Section 5 illustrates, through the analysis of experimental results, that both new approaches are highly accurate and memory-efficient compared to the memory-based approach. Finally, Section 6 concludes with a more in-depth discussion of the obtained results, also taking into account limitations and future directions.
2 Explainable reinforcement learning in robotics
2.1 Reinforcement learning
RL bases its initial ideas on Aristotle's law of contiguity [Warren16], which states that things that occur close to each other in time or space become associated. Moreover, RL also draws motivation from classical conditioning, or stimulus-response learning [Pavlov27], and instrumental conditioning, or stimulus-behavior learning [Thorndike11].
Nowadays, RL is studied as a decision-making mechanism in both cognitive and artificial agents [Cangelosi15]. In RL, there is no explicit instructor, but rather the awareness of how the environment responds to actions performed by the learning agent. Therefore, an agent should be able to sense the environment's state and perform actions in order to transit to a new state. In other words, an agent must learn from its own experience [Kober13]. The usual interactive loop between an RL agent and its environment is depicted on the right-hand side of Fig. 1, where it is possible to observe that the robot performs an action a_t from the state s_t to reach a new state s_{t+1} and receives a reward signal r_{t+1} as a response from the environment.
Formally, an RL agent has to learn a policy π : S → A, where S is the set of states and A the set of available actions, to produce the highest possible reward from a state s ∈ S [Sutton18]. In psychology, the policy is known as a set of stimulus-response rules [Kornblum90]. The optimal policy is denoted by π* and the optimal action-value function is denoted by Q*(s, a) and defined as:

Q*(s, a) = max_π Q^π(s, a) (1)
The optimal action-value function is solved through the Bellman optimality equation for Q*, as shown in Eq. 2:

Q*(s, a) = Σ_{s'} P(s' | s, a) [r(s, a, s') + γ max_{a'} Q*(s', a')] (2)

where s is the current state, a the taken action, s' the next state reached after performing action a from the state s, and a' is an action that could be taken from s'. In Eq. 2, P(s' | s, a) represents the probability of reaching the state s' given the current state s and the selected action a. Finally, r(s, a, s') is the reward signal received after performing action a from the state s to reach the state s'.
To solve Eq. 2, an alternative is to use the on-policy method SARSA [Rummery94]. SARSA is a temporal-difference learning method which iteratively updates the state-action values using Eq. 3 as follows:

Q(s, a) ← Q(s, a) + α [r + γ Q(s', a') − Q(s, a)] (3)
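As a concrete illustration, the SARSA update can be sketched in a few lines of Python (a minimal tabular sketch under our own simplifications, not the authors' implementation; the dictionary-based Q-table and the parameter values are assumptions):

```python
from collections import defaultdict

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One SARSA temporal-difference update of Q(s, a) (cf. Eq. 3)."""
    td_target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q[(s, a)]

# Example: a single update from an empty Q-table after receiving reward 1.0.
Q = defaultdict(float)
sarsa_update(Q, 0, "right", 1.0, 2, "stay")
```

Note that the update uses the action actually selected in the next state (a_next), which is what makes SARSA on-policy.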
2.2 Explainable reinforcement learning
Machine learning techniques are attracting more attention every day in different areas of our daily life. Applications in fields such as robotics, autonomous driving, assistive companions, and video games, among others, are constantly shown in the media [Gunning17]. There are different alternatives to model intelligent agents, e.g., by using phenomenological (white-box) models, empirical (black-box) models, or hybrid (grey-box) models [Cruz07, Cruz10]. Explainable artificial intelligence (XAI) has emerged as a prominent research area that aims to give black-box AI-based systems the ability to provide human-like, user-friendly explanations to non-expert end-users [Miller18, Dazeley20]. XAI research is motivated by the need for transparent decision-making that people can trust and accept [Fox17].
In the area of Explainable Reinforcement Learning (XRL), there have been several works trying to provide agents with explanation mechanisms. However, most of them have mainly focused on giving technical explanations to end-users. For instance, Shu et al. [Shu17] have introduced an approach for hierarchical and interpretable skill acquisition, using human descriptions to decompose tasks into a hierarchical plan with understandable actions. Hein et al. [Hein18] have combined RL with genetic programming (GP) to obtain interpretable policies. They have tested their approach on the mountain car and cart-pole balancing RL benchmarks. However, the provided explanations cover only the learned policy, expressed through equations rather than a natural-language-like representation. Verma et al. [Verma18] have introduced the so-called programmatically interpretable reinforcement learning (PIRL) framework for verifiable agent policies. However, the framework works with symbolic inputs and considers only deterministic policies, not stochastic ones.
Moreover, Wang et al. [Wang18] proposed an explainable recommendation system using an RL framework. Pocius et al. [Pocius19] utilized saliency maps to explain agent decisions in a partially observable game scenario; thus, they focus on providing visual explanations with deep RL. Madumal et al. [Madumal19], inspired by cognitive science, proposed using causal models to derive causal explanations. Nevertheless, the causal model has to be known in advance for the specific domain.
Another closely related field is explainable planning, whose primary goal is to help end-users better understand the plan produced by a planner [Fox17]. In [Sukkerd18], Sukkerd et al. proposed a multi-objective probabilistic planner for a simple robot navigation task. They provide verbal explanations of quality-attribute objectives and properties; however, these rely on assumptions about the preference structure on quality attributes.
2.3 Trust and explainable robotic systems
As robots take their first steps into domestic scenarios, they are increasingly likely to work with humans in teams. Therefore, if a robot is endowed with the ability to explain its behavior to non-expert users, this may lead to an increase in the trust given to it by the human user [Wang15], as shown in Fig. 1. In this regard, some works have used explanations as a way of increasing trust in HRI scenarios. For instance, Wang et al. [Wang16] proposed a domain-independent approach to generate explanations and measure their impact on trust, using behavioral data from a simulated human-robot team task. Their experiments showed that using explanations improved transparency, trust, and team performance. Lomas et al. [Lomas12] developed a prototype system to answer people's questions about robot actions. They assumed the robot uses a cost-map-based planner for low-level actions and a finite state machine high-level planner to respond to specific questions. Yang et al. [Yang17] presented a simulated dual-task environment, in which they treated trust as a variable evolving with the user's experience.
Furthermore, Sanders et al. [Sanders14] proposed using different modalities to evaluate the effect on transparency in an HRI scenario. They also varied the level of information provided by the robot to the human and measured the trust responses. In their study, participants reported higher trust levels when the level of information was constant; however, no significant differences were found when using a different communication modality. Haspiel et al. [Haspiel18] carried out a study on the importance of timing when giving explanations. They used four different automated vehicle driving conditions, namely: no explanation, explanation seven seconds before an action, one second before an action, and seven seconds after an action. They found that earlier explanations lead to higher trust from end-users.
Moreover, the term explainable agency has also been used in HRI scenarios to refer to robots engaged in answering questions about the reasons for their decision-making process [Langley16, Anjomshoae19]. Langley et al. propose as the elements of explainable agency: content that supports explanations, an episodic memory to record states and actions, and access to the agent's experience [Langley17]. However, they do not implement the proposed approach in their work. Sequeira et al. [Sequeira19a] developed a framework to provide explanations through analyses at three levels of the RL agent's interaction history. They later extended their work with a user study to identify agents' capabilities and limitations [Sequeira19b]. Tabrez & Hayes [Tabrez19] used an HRI scenario to correct sub-optimal human model behavior, formulated as a Markov decision process (MDP). In their research, they reported that users found the robot more helpful, useful, and intelligent when explanations and justifications were provided. However, the proposed framework still lacks comprehensibility of the optimal policy.
3 Outcome-focused explanations
As discussed previously, although the behavior of a robot using RL might be technically explained in terms of the Q-values or in algorithmic terms, in this work we look for explanations that make sense to all kinds of possible end-users, not only to those able to understand the underlying learning process of an artificial agent.
Endowing artificial agents with the ability to explain their actions is currently one of the most critical and complex challenges in RL research [Gunning17]. This challenge is especially important considering that RL-based systems often interact with human observers. Therefore, it is essential that non-expert end-users can understand the agent's intentions as well as obtain more details about the execution in case of failure [Dulac19].
As aforementioned, although there is a growing literature in different XAI subfields, such as explainable planners, interpretable RL, and explainable agency, only a few works address the XRL challenge in robotic scenarios. Some of those works, although focused on XRL to a certain extent, have different aims from ours, e.g., explaining the learning process using saliency maps from a computer vision perspective, especially when using deep reinforcement learning, as in [Pocius19]. In this paper, we focus on explaining goal-oriented decisions to provide the user with an understanding of what motivates the robot's specific actions from different states, taking into account the problem domain.
In HRI scenarios, there are many questions which could arise from a non-expert user when interacting with a robot. Such questions include what, why, why not, what if, how to [Lim09]. Some examples are:
What: What are you doing?
Why: Why did you step forward in the last movement?
Why not: Why did you not turn to the right in this situation?
What if: What if you had turned to the left in the last movement?
How to: How to return to the initial position?
However, from a non-expert end-user perspective, we can consider the most relevant questions to be 'why?' and 'why not?' [Madumal19], e.g., 'why did the agent perform action a from state s?'. Hence, we focus this approach on answering these kinds of questions using an understandable domain language. Thus, our approach explains how the agent's selected action is the preferred choice based on its likelihood of achieving the goal. This is achieved by determining the probability of success. Once the probability of reaching the final state is determined, the agent is able to provide the end-user with an understandable explanation of why one action is preferred over others in a particular state.
In the following subsections, we present different approaches aiming to explain why an agent selects an action in a specific situation. As discussed, we focus our analysis on the probability of success as a way to support the agent's decision, as this is more intuitive for a non-expert user than an explanation based directly on the Q-values. We introduce three different approaches to estimate the probability of success. The memory-based approach develops a transition network of the domain during learning and has been previously presented by us applied to a grid-world scenario [Cruz19]; the learning-based approach uses a Φ-value learned in parallel with the Q-value; and the phenomenological-based approach proposes a model that relates the Q-values directly to the probability of success through equations representing a phenomenological-like model. Therefore, in this work, we extend our previous approach by adding two alternatives to compute the probability of success, with the aim of utilizing fewer memory resources and being usable in non-deterministic and deep learning domains.
3.1 Memory-based approach
In [Cruz19] we proposed a memory-based explainable reinforcement learning (MXRL) approach to compute the probability of success using an RL agent with an episodic memory, as suggested in [Langley17]. By accessing the memory, it was possible to understand the agent's behavior based on its experience by using introspection at three levels [Sequeira19a], i.e., environment analysis (to observe certain and uncertain transitions), interaction analysis (to observe state-action frequencies), and meta-analysis (to obtain combined information from episodes and agents). We implemented a list of state-action pairs L = {(s_0, a_0), (s_1, a_1), …} comprising the transitions the agent performed during its learning process.
To compute the probability of success P_s, we computed the total number of transitions T_t and the number of transitions involved in a success sequence T_s. To obtain T_s, we used the transitions previously saved into the list L. Every time the agent reached the final state, we computed the probability P_s = T_s / T_t, considering the transitions involved in the path towards the goal state.
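The core of this counting procedure can be sketched as follows (an illustrative Python sketch under our own simplifications, not the code of [Cruz19]; the episode representation as (state, action, next_state) triples is an assumption):

```python
from collections import defaultdict

def success_probabilities(episodes, goal_state):
    """Estimate P_s = T_s / T_t per state-action pair from an episodic memory.
    `episodes` is a list of episodes, each a list of (state, action, next_state)
    transitions; an episode counts as successful if it ends in `goal_state`."""
    total = defaultdict(int)       # T_t: times each (s, a) was executed
    successful = defaultdict(int)  # T_s: times (s, a) occurred in a successful episode
    for episode in episodes:
        succeeded = bool(episode) and episode[-1][2] == goal_state
        for (s, a, _next) in episode:
            total[(s, a)] += 1
            if succeeded:
                successful[(s, a)] += 1
    return {sa: successful[sa] / total[sa] for sa in total}
```

The memory cost is visible here directly: every transition of every episode must be retained, which is what motivates the two estimation approaches below.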
We have implemented the on-policy method SARSA [Rummery94] and the softmax action selection method. Algorithm 1 shows the MXRL approach to train RL agents using an episodic memory: in line 7 each executed state-action pair is saved into the memory, while line 23 computes the final probabilities of success for each episode.
Using an episodic memory, we have previously shown that an agent is able to explain its behavior in an understandable manner for non-expert end-users at any moment during its operation [Cruz19]. However, in this approach, the memory usage increases rapidly: the use of resources grows with the number of states s, the number of actions per state a, the average length of the episodes, and the number of episodes. Therefore, this approach is not suitable for large or continuous state or action spaces, such as real-world robotic scenarios.
3.2 Learning-based approach
Although computing the probability of success by means of the episodic memory is possible and has previously led to good results, one of the main problems is the increasing amount of memory needed as the problem dimensionality grows. In this regard, an alternative for explaining the behavior in terms of the probability of success is to learn it throughout the agent's learning process.
To learn the probability of success, we propose to maintain an additional set of state-action values analogous to the Q-values. Learning the probability-of-success values in parallel, as a state-action table, provides a more understandable way to explain the behavior to non-expert users. We refer to this table as the Φ-table and to an individual value inside it as a Φ-value. While with the MXRL approach the episodic memory could grow unrestricted, when using an additional table, as proposed in this learning-based approach, the extra memory needed is fixed to the size of the Φ-table, i.e., s × a values. This doubles the memory requirements of RL, which implies only a constant increase over the base algorithm and is therefore negligible.
Similarly to the Q-values, learning the Φ-values implies updating the estimates after each performed action. In our approach, we employ the same learning rate α for both values; however, the main change with respect to the implemented temporal-difference algorithm is that we do not use a reduced discount factor, or in other words, we set γ = 1 to consider the total sum of future rewards. From the discounted reward perspective, using γ = 1 does not represent an issue for solving the underlying optimization problem, since it is the Q-values that are used for learning purposes, and these do use a discount factor γ < 1 to guarantee convergence.
When using the discount factor γ = 1, the agent evaluates each action based on the total sum of all future rewards; nevertheless, we want to learn a Φ-value that represents a probability of success. Therefore, we do not use the reward to update the Φ-table; instead, we use a success flag φ, whose value is 0 to indicate that the task has been failed, or 1 to indicate that the task has been completed. In such a way, the agent learns the probability of success considering the sum of the probabilities of finalizing the task in the future.
The update of the Φ-values is performed according to Eq. (4) as follows:

Φ(s, a) ← Φ(s, a) + α [φ + Φ(s', a') − Φ(s, a)] (4)

where a is the taken action at the state s. Φ(s, a) and Φ(s', a') are the probability-of-success values considering the state and the action at timestep t and t+1, respectively. Moreover, α is the learning rate and φ is the success flag used to indicate whether the task has been completed.
Inside Algorithm 1, line 13 implements the learning-based approach by updating the Φ-table. While the Φ-table and the Q-table share a similar structure and learning mechanism, their roles are quite different: the Q-values drive the agent's policy, while the Φ-values are used for explanatory purposes. Separating the roles of these values allows the use of reward and discount-factor terms best suited to each purpose. Therefore, the success flag φ might be sparse, with non-zero values only when the task is successfully completed, whereas the reward function might be free to take on any form, including the incorporation of reward shaping terms [Ng99], which might be difficult to interpret for a non-expert.
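A minimal sketch of this parallel update follows (our own illustration; the dictionary-based Φ-table and the explicit terminal-state handling are assumptions made for the example):

```python
def phi_update(Phi, s, a, phi_flag, s_next, a_next, alpha=0.1, terminal=False):
    """Update the probability-of-success estimate Phi(s, a) (cf. Eq. 4).
    phi_flag is 1 when the task has just been completed and 0 otherwise;
    no discounting is applied (gamma = 1)."""
    bootstrap = 0.0 if terminal else Phi[(s_next, a_next)]
    Phi[(s, a)] += alpha * (phi_flag + bootstrap - Phi[(s, a)])
    return Phi[(s, a)]
```

This update would be called right after the SARSA update at each step, reusing the same (s, a, s', a') experience tuple.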
The learning method described in Eq. (4) is suitable only for discrete representations. However, the same learning-based approach can be extended to continuous and larger discrete scenarios by using function approximators, such as neural networks. In that case, we would need one neural approximator to learn the Q-values and an additional one to learn the Φ-values in parallel.
3.3 Phenomenological-based approach
Even though the learning-based approach presented above represents an improvement over the MXRL approach in terms of memory usage, it still requires some memory to keep the Φ-values updated. Moreover, during the learning process, time is also needed for the computations and for the learning episodes in order to obtain a better estimation.
Bearing in mind the temporal-difference learning approach shown in Eq. (3), the optimal Q-values represent possible future reward and are therefore expressed in the reward function domain. Thus, if an agent reaches a terminal state in an episodic task obtaining a reward r_f, the associated Q-value approximates this reward. In a simplified manner, we can consider any Q-value as the terminal reward multiplied by the discount factor applied d times, as shown in Eq. (5):

Q(s, a) ≈ r_f · γ^d (5)
Using this derivation, once Q(s, a) converges to the true value, we can compute how far the agent is from obtaining the total reward for any state directly from the Q-values. Therefore, using the previous argument, we compute the estimated distance to the reward, as shown in Eq. (6):

d = log_γ (Q(s, a) / r_f) (6)

where Q(s, a) is the Q-value, r_f the reward obtained when the task is completed successfully, and d is the estimated distance, in number of actions, to the reward.
After computing the estimated distance to the reward, we use this value as the basis for an estimated probability of success P̂_s. Using a constant transformation, we weight the estimated distance; what we are actually performing is a logarithmic base transformation to estimate the probability of success, as shown in Eq. (7) and Eq. (8). We also take into account stochastic transitions, represented by the σ parameter, which we discuss further in the following section.
This transformation is carried out in order to shape the probability-of-success curve as a common-base logarithm (base 10) that fits the behavior of both previously introduced approaches. Moreover, we shift and scale the curve into a region where the probability values are plausible. Finally, in order to restrict the value of the probability of success to the interval [0, 1], we compute the rectification shown in Eq. (9), which basically consists of assigning a value of 0 when the result is below the interval, or 1 when the result is above it.
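The distance estimate of Eq. (6) and the rectification of Eq. (9) can be sketched as follows (an illustrative sketch; the intermediate shift-and-scale constants of Eqs. (7)–(8) are intentionally omitted, since they are specific to the fitted model):

```python
import math

def estimated_distance(q_value, final_reward, gamma):
    """Estimated number of actions to the reward (cf. Eq. 6), assuming
    Q(s, a) ~ final_reward * gamma ** d (cf. Eq. 5)."""
    return math.log(q_value / final_reward) / math.log(gamma)

def rectify(p):
    """Rectification of Eq. (9): restrict a probability estimate to [0, 1]."""
    return min(max(p, 0.0), 1.0)
```

For example, with gamma = 0.9 and final reward 1.0, a Q-value of 0.9³ yields an estimated distance of three actions to the reward.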
4 Experimental set-up
In this section, we describe the experimental scenario used to test our approaches. We have used an episodic robotic scenario in which the transitions can be modeled as a graph. The scenario consists of a simulated robot navigation task comprising six rooms and three possible actions to perform from each room. The simulated scenario is shown in Fig. 2 using the CoppeliaSim robot simulator [Rohmer13]. Furthermore, two variations are considered for the proposed scenario: deterministic and stochastic transitions.
In the proposed scenario, a mobile robot has to learn how to navigate from a fixed initial position (room 0) through different rooms, considering two possible paths, to find the table at the goal position (room 5). Moreover, every room in the middle of the paths, i.e., from room 1 to room 4, has an exit that leaves the level. These transitions are treated as leading to an aversive region; therefore, once any of these exits has been taken, the robot is unable to come back and needs to stop the current learning episode and restart a new one from the initial position. In the scenario, the agent always starts from the same initial position and may take either of two symmetric paths towards the goal position.
The proposed model of the state-action transitions is depicted in Fig. 3. We have defined six states corresponding to the six rooms and three possible actions from each state. Actions are defined taking into account the robot’s perspective. The possible actions are as follows:
a_l, move through the left door,
a_r, move through the right door, and
a_s, stay in the same room.
The transitions above are, in principle, all deterministic, meaning that once an action is selected by the agent to be performed, the expected result is the next state in all situations, as shown in the state-action transition function (Fig. 3). Nevertheless, in some situations, transitions may not be deterministic and may include a certain level of stochasticity or uncertainty, as in partially observable problems, or due to noisy sensors, for instance. Therefore, taking these situations into consideration, we have also used a parameter σ to include stochastic transitions.
When stochastic transitions are used, the next state reached as a result of performing an action may be any of the states reachable from where the action has been performed. For instance, if the action a_r is performed from the state s_0, the agent is expected to be in the state s_2 once the action is completed, under deterministic transitions (σ = 0). However, considering stochasticity (i.e., σ > 0), there is also a probability that the agent could finish in the state s_1 or the state s_0, since these two are also reachable from the state s_0 (by performing the action a_l and the action a_s, respectively).
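A stochastic transition of this kind can be sketched as follows (an illustration of one plausible reading of σ, in which with probability σ the outcome of one of the other available actions occurs instead; the exact stochastic model used in the experiments is a simplifying assumption here):

```python
import random

def next_state(transitions, state, action, sigma, rng=random):
    """Sample a successor state. `transitions[state][action]` gives the
    deterministic successor; with probability sigma the successor of one of
    the other actions available in `state` is reached instead."""
    if rng.random() < sigma:
        others = [a for a in transitions[state] if a != action]
        action = rng.choice(others)
    return transitions[state][action]
```

With sigma = 0 this reduces exactly to the deterministic transition function of Fig. 3.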
For the learning process, the reward function returns a positive reward when the agent reaches the final state and a negative reward in case the agent reaches an aversive region, i.e., when it leaves the scenario. Eq. 10 shows the reward function used in the proposed scenario.
Even though we are aware that the proposed scenario is rather simple from the learning perspective, and thus manageable by a reinforcement learning agent, in this work we are focusing on giving a basis for explanations with the methods described in the previous section.
5 Experimental results
In this section, we describe the results obtained when using the three proposed approaches in the scenario described in the previous section. All the experiments have been performed using the on-policy learning algorithm SARSA, as shown in Eq. 3, and the softmax action selection method, where P(a | s) = exp(Q(s, a) / τ) / Σ_{b ∈ A} exp(Q(s, b) / τ), s is the current agent's state, a an action in the set A of available actions, τ is the temperature parameter, and exp the exponential function. Multiple agents have been trained, and the following plots show the average results. For the analysis, we plot the obtained Q-values, the estimated distance to task completion, and the probabilities of success using the three proposed methods. We show these values in all cases for the actions performed from the initial state s_0; nevertheless, similar plots can be obtained for each state.
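The softmax selection rule can be sketched as follows (a generic illustration; the Q-table layout and parameter values are assumptions):

```python
import math
import random

def softmax_action(Q, state, actions, tau=1.0, rng=random):
    """Boltzmann (softmax) action selection:
    P(a | s) = exp(Q[s, a] / tau) / sum_b exp(Q[s, b] / tau)."""
    prefs = [math.exp(Q[(state, a)] / tau) for a in actions]
    total = sum(prefs)
    threshold, cumulative = rng.random() * total, 0.0
    for a, p in zip(actions, prefs):
        cumulative += p
        if threshold <= cumulative:
            return a
    return actions[-1]
```

Lower temperatures τ make the selection greedier, while higher temperatures make it closer to uniform exploration.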
The parameters used for the training, i.e., the learning rate α, the discount factor γ, and the softmax temperature τ, were experimentally determined for our scenario. These parameters do affect the agent's ability to learn a solution; however, we are interested in understanding the agent's decisions rather than the speed or capacity of the learning agent.
5.1 Deterministic robot navigation task
Initially, we have tested an RL agent moving across the rooms considering deterministic transitions, i.e., performing an action from one state to another always reaches the intended state with probability equal to 1, as defined in the transition function (see Fig. 3).
Fig. 4 shows the Q-values obtained over the episodes for the actions of moving to the left (a_l), moving to the right (a_r), and staying in the same room (a_s) from the initial state s_0. It is possible to observe that during the first episodes the agent prefers to perform a_r as a consequence of the collected experience, which is shown by the blue line. However, as the learning improves, the three actions converge to similar Q-values.
Fig. 5 shows the estimated distance (according to Eq. (6)) from the initial state s_0 to the reward when performing each of the three available actions. Contrary to the Q-values, the distance needed to reach the reward decreases over time, starting with a large estimated number of actions and reaching values close to the minimum. It can be seen that the action right (a_r) converges faster, since the estimated distance is computed according to the agent's experience using the Q-values and the reward. Since the distance is obtained from the Q-values, it can be produced for any of the proposed approaches; however, we use it specifically to compute the probability of success with the phenomenological-based approach.
Fig. 6 shows the probabilities of success for the memory-based approach when taking the different possible actions from the initial state, i.e., the probability of successfully finishing the task choosing any path from the initial room. The three possible actions from this state, i.e., going to the left room, going to the right room, and staying in the same room, are shown in red, blue, and green, respectively. In the first episodes, every possible action has a very low probability of success since the agent does not yet know how to navigate appropriately and, therefore, often selects an action that leads it out of the floor. Over the episodes, the agent tends to follow the path to its right to reach the goal state; however, after 300 episodes all the probabilities converge to a similar value as the agent collects enough knowledge along all paths.
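The memory-based estimate can be sketched as a simple frequency count over the episodic memory. The episode representation below (a list of state-action pairs plus a goal flag) is an assumption for illustration; the paper's actual memory structure may differ.

```python
def memory_based_success(episodes, state, action):
    """Probability of success for (state, action): the fraction of stored
    episodes that took `action` in `state` and ended at the goal.
    Each episode is a tuple (list of (s, a) pairs, reached_goal flag)."""
    tried = [reached_goal for steps, reached_goal in episodes
             if (state, action) in steps]
    if not tried:
        return 0.0  # no experience yet for this state-action pair
    return sum(tried) / len(tried)
```

This explains the behavior seen in Fig. 6: early on, few episodes end at the goal, so every action's estimated probability is near zero, and the estimates rise as successful episodes accumulate.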
By using the probability of success as a source of information for a non-expert end-user, the RL agent can more easily explain its behavior and why, at a certain point of the learning process, one action may be preferred over others. Since the probabilities of success computed with the memory-based approach are obtained directly from the actions performed during the episodes, in the following we use these results as a baseline for comparison with both the learning-based approach and the phenomenological-based approach. Moreover, we use a noisy signal obtained from the memory-based approach as a control group. For all the approaches, including the noisy signal, we compute Pearson's correlation to measure the similarity between the approaches, as well as the mean squared error (MSE).
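The two similarity measures used throughout the comparison are standard and can be computed as follows (a plain-Python sketch; in practice a library such as NumPy or SciPy would typically be used):

```python
import math

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mse(x, y):
    """Mean squared error between two equal-length series."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
```

Pearson's correlation captures whether two probability curves evolve in the same way over episodes (shape), while the MSE captures how far apart their values are (magnitude), which is why both are reported.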
Fig. 7 shows the probability of success for the three possible actions from the initial state using the learning-based approach. Similarly to the memory-based approach, the probabilities show that the agent initially prefers the action of moving to the right; however, as in the previous case, after the learning process all the probabilities converge to similar values, close to a 90% chance of success.
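The learning-based approach updates its probability estimates alongside the Q-values. Since the exact form of the paper's update rule (Eq. (4)) is not reproduced in this excerpt, the sketch below shows one plausible TD-style variant, with a success signal of 1.0 on reaching the goal and 0.0 otherwise; treat it as an assumption rather than the paper's definition.

```python
def prob_table_update(P, s, a, success, s_next, a_next, alpha, gamma):
    """Hypothetical TD-style update of the probability-of-success table:
    bootstrap from the next state-action pair, with `success` = 1.0 on
    reaching the goal and 0.0 otherwise. Terminal entries stay at 0."""
    target = success + gamma * P[s_next][a_next]
    P[s][a] += alpha * (target - P[s][a])
    P[s][a] = min(max(P[s][a], 0.0), 1.0)  # keep the entry a valid probability
```

Because the table starts at zero and is filled in by bootstrapping, this kind of update also explains why the learning-based curves in Fig. 7 begin at a probability of success of zero.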
The probabilities of success using the phenomenological-based approach are shown in Fig. 8. As before, the possible actions are shown from the initial state. The probabilities evolve over the episodes similarly to the previous approaches. Initially, the agent favors the action leading to the right room, but the three actions reach a similar probability of success after training.
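The phenomenological-based approach transforms Q-values into probabilities via the estimated distance. The paper's Eq. (9) is not reproduced in this excerpt, so the sketch below uses one plausible form as an assumption: estimate the distance d from Q(s,a) ≈ γ^d · R and treat the task as d steps that each succeed with a per-step probability p_step.

```python
import math

def phenomenological_success(q_value, reward, gamma, p_step):
    """Hypothetical transformation from a Q-value to a probability of
    success: recover the estimated distance d from the Q-value and
    assume d independent steps, each succeeding with p_step."""
    if q_value <= 0:
        return 0.0  # no learned path yet, so no chance of success
    d = math.log(q_value / reward) / math.log(gamma)
    return p_step ** d
```

Under this assumed form, deterministic transitions (p_step = 1) give a probability of 1 for any positive Q-value, and stochasticity lowers the estimate as the distance grows, consistent with the trends reported in the deterministic and stochastic experiments.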
As with the learning-based approach, the phenomenological-based approach obtains a probability of success equal to zero in the first episodes. This initial behavior is due to the fact that the probability of success using the learning-based approach is computed in a similar way as the Q-values, i.e., by updating probability values inside a table. Likewise, the estimated probability of success using the phenomenological-based approach is computed from the Q-values, as it is a numerical transformation of the estimated distance.
Overall, the three proposed approaches behave similarly when using deterministic transitions, reaching similar results in terms of the final probabilities of success from the initial state
and the evolution over the learning process. To further analyze the similarity between the proposed approaches, we compute Pearson's correlation as well as the MSE with respect to the memory-based approach. Additionally, as a control group, we have generated a noisy version of the memory-based probabilities of success by adding 20% white noise. This amount of noise creates control data that differ sufficiently from the original probabilities while the individual curves remain distinguishable from each other. However, in this work we do not test how tolerant our approaches are to possible noise. The resulting noisy probabilities can be seen in Fig. 9.
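Generating the control signal can be sketched as below; the 20% figure is interpreted here as the standard deviation of zero-mean Gaussian noise, which is an assumption about the paper's exact procedure.

```python
import random

def add_white_noise(signal, level=0.2, rng=random):
    """Return a noisy copy of `signal`: add zero-mean Gaussian noise with
    standard deviation `level` and clip to [0, 1] so the values remain
    valid probabilities."""
    return [min(max(v + rng.gauss(0.0, level), 0.0), 1.0) for v in signal]
```

Clipping keeps the control curves interpretable as probabilities of success while still degrading their correlation with the original memory-based signal.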
Fig. 10 shows the correlation matrix for all the approaches. In the figure, the axes list the different actions from the initial state for each proposed method: the uppercase letter refers to the action and the lowercase letter to the method. The uppercase letters denote the actions of going to the left, going to the right, and staying in the same room, whereas the lowercase letters denote the memory-based, learning-based, phenomenological-based, and noisy approaches, respectively. The figure shows that there is a high correlation between the three proposed approaches, while in our noisy control group the correlation values are much lower in comparison.
Moreover, Table 1 shows the MSE between the memory-based approach and all the other approaches. It can be seen that the phenomenological-based approach has the smallest error in relation to the memory-based benchmark, obtaining the lowest MSE for all possible actions, which is achieved with much lower memory usage than the memory-based approach.
5.2 Stochastic robot navigation task
In this section, we have performed the same robot navigation task but using stochastic transitions instead. In our scenario, using stochastic transitions means that when the RL agent performs an action, it reaches the intended next state only with a certain transition probability (a probability of 1 corresponds to the deterministic case), taking into consideration the defined transition function. We have introduced this level of stochasticity in order to test how coherent the possible explanations extracted from the proposed approaches remain.
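A stochastic transition of this kind can be sketched as follows; how the residual probability mass is distributed among the non-intended states is an assumption (here, uniformly over the other neighboring states).

```python
import random

def stochastic_step(intended_state, other_states, p_intended, rng=random):
    """Sample the next state under stochastic transitions: the intended
    state is reached with probability `p_intended`; otherwise one of the
    other neighboring states is reached uniformly at random."""
    if rng.random() < p_intended:
        return intended_state
    return rng.choice(other_states)
```

Setting p_intended to 1 recovers the deterministic navigation task of the previous section.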
Fig. 11 shows the Q-values obtained during the learning process. The actions shown also correspond to the three possibilities, i.e., moving to the left, moving to the right, and staying in the same room, from the initial state. Similarly to the use of deterministic transitions, the Q-values converge to similar values after 300 episodes; however, in this case the agent favors the action of going left in the first episodes. This is not due to the use of stochastic transitions, but rather to the fact that the agent in this experiment initially explored that path, since exploration may differ across learning runs.
Fig. 12 shows the estimated distance, in terms of actions, to the reward. When using stochastic transitions, the distances also decrease over time, approaching the minimum number of actions. In this case, the action of going to the right needs more time to converge since this path is explored later, as can be seen in the Q-values.
Fig. 13 shows the probabilities of success during the learning process from the initial state using stochastic transitions and the memory-based approach. In this case, the agent initially exhibits more experience taking the path to the left; however, after 300 episodes, similarly as before, the probabilities converge to a similar value. Although using stochastic transitions leads to a lower overall probability of success in comparison to the deterministic robot navigation task, the agent is still able to explain the reasons for its behavior in these terms during the learning episodes. As in the previous case, we have used the memory-based approach as a baseline for comparing the other proposed approaches, since these probabilities are obtained directly from the robot's experience collected in the episodic memory.
The probabilities of success for the three possible actions from the initial state using the learning-based approach are shown in Fig. 14. As with the memory-based approach, at the beginning of the training the agent shows more experience following the path to the left, and the three actions also converge after training. In this case, the probabilities of success converge to a slightly higher value in comparison to the memory-based approach. This is due to the fact that these probabilities are computed using the values stored in the probability table, which are updated according to Eq. (4); with the discount factor we set, the agent is more foresighted, taking into account all possible future rewards.
The estimated probabilities of success using the phenomenological-based approach are shown in Fig. 15, again for the three possible actions from the initial state. The phenomenological-based approach behaves similarly to the memory-based approach, converging to similar values after the learning process when using stochastic transitions. The estimated probabilities are computed from the Q-values using Eq. 9 and, therefore, it can similarly be seen that the agent preferred the path to the left at the beginning of the training.
As in the deterministic case, we have also used a noisy signal in the stochastic robot navigation task as a control group. We obtained the noisy signal from the memory-based approach, since this approach is computed from the agent's actual interaction during the learning process. Once again, we have added 20% white noise using a normal distribution with zero mean. The noisy signal can be seen in Fig. 16.
As in the deterministic robot navigation task, we also compute Pearson's correlation as well as the MSE to analyze the similarity between the obtained probabilities of success. Fig. 17 shows the correlation matrix for all the approaches. The axes contain the different possible actions from the initial state for each proposed method; as in the previous section, the uppercase letter refers to the action and the lowercase letter refers to the method. Although the correlations are lower in comparison to the deterministic robot navigation task due to the use of stochastic transitions, the figure still shows a high correlation between the three proposed approaches, in contrast to the noisy control group, where the correlation values are lower in comparison.
Furthermore, Table 2 shows the MSE between the memory-based approach and the other approaches using stochastic transitions. It can be observed that, once again, the phenomenological-based approach is the most similar to the memory-based approach, obtaining the lowest errors for all possible actions, which is also achieved using much less memory in comparison to the memory-based approach.
To better understand the proposed approaches when using them from internal states, in this section we show the experimental results for the stochastic robot navigation task from an internal state, using the same scenario and parameters as in the previous experiments. In this case, we only show the results for the Q-values and for the three proposed methods. Figs. 18, 19, 20, and 21 show the results obtained after 300 learning episodes for the Q-values and the probabilities of success using the memory-based approach, the learning-based approach, and the phenomenological-based approach, respectively.
When the robot is placed in this state, the action of moving to the left leads it to a terminal aversive area; therefore, the associated Q-value and probabilities of success are close to zero. Nevertheless, due to the stochastic transitions, the robot may still reach a different room and, hence, the associated Q-value and probabilities of success are not necessarily null. Conversely, the action of moving to the right has a larger Q-value and probability of success in comparison to staying in the same room, since it leads the robot to a state that is closer to the reward state. In the probabilities of success obtained with the three approaches, the actions of moving right and staying in the same room show higher values than the corresponding Q-values, better estimating the real probability of success in each case. The action of moving to the left remains low in all cases, with values near zero.
For instance, after 300 learning episodes and with the robot having just performed an action from this state, an end-user could ask for an explanation such as: why did you move to the right in the last situation? If the robot uses the probability of success to explain why the action of moving right has been chosen in this state, the following explanation could be provided: In this state, I chose to move to the right because it has a probability of success of: (i) , (ii) , or (iii) . These values are obtained using the memory-based, learning-based, and phenomenological-based approaches, respectively. As previously discussed, we consider the memory-based approach the most accurate estimation of the probability of success, since its value is obtained directly from the robot's transitions. In this case, the Q-value is ; therefore, the proposed methods approximate the real probability more precisely and potentially give a more informative answer to a non-expert end-user. Equivalently, the end-user may ask for an explanation such as: why did you not move to the left in the last situation? As before, using the probabilities of success to explain the counterfactual of why the action of moving left has not been chosen in this state, the explanation can be constructed as follows: In this state, I did not choose to move to the left because it has only a probability of success of: (i) , (ii) , or (iii) . These values are also obtained using the memory-based, learning-based, and phenomenological-based approaches, respectively. The associated Q-value in this case is . Although the deviation is higher, the proposed approaches are able to approximate the probability of success.
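Explanations of this form can be produced from a simple template over the estimated probabilities. The wording and function name below are illustrative assumptions, not the paper's generation mechanism.

```python
def explain(state, action, chosen, probs):
    """Build a natural-language explanation from probability-of-success
    estimates. `probs` maps each action name to its estimated probability;
    `chosen` selects the direct or the counterfactual template."""
    p = probs[action]
    if chosen:
        return (f"In state {state}, I chose to {action} because it has "
                f"a probability of success of {p:.0%}.")
    return (f"In state {state}, I did not choose to {action} because it "
            f"has only a probability of success of {p:.0%}.")
```

The same template works for any of the three approaches; only the source of the probability value changes.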
In this work, we have presented an explainable robotic system with the aim of improving end-user trust in an HRI scenario. The proposed scenario has been implemented in a robot simulator in which a navigation task was carried out. We do not focus on speeding up the learning process, but rather on finding a plausible way of explaining the robot's behavior during the decision-making process. For this purpose, we have used outcome-focused explanations instead of state-based explanations. Of course, both approaches may be combined and, therefore, an explanation could include these two aspects.
The proposed approaches estimate the probability of success for each action, which in turn allows the agent to explain the robot's decisions to non-expert end-users. By describing decisions in terms of the probability of success, the end-user obtains a clearer idea of the robot's decision in each situation, expressed in human-like language. By contrast, when Q-values are used to explain the behavior, end-users will not necessarily obtain a straightforward understanding unless they have prior knowledge of reinforcement learning or machine learning techniques.
We have proposed three approaches with different characteristics. First, the memory-based approach uses an episodic memory to store the interactions with the environment, from which it computes the probability of success. Second, the learning-based approach utilizes a table of probability-of-success values that are updated as the agent collects more experience during the learning process. Third, the phenomenological-based approach computes the estimated probability of success directly from the Q-values by performing a numerical transformation. The proposed approaches differ both in the amount of memory needed and in the kind of RL problem representation in which they can be used, although at the current stage the approaches have been developed for tasks where the reward is received only at the end of the episode.
The obtained results using either deterministic or stochastic transitions show that the proposed approaches behave similarly and converge to similar values, which is also verified through the high correlation levels and the computed MSE.
Overall, given the similarity shown by the three proposed approaches, the learning-based approach and the phenomenological-based approach represent plausible choices for replacing the memory-based approach when computing the probability of success, using fewer memory resources and being applicable to other RL problem representations.
As future work, we plan to test our approaches in continuous scenarios, where using an episodic memory is not feasible. In such scenarios, it is critical to compute the probability of success directly from the Q-values, or to learn the probability values with a function approximator (e.g., artificial neural networks) instead of a table. Moreover, it is important to test the impact of different levels of transition stochasticity, especially considering that the level of stochasticity may vary between states.
We also plan to use a real-world robot scenario in order to automatically generate explanations for non-expert end-users. In that setting, we plan to perform a user study to measure the effectiveness of the probability of success as a metric to enhance trust in robotic systems.