Self-Supervised Discovering of Causal Features: Towards Interpretable Reinforcement Learning

03/16/2020
by   Wenjie Shi, et al.

Deep reinforcement learning (RL) has recently led to many breakthroughs on a range of complex control tasks. However, the agent's decision-making process is generally not transparent, and this lack of interpretability hinders the applicability of RL in safety-critical scenarios. While several methods have attempted to interpret vision-based RL, most come without a detailed explanation for the agent's behaviour. In this paper, we propose a self-supervised interpretable framework, which can discover causal features to enable easy interpretation of RL agents even for non-experts. Specifically, a self-supervised interpretable network (SSINet) is employed to produce fine-grained attention masks that highlight task-relevant information, which constitutes most of the evidence for the agent's decisions. We verify and evaluate our method on several Atari 2600 games as well as Duckietown, a challenging self-driving car simulator environment. The results show that our method renders causal explanations and empirical evidence about how the agent makes decisions and why the agent performs well or badly, especially when transferred to novel scenes. Overall, our method provides valuable insight into the internal decision-making process of vision-based RL. In addition, our method does not use any external labelled data, and thus demonstrates the possibility of learning high-quality masks in a self-supervised manner, which may shed light on new paradigms for label-free vision learning such as self-supervised segmentation and detection.
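The core mechanism described above can be illustrated with a minimal sketch: a learned mask gates the observation element-wise, and training couples a behaviour-matching term (the masked observation should yield the same action) with a sparsity penalty so only task-relevant pixels survive. The per-pixel logit map below is a stand-in assumption for SSINet's actual encoder-decoder, and the variable names are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy observation: a 4x4 single-channel "frame".
obs = rng.random((4, 4))

# Hypothetical mask network output: a per-pixel logit map standing in
# for SSINet's encoder-decoder (an assumption for illustration only).
mask_logits = rng.normal(size=obs.shape)
mask = sigmoid(mask_logits)          # attention values in (0, 1)

# Masked observation: only highlighted (task-relevant) pixels pass through.
masked_obs = mask * obs

# Self-supervised objective sketch: a behaviour-matching term would keep the
# pretrained agent's action unchanged on masked_obs (placeholder here), while
# an L1 penalty pushes the mask toward sparsity.
sparsity_penalty = np.abs(mask).mean()

print(masked_obs.shape, round(float(sparsity_penalty), 3))
```

Since the paper's exact losses and architecture are not given in the abstract, this only fixes the intuition: the mask is trained without external labels, using the frozen agent's own behaviour as the supervisory signal.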


