Explain Your Move: Understanding Agent Actions Using Focused Feature Saliency

12/23/2019
by Piyush Gupta, et al.

As deep reinforcement learning (RL) is applied to more tasks, there is a growing need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant to the agent's chosen action. Existing perturbation-based approaches to computing saliency often highlight regions of the input that are irrelevant to the action taken by the agent. Our approach generates more focused saliency maps by balancing two aspects, specificity and relevance, that capture different desiderata of saliency. Specificity captures the impact of a perturbation on the relative expected reward of the action to be explained; relevance downweights irrelevant features that alter the relative expected rewards of actions other than the one being explained. We compare our approach with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong, and Space Invaders). Through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess), we show that our approach generates saliency maps that are more interpretable for humans than those of existing approaches.
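The abstract describes the two quantities only at a high level, so the Python sketch below shows one plausible way to compute a focused saliency score for a single perturbed feature, consistent with that description. It assumes the agent exposes Q-values; the function name focused_saliency, the softmax conversion to action probabilities, the KL direction, and the harmonic-mean combination are illustrative assumptions rather than the paper's verified definitions.

import numpy as np

def softmax(q):
    """Turn Q-values into a probability distribution over actions."""
    e = np.exp(q - np.max(q))
    return e / e.sum()

def focused_saliency(q_orig, q_pert, action, eps=1e-12):
    """Saliency of one perturbed feature for the explained action.

    q_orig: Q-values in the original state (1-D array, one per action).
    q_pert: Q-values after perturbing the feature of interest.
    """
    p, p_pert = softmax(q_orig), softmax(q_pert)

    # Specificity: how much the perturbation lowers the probability of
    # the action being explained. Features that do not matter for this
    # action score near zero.
    dP = p[action] - p_pert[action]
    if dP <= 0:
        return 0.0  # perturbation did not hurt the explained action

    # Relevance: downweight features whose perturbation mostly shifts
    # the relative expected rewards of the *other* actions. Compare the
    # renormalized distributions over the remaining actions via KL
    # divergence (direction chosen here is an assumption).
    rem = np.delete(p, action); rem /= rem.sum()
    rem_pert = np.delete(p_pert, action); rem_pert /= rem_pert.sum()
    kl = np.sum(rem * np.log((rem + eps) / (rem_pert + eps)))
    K = 1.0 / (1.0 + kl)

    # Harmonic mean: saliency is high only when the feature is both
    # specific (large dP) and relevant (large K).
    return 2.0 * dP * K / (dP + K)

As a usage illustration: with q_orig = [2.0, 1.0, 0.5] and a perturbation that lowers only the explained action's Q-value (q_pert = [1.2, 1.0, 0.5]), focused_saliency returns a high score, whereas a perturbation that mainly reshuffles the Q-values of the other actions inflates the KL term and is downweighted.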


