Human versus Machine Attention in Deep Reinforcement Learning Tasks
Deep reinforcement learning (RL) algorithms are powerful tools for solving visuomotor decision tasks. However, the trained models are often difficult to interpret because they are represented as end-to-end deep neural networks. In this paper, we shed light on the inner workings of such trained models by analyzing the pixels they attend to during task execution and comparing them with the pixels attended to by humans performing the same tasks. To this end, we investigate two questions that, to the best of our knowledge, have not been previously studied: (1) How similar are the visual features learned by RL agents and humans when performing the same task? (2) How do similarities and differences in these learned features correlate with RL agents' performance on these tasks? Specifically, we compare the saliency maps of RL agents against visual attention models of human experts when learning to play Atari games. Further, we analyze how hyperparameters of the deep RL algorithm affect the learned features and saliency maps of the trained agents. The insights provided by our results have the potential to inform novel algorithms for closing the performance gap between human experts and deep RL agents.
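The abstract does not state which similarity measure is used to compare an agent's saliency map with a human attention map; common choices in the saliency literature are KL divergence and Pearson correlation over normalized pixel maps. The following is a minimal sketch under that assumption; the 84x84 map shape, the metric choices, and the `human_attention` / `agent_saliency` arrays are illustrative placeholders, not the paper's actual pipeline.

```python
import numpy as np

def normalize_map(m, eps=1e-8):
    """Shift to non-negative values and rescale so the map sums to 1,
    treating it as a probability distribution over pixels."""
    m = m - m.min()
    return m / (m.sum() + eps)

def kl_divergence(human, agent, eps=1e-8):
    """KL(human || agent): how poorly the agent's saliency predicts
    where the human attended (lower means more similar)."""
    h = normalize_map(human) + eps
    a = normalize_map(agent) + eps
    return float(np.sum(h * np.log(h / a)))

def pearson_cc(human, agent):
    """Pearson correlation between the two maps, flattened to vectors."""
    h = normalize_map(human).ravel()
    a = normalize_map(agent).ravel()
    return float(np.corrcoef(h, a)[0, 1])

# Hypothetical example: 84x84 maps, matching standard Atari frame preprocessing.
rng = np.random.default_rng(0)
human_attention = rng.random((84, 84))   # placeholder for a human gaze/attention map
agent_saliency = rng.random((84, 84))    # placeholder for an agent saliency map
print("KL divergence:", kl_divergence(human_attention, agent_saliency))
print("Pearson CC:   ", pearson_cc(human_attention, agent_saliency))
```

Both metrics are computed per frame; averaging them over an episode gives a single agent-vs-human similarity score that can then be related to the agent's game performance.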