Summarising and Comparing Agent Dynamics with Contrastive Spatiotemporal Abstraction

01/17/2022
by Tom Bewley, et al.

We introduce a data-driven, model-agnostic technique for generating a human-interpretable summary of the salient points of contrast within an evolving dynamical system, such as the learning process of a control agent. It involves the aggregation of transition data along both spatial and temporal dimensions according to an information-theoretic divergence measure. A practical algorithm is outlined for continuous state spaces, and deployed to summarise the learning histories of deep reinforcement learning agents with the aid of graphical and textual communication methods. We expect our method to be complementary to existing techniques in the realm of agent interpretability.
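The abstract describes scoring contrast in an evolving system via an information-theoretic divergence over aggregated transition data. As a rough, simplified illustration (not the paper's actual algorithm, which handles continuous state spaces and joint spatiotemporal abstraction), the sketch below histograms a 1-D state trajectory into spatial bins, splits it into fixed-size time windows, and flags the window boundary with the largest Jensen-Shannon divergence as the salient point of contrast. All function names, bin counts, and window sizes here are illustrative assumptions.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions
    given as (unnormalised) histograms."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    return 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))

def temporal_contrast(states, window, bins, lo, hi):
    """Aggregate a 1-D state trajectory spatially (histogram bins) and
    temporally (fixed-size windows), then score consecutive windows by
    JS divergence; peaks mark salient points of contrast."""
    edges = np.linspace(lo, hi, bins + 1)
    hists = [np.histogram(states[i:i + window], bins=edges)[0]
             for i in range(0, len(states) - window + 1, window)]
    return [js_divergence(hists[i], hists[i + 1])
            for i in range(len(hists) - 1)]

# Toy trajectory: the "agent" shifts from low to high state values halfway,
# mimicking a qualitative change during learning.
rng = np.random.default_rng(0)
traj = np.concatenate([rng.uniform(0.0, 0.4, 200),
                       rng.uniform(0.6, 1.0, 200)])
scores = temporal_contrast(traj, window=100, bins=8, lo=0.0, hi=1.0)
# The largest divergence falls at the boundary between the two phases.
print(int(np.argmax(scores)))  # → 1
```

In the paper's setting the aggregation is adaptive rather than fixed-grid, but the core idea — comparing distributions of transition data across spatial regions and temporal segments with a divergence measure — is the same.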

