MIXRTs: Toward Interpretable Multi-Agent Reinforcement Learning via Mixing Recurrent Soft Decision Trees

09/15/2022
by   Zichuan Liu, et al.

Multi-agent reinforcement learning (MARL) has recently achieved tremendous success in a wide range of fields. However, built on black-box neural network architectures, existing MARL methods make decisions in an opaque fashion that hinders humans from understanding the learned knowledge and how input observations influence decisions. Our solution is MIXing Recurrent soft decision Trees (MIXRTs), a novel interpretable architecture that represents explicit decision processes via the root-to-leaf paths of decision trees. We introduce a recurrent structure into soft decision trees to address partial observability, and estimate joint action values by linearly mixing the outputs of recurrent trees that depend only on local observations. Theoretical analysis shows that MIXRTs satisfies the structural constraints of additivity and monotonicity in value factorization. We evaluate MIXRTs on a range of challenging StarCraft II tasks. Experimental results show that our interpretable learning framework achieves competitive performance compared to widely investigated baselines while delivering more straightforward explanations and domain knowledge about the decision processes.
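To make the two components of the abstract concrete, the sketch below illustrates (i) a soft decision tree whose routing gates are conditioned on a recurrent hidden state, and (ii) a linear mixer with non-negative weights, which preserves additivity and monotonicity of the factorization. This is a minimal illustration in PyTorch, not the authors' implementation: the class names (RecurrentSoftDecisionTree, MonotonicLinearMixer), the GRU-based recurrence, the tree depth, and all hyperparameters are assumptions.

```python
# Illustrative sketch only; the recurrence and mixing details are assumptions,
# not the paper's exact design.
import torch
import torch.nn as nn


class RecurrentSoftDecisionTree(nn.Module):
    """A soft decision tree whose routing gates see a GRU hidden state,
    so decisions can depend on the observation history (partial observability)."""

    def __init__(self, obs_dim, hidden_dim, n_actions, depth=3):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden_dim)
        self.depth = depth
        self.n_inner = 2 ** depth - 1            # internal (gating) nodes
        self.n_leaves = 2 ** depth               # leaf nodes
        # Each internal node is a linear gate on [observation, hidden state].
        self.gates = nn.Linear(obs_dim + hidden_dim, self.n_inner)
        # Each leaf holds a learnable vector of action values.
        self.leaf_values = nn.Parameter(0.01 * torch.randn(self.n_leaves, n_actions))

    def forward(self, obs, h):
        h = self.gru(obs, h)                              # update recurrent state
        x = torch.cat([obs, h], dim=-1)
        p_right = torch.sigmoid(self.gates(x))            # (batch, n_inner)
        # Probability of reaching each leaf = product of gate probabilities
        # along its root-to-leaf path.
        path_prob = torch.ones(obs.shape[0], 1, device=obs.device)
        for d in range(self.depth):
            start = 2 ** d - 1                            # first node index at depth d
            g = p_right[:, start:start + 2 ** d]          # gates at depth d
            path_prob = torch.stack(
                [path_prob * (1 - g), path_prob * g], dim=-1
            ).flatten(1)                                  # (batch, 2 ** (d + 1))
        q_values = path_prob @ self.leaf_values           # (batch, n_actions)
        return q_values, h


class MonotonicLinearMixer(nn.Module):
    """Mixes per-agent Q-values into a joint value using non-negative weights,
    so the joint value is additive and monotone in each agent's value."""

    def __init__(self, n_agents):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_agents))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, agent_qs):                          # (batch, n_agents)
        w = torch.abs(self.weights)                       # enforce monotonicity
        return (agent_qs * w).sum(dim=-1, keepdim=True) + self.bias
```

In use, one would instantiate a tree per agent (or share parameters across agents), evaluate each tree on its local observation and hidden state, gather the Q-value of the chosen action, and pass the resulting (batch, n_agents) tensor to the mixer to obtain the joint action value for temporal-difference training.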

Related research:

08/21/2020 · Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning
Interpretability and effectiveness are two essential and indispensable r...

11/15/2020 · CDT: Cascading Decision Trees for Explainable Reinforcement Learning
Deep Reinforcement Learning (DRL) has recently achieved significant adva...

05/20/2022 · On Tackling Explanation Redundancy in Decision Trees
Decision trees (DTs) epitomize the ideal of interpretability of machine ...

12/14/2020 · Evolutionary learning of interpretable decision trees
Reinforcement learning techniques achieved human-level performance in se...

06/19/2019 · An Ontology-based Approach to Explaining Artificial Neural Networks
Explainability in Artificial Intelligence has been revived as a topic of...

06/22/2023 · Can Differentiable Decision Trees Learn Interpretable Reward Functions?
There is an increasing interest in learning reward functions that model ...

05/02/2023 · Expertise Trees Resolve Knowledge Limitations in Collective Decision-Making
Experts advising decision-makers are likely to display expertise which v...
