Interpretable Reinforcement Learning with Ensemble Methods

09/19/2018
by Alexander Brown, et al.

We propose to use boosted regression trees as a way to compute human-interpretable solutions to reinforcement learning problems. Boosting combines several regression trees to improve their accuracy without significantly reducing their inherent interpretability. Prior work has focused independently on reinforcement learning and on interpretable machine learning, but there has been little progress in interpretable reinforcement learning. Our experimental results show that boosted regression trees yield solutions that are both interpretable and competitive in quality with leading reinforcement learning methods.
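To make the idea concrete, below is a minimal sketch of one standard way to combine boosted regression trees with reinforcement learning: fitted Q-iteration, where each Bellman regression step is solved by a boosted tree ensemble. The toy chain-walk MDP, the hyperparameters, and the use of scikit-learn's GradientBoostingRegressor are illustrative assumptions, not the paper's actual environments or algorithm.

```python
# Fitted Q-iteration with boosted regression trees (illustrative sketch).
# Assumed setup: a toy chain MDP with reward only at the right end.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
N_STATES, ACTIONS, GAMMA = 10, [-1, +1], 0.95

def step(s, a):
    """Move left/right on a chain; reward 1 only at the rightmost state."""
    s2 = int(np.clip(s + a, 0, N_STATES - 1))
    return s2, float(s2 == N_STATES - 1)

# Collect random transitions (s, a, r, s').
S = rng.integers(0, N_STATES, size=2000)
A = rng.choice(ACTIONS, size=2000)
S2, R = zip(*(step(s, a) for s, a in zip(S, A)))
X = np.column_stack([S, A])          # features: state and action
S2, R = np.array(S2), np.array(R)

# Fitted Q-iteration: repeatedly regress Bellman targets with boosted trees.
model = None
for _ in range(30):
    if model is None:
        y = R                        # first iteration: immediate rewards
    else:
        q_next = np.max(
            [model.predict(np.column_stack([S2, np.full_like(S2, a)]))
             for a in ACTIONS], axis=0)
        y = R + GAMMA * q_next       # Bellman backup targets
    model = GradientBoostingRegressor(n_estimators=50, max_depth=3)
    model.fit(X, y)

# Greedy policy read-out; each shallow tree in the ensemble stays inspectable.
for s in range(N_STATES):
    q = [model.predict([[s, a]])[0] for a in ACTIONS]
    print(s, "->", ACTIONS[int(np.argmax(q))])
```

The appeal of this family of methods, as the abstract argues, is that the learned Q-function is an ensemble of shallow trees whose individual splits can be read directly, unlike the weights of a neural network.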


Related research

11/27/2017 - Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning
This is the Proceedings of NIPS 2017 Symposium on Interpretable Machine ...

12/14/2020 - Evolutionary learning of interpretable decision trees
Reinforcement learning techniques achieved human-level performance in se...

07/10/2019 - Interpretable Dynamics Models for Data-Efficient Reinforcement Learning
In this paper, we present a Bayesian view on model-based reinforcement l...

06/29/2017 - Interpretability via Model Extraction
The ability to interpret machine learning models has become increasingly...

02/23/2022 - Learning Relative Return Policies With Upside-Down Reinforcement Learning
Lately, there has been a resurgence of interest in using supervised lear...

12/08/2019 - Contrast Trees and Distribution Boosting
Often machine learning methods are applied and results reported in cases...

03/18/2023 - Interpretable Reinforcement Learning via Neural Additive Models for Inventory Management
The COVID-19 pandemic has highlighted the importance of supply chains an...
