Interpretable-AI Policies using Evolutionary Nonlinear Decision Trees for Discrete Action Systems

09/20/2020
by Yashesh Dhebar, et al.

Black-box artificial intelligence (AI) induction methods such as deep reinforcement learning (DRL) are increasingly being used to find optimal policies for a given control task. Although policies represented by a black-box AI can execute the underlying control task efficiently and achieve optimal closed-loop performance (controlling the agent from the initial time step until the successful termination of an episode), the resulting control rules are often complex and neither interpretable nor explainable. In this paper, we use a recently proposed nonlinear decision tree (NLDT) approach to find a hierarchical set of control rules that maximizes open-loop performance, approximating and explaining a pre-trained black-box DRL (oracle) agent using a labelled state-action dataset. Recent advances in nonlinear optimization using evolutionary computation make it possible to find a hierarchical set of nonlinear control rules, expressed as functions of the state variables, through a computationally fast bilevel optimization procedure at each node of the proposed NLDT. Additionally, we propose a re-optimization procedure for enhancing the closed-loop performance of an already derived NLDT. We evaluate the proposed methodologies on four control problems with two to four discrete actions. In all of these problems, our approach finds simple and interpretable rules involving one to four nonlinear terms per rule while achieving closed-loop performance on par with that of a trained black-box DRL agent. These results suggest that complicated black-box DRL policies involving thousands of parameters (which make them non-interpretable) can be replaced with simple interpretable policies, and they motivate further application of the proposed approach to more complex control tasks.
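To make the setting concrete, below is a minimal sketch of how an NLDT policy can route a state through hierarchical nonlinear split rules and how its open-loop fidelity to an oracle's labelled state-action data might be measured. This is an illustration under stated assumptions, not the authors' implementation: the names (NLDTNode, nldt_policy, open_loop_accuracy) are hypothetical, and the power-law form of each nonlinear term (a product of state variables raised to small integer exponents) is an assumption about the rule structure.

```python
import numpy as np

class NLDTNode:
    """One node of a nonlinear decision tree (illustrative sketch).

    A split rule of the form  w . phi(x) + b <= 0  routes a state x to the
    left or right child, where each nonlinear term phi_i(x) is assumed to be
    a product of state variables raised to small integer exponents.
    """
    def __init__(self, exponents=None, weights=None, bias=0.0,
                 left=None, right=None, action=None):
        self.exponents = exponents  # (num_terms, num_state_vars) int array
        self.weights = weights      # (num_terms,) float array
        self.bias = bias
        self.left, self.right = left, right
        self.action = action        # discrete action; set only at leaf nodes

    def is_leaf(self):
        return self.action is not None

    def split_value(self, x):
        # Each term is prod_j x_j ** e_ij; the rule is their weighted sum.
        terms = np.prod(np.power(x, self.exponents), axis=1)
        return float(self.weights @ terms + self.bias)

def nldt_policy(root, x):
    """Route a state through the tree and return its discrete action."""
    node = root
    while not node.is_leaf():
        node = node.left if node.split_value(x) <= 0.0 else node.right
    return node.action

def open_loop_accuracy(root, states, oracle_actions):
    """Fraction of states on which the tree matches the oracle's action."""
    preds = [nldt_policy(root, x) for x in states]
    return float(np.mean(np.asarray(preds) == np.asarray(oracle_actions)))

# Hypothetical example: a depth-1 tree over a 2-variable state with the
# rule x0 * x1 - 0.5 <= 0 separating the two actions.
root = NLDTNode(
    exponents=np.array([[1, 1]]), weights=np.array([1.0]), bias=-0.5,
    left=NLDTNode(action=0), right=NLDTNode(action=1),
)
states = np.random.uniform(0.1, 1.0, size=(100, 2))
oracle_actions = (states[:, 0] * states[:, 1] > 0.5).astype(int)
print(open_loop_accuracy(root, states, oracle_actions))  # ~1.0 here
```

In the bilevel scheme the abstract describes, one would expect the discrete rule structure (here, the exponents) to be searched at the upper level by an evolutionary algorithm while the weights and bias of each split rule are tuned at the lower level; the sketch above shows only inference and open-loop fidelity measurement.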


Related research

06/16/2019 | MoËT: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees
Deep Reinforcement Learning (DRL) has led to many recent breakthroughs o...

11/15/2020 | CDT: Cascading Decision Trees for Explainable Reinforcement Learning
Deep Reinforcement Learning (DRL) has recently achieved significant adva...

09/10/2020 | TripleTree: A Versatile Interpretable Representation of Black Box Agents and their Environments
In explainable artificial intelligence, there is increasing interest in ...

08/02/2020 | Interpretable Rule Discovery Through Bilevel Optimization of Split-Rules of Nonlinear Decision Trees for Classification Problems
For supervised classification problems involving design, control, other ...

06/10/2021 | Synthesising Reinforcement Learning Policies through Set-Valued Inductive Rule Learning
Today's advanced Reinforcement Learning algorithms produce black-box pol...

02/10/2022 | Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs
The importance of explainability in AI has become a pressing concern, fo...
