Quality Diversity Evolutionary Learning of Decision Trees

08/17/2022
by   Andrea Ferigo, et al.

The need for explainable Machine Learning has emerged as one of the most important research directions in modern Artificial Intelligence (AI). The current dominant paradigm relies on black-box models, typically (deep) neural networks, which lack direct interpretability for human users: their outcomes (and, even more so, their inner workings) are opaque and hard to understand. This hinders the adoption of AI in safety-critical applications, where high stakes are involved. In these applications, models that are explainable by design, such as decision trees, may be more suitable, as they provide interpretability. Recent works have proposed hybridizing decision trees with Reinforcement Learning to combine the advantages of the two approaches. So far, however, these works have focused on optimizing such hybrid models. Here, we instead apply MAP-Elites to diversify hybrid models over a feature space that captures both model complexity and behavioral variability. We evaluate our method on two well-known control problems from the OpenAI Gym library, discuss the "illumination" patterns projected by MAP-Elites, and compare its results against existing similar approaches.
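To make the MAP-Elites loop concrete, the following is a minimal sketch of the algorithm, not the authors' implementation: the archive keeps one elite per cell of a discretized 2D feature grid, and the two descriptor dimensions stand in for the model-complexity and behavioral-variability features mentioned above. The genome (a list of floats), the fitness function, and the grid resolution are all illustrative assumptions.

```python
import random

GRID = 10        # cells per feature dimension (assumed resolution)
GENOME_LEN = 8   # stand-in genome size; real individuals are decision trees

def evaluate(genome):
    """Hypothetical fitness: higher is better."""
    return -sum(x * x for x in genome)

def descriptor(genome):
    """Map a genome to a discretized 2D feature cell.

    In the paper the features capture model complexity and behavioral
    variability; here two genome entries are used as a stand-in.
    """
    complexity = min(GRID - 1, int(abs(genome[0]) * GRID))
    variability = min(GRID - 1, int(abs(genome[1]) * GRID))
    return complexity, variability

def mutate(genome, sigma=0.1):
    """Gaussian perturbation of every gene."""
    return [x + random.gauss(0, sigma) for x in genome]

def map_elites(iterations=2000, seed=0):
    random.seed(seed)
    archive = {}  # cell -> (fitness, genome): one elite per cell
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # select a random elite and mutate it
            parent = random.choice(list(archive.values()))[1]
            child = mutate(parent)
        else:
            # occasional random restart to seed/refresh the archive
            child = [random.uniform(-1, 1) for _ in range(GENOME_LEN)]
        fit, cell = evaluate(child), descriptor(child)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, child)  # child becomes the cell's elite
    return archive

if __name__ == "__main__":
    archive = map_elites()
    print(len(archive), "cells illuminated")
```

The "illumination" pattern discussed in the abstract corresponds to which cells of this grid end up occupied, and with what fitness: a well-illuminated map contains high-performing elites across many complexity/variability combinations, not just a single global optimum.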

