
Quality Diversity Evolutionary Learning of Decision Trees

by Andrea Ferigo, et al.

Addressing the need for explainable Machine Learning has emerged as one of the most important research directions in modern Artificial Intelligence (AI). While the dominant paradigm in the field is based on black-box models, typically (deep) neural networks, these models lack direct interpretability for human users: their outcomes, and even more so their inner workings, are opaque and hard to understand. This hinders the adoption of AI in safety-critical applications, where the stakes are high. In such applications, explainable-by-design models, such as decision trees, may be more suitable, as they provide interpretability by construction. Recent works have proposed hybridizing decision trees and Reinforcement Learning to combine the advantages of the two approaches. So far, however, these works have focused on optimizing such hybrid models. Here, we instead apply MAP-Elites to diversify hybrid models over a feature space that captures both model complexity and behavioral variability. We evaluate our method on two well-known control problems from the OpenAI Gym library, discuss the "illumination" patterns projected by MAP-Elites, and compare its results against existing similar approaches.
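To make the core idea concrete, the following is a minimal, generic sketch of the MAP-Elites loop the abstract refers to: an archive discretizes a feature space into grid cells, and each cell keeps only the best ("elite") solution whose descriptor falls in it. All names here are illustrative placeholders; in the paper's setting the solution would be a decision-tree policy and the descriptor would encode its complexity and behavioral variability.

```python
import random

def map_elites(evaluate, mutate, random_solution, descriptor,
               bins=10, iterations=2000, init=100, seed=0):
    """Minimal MAP-Elites loop (sketch, not the authors' implementation).

    evaluate(x)          -> scalar fitness, higher is better
    descriptor(x)        -> tuple of floats in [0, 1), one per feature axis
    random_solution(rng) -> fresh random solution (used to bootstrap)
    mutate(x, rng)       -> perturbed copy of a stored elite
    Returns the archive: {grid cell -> (fitness, solution)}.
    """
    rng = random.Random(seed)
    archive = {}
    for i in range(iterations):
        if i < init or not archive:
            x = random_solution(rng)          # bootstrap with random solutions
        else:
            _, parent = rng.choice(list(archive.values()))
            x = mutate(parent, rng)           # then perturb stored elites
        # Discretize the feature descriptor into a grid cell.
        cell = tuple(min(int(d * bins), bins - 1) for d in descriptor(x))
        f = evaluate(x)
        # Keep only the best solution seen so far in each cell.
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive

# Toy illustration: solutions are points in [0, 1)^2, the descriptor is the
# point itself, and fitness rewards proximity to (1, 1). The archive then
# "illuminates" the grid: every cell stores its locally best point.
if __name__ == "__main__":
    archive = map_elites(
        evaluate=lambda x: x[0] + x[1],
        mutate=lambda x, rng: [min(max(v + rng.gauss(0, 0.05), 0.0), 0.999)
                               for v in x],
        random_solution=lambda rng: [rng.random(), rng.random()],
        descriptor=lambda x: tuple(x),
    )
    print(len(archive))  # number of illuminated cells, at most bins * bins
```

Note the key design difference from a standard evolutionary optimizer: selection pressure is per-cell rather than global, so low-fitness but behaviorally distinct solutions survive, which is exactly what produces the "illumination" patterns discussed in the paper.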

