Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs

02/10/2022
by Leonardo Lucio Custode, et al.

Explainability in AI has become a pressing concern, and several explainable AI (XAI) approaches have recently been proposed. However, most available XAI techniques are post-hoc methods, which may be only partially reliable, as they do not exactly reflect the state of the original models. A more direct way of achieving XAI is therefore through interpretable (also called glass-box) models. Such models have been shown to achieve comparable (and, in some cases, better) performance than black-box models on various tasks, including classification and reinforcement learning. However, they struggle with raw data, especially as the input dimensionality increases and the raw inputs alone no longer provide valuable insight into the decision-making process. Here, we propose end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms. This allows us to decompose the decision-making process into two parts: computing high-level features from raw data, and reasoning on the extracted high-level features. We test our approach on reinforcement learning environments from the Atari benchmark, where we obtain results comparable to black-box approaches in settings without stochastic frame-skipping, while performance degrades in frame-skipping settings.
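
The abstract only outlines the approach, so a minimal sketch may help make the two-stage design concrete. The Python snippet below co-optimizes a thresholded linear feature extractor (stage 1) and a rule-table policy (stage 2) with a simple (1+1) evolutionary hill-climber on a toy environment. Everything here (ToyEnv, the module parameterizations, the mutation scheme, the constants) is an illustrative assumption, not the authors' actual implementation, which co-evolves interpretable modules on Atari environments.

```python
# A minimal, self-contained sketch of a two-stage interpretable pipeline
# co-optimized by a simple (1+1) evolutionary loop. All names and constants
# are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, N_FEATURES, N_ACTIONS = 64, 3, 4  # assumed sizes


class ToyEnv:
    """Stand-in for an Atari environment: random 'pixels', reward for
    one fixed action. Replace with a real environment in practice."""
    def reset(self):
        self.t = 0
        return rng.random(OBS_DIM)

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 2 else 0.0
        return rng.random(OBS_DIM), reward, self.t >= 50


class FeatureExtractor:
    """Stage 1: compute high-level features from raw data.
    Here: thresholded linear projections, readable as weighted sums."""
    def __init__(self, w=None):
        self.w = w if w is not None else rng.normal(0, 1, (N_FEATURES, OBS_DIM))

    def __call__(self, obs):
        return (self.w @ obs > 0).astype(float)  # binary high-level features

    def mutate(self):
        return FeatureExtractor(self.w + rng.normal(0, 0.1, self.w.shape))


class RulePolicy:
    """Stage 2: reason on the extracted features with a flat rule table
    (a stand-in for an evolved decision tree)."""
    def __init__(self, table=None):
        # one action per possible binary feature vector
        self.table = table if table is not None else rng.integers(0, N_ACTIONS, 2 ** N_FEATURES)

    def __call__(self, feats):
        idx = int(sum(int(f) << i for i, f in enumerate(feats)))
        return int(self.table[idx])

    def mutate(self):
        child = self.table.copy()
        child[rng.integers(len(child))] = rng.integers(N_ACTIONS)
        return RulePolicy(child)


def episode_return(extractor, policy, env):
    """Fitness of a pipeline: the return of one episode."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, r, done = env.step(policy(extractor(obs)))
        total += r
    return total


# Co-optimize both modules: keep the best pipeline, mutate it each generation.
env = ToyEnv()
best = (FeatureExtractor(), RulePolicy())
best_fit = episode_return(*best, env)
for gen in range(30):
    cand = (best[0].mutate(), best[1].mutate())
    fit = episode_return(*cand, env)
    if fit >= best_fit:
        best, best_fit = cand, fit
print("best return:", best_fit)
```

In the setting the abstract describes, the toy environment would be replaced by an Atari environment, the rule table by an evolved interpretable model such as a decision tree, and the (1+1) loop by a population-based evolutionary algorithm that co-optimizes the two modules jointly.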
