Interpretable Mixture of Experts for Structured Data

06/05/2022
by Aya Abdelsalam Ismail, et al.

With the growth of machine learning for structured data, reliable model explanations have become essential, especially in high-stakes applications. We introduce a novel framework, Interpretable Mixture of Experts (IME), that provides interpretability for structured data while preserving accuracy. IME consists of an assignment module and a mixture of interpretable experts, such as linear models, where each sample is assigned to a single interpretable expert. This results in an inherently interpretable architecture in which the explanations produced by IME are exact descriptions of how the prediction is computed. Beyond serving as a standalone inherently interpretable architecture, IME can also be integrated with existing Deep Neural Networks (DNNs) to offer interpretability for a subset of samples while maintaining the accuracy of the DNNs. Experiments on various structured datasets demonstrate that IME is more accurate than a single interpretable model and performs comparably to state-of-the-art deep learning models while providing faithful explanations.
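The architecture the abstract describes (an assignment module routing each sample to exactly one linear expert, whose coefficients then constitute the explanation) can be sketched as follows. This is a toy illustration, not the paper's method: the class name `IMESketch`, the alternating least-squares fitting loop, and the nearest-centroid stand-in for the learned assignment module are all assumptions made for the sketch.

```python
import numpy as np


class IMESketch:
    """Toy sketch of a hard-assignment mixture of linear experts.

    Each sample is routed to a single affine expert; the explanation for a
    prediction is simply that expert's coefficient vector. The assignment
    module is approximated by nearest-centroid matching (an assumption for
    illustration; IME learns its assignment module jointly).
    """

    def __init__(self, n_experts=2, n_iters=20, seed=0):
        self.n_experts = n_experts
        self.n_iters = n_iters
        self.rng = np.random.default_rng(seed)

    def _design(self, X):
        # Append a bias column so each expert is an affine model.
        return np.hstack([X, np.ones((X.shape[0], 1))])

    def fit(self, X, y):
        Xd = self._design(X)
        # Random initial hard assignments of samples to experts.
        z = self.rng.integers(0, self.n_experts, size=X.shape[0])
        self.coefs_ = np.zeros((self.n_experts, Xd.shape[1]))
        for _ in range(self.n_iters):
            # Refit each expert by least squares on its assigned samples.
            for k in range(self.n_experts):
                mask = z == k
                if mask.sum() >= Xd.shape[1]:
                    self.coefs_[k], *_ = np.linalg.lstsq(
                        Xd[mask], y[mask], rcond=None)
            # Reassign each sample to the expert that fits it best.
            errs = (Xd @ self.coefs_.T - y[:, None]) ** 2
            z = errs.argmin(axis=1)
        # Centroids stand in for the learned assignment module at test time.
        self.centroids_ = np.stack([
            X[z == k].mean(axis=0) if (z == k).any() else X.mean(axis=0)
            for k in range(self.n_experts)
        ])
        return self

    def assign(self, X):
        d = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(axis=-1)
        return d.argmin(axis=1)

    def predict(self, X):
        k = self.assign(X)
        return (self._design(X) * self.coefs_[k]).sum(axis=1)

    def explain(self, X):
        # The explanation IS the chosen expert's (weights, bias) vector.
        return self.coefs_[self.assign(X)]
```

Because each prediction is literally one expert's affine function of the input, `explain` returns an exact, not approximate, description of how the output was computed, which is the interpretability property the abstract emphasizes.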


Related research

- 06/19/2023 · B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
  We present a new direction for increasing the interpretability of deep n...
- 10/10/2022 · FEAMOE: Fair, Explainable and Adaptive Mixture of Experts
  Three key properties that are desired of trustworthy machine learning mo...
- 07/23/2019 · Interpretable and Steerable Sequence Learning via Prototypes
  One of the major challenges in machine learning nowadays is to provide p...
- 03/13/2020 · Neural Generators of Sparse Local Linear Models for Achieving both Accuracy and Interpretability
  For reliability, it is important that the predictions made by machine le...
- 12/01/2022 · Implicit Mixture of Interpretable Experts for Global and Local Interpretability
  We investigate the feasibility of using mixtures of interpretable expert...
- 08/13/2019 · Learning Credible Deep Neural Networks with Rationale Regularization
  Recent explainability related studies have shown that state-of-the-art D...
