XOC: Explainable Observer-Classifier for Explainable Binary Decisions

02/05/2019
by Stephan Alaniz, et al.

When deep neural networks optimize highly complex functions, it is not always obvious how they reach their final decisions. Providing explanations makes this decision process more transparent and improves a user's trust in the machine, as they clarify the rationale behind the network's predictions. Here, we present an explainable observer-classifier framework that exposes the steps taken through the model's decision-making process. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions that together trace a path through a decision tree, revealing the model's reasoning process. In addition, our model allows the data to be clustered hierarchically and gives each binary decision a semantic meaning. The sequences of binary decisions learned by our model imitate human-annotated attributes. On six benchmark datasets of increasing size and granularity, our model outperforms a decision-tree baseline and generates easy-to-understand binary decision sequences that explain the network's predictions.
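To make the core mechanism concrete (labeling an image through a sequence of learned binary sub-decisions rather than a single softmax), here is a minimal, hypothetical PyTorch sketch. It is not the authors' XOC implementation: the class name, the GRU-based observer, and all dimensions are assumptions made for illustration. A recurrent module emits one bit per step, and the resulting bit string indexes a leaf (class) of an implicit binary tree.

```python
import torch
import torch.nn as nn

class BinaryDecisionClassifier(nn.Module):
    """Toy sketch of classification via iterative binary sub-decisions.

    Hypothetical module, not the paper's architecture: an encoder produces
    image features, and a recurrent "observer" emits one binary decision per
    step; the k decisions trace a root-to-leaf path in an implicit binary
    tree with 2**k leaves, each leaf standing for a class.
    """

    def __init__(self, feat_dim=64, hidden_dim=128, num_steps=4):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for a CNN backbone
            nn.Flatten(), nn.Linear(28 * 28, feat_dim), nn.ReLU()
        )
        self.rnn = nn.GRUCell(feat_dim + 1, hidden_dim)  # +1: previous bit
        self.decide = nn.Linear(hidden_dim, 1)           # one bit per step
        self.num_steps = num_steps

    def forward(self, x):
        feats = self.encoder(x)
        h = feats.new_zeros(x.size(0), self.rnn.hidden_size)
        prev = feats.new_zeros(x.size(0), 1)
        bits = []
        for _ in range(self.num_steps):
            h = self.rnn(torch.cat([feats, prev], dim=1), h)
            p = torch.sigmoid(self.decide(h))  # P(branch right) at this node
            prev = p                           # soft bit keeps gradients flowing
            bits.append(p)
        return torch.cat(bits, dim=1)          # (batch, num_steps) probabilities

# Usage: per-step branching probabilities for MNIST-sized inputs.
model = BinaryDecisionClassifier(num_steps=4)   # 2**4 = 16 leaves/classes
probs = model(torch.randn(8, 1, 28, 28))        # shape (8, 4)
path = (probs > 0.5).long()                     # each row: a root-to-leaf path
```

In this sketch the hard path is only extracted at inference time; training such a model would additionally require assigning classes to leaf codes and a loss over the decision sequence, which the sketch deliberately leaves out.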
