Constructive Interpretability with CoLabel: Corroborative Integration, Complementary Features, and Collaborative Learning

05/20/2022
by   Abhijit Suprem, et al.

Machine learning models with explainable predictions are increasingly sought after, especially for real-world, mission-critical applications that require bias detection and risk mitigation. Inherent interpretability, where a model is designed from the ground up for interpretability, provides intuitive insights and transparent explanations of model predictions and performance. In this paper, we present CoLabel, an approach to building interpretable models with explanations rooted in the ground truth. We demonstrate CoLabel in a vehicle feature extraction application in the context of vehicle make-model recognition (VMMR). CoLabel performs VMMR with a composite of interpretable features such as vehicle color, type, and make, all based on interpretable annotations of the ground-truth labels. First, CoLabel performs corroborative integration to join multiple datasets, each of which has a subset of the desired annotations of color, type, and make. Then, CoLabel uses decomposable branches to extract complementary features corresponding to the desired annotations. Finally, CoLabel fuses these features for the final prediction. During feature fusion, CoLabel harmonizes the complementary branches so that VMMR features are compatible with each other and can be projected into the same semantic space for classification. With inherent interpretability, CoLabel achieves performance superior to state-of-the-art black-box models, with accuracies of 0.98, 0.95, and 0.94 on CompCars, Cars196, and BoxCars116K, respectively. CoLabel's constructive interpretability yields intuitive explanations, enabling high accuracy and usability in mission-critical situations.
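The corroborative-integration step described above can be illustrated with a minimal sketch. The function name, dataset structure, and conflict-handling policy here are illustrative assumptions, not CoLabel's actual implementation: several datasets, each annotating only a subset of {color, type, make} per image, are merged by image id, and fields on which sources disagree are conservatively dropped.

```python
# Hypothetical sketch of corroborative integration: joining datasets that
# each annotate only a subset of {color, type, make}. Field names, keys,
# and the merge policy are illustrative assumptions, not from the paper.

ANNOTATIONS = ("color", "type", "make")

def corroborative_join(*datasets):
    """Merge per-image annotation dicts from several datasets.

    Each dataset maps an image id to a partial annotation dict.
    If two sources disagree on a field for the same image, that field
    is dropped for that image (a conservative corroboration policy).
    """
    merged = {}
    for ds in datasets:
        for img_id, ann in ds.items():
            slot = merged.setdefault(img_id, {})
            for field in ANNOTATIONS:
                if field not in ann:
                    continue
                if field in slot and slot[field] != ann[field]:
                    slot[field] = None  # corroboration failed; drop below
                else:
                    slot.setdefault(field, ann[field])
    # Keep only corroborated (non-conflicting) fields.
    return {
        img: {f: v for f, v in ann.items() if v is not None}
        for img, ann in merged.items()
    }

if __name__ == "__main__":
    ds_color = {"img1": {"color": "red"}, "img2": {"color": "blue"}}
    ds_type_make = {"img1": {"type": "sedan", "make": "Toyota"},
                    "img3": {"type": "suv"}}
    print(corroborative_join(ds_color, ds_type_make))
```

Under this policy, an image annotated in multiple datasets ends up with the union of its non-conflicting annotations, which is what the decomposable branches would then train against.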

Related research:

- 07/23/2019 · BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth · "Interpretability is rising as an important area of research in machine l..."
- 06/16/2016 · Model-Agnostic Interpretability of Machine Learning · "Understanding why machine learning models behave the way they do empower..."
- 07/15/2023 · Explainable AI with counterfactual paths · "Explainable AI (XAI) is an increasingly important area of research in ma..."
- 04/01/2019 · VINE: Visualizing Statistical Interactions in Black Box Models · "As machine learning becomes more pervasive, there is an urgent need for ..."
- 12/24/2020 · QUACKIE: A NLP Classification Task With Ground Truth Explanations · "NLP Interpretability aims to increase trust in model predictions. This m..."
- 01/23/2023 · Feature construction using explanations of individual predictions · "Feature construction can contribute to comprehensibility and performance..."
- 04/19/2020 · A Biologically Interpretable Two-stage Deep Neural Network (BIT-DNN) For Hyperspectral Imagery Classification · "Spectral-spatial based deep learning models have recently proven to be e..."
