
Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model creation

by Rupam Patir, et al.
IIIT Delhi

Deep learning models trained on massive amounts of data tend to capture one view of the data and its associated mapping. Different deep learning models built on the same training data may capture different views of it, depending on the underlying techniques used. To explain the decisions arrived at by a blackbox deep learning model, we argue that it is essential to faithfully reproduce that model's view of the training data. This faithful reproduction can then be used for explanation generation. We investigate two methods for data view extraction: a hill-climbing approach and a GAN-driven approach. We then use the synthesized data to create two shadow models for explanation generation: a Decision-Tree model and a Formal Concept Analysis based model. We evaluate these approaches on blackbox models trained on public datasets and show their usefulness in explanation generation.
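The hill-climbing route to data view extraction can be sketched as follows. This is a minimal illustration, not the authors' implementation: `blackbox_confidence` is a hypothetical stand-in for querying the real model's confidence for a target class, and all parameters (step size, step count) are assumptions. The idea is to perturb a random input and keep only changes that raise the blackbox's confidence, yielding synthetic points that reflect the model's learned view of each class; a shadow model (e.g. a decision tree) could then be trained on such points labeled by the blackbox.

```python
import numpy as np

# Hypothetical blackbox stub: we assume query-only access to a class
# confidence score, with no visibility into the model's internals.
# Here class 1 confidence peaks as the input approaches the point (1, 1).
def blackbox_confidence(x, target_class):
    score = np.exp(-np.sum((x - 1.0) ** 2))
    return score if target_class == 1 else 1.0 - score

def hill_climb_sample(target_class, dim=2, steps=300, step_size=0.1, rng=None):
    """Synthesize one input the blackbox assigns high confidence to
    for `target_class`, by greedy hill climbing over random perturbations."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=dim)                     # random starting point
    best = blackbox_confidence(x, target_class)
    for _ in range(steps):
        candidate = x + rng.normal(scale=step_size, size=dim)
        score = blackbox_confidence(candidate, target_class)
        if score > best:                         # accept only uphill moves
            x, best = candidate, score
    return x, best

# Extract a small "data view": a handful of high-confidence class-1 samples.
rng = np.random.default_rng(0)
view = [hill_climb_sample(1, rng=rng)[0] for _ in range(5)]
```

In a real setting the synthesized points would be labeled by querying the blackbox and then used to fit the shadow model, whose structure (tree paths or formal concepts) supplies the explanations.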



