Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models

08/01/2018 · by Jiawei Zhang, et al.

Interpretation and diagnosis of machine learning models have gained renewed interest in recent years with breakthroughs in new approaches. We present Manifold, a framework that uses visual analysis techniques to support interpretation, debugging, and comparison of machine learning models in a transparent and interactive manner. Conventional techniques usually focus on visualizing the internal logic of a specific model type (e.g., deep neural networks) and cannot extend to more complex scenarios in which different model types are integrated. To this end, Manifold is designed as a generic framework that does not rely on or access the internal logic of a model: it observes only the input (i.e., instances or features) and the output (i.e., the predicted result and probability distribution). We describe the workflow of Manifold as an iterative process consisting of three major phases commonly involved in model development and diagnosis: inspection (hypothesis), explanation (reasoning), and refinement (verification). The visual components supporting these tasks include a scatterplot-based visual summary that gives an overview of the models' outcomes and a customizable tabular view that reveals feature discrimination. We demonstrate current applications of the framework to classification and regression tasks and discuss other machine learning scenarios where Manifold can be applied.
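To make the model-agnostic design concrete: because Manifold observes only inputs and outputs, the data feeding its scatterplot-based comparison view can be derived from nothing more than each model's predicted class probabilities and the ground-truth labels. The sketch below (the helper name `compare_models` and the agreement-quadrant grouping are illustrative assumptions, not Manifold's actual API) partitions instances by which of two models scores the correct class confidently, the kind of pairwise summary the abstract describes:

```python
import numpy as np

def compare_models(proba_a, proba_b, y_true, threshold=0.5):
    """Model-agnostic pairwise comparison: uses only each model's
    predicted probability for the ground-truth class, never model
    internals. (Hypothetical helper; not Manifold's real interface.)

    proba_a, proba_b: (n_instances, n_classes) probability arrays.
    y_true: (n_instances,) integer ground-truth labels.
    """
    idx = np.arange(len(y_true))
    score_a = proba_a[idx, y_true]  # P_A(correct class) per instance
    score_b = proba_b[idx, y_true]  # P_B(correct class) per instance
    a_ok = score_a >= threshold
    b_ok = score_b >= threshold
    # Group instances into agreement quadrants, mirroring the kind of
    # visual summary a pairwise model-comparison scatterplot encodes.
    return {
        "both_correct": idx[a_ok & b_ok],
        "only_a": idx[a_ok & ~b_ok],
        "only_b": idx[~a_ok & b_ok],
        "both_wrong": idx[~a_ok & ~b_ok],
        # (score_a, score_b) pairs: the 2-D coordinates one could plot.
        "scores": np.stack([score_a, score_b], axis=1),
    }
```

The "only_a" / "only_b" groups are the interesting ones for diagnosis: instances where the models disagree point to complementary strengths and are natural starting points for the inspection and explanation phases described above.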


Related research

- Melody: Generating and Visualizing Machine Learning Model Summary to Understand Data and Classifiers Together (07/21/2020)
- explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning (07/29/2019)
- Shapley Homology: Topological Analysis of Sample Influence for Neural Networks (10/15/2019)
- Adversarial Machine Learning: An Interpretation Perspective (04/23/2020)
- A multi-stage machine learning model on diagnosis of esophageal manometry (06/25/2021)
- Model-agnostic interpretation by visualization of feature perturbations (01/26/2021)
- Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks (12/18/2017)
