Using Visual Analytics to Interpret Predictive Machine Learning Models

06/17/2016
by Josua Krause, et al.

It is commonly believed that increasing the interpretability of a machine learning model may decrease its predictive power. However, inspecting the input-output relationships of such models with visual analytics, while treating them as black boxes, can help explain the reasoning behind their outcomes without sacrificing predictive quality. We identify a space of possible solutions and provide two examples where such techniques have been used successfully in practice.
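
The key idea is to probe a trained model purely through its input-output behavior. As an illustrative sketch (not code from the paper; the dataset, model choice, and the helper name partial_dependence_curve are my own assumptions), the Python snippet below computes a partial dependence curve, one standard black-box inspection: sweep a single feature across its observed range while holding the rest of the data fixed, and average the model's predictions at each step.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Train an arbitrary model; the inspection below never looks inside it.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def partial_dependence_curve(model, X, feature, grid_size=20):
    """Sweep one feature over a grid while holding all other features at
    their observed values, averaging the predicted probability at each
    step. Only model.predict_proba is called, so the model stays a
    black box."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value  # overwrite this feature for every row
        averages.append(model.predict_proba(X_mod)[:, 1].mean())
    return grid, np.array(averages)

grid, curve = partial_dependence_curve(model, X, feature=0)
for v, p in zip(grid, curve):
    print(f"feature 0 = {v:+6.2f} -> mean P(class 1) = {p:.3f}")
```

Plotting the resulting curve shows how the prediction responds to that one feature, which is exactly the kind of input-output relationship a visual analytics interface can let analysts explore interactively.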

Related research:

07/17/2018 · RuleMatrix: Visualizing and Understanding Classifiers with Rules
With the growing adoption of machine learning techniques, there is a sur...

08/28/2021 · A Visual Analytics System for Water Distribution System Optimization
The optimization of water distribution systems (WDSs) is vital to minimi...

06/04/2020 · Cracking the Black Box: Distilling Deep Sports Analytics
This paper addresses the trade-off between Accuracy and Transparency for...

07/26/2018 · High Dimensional Model Representation as a Glass Box in Supervised Machine Learning
Prediction and explanation are key objects in supervised machine learnin...

01/28/2020 · Statistical Exploration of Relationships Between Routine and Agnostic Features Towards Interpretable Risk Characterization
As is typical in other fields of application of high throughput systems,...

10/27/2020 · Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning
The increasing impact of black box models, and particularly of unsupervi...

02/24/2023 · Visual Privacy: Current and Emerging Regulations Around Unconsented Video Analytics in Retail
Video analytics is the practice of combining digital video data with mac...
