LCS-DIVE: An Automated Rule-based Machine Learning Visualization Pipeline for Characterizing Complex Associations in Classification

04/26/2021
by Robert Zhang, et al.

Machine learning (ML) research has yielded powerful tools for training accurate prediction models despite complex multivariate associations (e.g., interactions and heterogeneity). In fields such as medicine, improved interpretability of ML modeling is required for knowledge discovery, accountability, and fairness. Rule-based ML approaches such as Learning Classifier Systems (LCSs) strike a balance between predictive performance and interpretability in complex, noisy domains. This work introduces the LCS Discovery and Visualization Environment (LCS-DIVE), an automated LCS model interpretation pipeline for complex biomedical classification. LCS-DIVE conducts modeling using a new scikit-learn implementation of ExSTraCS, an LCS designed to overcome noise and scalability challenges in biomedical data mining, yielding human-readable IF:THEN rules as well as feature-tracking scores for each training sample. LCS-DIVE leverages feature-tracking scores and/or rules to automatically guide characterization of (1) feature importance, (2) underlying additive, epistatic, and/or heterogeneous patterns of association, and (3) model-driven heterogeneous instance subgroups via clustering, visualization generation, and cluster interrogation. LCS-DIVE was evaluated over a diverse set of simulated genetic and benchmark datasets encoding a variety of complex multivariate associations, demonstrating its ability to differentiate between them, and was then applied to characterize associations within a real-world study of pancreatic cancer.
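To make the abstract's core ideas concrete, the following is a minimal, hypothetical sketch (not the actual ExSTraCS or LCS-DIVE implementation) of how an LCS rule set of human-readable IF:THEN rules can classify an instance by voting among matching rules, and how per-instance feature-tracking scores might accumulate from the features specified by those matching rules. Feature names like `SNP_1` are illustrative placeholders.

```python
# Hypothetical illustration of LCS-style IF:THEN rule classification.
# Each rule specifies conditions on a subset of features; unspecified
# features are wildcards ('#' in LCS notation).
from collections import defaultdict

rules = [
    {"conditions": {"SNP_1": 1, "SNP_2": 0}, "class": 1},  # IF SNP_1=1 AND SNP_2=0 THEN class=1
    {"conditions": {"SNP_3": 2}, "class": 0},              # IF SNP_3=2 THEN class=0
    {"conditions": {"SNP_1": 1}, "class": 1},              # IF SNP_1=1 THEN class=1
]

def classify(instance, rules):
    """Vote over all matching rules; also tally which specified features
    contributed to matches (a toy analog of feature-tracking scores)."""
    votes = defaultdict(int)
    feature_tracking = defaultdict(int)
    for rule in rules:
        if all(instance.get(f) == v for f, v in rule["conditions"].items()):
            votes[rule["class"]] += 1
            for f in rule["conditions"]:
                feature_tracking[f] += 1
    prediction = max(votes, key=votes.get) if votes else None
    return prediction, dict(feature_tracking)

instance = {"SNP_1": 1, "SNP_2": 0, "SNP_3": 0}
pred, tracking = classify(instance, rules)
print(pred, tracking)  # two rules match via SNP_1 -> predicted class 1
```

In LCS-DIVE, per-instance feature-tracking vectors of this kind are what get clustered to expose model-driven heterogeneous subgroups; this sketch only shows where such vectors come from.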


