Actionable Interpretation of Machine Learning Models for Sequential Data: Dementia-related Agitation Use Case

by Nutta Homdee, et al.

Machine learning has shown success on complex learning problems in which the data and parameters can be multidimensional and too complex for first-principles analysis. Some applications of machine learning require human interpretability, not just to understand a particular result (classification, detection, etc.) but also so that humans can take action based on that result. Interpretation of black-box machine learning models has been studied, but recent work has focused on validation and on improving model performance. In this work, an actionable interpretation of black-box machine learning models is presented. The proposed technique focuses on extracting actionable measures that help users make a decision or take an action. Actionable interpretation can be implemented on most traditional black-box machine learning models: it uses the already trained model, its training data, and data processing techniques to extract actionable items from the model outcome and its time-series inputs. An implementation of actionable interpretation is demonstrated on a use case: predicting dementia-related agitation from the ambient environment. It is shown that actionable items can be extracted, such as a decreasing in-home light level that triggers an agitation episode. In this use case, actionable interpretation can help dementia caregivers intervene and prevent agitation.
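To make the idea concrete, the following is a minimal sketch of one way such actionable items could be extracted: when a trained black-box model flags an episode, the trends of each time-series input over the preceding window are summarized and notable ones (e.g., a falling light level) are reported. The function name, the slope heuristic, and the threshold are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def extract_actionable_items(model, window, feature_names, slope_threshold=0.5):
    """Given a trained black-box classifier and its time-series input window
    (shape: timesteps x features), flag feature trends that accompany a
    positive prediction so a caregiver can act on them.

    This is an illustrative sketch; the trend heuristic is an assumption.
    """
    # Flatten the window into a single feature vector for the model.
    prediction = model.predict(window.reshape(1, -1))[0]
    items = []
    if prediction == 1:  # e.g., an agitation episode is predicted
        t = np.arange(window.shape[0])
        for j, name in enumerate(feature_names):
            # Least-squares slope of this feature over the window.
            slope = np.polyfit(t, window[:, j], 1)[0]
            if abs(slope) >= slope_threshold:
                direction = "decreasing" if slope < 0 else "increasing"
                items.append(f"{name} is {direction} (slope {slope:.2f})")
    return prediction, items
```

A caregiver-facing system could then surface each returned item ("light is decreasing") as a suggested intervention point, independent of the internals of the underlying model.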


