Interpretable Models of Human Interaction in Immersive Simulation Settings

09/24/2019
by Nicholas Hoernle, et al.

Immersive simulations are increasingly used for teaching and training in many societally important arenas, including healthcare, disaster response and science education. The interactions of participants in such settings lead to a complex array of emergent outcomes that present challenges for analysis. This paper studies a central element of such an analysis, namely the interpretability of models for inferring structure in time series data. This problem is explored in the context of modeling student interactions in an immersive ecological-system simulation. Unsupervised machine learning is applied to data on system dynamics with the aim of helping teachers determine the effects of students' actions on these dynamics. We address the question of choosing the optimal machine learning model, considering both statistical information criteria and interpretability quality. Our approach adapts two interpretability tests from the literature that measure the agreement between the model output and human judgment. The results of a user study show that the models that are best understood by people are not those that optimize information-theoretic criteria. In addition, a model using a fully Bayesian approach performed well on both statistical measures and on human-subject tests of interpretability, making it a good candidate for automated model selection that does not require human-in-the-loop evaluation. The results from this paper are already being used in the classroom and can inform the design of interpretable models for a broad range of socially relevant domains.
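As a concrete illustration of the statistical side of this model-selection question, the sketch below (not the authors' actual pipeline, which uses a fully Bayesian approach) fits Gaussian hidden Markov models with different numbers of latent states to a synthetic multivariate time series and ranks them by the Bayesian information criterion; the paper's central finding is that the model favored by such a criterion need not be the one people understand best. The data, state counts and parameter choices here are hypothetical.

# Illustrative sketch only: compares hidden Markov models with different
# numbers of latent states using BIC, the kind of information-theoretic
# criterion the abstract contrasts with human interpretability judgments.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Synthetic stand-in for simulation telemetry: T time steps, D observed variables,
# generated from two regimes with different means.
T, D = 500, 3
X = np.concatenate([
    rng.normal(0.0, 1.0, size=(T // 2, D)),   # first regime
    rng.normal(3.0, 1.0, size=(T // 2, D)),   # second regime
])

def bic(model, X):
    # BIC for a fitted GaussianHMM with diagonal covariances:
    # free parameters = initial distribution + transition matrix + means + variances.
    n_states = model.n_components
    n_features = X.shape[1]
    n_params = (n_states - 1) + n_states * (n_states - 1) + 2 * n_states * n_features
    log_likelihood = model.score(X)
    return n_params * np.log(len(X)) - 2.0 * log_likelihood

scores = {}
for k in range(1, 6):
    model = GaussianHMM(n_components=k, covariance_type="diag", n_iter=100, random_state=0)
    model.fit(X)
    scores[k] = bic(model, X)

best_k = min(scores, key=scores.get)
print("BIC by number of states:", scores)
print("Selected number of states:", best_k)

Running the script prints a BIC value for each candidate number of states and reports the count that minimizes it; in the paper's setting, a statistical choice of this kind is then weighed against the human-judgment interpretability tests.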
