
Interpretable Machine Learning Approaches to Prediction of Chronic Homelessness

by Blake VanBerlo, et al.

We introduce a machine learning approach to predict chronic homelessness from de-identified client shelter records drawn from a commonly used Canadian homelessness management information system. Using a 30-day time step, a dataset for 6521 individuals was generated. Our model, HIFIS-RNN-MLP, incorporates both static and dynamic features of a client's history to forecast chronic homelessness 6 months into the client's future. The training method was fine-tuned to achieve a high F1-score, striking the desired balance between high recall and precision. Mean recall and precision across 10-fold cross-validation were 0.921 and 0.651 respectively. An interpretability method was applied to explain individual predictions and to gain insight into the overall factors contributing to chronic homelessness among the population studied. The model achieves state-of-the-art performance and, through interpretable AI, improves stakeholder trust in what is usually a "black box" neural network model.
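The abstract's description of a model that combines a recurrent branch (dynamic, per-time-step features) with an MLP head that also receives static client features can be sketched as a forward pass. This is a minimal NumPy illustration, not the authors' HIFIS-RNN-MLP implementation: the vanilla RNN cell, all dimensions, and the random weights are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_rnn(x_seq, Wx, Wh, b):
    """Vanilla RNN over a sequence of dynamic feature vectors.

    Returns the final hidden state, a fixed-size summary of the
    client's time-series history."""
    h = np.zeros(Wh.shape[0])
    for x_t in x_seq:
        h = np.tanh(Wx @ x_t + Wh @ h + b)
    return h

def mlp_head(z, W1, b1, W2, b2):
    """Two-layer MLP producing a chronic-homelessness probability."""
    hidden = np.maximum(0.0, W1 @ z + b1)      # ReLU
    logit = W2 @ hidden + b2
    return 1.0 / (1.0 + np.exp(-logit))        # sigmoid

# Hypothetical dimensions: 6 time steps (30-day intervals),
# 4 dynamic features per step, 3 static features, 8 hidden units.
T, D_DYN, D_STAT, H = 6, 4, 3, 8

x_dynamic = rng.normal(size=(T, D_DYN))  # per-time-step client records
x_static = rng.normal(size=D_STAT)       # e.g. fixed demographics

Wx = rng.normal(size=(H, D_DYN)) * 0.1
Wh = rng.normal(size=(H, H)) * 0.1
b = np.zeros(H)

h_final = simple_rnn(x_dynamic, Wx, Wh, b)

# Concatenate the RNN summary with static features, then classify.
z = np.concatenate([h_final, x_static])
W1 = rng.normal(size=(H, H + D_STAT)) * 0.1
b1 = np.zeros(H)
W2 = rng.normal(size=(1, H)) * 0.1
b2 = np.zeros(1)

p_chronic = float(mlp_head(z, W1, b1, W2, b2))
print(p_chronic)  # a probability strictly between 0 and 1
```

As a side note, the reported mean recall (0.921) and precision (0.651) imply a mean F1-score of about 2 × 0.921 × 0.651 / (0.921 + 0.651) ≈ 0.763.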

