What Symptoms and How Long? An Interpretable AI Approach for Depression Detection in Social Media

05/18/2023
by Junwei Kuang, et al.

Depression is among the most prevalent and serious mental illnesses, imposing grave financial and societal burdens. Detecting depression is key to early intervention that can mitigate those consequences. Such a high-stakes decision inherently necessitates interpretability, which most existing methods lack. To incorporate human expertise into this decision-making process, safeguard end users' trust, and ensure algorithmic transparency, we develop an interpretable deep learning model: the Multi-Scale Temporal Prototype Network (MSTPNet). MSTPNet builds on emergent prototype learning methods. In line with medical practice in depression diagnosis, MSTPNet differs from existing prototype learning models in its ability to capture depressive symptoms together with their temporal distribution, such as the frequency and persistence of their appearance. Extensive empirical analyses on real-world social media data show that MSTPNet outperforms state-of-the-art benchmarks in depression detection, with an F1-score of 0.851. Moreover, MSTPNet interprets its predictions by identifying which depression symptoms a user presents and how long those symptoms last. We further conduct a user study to demonstrate its superiority over the benchmarks in interpretability. Methodologically, this study contributes a novel interpretable deep learning model for depression detection in social media. The proposed method can be deployed on social media platforms to detect depression and its symptoms; platforms can then provide personalized online resources, such as educational and supportive videos and articles, or referrals to treatment and social support for depressed users.
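The abstract's core idea, prototype similarity combined with multi-scale temporal pooling, can be sketched in a few lines. The sketch below is illustrative only and not the authors' implementation: all dimensions, the distance-based similarity, the window sizes, and the linear head are assumptions chosen to mimic how a prototype network might score "which symptoms appear, and how persistently" over a user's post history.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a user's T posts, each encoded
# as a d-dimensional embedding by some upstream text encoder.
T, d, n_protos = 30, 16, 4
posts = rng.normal(size=(T, d))              # stand-in for encoded posts
prototypes = rng.normal(size=(n_protos, d))  # learned "symptom" prototypes

def similarity(x, p):
    """Prototype similarity as a decaying function of squared distance,
    a common choice in prototype-based networks."""
    return np.exp(-np.sum((x - p) ** 2, axis=-1))

# Per-post similarity to each prototype: shape (T, n_protos).
sims = similarity(posts[:, None, :], prototypes[None, :, :])

def pooled(sims, window):
    """Best average activation over any window of the given length:
    a crude proxy for 'does this symptom appear, and for how long?'"""
    return np.max([sims[t:t + window].mean(axis=0)
                   for t in range(sims.shape[0] - window + 1)], axis=0)

# Multi-scale temporal pooling over short, medium, and full-history windows.
features = np.concatenate([pooled(sims, w) for w in (1, 7, 30)])

# A linear head on the pooled similarities yields a depression score; the
# per-prototype features themselves provide the symptom-level interpretation.
w = rng.normal(size=features.shape)
score = 1.0 / (1.0 + np.exp(-features @ w))
```

Because each feature corresponds to one (prototype, window) pair, inspecting the largest features directly answers "what symptoms and how long", which is the interpretability mechanism the abstract describes.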


Related research

- Care for the Mind Amid Chronic Diseases: An Interpretable AI Approach Using IoT (11/08/2022)
- Interpretable Deepfake Detection via Dynamic Prototypes (06/28/2020)
- Interpretable Multi-Head Self-Attention Model for Sarcasm Detection in Social Media (01/14/2021)
- Explainable and High-Performance Hate and Offensive Speech Detection (06/26/2022)
- Deep Bag-of-Sub-Emotions for Depression Detection in Social Media (03/01/2021)
- ProtoExplorer: Interpretable Forensic Analysis of Deepfake Videos using Prototype Exploration and Refinement (09/20/2023)
- Depression Detection Using Digital Traces on Social Media: A Knowledge-aware Deep Learning Approach (03/06/2023)
