Monte Carlo Tree Search for Interpreting Stress in Natural Language

04/17/2022
by Kyle Swanson, et al.
Stanford University

Natural language processing can facilitate the analysis of a person's mental state from text they have written. Previous studies have developed models that can predict whether a person is experiencing a mental health condition from social media posts with high accuracy. Yet, these models cannot explain why the person is experiencing a particular mental state. In this work, we present a new method for explaining a person's mental state from text using Monte Carlo tree search (MCTS). Our MCTS algorithm employs trained classification models to guide the search for key phrases that explain the writer's mental state in a concise, interpretable manner. Furthermore, our algorithm can find both explanations that depend on the particular context of the text (e.g., a recent breakup) and those that are context-independent. Using a dataset of Reddit posts that exhibit stress, we demonstrate the ability of our MCTS algorithm to identify interpretable explanations for a person's feeling of stress in both a context-dependent and context-independent manner.



1 Introduction

Disabilities associated with mental health conditions pose a significant challenge for many people around the world (stauder2010worldwide; de2013social; chen2018mood). To help people suffering from these conditions, it is crucial to identify those who are experiencing a mental health condition and understand the underlying causes.

Natural language processing (NLP) can help by analyzing a person’s mental state based on the text they have written. Previous studies (turcan2019dreaddit; demszky2020goemotions; gjurkovic2020pandora; ansari2021data) have demonstrated the ability of NLP models to process social media posts and predict stress, depression, and a range of emotions. These methods, however, are not able to explain why the person might be feeling the way they are, even if that information is clearly contained in the text analyzed by the model.

In this work, we seek to explain the underlying causes of a person’s mental state from their writing. We formulate such an explanation as a small set of phrases from the text that is sufficient to explain the person’s mental state. We wish to identify two complementary types of explanations: those that are particular to the situation the person is in, which we call context-dependent, and those that could appear across different contexts, which we call context-independent. Figure 1 shows an illustrative example. Identifying both types of explanations not only enhances our understanding of the underlying sources of a person’s mental state but also provides insights into how one’s mental state can be affected by general and specific causes.

r/Relationships: I can’t believe this. My boyfriend just cheated on me and then he bragged about it on twitter. What kind of a messed up person would do that? I’m so angry with him and I’m sure we’re going to have a huge fight about this when I see him tomorrow.

Figure 1: A fictitious example of text exhibiting stress in the relationships context and two explanations for that stress. The explanation in blue is context-dependent (specific to relationships) while the explanation in red is context-independent (general to any disagreement).

To this end, we develop a novel Monte Carlo tree search (MCTS) algorithm that can effectively identify explanations that are either context-dependent or context-independent by leveraging the semantic capabilities of trained NLP models. We demonstrate, both quantitatively and qualitatively, the efficacy of this approach for explaining a person’s mental state using a dataset of Reddit posts that exhibit stress (turcan2019dreaddit).

2 Related Work

Mental Health Prediction. Previous studies have tackled the task of mental health disability classification, using methods ranging from classical supervised techniques such as SVMs, logistic regression, Naive Bayes, MLPs, and decision trees to deeper models such as CNNs and GRUs (turcan2019dreaddit; gjurkovic2020pandora; ansari2021data; depsign-acl). Other approaches utilize pre-trained large language models fine-tuned on specific mental health datasets (ji2021mentalbert; matovsevic2021stressformers; mauriello2021sad), which take advantage of models trained on significantly larger datasets to speed up training and increase accuracy. turcan2019dreaddit specifically focus on the task of stress prediction in Reddit posts, and they show that large BERT-based models outperform smaller models such as CNNs and logistic regression.

NLP Explainability. Explainability in NLP is an emerging topic of interest as language models have become larger and more accurate at the expense of reduced interpretability. Common methods for explainability include feature importance reporting across lexical or latent features (danilevsky2020survey), model-agnostic approaches that extract post-hoc explanations (ribeiro2016model), and analogy-based explanations (croce2019auditing). Prior works have also focused on rationale identification (lei2016rationalizing) and text matching rationalization (swanson2020rationalizing), where models are designed to select small, interpretable segments of text when making predictions. Attention has also been used as a form of interpretability, but attention weights do not always correlate with impact on the model’s prediction, potentially limiting their usefulness (serrano2019attention). In this work, we propose to use Monte Carlo tree search (silver2016mastering; chaudhry2018feature; jin2020multi; albrecht2021interpretable; pmlr-v139-yuan21c) as a post-hoc explainability method that can be applied to any model to flexibly identify multiple types of explanations for a model’s predictions.

3 The Dreaddit Dataset

The Dreaddit dataset (turcan2019dreaddit) contains 3,553 Reddit posts with human-annotated binary stress labels denoting whether a given text contains evidence of stress. Each post belongs to one of ten subreddits (e.g., “r/Relationships”), which we consider to be the context of the post. The posts are split into 2,838 train posts and 715 test posts. Figures 8 and 9 (see Appendix) show the distributions of the stress labels and subreddit categories for the train and test sets.
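For illustration, the splits can be loaded along the following lines, assuming the dataset is distributed as CSV files with text, label, and subreddit columns (the file and column names here are assumptions and may differ from the actual release):

```python
# A minimal sketch of loading the Dreaddit splits, assuming CSV files
# with "text", "label" (binary stress), and "subreddit" columns.
import pandas as pd

train_df = pd.read_csv("dreaddit-train.csv")  # 2,838 posts
test_df = pd.read_csv("dreaddit-test.csv")    # 715 posts

print(train_df["label"].value_counts())      # stress label distribution
print(train_df["subreddit"].value_counts())  # context (subreddit) distribution
```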

4 Method

Figure 2: A portion of the tree of explanations searched by MCTS for an example text. Red indicates the text that is currently included in the explanation. The root of the tree is an explanation with a single phrase containing all the text. Each node in the tree can be expanded by removing the first or last token of a phrase or by removing a token in the middle of the phrase (constrained by certain MCTS parameters). Once a minimum number of tokens has been reached, the resulting explanation is given a reward based on the predictions of the stress and context models.

We assume that we have access to a training corpus and a test corpus to train and evaluate our models, respectively. The training corpus, $\mathcal{D}_{\text{train}}$, is a set of tuples $(x, y, c)$, where each tuple contains a text $x = (x_1, \dots, x_n)$ consisting of $n$ tokens, its corresponding stress indicator $y \in \{0, 1\}$ denoting whether $x$ contains evidence of stress, and a context label $c$ indicating the subreddit category the text belongs to. Similarly, we assume a test corpus $\mathcal{D}_{\text{test}}$ of the same form.

4.1 Classification of Stress and Context

We consider two types of classification tasks, namely binary stress classification and multi-class context (subreddit) classification. We refer to a model trained for the former task as a stress classifier, which can be thought of as a function $f_s$ mapping a piece of text $x$ to a likelihood $f_s(x) = \hat{p}(y = 1 \mid x) \in [0, 1]$. We refer to a model trained for the latter as a context classifier, which can be thought of as a function $f_c$ mapping a piece of text $x$ to a point $f_c(x) = \hat{p}(c \mid x)$ on the probability simplex over the $K$ contexts.

We build simple stress and context prediction models using Bernoulli and Multinomial Naive Bayes, Support Vector Machine (Platt99probabilisticoutputs), and Multilayer Perceptron (hinton_mlp). All of these models use vectors of word counts as inputs (we use CountVectorizer from scikit-learn fit on the training set with all default parameters). We also build large BERT-based models by adding a classification layer on top of the MentalRoBERTa model of ji2021mentalbert and then fine-tuning the model on the training set.
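For illustration, the non-neural pipeline might be assembled as follows with scikit-learn, using default hyperparameters throughout (the exact settings used in our experiments may differ); train_df is from the loading sketch in Section 3:

```python
# A minimal sketch of the non-neural stress classifiers described above:
# word-count features (CountVectorizer with defaults) feeding each model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

texts, stress_labels = train_df["text"], train_df["label"]

models = {
    "Bernoulli NB": BernoulliNB(),
    "Multinomial NB": MultinomialNB(),
    "SVM": SVC(probability=True),  # Platt scaling yields probability outputs
    "MLP": MLPClassifier(),
}

stress_classifiers = {}
for name, model in models.items():
    # Each classifier is trained end to end as a count-vector pipeline.
    clf = make_pipeline(CountVectorizer(), model)
    clf.fit(texts, stress_labels)
    stress_classifiers[name] = clf

# A stress classifier maps a piece of text to a likelihood p(stress | text).
p_stress = stress_classifiers["MLP"].predict_proba(
    ["I am constantly afraid that I am going to lose my job"])[:, 1]
```

The context classifiers are built the same way, substituting the subreddit labels for the stress labels.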

4.2 Definition of an Explanation

An interpretable explanation for a person’s stress should consist of a small set of phrases from the full text that captures the core reasons behind the stress discussed within the text.

Formally, for a given piece of text $x$ in the corpus that is labeled as stressed ($y = 1$), we define an explanation as a set of phrases $E = \{P_1, \dots, P_m\}$ where each phrase $P_j$ is a set of contiguous tokens in the text, that is, $P_j = (x_a, x_{a+1}, \dots, x_b)$ for some $1 \le a \le b \le n$. Furthermore, the phrases must be non-overlapping, which means that $P_i \cap P_j = \emptyset$ for all $i \neq j$. In order to ensure interpretability, the explanation must satisfy three conditions.

a. Phrase count: $|E| \le k_{\max}$, meaning the explanation must contain at most $k_{\max}$ phrases. Too many phrases would impede interpretability.

b. Phrase length: $|P_j| \ge \ell_{\min}$ for all $j$, meaning each phrase must have at least $\ell_{\min}$ tokens, preventing phrases that are too short to carry any meaning.

c. Proportion of tokens: $r_{\min} \le \rho(E) \le r_{\max}$, where $\rho(E) = \frac{1}{n} \sum_j |P_j|$ is the proportion of tokens in the text that are included in the explanation and $r_{\min}$ and $r_{\max}$ are lower and upper bounds on the proportion of tokens in the explanation. This constrains the overall verbosity of the explanation to a reasonable range. A minimal check of these conditions is sketched below.
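For illustration, the three conditions might be checked as follows, assuming each phrase is represented as a (start, end) token index range with an exclusive end; the default bounds shown are placeholders, not the values used in our experiments:

```python
# A minimal sketch of the interpretability conditions; k_max, l_min,
# r_min, and r_max correspond to the bounds defined above, with
# hypothetical placeholder defaults.
def is_interpretable(phrases, n_tokens, k_max=5, l_min=3, r_min=0.1, r_max=0.4):
    n_selected = sum(end - start for start, end in phrases)
    rho = n_selected / n_tokens  # proportion of tokens in the explanation
    return (
        len(phrases) <= k_max                                     # a. phrase count
        and all(end - start >= l_min for start, end in phrases)   # b. phrase length
        and r_min <= rho <= r_max                                 # c. proportion of tokens
    )
```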

4.3 Context-Dependent and Independent Explanations of Stress

We are interested in identifying two specific types of explanations for stress: one that depends on the context of the text and one that is independent of that context. We will refer to the context-dependent explanation as $E_{\text{dep}}$ and to the context-independent explanation as $E_{\text{ind}}$.

In both cases, since the explanation must explain the stress in the text, the stress must be evident from just the text contained in the phrases of the explanation. We can verify this by using our stress classification model. Specifically, we want an explanation such that the average stress prediction across the phrases of the explanation is close to 1. Hence for both $E_{\text{dep}}$ and $E_{\text{ind}}$, we want

$$\bar{S}(E) = \frac{1}{|E|} \sum_{P \in E} f_s(P) \approx 1,$$

where $\bar{S}(E)$ is the average stress across the phrases of the explanation.

However, the phrases of the context-dependent explanation should indicate the context of the text while the context-independent explanation should not. We enforce this by examining the entropy of the predictions of our context classification model. If the phrases of an explanation have low entropy, then the model is relatively sure of the context; hence, that explanation is context-dependent. If the entropy is high, then the model is unsure of the context and the explanation is context-independent. Formally, if we define

$$\bar{H}(E) = \frac{1}{|E|} \sum_{P \in E} H\big(f_c(P)\big)$$

as the average Shannon entropy of the context predictions across phrases, we want $\bar{H}(E_{\text{dep}}) \approx 0$ and $\bar{H}(E_{\text{ind}}) \approx H_{\max}$, where $H_{\max} = \log K$ is the maximum entropy (viz., the entropy of a uniform distribution over the $K$ contexts).
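For illustration, these two quantities might be computed as follows, assuming scikit-learn-style classifiers that expose predict_proba and phrases rendered back into text strings:

```python
# A minimal sketch of the explanation scoring quantities defined above.
import numpy as np

def avg_stress(phrase_texts, stress_clf):
    # Average stress prediction p(stress | phrase) across the phrases.
    return stress_clf.predict_proba(phrase_texts)[:, 1].mean()

def avg_context_entropy(phrase_texts, context_clf):
    # Average Shannon entropy of the context distribution across phrases.
    probs = context_clf.predict_proba(phrase_texts)
    entropies = -(probs * np.log(np.clip(probs, 1e-12, None))).sum(axis=1)
    return entropies.mean()

H_MAX = np.log(3)  # maximum entropy for K = 3 contexts (uniform distribution)
```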

4.4 Finding Explanations with MCTS

We use the MCTS framework established in silver2016mastering, but we modify the search tree and the reward function to suit our purposes (see Figure 2). Each node in the tree represents an explanation $E$. The root of the tree represents the whole text as a single phrase, i.e., $E = \{(x_1, \dots, x_n)\}$. When the search is at a given node in the tree, there are two options for expanding the next node: (i) remove the first or last token in any phrase, as long as the shortened phrase still contains at least $\ell_{\min}$ tokens, or (ii) remove a token in the middle of a phrase, thus breaking it into two phrases, as long as both resulting phrases have at least $\ell_{\min}$ tokens and the total number of phrases does not exceed $k_{\max}$.

The search continues to expand nodes in the tree until either the current node cannot be expanded using either of the two rules above or the explanation at the current node contains too few tokens, i.e., $\rho(E) < r_{\min}$. This node serves as a leaf node and is given a reward equal to

$$R(E) = \bar{S}(E) + s \cdot \lambda \cdot \frac{\bar{H}(E)}{H_{\max}}$$

for some weight $\lambda > 0$ and sign $s \in \{-1, +1\}$. We use $s = +1$ to select for high-entropy (context-independent) explanations and $s = -1$ to select for low-entropy (context-dependent) explanations. This reward is propagated back to all the nodes on the path from the root to this leaf node according to the update rules from silver2016mastering. After the search is complete, the best explanation is selected as

$$E^\star = \operatorname*{arg\,max}_{E \,:\, \rho(E) \le r_{\max}} R(E),$$

which means $E^\star$ is the explanation in the search tree that maximizes the reward while satisfying the condition on the maximum proportion of tokens. The other interpretability conditions are guaranteed by the rules of the search tree expansion.
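For illustration, the expansion rules and leaf reward might be implemented as follows, building on the scoring sketch above; the reward follows the reconstruction given here, and the full MCTS loop (selection, simulation, and backpropagation as in silver2016mastering) is omitted for brevity:

```python
# A minimal sketch of the search-tree expansion rules and leaf reward,
# with each phrase represented as a (start, end) token range.
def expand(phrases, l_min=3, k_max=5):
    """Return all child explanations of the given explanation."""
    children = []
    for i, (start, end) in enumerate(phrases):
        before, after = phrases[:i], phrases[i + 1:]
        # Rule (i): trim the first or last token if >= l_min tokens remain.
        if end - start > l_min:
            children.append(before + [(start + 1, end)] + after)
            children.append(before + [(start, end - 1)] + after)
        # Rule (ii): remove a middle token, splitting the phrase in two,
        # if both halves keep >= l_min tokens and the phrase budget allows.
        if len(phrases) < k_max:
            for cut in range(start + l_min, end - l_min):
                children.append(before + [(start, cut), (cut + 1, end)] + after)
    return children

def reward(phrase_texts, stress_clf, context_clf, lam=1.0, sign=+1):
    # sign = -1 selects low-entropy (context-dependent) explanations;
    # sign = +1 selects high-entropy (context-independent) explanations.
    s = avg_stress(phrase_texts, stress_clf)
    h = avg_context_entropy(phrase_texts, context_clf)
    return s + sign * lam * h / H_MAX
```

Selecting $E^\star$ then amounts to tracking the highest-reward node visited during the search whose token proportion does not exceed $r_{\max}$.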

Model Precision Recall F-1 Accuracy
Bernoulli NB 0.69 0.84 0.75 0.72
Multinomial NB 0.68 0.87 0.76 0.72
SVM 0.71 0.77 0.74 0.72
MLP 0.71 0.74 0.73 0.71
MentalRoBERTaFT 0.78 0.90 0.84 0.82
Table 1: Performance of stress classifiers on the test set of Dreaddit. While non-neural classifiers could not surpass 72% accuracy, the MentalRoBERTaFT model fine-tuned on the Dreaddit train set yielded 82% accuracy. Here, the superscript FT denotes that the model was fine-tuned.

5 Experiments

All of our experiments were run on the Dreaddit dataset. We report results of our stress and context classification models and share findings of our MCTS explanation algorithm.

Model Precision Recall F-1 Accuracy
Bernoulli NB 0.81 0.75 0.76 0.80
Multinomial NB 0.77 0.75 0.75 0.79
SVM 0.76 0.72 0.74 0.76
MLP 0.78 0.78 0.78 0.79
MentalRoBERTaFT 0.85 0.86 0.86 0.87
Table 2: Performance of context classifiers. We restricted our focus to three subreddits: “anxiety,” “assistance,” and “relationships.” The fine-tuned MentalRoBERTaFT model yielded the best results with 87% accuracy.

5.1 Classification

As Table 1 illustrates, basic stress classification models, such as Naive Bayes classifiers, SVMs, and MLPs, performed reasonably on the test set of Dreaddit. The MentalRoBERTaFT model for stress, fine-tuned on the training set of Dreaddit for five epochs, however, outperformed all the other models, achieving an accuracy of 82% and demonstrating the efficacy of the pre-training on mental health data (in contrast, the RoBERTa model trained from scratch achieved an accuracy of almost 80%). Our results on the stress classification task are consistent with those of turcan2019dreaddit. Table 2 reports the performance of various models on the multi-class subreddit category classification. Here, we limited our attention to three categories, namely “anxiety,” “assistance,” and “relationships.” The Reddit posts in these categories embody distinct everyday, financial, and interpersonal stress factors, but at the same time, they seem to share common (context-independent) stress elements. In this context classification task, all models exceeded 75% accuracy, but MentalRoBERTaFT yielded the highest accuracy.

5.2 Explainability

We demonstrate our MCTS approach to explainability using the same three categories as above. We use stress and context classification models implemented with Multinomial NB, MLP, and MentalRoBERTaFT. For each of these models, we apply MCTS to identify explanations for each of the 166 test texts that are labeled as stressed and belong to one of our three categories. We use fixed values of the interpretability bounds $k_{\max}$, $\ell_{\min}$, $r_{\min}$, and $r_{\max}$ for all experiments (these choices are arbitrary and could easily be changed), and we use a fixed reward weight $\lambda$ except where otherwise noted.

Model  Metric  Original         Dependent        Independent
MNB    S       0.850 ± 0.317    0.706 ± 0.190    0.617 ± 0.124
MNB    E       0.047 ± 0.140    0.274 ± 0.181    0.942 ± 0.086
MLP    S       0.725 ± 0.383    0.512 ± 0.194    0.546 ± 0.145
MLP    E       0.214 ± 0.274    0.766 ± 0.163    1.067 ± 0.022
MRB    S       0.878 ± 0.324    0.830 ± 0.220    0.430 ± 0.273
MRB    E       0.042 ± 0.124    0.019 ± 0.018    0.640 ± 0.171
Table 3: Stress (S) and context entropy (E) for the original text, the context-dependent explanation, and the context-independent explanation, for the Multinomial Naive Bayes (MNB), Multilayer Perceptron (MLP), and MentalRoBERTa (MRB) models. Results were generated through MCTS, with stress and context entropy averaged over the test set (reported as mean ± standard deviation). The Wilcoxon signed-rank test (wilcoxon) between dependent and independent entropy yields a very small p-value for all models, indicating a highly significant difference, as desired.
Figure 3: Histogram of stress scores for the original text and for the context-dependent and context-independent explanations extracted by our MCTS algorithm using an MLP model. Although stress is often higher in the original text than in the extracted explanations, the explanations still maintain a meaningful amount of stress.
Figure 4: Histogram of context entropy for the original text and for the context-dependent and context-independent explanations extracted by our MCTS algorithm using an MLP model. The context-independent explanations clearly have much higher context entropy than the context-dependent explanations as desired.

We quantitatively evaluate the explanations produced by MCTS. In Table 3, we show the average stress and context entropy scores of the original text and of the context-dependent and context-independent explanations. Our method is able to maintain a reasonably high and consistent level of stress across the explanations while modulating the context entropy appropriately for the two different types of explanations. This indicates that our approach can identify both context-dependent and context-independent sources of stress.
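For illustration, the significance test in Table 3 can be reproduced with SciPy’s Wilcoxon signed-rank test on the paired per-example entropies; the arrays below are hypothetical placeholders for the values collected from the 166 test posts:

```python
# A minimal sketch of the paired significance test reported in Table 3.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-example context entropies for the two explanation
# types of the same test posts; in practice these come from MCTS.
dep_entropy = np.array([0.02, 0.15, 0.30, 0.01, 0.22])
ind_entropy = np.array([0.95, 1.02, 0.88, 1.05, 0.91])

stat, p_value = wilcoxon(dep_entropy, ind_entropy)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p_value:.3f}")
```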

Figures 3 and 4 further illustrate this result for the MLP model by showing the full distribution of stress and context entropy scores across the test examples. Figures 5, 6, and 7 in the Appendix show the stress and context entropy distributions for all three models and for different values of $\lambda$. Lower $\lambda$ increases stress but decreases the difference in entropy between the two types of explanations, while higher $\lambda$ decreases stress but increases the difference in entropy. This shows the flexibility of MCTS to select different types of explanations without retraining the classifiers.

Furthermore, we qualitatively demonstrate our approach. Tables 4, 5, and 6 in the Appendix show examples from each of the three subreddits that illustrate how our method captures different underlying sources of stress in an interpretable manner.

6 Conclusion

We propose a novel interpretability method for explaining stress in both context-dependent and context-independent manners using Monte Carlo tree search. We demonstrate the effectiveness of our method by extracting both types of explanations from Reddit posts that exhibit stress. Although this work focuses on stress, our MCTS-based explanation framework is highly flexible and can be applied to a wide variety of NLP models and prediction problems simply by specifying the appropriate reward function and interpretability conditions for the search tree. As in our work, the reward function can include multiple objectives with different weights, making it possible to extract a variety of explanations for added interpretability. Future work should further explore the range of explanations enabled by our framework. We hope that our explanation framework can improve understanding of the root causes of mental health conditions as well as provide interpretability for a variety of NLP tasks.

Acknowledgements

We would like to thank Margalit Glasgow, Masha Karelina, Megha Patel, Biscuit Russell, and Tayfun M. H. Mezarci for helpful comments and discussions. Swanson and Hsu gratefully acknowledge the support of the Knight-Hennessy Scholarship, Hsu gratefully acknowledges the support of the NSF GRFP, and Suzgun gratefully acknowledges the support of a Johann, Thales, Williams & Co. Graduate Fellowship. The authors also thank Dan Jurafsky for his support. The experiments presented in this paper were run on the Stanford NLP Cluster. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of Stanford University. All errors remain our own.

References

Appendix A

A.1 Additional Stress and Context Entropy Results

Figures 5, 6, and 7 show the stress and context entropy distributions of the original text and the context-dependent and context-independent explanations across the 166 stressed test examples in the “anxiety,” “assistance,” and “relationships” subreddits for the Multinomial Naive Bayes, Multilayer Perceptron, and MentalRoBERTaFT models, respectively. For the Multinomial Naive Bayes and Multilayer Perceptron models, we experimented with three values of $\lambda$, with higher $\lambda$ weighting context entropy more than stress in the MCTS reward function. For the MentalRoBERTaFT model, we used a single value of $\lambda$.

Figure 5: Histograms of stress and context entropy scores from the Multinomial Naive Bayes model for the original text and for the context-dependent and context-independent explanations extracted by our MCTS algorithm. The left column shows stress scores while the right column shows context entropy scores. From top to bottom, the rows show three increasing values of $\lambda$, the weight that controls the balance between stress and context entropy in the MCTS reward function. Higher $\lambda$ places less emphasis on stress and more emphasis on context entropy, resulting in a greater difference between context-dependent and context-independent entropy scores at the cost of lower stress.
Figure 6: Histograms of stress and context entropy scores from the Multilayer Perceptron model for the original text and for the context-dependent and context-independent explanations extracted by our MCTS algorithm. The left column shows stress scores while the right column shows context entropy scores. From top to bottom, the rows show three increasing values of $\lambda$, the weight that controls the balance between stress and context entropy in the MCTS reward function. Higher $\lambda$ places less emphasis on stress and more emphasis on context entropy, resulting in a greater difference between context-dependent and context-independent entropy scores at the cost of lower stress.
Figure 7: Histograms of stress and context entropy scores from the MentalRoBERTaFT model for the original text and for the context-dependent and context-independent explanations extracted by our MCTS algorithm. The left plot shows stress scores while the right plot shows context entropy scores, both for a single value of $\lambda$. Interestingly, the distributions are somewhat different from those of the Multinomial Naive Bayes (Figure 5) and Multilayer Perceptron (Figure 6) models. MentalRoBERTaFT is capable of selecting different context-dependent and context-independent explanations as measured by entropy, but the model generally assigns more stress to context-dependent explanations than context-independent explanations, perhaps hinting at a meaningful difference between the types of explanations in terms of stress content.

A.2 Data Distribution

In Figure 8 and Figure 9, we show the data distribution of our stress and context (subreddit) labels.

Figure 8: Training and test set stress label distribution.
Figure 9: Training and test set subreddit label distribution.

A.3 MentalRoBERTa

MentalRoBERTa is a RoBERTa-based language model (liu2019roberta) that was pre-trained on a corpus of 13.7M sentences from Reddit that were posted on mental health-related subreddits, including, but not limited to, “r/Anxiety” and “r/Depression”. When training classifiers for the stress and context classification tasks, we used the pre-trained MentalRoBERTa model from Hugging Face’s model repository, available at https://huggingface.co/mental, and fine-tuned the model on the Dreaddit dataset, using either the stress or context labels, for five epochs with a learning rate of 1e-4.
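For illustration, this fine-tuning setup might be expressed with the Hugging Face transformers library as follows; the checkpoint name "mental/mental-roberta-base" and the batch size are assumptions, and train_dataset stands in for a tokenized Dreaddit split:

```python
# A minimal sketch of fine-tuning MentalRoBERTa on Dreaddit with a
# classification head; epochs and learning rate follow the text above.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("mental/mental-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "mental/mental-roberta-base",
    num_labels=2,  # 2 for binary stress; 3 for the context (subreddit) task
)

args = TrainingArguments(
    output_dir="mentalroberta-dreaddit",
    num_train_epochs=5,              # five epochs, as stated above
    learning_rate=1e-4,              # learning rate, as stated above
    per_device_train_batch_size=16,  # batch size not reported; assumed
)

# train_dataset: Dreaddit posts tokenized with the tokenizer above and
# paired with stress (or context) labels.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```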

A.4 Qualitative Examples

In Tables 4, 5, and 6, we show qualitative examples of our MCTS method for explainability, with one post from each of the three subreddits (“anxiety,” “assistance,” and “relationships”) and explanations from both the MLP and MentalRoBERTaFT models.

Text (subreddit = “r/Anxiety”): Lately I’ve just been having that terrible feeling in the pit of my stomach and also a feeling of nausea like I constantly need to throw up. I’m sleeping normal but still feeling so tired and drained and can’t really focus at work and because of that I feel like my work performance is slipping up. I am constantly afraid that I’m going to lose my job and that my manager hates me. This has been happening so much more frequently. About a week ago my doc gave me prozac (once a day) and xanax (only as needed) prescriptions and I feel like it’s helped with the bigger attacks and some dark thoughts but now its almost like just a little constant anxiety all the time and it sucks.

Model            Category     Stress  Entropy
-                Original     1.000   0.000
MLP              Dependent    0.933   0.300
MLP              Independent  0.489   1.045
MentalRoBERTaFT  Dependent    0.998   0.006
MentalRoBERTaFT  Independent  0.670   0.627
Table 4: Qualitative examples from our MCTS explainability method for a post in the “r/Anxiety” subreddit. We show the full original text along with the stress and context entropy scores of the context-dependent and context-independent explanations selected by MCTS using both the MLP and MentalRoBERTaFT classifiers; each explanation consists of highlighted phrases within this text.
Text (subreddit = “r/Assistance”): I can’t ask my family because they don’t have the kind of money to help me. If anyone can help me even just a little bit, I would be ridiculously grateful. I just can’t even express what this has done to us. Yes, the bills are paid, but now we’re so anxious that we barely leave the house due to panic attacks. I’ve done things like ubereats but $15 here and there isn’t even making a dent in what I need.

Model            Category     Stress  Entropy
-                Original     0.995   0.616
MLP              Dependent    0.723   0.640
MLP              Independent  0.584   1.064
MentalRoBERTaFT  Dependent    0.999   0.005
MentalRoBERTaFT  Independent  0.478   0.518
Table 5: Qualitative examples from our MCTS explainability method for a post in the “r/Assistance” subreddit. We show the full original text along with the stress and context entropy scores of the context-dependent and context-independent explanations selected by MCTS using both the MLP and MentalRoBERTaFT classifiers; each explanation consists of highlighted phrases within this text.
Text (subreddit = “r/Relationships”): We seem to be talking and accidentally being together more often in school, making what I think are feelings towards her only stronger. I can’t bring myself to bring this up with her because I’m scared that we will have a repeat of February again. I love her so much but I feel that if I have these feelings about other girls am I really devoted to her? This is in no way her fault, she has done nothing to deserve my questioning of my decision, this is my problem and mine alone. I am reluctant to bring this up with her because I’m worried that she might break up with me because I do truly still love her I’m just wondering if this other girl is a passing thought more focused than earlier and something I can overcome.

Model            Category     Stress  Entropy
-                Original     0.999   0.000
MLP              Dependent    0.734   0.437
MLP              Independent  0.510   1.043
MentalRoBERTaFT  Dependent    0.998   0.030
MentalRoBERTaFT  Independent  0.712   0.444
Table 6: Qualitative examples from our MCTS explainability method for a post in the “r/Relationships” subreddit. We show the full original text along with the stress and context entropy scores of the context-dependent and context-independent explanations selected by MCTS using both the MLP and MentalRoBERTaFT classifiers; each explanation consists of highlighted phrases within this text.