Text-based Depression Detection: What Triggers An Alert

04/08/2019 ∙ by Heinrich Dinkel, et al. ∙ Shanghai Jiao Tong University

Recent advances in automatic depression detection mostly derive from modality fusion and deep learning methods. However, multi-modal approaches introduce significant difficulty in the data-collection phase, while the opaqueness of deep learning methods lowers their credibility. This work proposes a text-based multi-task BLSTM model with pretrained word embeddings. Our method outputs a depression-presence result as well as a predicted severity score, culminating in a state-of-the-art F1 score of 0.87 and outperforming previous multi-modal studies. We also achieve the lowest RMSE among currently available text-based approaches. Further, by utilizing a per-time-step attention mechanism, we analyse which sentences/words contribute most to predicting the depressed state. Surprisingly, ‘unmeaningful’ words and paralinguistic information such as ‘um’ and ‘uh’ are the indicators our model relies on when making a depression prediction. To our knowledge, this is the first time it has been shown that fillers in a conversation trigger a depression alert for a deep learning model.


1 Introduction

Depression is an illness that affects, knowingly or unknowingly, millions of people worldwide. Efficient and effective automatic depression diagnosis could be of substantial benefit. However, this is an extremely difficult task, since a variety of complicated symptoms are reported and the subjective clinical interview remains the gold standard. Recent progress mostly derives from multi-modal fusion and deep learning methods. Similar to a clinical interview, in which a psychiatrist determines the patient's mental state via their language and behaviour, automatic detection can draw on different signals, namely video, audio and text. Of the three modalities, audio features are mostly explored individually, while text features by themselves are rarely investigated. Lately, the multi-modal trend has prompted more modality-fusion studies [1][2]. While one could argue that more information will likely lead to a better model, using all possible aspects of multi-modal depression detection has practical downsides. For instance, obtaining video-recording consent might be a great obstacle in real-life situations, especially with mentally ill patients. Hence this paper is guided by the question of whether a single modality can achieve performance similar to multi-modal models.

Some modality-fusion studies have suggested superior performance of text features in depression detection, indicating the importance of semantic information [1]. Among the few attempts at text-based models, word embeddings are usually trained from scratch, which might be suboptimal due to the lack of large quantities of data [3]. Recently, general-purpose text embeddings such as ELMo [4] and BERT [5], which are pretrained on large datasets, have become popular due to their performance on many natural language processing benchmarks. Therefore, the current work investigates the use of pretrained contextual sentence embeddings, namely ELMo and BERT, for depression detection.

Previous automatic assessment often involves classification or regression models, depending on whether the main task is depression-presence detection or severity prediction. Though various deep learning models have been explored [3], assessment precision still leaves considerable room for improvement. For severity-prediction models, the reported mean absolute errors and root mean squared errors are particularly high. This again emphasizes the complexity of depression symptoms and the difficulty of precise prediction. In health-related tasks, any false-positive or false-negative judgement could lead to severe outcomes, yet due to the opaqueness of deep learning models, we often have no clue what went wrong when a false prediction is made. Thus, understanding the model is as critical as enhancing its performance in such tasks.

Therefore, this paper has two main objectives: firstly, to examine whether text features can achieve performance similar to multi-modal methods; secondly, to understand why the model makes certain predictions. Accordingly, our main contributions are: 1) a multi-task model design combining detection of the presence of depression with prediction of its severity; 2) substituting data-based word embeddings with pretrained text embeddings; 3) interpretations, via an attention mechanism, of which words or sentences trigger the model to believe a person is suffering from depression.

The rest of the paper is organized as follows. Section 2 provides a task overview with reference to relevant work and introduces our model architecture. Section 3 illustrates the experiments with different text embeddings and context settings. Analysis based on attention pooling is provided as an interpretation of our model's decisions in Section 4. Conclusions can be found in Section 5.

2 Task Overview

2.1 Dataset

Data was acquired from the publicly available Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ) [6, 7] database, which encompasses 107 training and 35 development speakers. An evaluation subset was also published, yet its labels are not available, therefore all experiments were validated on the development subset. This database was previously used for the AVEC2017 challenge [8]. 30 speakers within the training set (28%) and 12 within the development set (34%) are classified as depressed (PHQ-8 binary value set to 1). Two labels are provided for each participant: a binary depressed/healthy diagnosis and the patient's eight-item Patient Health Questionnaire (PHQ-8) score [9]. Consequently, automatic depression detection research based on this dataset can predict either the classification result or a severity score, associated with the mental-state label and the PHQ-8 score respectively.

Figure 1: Training-data PHQ-8 distribution with respect to each class. The mean of each class is shown as a point, together with its standard deviation.

Analyzing the data in Figure 1 helps to understand the challenges involved in modelling this task. The AVEC2017 challenge paper [8] states that scores of 10 or larger are considered depressed; however, as presented in Figure 1, no clear causal relationship between the PHQ-8 score and the patient state can be established: though there is a tendency for depressed patients to have higher PHQ-8 scores, a PHQ-8 score above 10 is no guarantee of a depressed participant. Especially in the boundary region between both classes, at a score range of 9 to 11, some participants cannot be assigned to a class according to their PHQ-8 score alone. This is because the PHQ-8 score is a reference and the clinician has the final say on the diagnosis. PHQ-8 scores might be helpful in making a prediction, but they still need to be combined with the clinician's decision. Moreover, if a patient is not depressed, the PHQ-8 score does not indicate depression severity.

To sum up, two observations can be made: 1) the dataset itself is relatively small; 2) the depression state and the PHQ-8 score are correlated, but one characteristic does not necessarily predict the other.
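The second observation can be checked directly on the label files. The following is a minimal sketch, not from the paper; the file name and column names ("Participant_ID", "PHQ8_Binary", "PHQ8_Score") are our assumptions based on the AVEC2017 label format.

```python
# Sketch: cross-check the PHQ-8 >= 10 threshold against the clinical labels.
import pandas as pd

labels = pd.read_csv("train_split_Depression_AVEC2017.csv")

# Class implied by thresholding the severity score alone.
implied = (labels["PHQ8_Score"] >= 10).astype(int)

# Participants whose thresholded score disagrees with the binary label,
# i.e. the boundary cases discussed above.
mismatch = labels[implied != labels["PHQ8_Binary"]]
print(f"{len(mismatch)} of {len(labels)} participants break the threshold rule")
print(mismatch[["Participant_ID", "PHQ8_Score", "PHQ8_Binary"]])
```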

2.2 Feature Selection and Extraction

The DAIC-WOZ dataset encompasses three major modalities: video, audio and transcribed text. Prior work on this dataset with better performance generally utilizes modality-fusion methods [1]. However, [1] suggests that the key contribution is the addition of semantic information, which achieves a mean F1 score of 0.81 on its own. Hence in this work we only incorporate the text data, for the sake of lean real-world application.

On the subject of text-based depression analysis, three different modelling settings are widely used ([3]); a sketch of how samples are formed under each setting follows the list:

Context-free modelling uses each response of the participant as an independent sample, without information about the question, nor the time it was asked. This setting has the advantage of being easy to deploy in real world applications since predictions from single sentences can be made.

Context-dependent modelling requires the use of question-answer pairs, where each sample consists of a question asked and its corresponding answer.

Sequence modelling only models the patient's responses in succession, without knowledge of the particular question asked.
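As an illustration (our sketch, not the authors' code), the three settings could be realised over a DAIC-WOZ style transcript as follows; the tab-separated layout with "speaker" and "value" columns and the speaker names "Ellie"/"Participant" are assumptions based on the corpus format.

```python
# Building samples under the three modelling settings from one transcript.
import pandas as pd

transcript = pd.read_csv("300_TRANSCRIPT.csv", sep="\t")

# Context-free: every participant response is one independent sample.
context_free = [row.value for row in transcript.itertuples()
                if row.speaker == "Participant"]

# Context-dependent: (question, answer) pairs of interviewer and participant.
context_dependent = []
for prev, cur in zip(transcript.itertuples(), transcript.iloc[1:].itertuples()):
    if prev.speaker == "Ellie" and cur.speaker == "Participant":
        context_dependent.append((prev.value, cur.value))

# Sequence modelling: all participant responses, in order, form ONE sample.
sequence = [row.value for row in transcript.itertuples()
            if row.speaker == "Participant"]
```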

In previous text-based work, word embeddings are usually trained from scratch. However, since depression data is hard to come by, using a model pretrained on larger datasets, unrelated to depression detection, could alleviate this problem. In this work, we show that the use of pretrained word embeddings can lead to substantial performance gains. Standard Word2Vec models are usually trained with a shallow, two-layer neural network architecture. While Word2Vec aims to capture the context of a specific sentence, it only considers the surrounding words as its training input, and therefore does not capture the intrinsic meaning of a sentence. Recently, alternatives to Word2Vec have become popular, specifically context-dependent sentence embeddings such as ELMo [4] and, more recently, BERT [5]. ELMo generates embeddings for a word based on the context it appears in, thus producing slight variations for each word occurrence. Consequently, ELMo needs to be fed an entire sentence before generating an embedding. BERT [5] similarly models sentences as vectors and is currently considered to perform at a state-of-the-art level on many natural language processing (NLP) tasks.

In our current work, the raw text was first preprocessed: trailing blanks were removed and all text was lowercased. Meta information such as <laughter> or <sigh> is possibly helpful to the model and was therefore not removed. Three different text embeddings were investigated: Word2Vec, ELMo and BERT:

Word2Vec One-hundred-dimensional Word2Vec [10] features were extracted using the gensim library [11] with hyperparameters identical to [3].

ELMo ELMo uses a three-layer bidirectional structure producing 1024-dimensional representations in each layer. We used the average of all three layer embeddings as our sentence representation.

BERT An embedding can be extracted from each of BERT's twelve layers. Here, the penultimate layer was used to extract a 768-dimensional sentence embedding. Instead of finetuning the BERT or ELMo models, we directly extracted embeddings from the publicly available models.
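A minimal sketch of the BERT case (our reading, not the authors' exact pipeline): the penultimate hidden layer is taken and, as an assumption on our part, mean-pooled over tokens to obtain one vector per sentence.

```python
# Extracting a 768-d sentence embedding from BERT's penultimate layer.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states holds 13 tensors (input embeddings + 12 layers);
    # index -2 is the penultimate transformer layer.
    penultimate = outputs.hidden_states[-2]      # (1, tokens, 768)
    return penultimate.mean(dim=1).squeeze(0)    # (768,)

print(sentence_embedding("i'm okay").shape)      # torch.Size([768])
```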

2.3 Model description

Model | Feature | Setting | Pooling | Prec | Rec | F1 | MAE | RMSE
C-CNN [2] | Word2Vec | Sequence | Time | 0.71 | 0.38 | 0.50 | 6.14 | -
Gauss-Staircase [1] | GloVe (Fusion) | Context-Dep | - | - | - | 0.84 | 3.34 | 4.46
Gauss-Staircase [1] | GloVe | Context-Dep | - | - | - | 0.76 | - | -
BLSTM [3] | Word2Vec | Context-Free | Time | 0.71 | 0.50 | 0.59 | 7.02 | 9.43
BLSTM [3] | Word2Vec | Sequence | Time | 0.57 | 0.80 | 0.67 | 5.18 | 6.38
BLSTM [3] | Audio+Text | Sequence | Time | 0.71 | 0.83 | 0.77 | 5.10 | 6.37
C-CNN [2] | Audio+Text+Video | Sequence | Time | 0.71 | 0.83 | 0.77 | 3.67 | -
Gauss-Staircase [1] | Audio+Text+Video | Context-Dep | - | - | - | 0.81 | 4.18 | 5.31
BLSTM (ours) | Word2Vec | Sequence | Att | 0.32 | 0.50 | 0.39 | 5.96 | 6.77
BLSTM (ours) | ELMo | Sequence | Att | 0.93 | 0.83 | 0.87 | 3.62 | 5.21
BLSTM (ours) | BERT | Sequence | Att | 0.89 | 0.85 | 0.87 | 4.18 | 5.51
BLSTM (ours) | Fusion | Sequence | Att | 0.86 | 0.81 | 0.83 | 3.68 | 5.16

Table 1: Evaluation results of the proposed text-based attention models (bottom) compared to previous text-based (top) and multi-modal (middle) approaches. Prec/Rec/F1 refer to classification; MAE/RMSE to PHQ-8 regression.

As previously stated, two labels are provided for each participant. Prior work on the DAIC-WOZ dataset usually splits the tasks of depression-presence detection (binary classification) [12] and severity-score prediction (regression on the PHQ-8 score) [13]. A few studies investigate both tasks, e.g. [2], but still treat the two separately, building one classification and one severity-prediction model. However, as seen in Section 2.1, the two characteristics are correlated, yet one cannot necessarily predict the other; both information sources are important in order to ascertain whether the patient is ill. We therefore propose a multi-task setting that combines the classification and regression tasks. Two outputs were constructed: one directly predicts the binary outcome of a participant being depressed, the other outputs the estimated PHQ-8 score.

$\mathcal{L}_{\text{BCE}} = -\big(y \log \sigma(\hat{y}) + (1 - y)\log(1 - \sigma(\hat{y}))\big)$  (1)
$\mathcal{L}_{\text{Huber}} = \begin{cases} \frac{1}{2}(r - \hat{r})^2 & \text{if } |r - \hat{r}| \le 1 \\ |r - \hat{r}| - \frac{1}{2} & \text{otherwise} \end{cases}$  (2)
$\mathcal{L} = \mathcal{L}_{\text{BCE}} + \mathcal{L}_{\text{Huber}}$  (3)

For the multi-task loss (see Equation 3), we opt for a combination of binary cross entropy (for classification, Equation 1) and Huber loss (for regression, Equation 2). Here, $\hat{r}$ represents the regression output, $\hat{y}$ the binary model output, $\sigma$ the sigmoid function, $r$ the PHQ-8 score and $y$ the binary ground truth. The Huber loss can be seen as a compromise between mean absolute error (MAE, L1) and mean squared error (MSE, L2), resulting in behaviour that is robust to outliers. Both losses are summed and backpropagated during training.

$\alpha_t = \dfrac{\exp(v^{\top} h_t)}{\sum_{\tau=1}^{T} \exp(v^{\top} h_\tau)}, \qquad \tilde{h} = \sum_{t=1}^{T} \alpha_t h_t$  (4)

Previous text-based work in [3] solely relied on the last time step ($h_T$) as the response/query representation, further referred to as time pooling. However, [14] has shown that time pooling is sub-optimal, since the network's belief changes over time. We therefore exclusively use attention as our time-representation function. This simple per-time-step attention mechanism is defined in Equation 4, where $h_t$ is the output of the concatenated BLSTM model at time $t$ for the input sequence $\mathbf{X}$, $v$ is the learned, time-independent attention weight vector used for scoring, $\alpha_t$ is the resulting attention weight at time $t$, and $\tilde{h}$ is the weighted-average representation.
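A per-time-step attention pooling layer in the spirit of Equation 4 can be sketched as follows (our illustration, not the released code): a learned vector $v$ scores each BLSTM output, the scores are softmax-normalised into $\alpha_t$, and the weighted average is returned.

```python
# Attention pooling over BLSTM outputs (Eq. 4).
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.v = nn.Linear(hidden_dim, 1, bias=False)  # scoring vector v

    def forward(self, h: torch.Tensor):
        # h: (batch, time, hidden) BLSTM outputs
        scores = self.v(h)                        # (batch, time, 1)
        alpha = torch.softmax(scores, dim=1)      # attention weights alpha_t
        pooled = (alpha * h).sum(dim=1)           # weighted average h_tilde
        return pooled, alpha.squeeze(-1)
```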

In addition to the novel multi-task approach and the attention pooling method stated above, our proposed architecture is a commonly used bidirectional long short-term memory (BLSTM) recurrent neural network structure (see Table 2). After each BLSTM layer we apply recurrent dropout with probability 10%. In sparse-data scenarios such as depression detection, gated recurrent unit (GRU) networks are generally seen as a well-performing alternative to LSTM networks. We internally ran GRU networks but did not observe a performance enhancement, and therefore exclusively used LSTMs. The source code is publicly available at www.github.com/richermans/text_based_depression.

Layer | Input | Output
BLSTM | |
BLSTM | |
BLSTM | |
Attention | | 2

Table 2: Proposed model architecture. The output of the last layer consists of two values, one for regression (PHQ-8) and one for classification.
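Putting the pieces together, a sketch of the full model under our assumptions (the hidden size of 128 is a placeholder, since Table 2 above does not fix it here; AttentionPool is the sketch from earlier in this section):

```python
# BLSTM stack with attention pooling and a two-unit multi-task head.
import torch.nn as nn

class DepressionBLSTM(nn.Module):
    def __init__(self, embed_dim: int, hidden_dim: int = 128):
        super().__init__()
        # dropout=0.1 applies dropout between stacked layers, approximating
        # the 10% recurrent dropout described above.
        self.blstm = nn.LSTM(embed_dim, hidden_dim, num_layers=3,
                             batch_first=True, bidirectional=True,
                             dropout=0.1)
        self.pool = AttentionPool(2 * hidden_dim)
        self.head = nn.Linear(2 * hidden_dim, 2)  # [PHQ-8 score, logit]

    def forward(self, x):
        h, _ = self.blstm(x)            # (batch, time, 2 * hidden)
        pooled, alpha = self.pool(h)    # attention-weighted representation
        out = self.head(pooled)
        return out[:, 0], out[:, 1], alpha  # r_hat, y_hat, attention weights
```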

3 Experiments

Data preprocessing

The input data was preprocessed before training: the mean and variance of the training subset were calculated and subsequently applied to the development set. Models were trained with Adam optimization for at most 200 epochs. The initial learning rate was reduced by a fixed factor whenever the cross-validation loss did not improve for a given number of epochs; once the learning rate fell below a threshold, training was terminated and the model producing the lowest error on the development set was chosen for evaluation. Regarding data handling, padding was avoided by choosing a batch size of 1. Moreover, random oversampling of the minority class (depressed) was utilized to circumvent data-sparsity problems. Furthermore, recurrent weights were initialized by the Xavier uniform method, where samples are drawn from $\mathcal{U}(-a, a)$ with $a = \sqrt{6/(n_{\text{in}} + n_{\text{out}})}$, and biases were set to zero.
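A sketch of this training setup follows. The exact learning rate, reduction factor, patience and termination threshold are not given above, so the values here (1e-3, 0.1, 10, 1e-7) are placeholders, and `train_one_epoch`/`evaluate` are hypothetical helpers.

```python
# Adam with reduce-on-plateau scheduling and an lr-based stopping criterion.
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = DepressionBLSTM(embed_dim=768)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)      # placeholder lr
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.1, patience=10)

for epoch in range(200):
    train_loss = train_one_epoch(model, optimizer)  # batch size 1, no padding
    dev_loss = evaluate(model)
    scheduler.step(dev_loss)                        # reduce lr on plateau
    if optimizer.param_groups[0]["lr"] < 1e-7:      # placeholder threshold
        break
```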

Evaluation metric

For classification, macro precision and recall scores are used to calculate the macro-averaged F1 score. For regression, the mean absolute error (MAE) and root mean squared error (RMSE) between the model prediction $\hat{r}$ and the ground-truth PHQ-8 score $r$ are used.
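These metrics can be computed with scikit-learn; a minimal sketch:

```python
# Macro precision/recall/F1 for classification, MAE/RMSE for regression.
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             mean_absolute_error, mean_squared_error)

def report(y_true, y_pred, r_true, r_pred):
    return {
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "mae": mean_absolute_error(r_true, r_pred),
        "rmse": np.sqrt(mean_squared_error(r_true, r_pred)),
    }
```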

Results

Since the available amount of data can be considered insufficient, results are often not directly reproducible. To somewhat circumvent this problem, [3] proposed grid-searching every possible hyperparameter in order to ascertain a proper configuration. However, in our experience reproducibility cannot be guaranteed even when fixing random seeds and hyperparameters. We therefore report the best-performing model, following the convention of many previous studies. Our proposed setting can thus be seen as the optimal configuration found in our experiments.

In this work we compared our sequence-modelling approach to previous context-free and context-dependent approaches. The results can be seen in Table 1, where fusion scoring refers to mean score fusion of the ELMo and BERT models. Our sequence model with pretrained text embeddings, either ELMo or BERT, achieves a mean F1 score of 0.87, outperforming other text-based approaches and even multi-modal approaches. As the results indicate, Word2Vec largely underperforms compared to the ELMo and BERT approaches; a possible reason is the limited dataset size, such that attention on Word2Vec features could not pick up meaningful text information.

4 Analysis

The attention mechanism was deliberately chosen because the attention weights over time can serve as interpretations of which sentences/words trigger the model to predict whether a patient is depressed. The attention weights ($\alpha_t$) over time for each speaker are visualized in Figure 2.

It can be seen that attention in the context of Word2Vec training behaves similarly to mean pooling. In contrast, ELMo and BERT features exhibit robust and strong performance. For those two features, we observe that for many depressed patients attention spikes at the first or second response (see the first two rows in Figure 2). Within the first responses the participant usually states his/her heritage or current residence. This is a potential indicator that the model learned to correlate places with depression, e.g., living in a metropolitan region might exert an influence on residents' mood and mental state. We further investigated whether the training dataset reveals a patient's mental status given his/her heritage, but no such clue could be found.

Figure 2: Word2Vec, BERT and ELMo embeddings' attention values for each development speaker. The x-axis represents time.
# | BERT | ELMo
1 | um | um
2 | i’m okay | <laughter>
3 | yeah | yes
4 | okay | mm
5 | except meeting that one woman | mhm
6 | or leaving my comfort | i’m okay
7 | doing a little bit of socializing | so
8 | feel a little but | um
9 | putting away more money before i retire | uh
10 | the hardest decision | hmm

Table 3: Top 10 most commonly seen sentences indicating depression, according to a peak in attention.

We deliberately chose the attention mechanism in order to visualize the model's belief by searching for the sentences most likely to trigger a depression prediction. These sentences were extracted by finding all peaks in an attention-weight sequence ($\alpha_1, \dots, \alpha_T$). Specifically, in order to remove insignificant sentences, only peaks reaching at least 80% of the maximum attention weight were considered. An overview of the important sentences can be seen in Table 3. The results show that both ELMo and BERT features focus on short, non-descript words such as ‘um’ as well as affirmative answers such as ‘yeah’ and ‘yes’. Interestingly, attention seldom focuses on sentences with meaningful content, such as previous traumatic experiences or sentences with an inherently negative connotation. Moreover, our proposed models are decisive in nature: for most depressed patients, the models stress single, specific sentences heavily and neglect the majority of patient responses. The more remarkable point is that the model is trained purely on text data and thus never actually heard those words.
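The peak search described above can be sketched with scipy (our illustration of the 80%-of-maximum rule, not the released code):

```python
# Keep only responses whose attention weight peaks at >= 80% of the maximum.
import numpy as np
from scipy.signal import find_peaks

def salient_sentences(alpha: np.ndarray, responses: list[str]) -> list[str]:
    peaks, _ = find_peaks(alpha, height=0.8 * alpha.max())
    return [responses[t] for t in peaks]
```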

5 Conclusion

This work proposed the use of multi-task modelling in conjunction with pretrained sentence embeddings, namely ELMo and BERT, for text-based depression detection. Analysis of the ELMo and BERT models revealed a correlation between short, interpersonal sounds such as ‘um’ and the model's predictions, possibly indicating that in order to detect depression, one should focus on behavioural aspects of text and not necessarily on content. Furthermore, the proposed models often emphasize and decide the mental state according to the patient's first few responses, rather than being indecisive.

Our proposed BLSTM model outperforms previous single-modality approaches in terms of classification scores, culminating in an F1 score of 0.87. In terms of regression, our best model using ELMo features achieves a mean absolute error of 3.62, the best among sequential depression-modelling approaches.

References