A Text Classification Framework for Simple and Effective Early Depression Detection Over Social Media Streams

05/18/2019 ∙ by Sergio G. Burdisso, et al. ∙ INAOE, UNSL

With the rise of the Internet, there is a growing need to build intelligent systems that are capable of efficiently dealing with early risk detection (ERD) problems on social media, such as early depression detection, early rumor detection or identification of sexual predators. These systems, nowadays mostly based on machine learning techniques, must be able to deal with data streams since users provide their data over time. In addition, these systems must be able to decide when the processed data is sufficient to actually classify users. Moreover, since ERD tasks involve risky decisions by which people's lives could be affected, such systems must also be able to justify their decisions. However, most standard and state-of-the-art supervised machine learning models (such as SVM, MNB, Neural Networks, etc.) are not well suited to deal with this scenario. This is due to the fact that they either act as black boxes or do not support incremental classification/learning. In this paper we introduce SS3, a novel supervised learning model for text classification that naturally supports these aspects. SS3 was designed to be used as a general framework to deal with ERD problems. We evaluated our model on the CLEF's eRisk2017 pilot task on early depression detection. Most of the 30 contributions submitted to this competition used state-of-the-art methods. Experimental results show that our classifier was able to outperform these models and standard classifiers, while also being less computationally expensive and able to explain its rationale.


1 Introduction

Traditionally, expert systems have been used to deal with complex problems that require the ability of human experts to be solved. These intelligent systems usually need knowledge engineers to manually code, for the system's knowledge base (KB), all the facts and rules acquired from human experts through interviews. Nonetheless, this manual process is very expensive and error-prone, since the KB of a real expert system includes thousands of rules. This, added to the rise of big data and cheaper GPU-powered computing hardware, is causing a major shift in the development of these intelligent systems, in which machine learning is gaining more and more popularity. In this context, this work introduces a machine learning framework, based on a novel white-box text classifier, for developing intelligent systems to deal with early risk detection (ERD) problems. In order to evaluate and analyze our classifier's performance, we will focus on a relevant ERD task: early depression detection.

Depression detection is a major public health concern. Depression is a leading cause of disability and a major contributor to the overall global burden of disease. Globally, the proportion of the population with depression in 2015 was estimated to be 4.4% (more than 322 million people). Depressive disorders are ranked as the single largest contributor to non-fatal health loss. More than 80% of this non-fatal disease burden occurs in low- and middle-income countries. Furthermore, between 2005 and 2015 the total estimated number of people living with depression increased by 18.4% (World Health Organization, 2017).

People with depression may experience a lack of interest and pleasure in daily activities, significant weight loss or gain, insomnia or excessive sleeping, lack of energy, inability to concentrate, feelings of worthlessness or excessive guilt and recurrent thoughts of death (American Psychiatric Association, 2013). As a matter of fact, depression can lead to suicide. Over 800,000 suicide deaths occur every year and it is the second leading cause of death in the 15-29 age range; that is, every 40 seconds a person dies due to suicide somewhere in the world (World Health Organization, 2014). In richer countries, three times as many men die of suicide as women do. Globally, suicides account for 50% of all violent deaths in men and 71% in women (World Health Organization, 2014). Suicide accounted for close to 1.5% of all deaths worldwide, bringing it into the top 20 leading causes of death in 2015 (World Health Organization, 2017). In the United States, as well as in other high-income countries, suicide is among the 10 leading causes of death (along with cancer, heart disease, stroke, and diabetes); additionally, from 2016 to 2017 the suicide rate increased by 3.7% (National Center for Health Statistics, 2019).

In this context, it is clear that early risk recognition is a core component to ensure that people receive the care and social support they need. For many years, psychologists have used tests or carefully designed survey questions to assess different psychological constructs. Nowadays, methods for automatic depression detection (ADD) have gained increasing interest, since all the information available in social media, such as Twitter and Facebook, enables novel measurements based on language use. In (Schwartz & Ungar, 2015), it is highlighted that "language reveals who we are: our thoughts, feelings, beliefs, behaviors, and personalities". In particular, quantitative analysis of the words and concepts expressed in texts has played an important role in ADD. For instance, in (De Choudhury et al., 2013b) the written content of tweets shared by subjects diagnosed with clinical depression is analyzed and an SVM classifier is trained to predict whether a tweet is depression-indicative.

A pioneering work in this area (Stirman & Pennebaker, 2001) used the Linguistic Inquiry and Word Count (LIWC) (an automated word counting software) and showed that it is possible to characterize depression through natural language use. There, it is suggested that suicidal poets use more first-person pronouns (e.g., I, me, mine) and less first plural pronouns (e.g., we, ours) throughout their writing careers than non-suicidal poets. In a similar way, depressed students are observed to use first-person singular pronouns more often, more negative emotion words and fewer positive emotion words in their essays in comparison to students who have never suffered from this disease (Rude et al., 2004).

In the context of online environments such as social media, an ADD scenario that is gaining interest, as we will see in Subsection 2.2, is the one known as early depression detection (EDD). In EDD the task is, given a user's data stream, to detect possible depressive people as soon and as accurately as possible.

Most automatic approaches to ADD have been based on standard machine learning algorithms (Guntuku et al., 2017; Tsugawa et al., 2015; Mariñelarena-Dondena et al., 2017). However, EDD poses really challenging aspects to the "standard" machine learning field. As with any other ERD task, we can identify at least three of these key aspects: incremental classification of sequential data, support for early classification and explainability (i.e., the ability of the model to explain its rationale).

To put the previous points in context, it is important to note that ERD is essentially a problem of analysis of sequential data. That is, unlike traditional supervised learning problems where learning and classification are done on "complete" objects, here learning, classification, or both must be done on "partial" objects which correspond to all the data sequentially read up to the present from a (virtually infinite) data stream. Algorithms capable of dealing with this scenario are said to support incremental learning and/or incremental classification. In the present article we will focus on incremental classification since, so far, it is the only EDD scenario we have data to compare against, as we will see in Subsection 2.2. However, as we will see later, our approach is designed to work incrementally in both the learning and classification phases.

Classifiers supporting incremental classification of sequential data need to provide a suitable method to "remember" (or "summarize") the historical information read up to the present. The informativeness level of these partial models will be critical to the effectiveness of the classifier. In addition, these models also need to support a key aspect of ERD: the decision of when (how soon) the system should stop reading from the input stream and classify it with acceptable accuracy. This aspect, which we have previously mentioned as support for early classification, is basically a multi-objective decision problem that attempts to balance accurate and timely classifications.

Finally, explainability/interpretability is another important requirement for EDD. As with any other critical application in healthcare, finance, or national security, this is a domain that would greatly benefit from models that not only make correct predictions but also facilitate understanding of how those predictions are derived. Although interpretability and explanations have a long tradition in areas of AI like expert systems and argumentation, they have gained renewed interest in modern applications due to the complexity and obscure nature of popular machine learning methods based on deep learning.

In the present work we propose a novel text classification model, called SS3, whose goal is to provide support for incremental classification, early classification and explainability in a unified, simple and effective way. We mainly focus on the first two aspects, measuring SS3's effectiveness on the first publicly-available EDD task, whereas regarding the latter we present very promising results showing how SS3 is able to visually explain its rationale.

The remaining sections are organized as follows. Section 2 presents those works that relate to ours. The proposed framework is introduced in Section 3, firstly introducing the general idea and then the technical/formal details. In Section 4 the proposed framework is compared to state-of-the-art methods used in a recent early depression detection task. Section 5 goes into details of the main contributions of our approach by analyzing quantitative and qualitative aspects of the proposed framework. Finally, Section 6 summarizes the main conclusions derived from this study and suggests possible future work.

2 Related Work

We organized the related works into two subsections. The first one describes works related to early classification in sequential data. The second subsection addresses the problem of early depression detection.

2.1 Analysis of Sequential Data: Early classification

The analysis of sequential data is a very active research area that addresses problems where data is processed naturally as sequences or can be better modeled that way, such as sentiment analysis, machine translation, video analytics, speech recognition, and time series processing. A scenario that is gaining increasing interest in the classification of sequential data is the one referred to as "early classification", in which the problem is to classify the data stream as early as possible without a significant loss in terms of accuracy.

For instance, some works have addressed early text classification by using diverse techniques like modifications of Naive Bayes (Escalante et al., 2016), profile-based representations (Escalante et al., 2017), and Multi-Resolution Concept Representations (López-Monroy et al., 2018). Those approaches have focused on quantifying the prediction performance of classifiers when using partial information in documents, that is, by considering how well they behave when incremental percentages of documents are provided to the classifier. However, those approaches do not have any mechanisms to decide when (how soon) the partial information read is sufficient to classify the input. Note that this is not a minor point since, for instance, in online scenarios in which users provide their data over time, setting a manually fixed percentage of the input to be read would not be possible (if we see the input, i.e. the user's data, as a single document, this document would be virtually infinite, growing over time as the user generates new content). This scenario, which we address here as the "real" early sequence classification problem, can be considered as a concrete multi-objective problem in which the challenge is to find a trade-off between the earliness and the accuracy of classification (Xing et al., 2010).

The reasons behind this requirement of "earliness" can be diverse. It could be necessary because the sequence length is not known in advance (e.g. online scenarios, as suggested above) or, for example, because savings of some sort (e.g. computational savings) can be obtained by classifying the input in an early fashion. However, the most important (and interesting) cases are those in which the delay in that decision could also have negative or risky implications. This scenario, known as "early risk detection", has gained increasing interest in recent years, with potential applications in rumor detection (Ma et al., 2015, 2016; Kwon et al., 2017), sexual predator detection and aggressive text identification (Escalante et al., 2017), depression detection (Losada et al., 2017; Losada & Crestani, 2016) or terrorism detection (Iskandar, 2017).

The key issue in real early sequence classification is that learned models usually do not provide guidance about how to decide the correct moment to stop reading a stream and classify it with reasonable accuracy. As far as we know, the approach presented in (Dulac-Arnold et al., 2011) is the first to address a (sequential) text classification task as a Markov decision process (MDP) with virtually three possible actions: read (the next sentence), classify (in practice, a collection of actions, one for each category c) and stop. The implementation of this model relied on using Support Vector Machines (SVMs) which were trained to classify each possible action as "good" or "bad" based on the current state, s. This state was represented by a feature vector holding information about the tf-idf representations of the current and previous sentences, and the categories assigned so far. Although the use of MDPs is very appealing from a theoretical point of view, and we will consider it for future work, the model they proposed would not be suitable for risk tasks. The use of SVMs over this feature-vector representation implies that the model is a black box, not only hiding the reasons for classifying the input but also the reasons behind its decision to stop early (since this decision is enforced by the reward function, which in turn depends, for each state s, on that feature vector). The same limitations can be found in more recent works (Yu et al., 2017, 2018; Shen et al., 2017) also addressing the early sequence classification problem as a reinforcement learning problem, but using Recurrent Neural Networks (RNNs).

Finally, (Loyola et al., 2018) considers the decision of "when to classify" as a problem to be learned on its own and trains two SVMs, one to make category predictions and the other to decide when to stop reading the stream. Nonetheless, the use of these two SVMs, again, hides the reasons behind both the classification and the decision to stop early. Additionally, as we will see in Subsection 5.1, when using an SVM to classify a document incrementally, the classification process becomes costly and not scalable, since the document-term matrix has to be re-built from scratch every time new content is added.

2.2 Early Depression Detection

Even though multiple studies have attempted to predict or analyze depression using machine learning techniques, before (Losada & Crestani, 2016) no one had attempted to build a public dataset in which a large chronological collection of writings, leading to this disorder, was made available to the research community. This is mainly due to the fact that text is often extracted from social media sites, such as Twitter or Facebook, that do not allow redistribution. On the other hand, in the machine learning community, the importance of having publicly available datasets to foster research on a particular topic (in this case, predicting depression based on language use) is well known. That was the reason why the main goal in (Losada & Crestani, 2016) was to provide, to the best of our knowledge, the first public collection to study the relationship between depression and language usage by means of machine learning techniques. This work was important for ADD not only for creating this publicly-available dataset for EDD experimentation, but also because it proposed a measure (ERDE) that simultaneously evaluates the accuracy of the classifiers and the delay in making a prediction. It is worth mentioning that having a single measure combining these two aspects enabled this dataset to be used as a benchmark task in which different studies can be compared in terms of how "early-and-accurate" their models are.

Both tools, the dataset and the evaluation measure, were later used in the first pilot task of eRisk (Losada et al., 2017), in which 8 different research groups submitted a total of 30 contributions. Given that we will use this dataset for experimentation, evaluating and analyzing our results in comparison with these 30 contributions, we will now analyze them in more detail.

As observed in (Losada et al., 2017), among the 30 contributions submitted to the eRisk task, a wide range of different document representations and classification models were used. Regarding document representations, some research groups used simple features like standard Bag of Words (Trotzek et al., 2017; Villegas et al., 2017; Farıas-Anzaldúa et al., 2017), bigrams and trigrams (Villegas et al., 2017; Almeida et al., 2017; Farıas-Anzaldúa et al., 2017), while others used more elaborate and domain-specific ones like lexicon-based features (such as emotion words from WordNet, sentiment words from Vader, and preexisting depression-related dictionaries) (Malam et al., 2017; Trotzek et al., 2017; Sadeque et al., 2017; Almeida et al., 2017), LIWC features (Trotzek et al., 2017; Villegas et al., 2017), Part-of-Speech tags (Almeida et al., 2017), statistical features (such as the average number of posts, the average number of words per post, post timestamps, etc.) (Malam et al., 2017; Almeida et al., 2017; Farıas-Anzaldúa et al., 2017) or even hand-crafted features (Trotzek et al., 2017). Some other groups made use of more sophisticated features such as Latent Semantic Analysis (Trotzek et al., 2017), Concise Semantic Analysis (Villegas et al., 2017), Doc2Vec (Trotzek et al., 2017) or even graph-based representations (Villatoro-Tello et al., 2017). Regarding classification models, some groups used standard classifiers (such as Multinomial Naive Bayes (MNB), Logistic Regression (LOGREG), Support Vector Machine (SVM), Random Forest, Decision Trees, etc.) (Malam et al., 2017; Trotzek et al., 2017; Sadeque et al., 2017; Villegas et al., 2017; Almeida et al., 2017; Farıas-Anzaldúa et al., 2017) while others made use of more complex methods such as different types of Recurrent Neural Networks (Trotzek et al., 2017; Sadeque et al., 2017), graph-based models (Villatoro-Tello et al., 2017), or even combinations or ensembles of different classifiers (Trotzek et al., 2017; Sadeque et al., 2017; Villegas et al., 2017; Almeida et al., 2017).

Another interesting aspect of this evaluation task was the wide variety of mechanisms used to decide when to make each prediction. Most research groups (Malam et al., 2017; Trotzek et al., 2017; Sadeque et al., 2017; Villatoro-Tello et al., 2017; Villegas et al., 2017; Almeida et al., 2017) applied a simple policy in which, the same way as in (Losada & Crestani, 2016), a subject is classified as depressed when the classifier outputs a value greater than a fixed threshold. Some other groups (Farıas-Anzaldúa et al., 2017) applied no policy at all and no early classification was performed, i.e. their classifiers made their predictions only after seeing the entire subject's history (note that this is not a realistic approach; usually there is no such thing as a subject's "last writing" in real life, since subjects are able to create new writings over time). It is worth mentioning that some groups (Malam et al., 2017; Trotzek et al., 2017; Villegas et al., 2017) added extra conditions to the given policy; for instance, (Trotzek et al., 2017) used a list of manually-crafted rules of the form: "if the output exceeds a given value and the number of writings exceeds a given number, then classify as positive", "if the output is below another value and the number of writings exceeds a given number, then classify as non-depressed", etc.

As will be highlighted and analyzed in more detail later, in Section 5, none of these 30 contributions, except those based on RNNs and MNB, are suitable for naturally processing data sequences since, as mentioned earlier, standard classifiers such as Logistic Regression (LOGREG), SVM, (feedforward) Neural Networks (NN), etc. are designed to work with complete and atomic document representations. Furthermore, none of the contributions paid attention to the explainability of their models, since all of them (even those based on RNNs and MNB) act as black boxes, which we consider a key aspect when dealing with risk applications in which real people are involved.

3 The SS3 Framework

At this point, it should be clear that any attempt to address ERD problems in a realistic fashion should take into account three key requirements: incremental classification, support for early classification, and explainability. Unfortunately, to the best of our knowledge, there is no text classifier able to support these three aspects in an integrated manner. In the remainder of this section, we will describe a new text classifier that we have created with the goal of achieving exactly that.

Additionally, since we are introducing a new classification model, instead of going straight to the plain equations and algorithms, we have decided to include the general idea first and then, along with the equations, the ideas that led us to them. Thus, Subsection 3.1 shows the general operation of our framework with an informative and intuitive example highlighting how the above requirements could be met. Finally, Subsection 3.2 goes into some technical details about how the introduced ideas are actually implemented.

3.1 General Operation

In this subsection, we will give an intuitive and general idea of how our framework could address the above requirements with a simple and incremental classification model that we have called “SS3”, which stands for Sequential S3 (Smoothness, Significance, and Sanction) for reasons that will be clear later on. Our humble approach was intended to be used as a general framework for solving the document classification problem since it is flexible enough to be instantiated in several different manners, depending on the problem.

In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actually computed. As we will see, these values are the basis of the entire classification process.

3.1.1 Classification Process

This subsection describes how classification is carried out. However, before we illustrate the overall process and for the sake of simplicity, we are going to assume there exists a function, gv, to value words in relation to categories —and whose formal definition will be the topic of subsubsection 3.2.2. To be more specific, gv takes a word w and a category c and outputs a number in the interval [0,1] representing the degree of confidence with which w is believed to exclusively belong to c, for instance:

gv(apple, technology) = 0.8

Where gv(w, c) = v is read as "w has a global value of v in c" or, alternatively, "the global value of w in c is v". For example, the expression above is read as "apple has a global value of 0.8 in technology". Additionally, we will define gv(w) = (gv(w, c0), gv(w, c1), ..., gv(w, cn)), where ci ∈ C and C denotes the set of all the categories. That is, when gv is only applied to a word it outputs a vector in which each component is the global value of that word for each category ci. For instance, following the above example, a word like "the", which belongs exclusively to no category, would have a confidence vector close to:

gv(the) = (0, 0, ..., 0)

The vector gv(w) will be called the "confidence vector of w"; thus gv(the) is the confidence vector of the word "the" in the example above. Note that each category ci is assigned a fixed position i in the output vector —in this example, the first position corresponds to technology, the second to business, and so on.
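As a minimal sketch in Python, confidence vectors can be represented as tuples aligned to a fixed category order. Every value below except gv(apple, technology) = 0.8 (taken from the running example) is a made-up illustration, not a learned value:

```python
# Toy confidence vectors for illustration only; in SS3 these global
# values are learned from the training data (see the training process).
CATEGORIES = ["technology", "business", "sports"]  # fixed positions

GV = {
    "apple": (0.8, 0.3, 0.0),  # e.g. gv(apple, technology) = 0.8
    "the":   (0.0, 0.0, 0.0),  # stopwords are valued ~0 in every category
}

def gv(word):
    """Return the confidence vector of `word` (zeros if unseen)."""
    return GV.get(word, (0.0,) * len(CATEGORIES))
```

For instance, gv("apple") yields the full confidence vector at once, matching the vector form of gv described above.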

Figure 1: Classification process for a hypothetical example document "Apple was developed with a Web Browser that didn't support cookies. The company decided to remove it from the market". In the first stage, this document is split into two sentences (for instance, by using the dot as a delimiter) and then each sentence is also split into single words. In the second stage, global values are computed for every word to generate the first set of confidence vectors. Then all of these word vectors are reduced by the word-level summary operator to two sentence confidence vectors, one for the first and one for the second sentence. After that, these two sentence vectors are also reduced by another summary operator (which in this case is the addition operator) to a single confidence vector for the entire document. Finally, a policy is applied to this vector to make the classification —which in this example was to select technology, the category with the highest value, and also business because its value was "close enough" to technology's.
Figure 2: subject 9579’s positive and negative confidence value variation over time. Time is measured in writings and it could be further expanded as more writings are created by the subject over time.

Now that the needed basic definitions and terminology have been introduced, we are ready to describe the overall classification process, which is illustrated with an example in Figure 1. Classification can be thought of as a 2-phase process. The first phase starts out by splitting the given input (usually a single document) into multiple blocks, then each block is in turn repeatedly divided into smaller units until words are reached. At the end of this phase, we have converted the previously "flat" input into a hierarchy of blocks. In practice, a document will typically be divided into paragraphs, paragraphs into sentences and sentences into words. Additionally, we will say that words are at level 0 in this hierarchy, sentences at level 1, paragraphs at level 2, and so on. In the second phase, the gv function is applied to each word to obtain the level 0 confidence vectors, which are then reduced by means of a summary operator to generate the next level's confidence vectors. This reduction process is recursively propagated up to higher-level blocks until a single confidence vector is generated for the whole input. Finally, the actual classification is performed based on the values of this single confidence vector —some policy must be used, e.g. selecting the category with the maximum value. Note that in the example shown in Figure 1, summary operators are denoted by ⊕_j, where j denotes the level, to highlight the fact that each level (e.g. words, sentences, etc.) could have a different summary operator —for instance, ⊕_j could be addition, maximum (i.e. max pooling), average (i.e. mean pooling), etc. Moreover, any function that reduces a list of confidence vectors to a single confidence vector could be used as a summary operator.
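To make the idea of a summary operator concrete, here is a sketch where each operator is simply a function reducing a list of confidence vectors to one vector; the operator names and toy values are ours:

```python
# Sketch: a summary operator reduces the confidence vectors of a
# block's children into a single vector for the block itself.
def op_addition(vectors):
    return [sum(col) for col in zip(*vectors)]

def op_maximum(vectors):  # i.e. max pooling
    return [max(col) for col in zip(*vectors)]

# Two level-0 (word) confidence vectors over two categories:
words = [[0.5, 0.25], [0.25, 0.75]]
sentence_by_sum = op_addition(words)  # -> [0.75, 1.0]
sentence_by_max = op_maximum(words)   # -> [0.5, 0.75]
```

Any function with this list-of-vectors-to-vector shape would fit, which is what makes the choice of operator per level a free design parameter of the framework.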

function Classify(d) returns a set of category indexes
     input: d, the sequence of one or more symbols
     local variables: v, the document confidence vector

     v ← Classify-At-Level(d, l_max)
     return a set of indexes selected by applying a policy, π, to v
end function
function Classify-At-Level(b, l) returns a confidence vector
     input: b, a sequence of symbols
     local variables: blocks, a list of smaller blocks of the text
                                vecs, block confidence vectors list

     if l == 0 then ▷ i.e. if b is equal to a single symbol
          return Global-Value(b)
     else
          blocks ← split b into smaller units based on a level-l delimiter
          vecs ← Map(Classify-At-Level, blocks, l − 1)
          return Reduce(⊕_l, vecs)
end function
Algorithm 1 General multi-label classification algorithm. l_max is a constant storing the maximum hierarchy level used when partitioning the document. For instance, it should be 3 when working with the paragraph-sentence-and-word partition. Global-Value is the gv function. Map applies Classify-At-Level(b, l − 1) to every block b in blocks and returns the list of resultant vectors. Reduce reduces vecs to a single vector by applying the ⊕_l operator cumulatively to the vectors in vecs.
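The split-map-reduce recursion of Algorithm 1 can be sketched in Python as follows; the two-level hierarchy (sentences and words), the delimiters, the toy gv table, and the use of addition as the summary operator at every level are our illustrative assumptions, not part of the trained model:

```python
# Sketch of Algorithm 1 with two levels: document -> sentences (".")
# and sentence -> words (" "). All global values here are toy values.
CATEGORIES = ["technology", "business"]
GV = {"apple": [0.8, 0.3], "browser": [0.9, 0.1], "market": [0.1, 0.7]}
DELIMS = {2: ".", 1: " "}

def global_value(word):
    return GV.get(word, [0.0] * len(CATEGORIES))

def reduce_vectors(vectors):  # summary operator: addition at every level
    return [sum(col) for col in zip(*vectors)]

def classify_at_level(block, level):
    if level == 0:  # the block is a single word
        return global_value(block)
    units = [u for u in block.split(DELIMS[level]) if u.strip()]
    return reduce_vectors([classify_at_level(u, level - 1) for u in units])

def classify(document):
    d_vec = classify_at_level(document.lower(), 2)
    # policy: pick the single category with the highest confidence value
    return CATEGORIES[d_vec.index(max(d_vec))]
```

With these toy values, classify("Apple made a new browser. It hit the market") picks "technology", since the accumulated technology confidence dominates.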

It is worth mentioning that, with this simple mechanism, it would be fairly straightforward to justify, when needed, the reasons for the classification by using the values of the confidence vectors in the hierarchy, as will be illustrated with a visual example at the end of Section 5. Additionally, the classification is also incremental as long as the summary operator for the highest level can be computed in an incremental fashion —which is the case for most common aggregation operations such as addition, multiplication, maximum or even average (in the case of average, it would be necessary to store, in addition to a vector with the sum of all previous confidence vectors, their number). For instance, suppose that later on a new sentence is appended to the example shown in Figure 1. Since the highest-level summary operator is the addition, instead of processing the whole document again, we could update the already computed document confidence vector by simply adding the new sentence's confidence vector to it. Note that this incremental classification, in which only the new sentence needs to be processed, would produce exactly the same result as if the process were applied to the whole document again each time.

Another important aspect of this incremental approach is that, since this confidence vector is a value that "summarizes the past history", keeping track of how this vector changes over time should allow us to derive simple and clear rules to decide when the system should make an early classification. As an example of this, suppose we need to classify a social media user (i.e. a subject) as depressed (positive) or non-depressed (negative) based on his/her writings. Let us assume that this user is subject 9579, that he/she is depressed, and that the change of each confidence vector component over time (measured in writings) is the one shown in Figure 2. We could make use of this "dynamic information" to apply certain policies to decide when to classify subjects as depressed. For example, one such policy would be "classify a subject as positive when the accumulated positive value becomes greater than the negative one" —in which case, note that our subject would be classified as depressed after reading his/her 66th writing. Another (more elaborate) policy could take into account how fast the positive value grows (the slope) in relation to the negative one and, if a given threshold were exceeded, classify the subject as depressed —in such a case our subject could have been classified as depressed, for instance, after reading his/her 92nd writing. Note that we could also combine multiple policies, as we will see in Section 5.
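Assuming (for illustration) that each writing yields one two-component confidence vector and that the highest-level summary operator is addition, the first policy above can be sketched as:

```python
# Sketch (ours): per-writing confidence vectors are accumulated with
# addition; classify as positive as soon as the policy fires.
def early_classify(writing_vectors, policy):
    acc = [0.0, 0.0]  # [accumulated positive, accumulated negative]
    for n, vec in enumerate(writing_vectors, start=1):
        acc = [a + v for a, v in zip(acc, vec)]
        if policy(acc):
            return "positive", n  # decision made after n writings
    return "undecided", len(writing_vectors)

# Policy: flag once the accumulated positive value exceeds the negative.
positive_gt_negative = lambda acc: acc[0] > acc[1]

stream = [[0.1, 0.4], [0.3, 0.2], [0.5, 0.1]]  # toy per-writing vectors
decision, n = early_classify(stream, positive_gt_negative)
```

A slope-based policy would simply be a different `policy` callable inspecting the last few accumulated values, which is how multiple policies can be combined without changing the accumulation loop.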

3.1.2 Training Process

This brief subsection describes the training process, which is trivial. Only a dictionary of term-frequency pairs is needed for each category. Then, during training, dictionaries are updated as new documents are processed —i.e. unseen terms are added and frequencies of already seen terms are updated.

Note that with this simple training method there is no need either to store all documents or to re-train from scratch every time a new training document is added, making the training incremental (even new categories could be dynamically added). Additionally, there is no need to compute the document-term matrix because, during classification, gv values can be dynamically computed from the frequencies stored in the dictionaries —although, in case we are working in an offline fashion and want to speed up classification, it is still possible to create the document-term matrix holding the gv value for each term. Finally, also note that the training computation is very cheap since it involves only updating term frequencies, i.e. only one addition operation is needed per term occurrence.
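A minimal sketch of this training scheme, assuming whitespace tokenization (the real framework's tokenization may differ):

```python
from collections import Counter

# Sketch: one term-frequency dictionary per category, updated
# incrementally as new training documents arrive.
dictionaries = {}  # category -> Counter of term frequencies

def learn_new_document(text, category):
    words = text.lower().split()
    dictionaries.setdefault(category, Counter()).update(words)

learn_new_document("apple ships a new browser", "technology")
learn_new_document("the market fell again", "business")
learn_new_document("browser wars heat up", "technology")  # incremental
# dictionaries["technology"]["browser"] is now 2; no re-training needed
```

Adding a third document only touches the counters of its own category, which is exactly why neither the previous documents nor a document-term matrix need to be kept around.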

3.2 Formal/technical Description

This section presents more formally the general and intuitive description given in the previous section.

3.2.1 Classification and Training

Algorithm 1 shows the general multi-label classification algorithm, which carries out the process illustrated earlier in subsubsection 3.1.1. Note that this algorithm can be massively parallelized since it naturally follows MapReduce (Dean & Ghemawat, 2008), the Big Data programming model, giving the framework the capability of effectively processing very large volumes of data. Algorithm 2 shows the training process described earlier. Note that the line calling the Update-Global-Values function, which calculates and updates all global values, is only needed if we want to construct the document-term matrix and work in the standard batch-like way. Otherwise, it can be omitted since, during classification, global values can be dynamically computed from the frequencies stored in the dictionaries. It is worth mentioning that this algorithm could be easily parallelized by following the MapReduce model as well —for instance, all training documents could be split into batches, frequencies locally calculated within each batch, and finally all these local frequencies summed up to obtain the total frequencies.

procedure Learn-From-Dataset(D)
     Input: D, a list of labeled documents

     for each d in D do
          Learn-New-Document(d.TEXT, d.CATEGORY)
     Update-Global-Values()    ▷ this line is optional
end procedure

procedure Learn-New-Document(doc, c)
     input: doc, the sequence of words in the document
                 c, the category the document belongs to

     for each w in doc do
          if w not in c.DICTIONARY then
               add w to c.DICTIONARY
          c.DICTIONARY[w] ← c.DICTIONARY[w] + 1
end procedure
Algorithm 2: Learning Algorithm.
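The MapReduce-style parallelization mentioned above (count frequencies locally per batch, then merge) can be sketched as follows; the helper names are ours:

```python
from collections import Counter
from functools import reduce


def map_count(batch):
    """Map step: count term frequencies locally within one batch of
    (text, category) pairs. A simplification of the batch-splitting
    idea described in the text, not the authors' implementation."""
    counts = {}
    for text, category in batch:
        counts.setdefault(category, Counter()).update(text.lower().split())
    return counts


def reduce_counts(a, b):
    """Reduce step: merge two partial per-category frequency tables
    by summing their local frequencies."""
    merged = {c: Counter(cnt) for c, cnt in a.items()}
    for category, counts in b.items():
        merged.setdefault(category, Counter()).update(counts)
    return merged


batches = [
    [("i feel sad", "depressed"), ("nice game", "control")],
    [("so sad and empty", "depressed")],
]
total = reduce(reduce_counts, map(map_count, batches))
```

Since addition is associative and commutative, the batches can be processed in any order or in parallel and the merged totals are identical to sequential training.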

3.2.2 Local and Global Value of a Word

Our approach to calculating the global value (gv) of a word, as we will see later, tries to overcome some problems arising from valuing words based only on information local to a category. This is carried out by, firstly, computing the word's local value (lv) for every category and, secondly, combining these local values to obtain the global value of the word in relation to all the categories.

More precisely, the local value should be a function such that lv(w, c) ∝ P(w|c), i.e. the local value of a word w in a category c should be proportional to the probability of w occurring given the category c. Therefore, lv will be defined by:

lv(w, c) = P(w|c) / max_{w'} P(w'|c)    (1)

Instead of simply having lv(w, c) = P(w|c), we have chosen to divide it by the probability of the most frequent word in c. This produces two positive effects: (a) lv is normalized and the most probable word has a value of 1 and, more importantly, (b) words are now valued in relation to how close they are to the most probable one. Therefore, no matter the category, all stop words (such as “the”, “of”, “or”, etc.) will always have a value very close, or equal, to 1.

Note that this allows us to compare words across different categories since their values are all normalized in relation to stop words, which should have a similar frequency across all the categories (we are assuming here that we are working with textual information, in which there exist highly frequent elements, such as stop words, that naturally have similar frequencies across all categories). However, our current definition of lv implicitly assumes that the proportionality is direct, which is not always true, so we will define lv more generally as follows:

lv(w, c) = ( P(w|c) / max_{w'} P(w'|c) )^σ

Which, after estimating the probability, P(w|c), by an analytical Maximum Likelihood Estimation (MLE) derivation, leads us to the actual definition:

lv(w, c) = ( tf_{w,c} / max_tf_c )^σ    (2)

Where tf_{w,c} denotes the frequency of w in c and max_tf_c the maximum frequency seen in c. The value σ is the first hyper-parameter of our model, called “smoothness”, whose role is twofold:

  • Control how fast the local value of a word grows in relation to how close it is to the most probable one; e.g. when σ = 1, lv grows linearly proportional to the word's frequency.

  • Control the smoothness of the distribution of words which otherwise, by the empirical Zipf's law (Zipf, 1949; Powers, 1998), would have a very small group of highly frequent words overshadowing important ones.

Figure 3: word-local value diagram for 5 different values of σ: 1, 0.8, 0.5, 0.3 and 0.1. The abscissa represents individual words arranged in order of frequency. Note that when σ = 1, lv (red line) matches the shape of the raw frequency (the actual word distribution); however, as σ decreases, the curve becomes smoother, reducing the gap between the highest and the lowest values.

These two items are illustrated with an example in Figure 3 —a good value for σ should be around 0.5, which would be approximately equivalent to taking the square root of Equation 1.
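The local value of Equation 2 can be sketched as follows (a minimal reading of the formula; frequencies and names are hypothetical):

```python
def local_value(term, freqs, sigma=0.5):
    """Local value of a term in one category: the term's frequency
    normalized by the category's maximum frequency, raised to the
    smoothness hyper-parameter sigma (a sketch of Equation 2)."""
    max_tf = max(freqs.values())
    return (freqs.get(term, 0) / max_tf) ** sigma


# Hypothetical term frequencies for a single category.
freqs = {"the": 1000, "depression": 90, "prozac": 10}
```

With sigma = 1 the local value mirrors the raw frequency; lowering sigma shrinks the gap between frequent and rare words, which is exactly the Zipf-smoothing effect described above.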

Now that we are able to compute word local values, we are going to define a word's global value based on them, as follows:

gv(w, c) = lv(w, c) · sg(w, c) · sn(w, c)    (3)

Where sg and sn are functions whose outputs lie in the interval [0, 1]. As we will see, the former weights gv in relation to the global significance of w, and the latter sanctions it in relation to the number of categories for which w is significant. Additionally, the values λ and ρ, referred to as “significance” and “sanction” respectively, are the other two hyper-parameters of our model.

In order to represent the significance of a word, w, with respect to a category, c, sg should be a function such that: (a) it outputs a value close to 1 when lv(w, c) is significantly greater than lv(w, c_i), for most other categories c_i; and (b) it outputs a value close to 0 when all the lv(w, c_i) are close to each other, for all c_i. For instance, lv(“the”, c_i) will probably be a similarly large value for all categories c_i, whereas lv(“bread”, food) will probably be greater than most lv(“bread”, c_i), for the other categories; hence sg(“the”, c_i) should be close to 0 and sg(“bread”, food) close to 1. In general, we could model this behavior by using any sigmoid function, as follows:

sg(w, c) = sigmoid( lv(w, c), M̃_w + λ · MAD_w )

Such that:

  1. sg(w, c) ≈ 1 if lv(w, c) ≥ M̃_w + λ · MAD_w; and

  2. sg(w, c) ≈ 0 if lv(w, c) ≤ M̃_w.

Where LV_w = {lv(w, c_i) | c_i ∈ C}, i.e. the set of all local values of w; M̃_w denotes the median of LV_w; and MAD_w the Median Absolute Deviation of LV_w. Additionally, note that the hyper-parameter λ controls how far the local value must deviate from the median to be considered significant, i.e. the closer lv(w, c) is to M̃_w + λ · MAD_w, the closer sg(w, c) is to 1 —which is the desired behavior. (Since 1.4826 · MAD_w is approximately equal to the standard deviation of LV_w, setting λ to a multiple of this value would amount to measuring the deviation in standard deviations, as long as LV_w has a normal distribution.)

In particular, we have decided to use tanh as the sigmoid function, hence sg is defined by:

sg(w, c) = ½ · tanh( 4 · (lv(w, c) − M̃_w) / (λ · MAD_w) − 2 ) + ½    (4)

Finally, we need to define sn, the sanction function, which proportionally decreases the global value of w in relation to the number of categories for which w is significant. Hence sn should be a function such that: (a) when w is significant (i.e. sg(w, c_i) ≈ 1) to only one category c, sn(w, c) should be equal to 1; and (b) the greater the number of categories w is significant to, the lower the value of sn(w, c). Therefore, we have defined sn by:

sn(w, c) = ( (|C| − 1 − S_w) / (|C| − 1) )^ρ    (5)

Where |C| denotes the number of categories and

S_w = Σ_{c_i ∈ C, c_i ≠ c} sg(w, c_i)

i.e. S_w is equal to the summation of sg(w, c_i) for all categories in C except c. Note that when the extreme cases are met, Equation 5 behaves properly; namely, when w is significant to almost all categories, S_w ≈ |C| − 1 and thus sn(w, c) ≈ 0; and when w is significant to only one category, c, S_w ≈ 0 and thus sn(w, c) ≈ 1.

The hyper-parameter ρ controls how severe the sanction is, in proportion to the number of significant categories; its value can be adjusted in relation to how overlapped the categories are.
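Putting Equations 3–5 together, a hedged sketch of the global-value computation looks like this (treat the exact functional forms as our reading of the equations, not the authors' verbatim code; local values and category names are made up):

```python
import math
import statistics


def significance(lv_w, category, lam=3.0):
    """Significance sg of a word w.r.t. one category, given its local
    values across all categories (a dict: category -> lv). Uses the
    tanh-based sigmoid over the median and MAD sketched in the text."""
    values = list(lv_w.values())
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values]) or 1e-9
    return 0.5 * math.tanh(4.0 * (lv_w[category] - med) / (lam * mad) - 2.0) + 0.5


def sanction(sg_w, category, rho=1.0):
    """Sanction sn: decreases as the word is significant to more
    categories other than `category` (cf. Equation 5)."""
    s = sum(v for c, v in sg_w.items() if c != category)
    n = len(sg_w)
    return ((n - 1 - s) / (n - 1)) ** rho


def global_value(lv_w, category, lam=3.0, rho=1.0):
    """gv = lv * sg * sn (cf. Equation 3)."""
    sg_w = {c: significance(lv_w, c, lam) for c in lv_w}
    return lv_w[category] * sg_w[category] * sanction(sg_w, category, rho)


# A stop-word-like term: high local value in every category.
lv_the = {"depressed": 0.95, "control": 0.96, "other": 0.94}
# A discriminative term: a high local value in one category only.
lv_prozac = {"depressed": 0.30, "control": 0.01, "other": 0.02}
```

Under this sketch the stop-word-like term gets a global value near 0 despite its high local value, while the discriminative term keeps almost all of its local value, which is the behavior the examples below illustrate.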

To conclude this section, let us introduce a simple example to illustrate how the global value is a percentage of the local value given by the significance (sg) and sanction (sn) functions. Suppose we have three categories, one of them being food; then:

                                           Train                  Test
                                    Depressed   Control    Depressed   Control
No. of subjects                           83       403          52       349
No. of submissions                    30,851   264,172      18,706   217,665
Avg. no. of submissions per subject    371.7     655.5       359.7     623.7
Avg. no. of days, first to last        572.7     626.6       608.3     623.2
Avg. no. of words per submission        27.6      21.3        26.9      22.5
Table 1: Summary of the task data
  • For stop words, like ‘the’, we would have, regardless of the category c, something like lv(‘the’, c) = 0.92 and gv(‘the’, c) = 0.04.

    While the local value of ‘the’ is 0.92, its final global value turned out to be 0.04 (about 5% of its local value). This is due to the fact that lv(‘the’, c_i) is similarly high for all categories and, by definition, the significance function should then be close to 0.

  • For a word that is mainly significant to a single category —in this case, ‘bread’ to food— we would have something like gv(‘bread’, food) ≈ 0.94 · lv(‘bread’, food).

    The global value is almost identical to its local value (about 94% of it). This is due to the word being significant (sg ≈ 1) only to food, and thus barely sanctioned (sn ≈ 1).

  • For a word that is significant to more than a single category, we would have something like gv(‘apple’, food) ≈ 0.51 · lv(‘apple’, food).

    In this case, the global value ended up being about 51% of its local value. Note that while ‘apple’ is quite significant to food (sg ≈ 1), it must also be significant to some of the other categories, at least to a certain degree, because it is being moderately sanctioned (sn ≈ 0.5).

It is interesting to notice that Multinomial Naive Bayes can be seen as one possible instance of the SS3 framework, namely, one in which the value of a word is simply its (log) conditional probability, log P(w|c), for all words and categories. However, this instance of SS3 would not effectively fulfill our goals. Since, by definition, Σ_w P(w|c) = 1, i.e. the probabilities of all words in a category must sum up to 1, and since the number of words per category is usually very large, P(w|c) is usually very small and, due to the effect of applying the logarithm, very similar for all words. Additionally, important words would be overshadowed by unimportant (or less important) but highly frequent words, such as stop words. This disfavors both the power to describe what words helped to make the decision and, usually, the performance as well —other types of models, such as SVM, frequently outperform MNB. The main issue with MNB arises from the fact that terms are valued simply and solely by their local raw frequency. In short, that is basically the problem that the gv computation tries to overcome.

4 Experimental Evaluation

In this section, we cover the experimental analysis of SS3, the proposed approach. The next subsection briefly describes the pilot task and the dataset used to train and test the classifiers. Subsection 4.2 introduces the time-aware metric used to evaluate the effectiveness of the classifiers in relation to the time taken to make a decision. Subsection 4.3 gives implementation details. Finally, Subsection 4.4 describes the different types of experiments carried out and the obtained results.

4.1 Dataset and Pilot Task

Experiments were conducted on the CLEF 2017 (http://clef2017.clef-initiative.eu) eRisk pilot task (http://early.irlab.org/2017/task.html) on early risk detection of depression. This pilot task focused on sequentially processing the content posted by users on Reddit (https://www.reddit.com). The dataset used in this task, initially introduced and described in (Losada & Crestani, 2016), is a collection of writings (submissions) posted by users; here users will also be referred to as “subjects”. There are two categories of subjects in the dataset: depressed and control (non-depressed). Additionally, in order to compare results among the different participants, the entire dataset was split into a training set and a test set. The details of the dataset are presented in Table 1. Note that the dataset is highly unbalanced: only 17% of the subjects in the training set are labeled as depressed, and 12.9% in the test set.

It is important to note that, as described in Section 2.2 of (Losada & Crestani, 2016), to construct the depression group the authors first collected users by doing specific searches on Reddit (e.g. “I was diagnosed with depression”) to obtain self-expressions of depression diagnoses, and then manually reviewed the matched posts to verify that they were really genuine. According to the authors, this manual review was strict: expressions like “I have depression”, “I think I have depression”, or “I am depressed” did not qualify as explicit expressions of a diagnosis. They only included a user in the depression group when there was a clear and explicit mention of a diagnosis (e.g. “In 2013, I was diagnosed with depression”, “After struggling with depression for many years, yesterday I was diagnosed”). This still leaves the possibility of some noise in both categories of the collected data; therefore, from now on, when we refer to “depressed” it should be interpreted as “possibly diagnosed with depression”.

In this pilot task, classifiers must decide, as early as possible, whether each user is depressed or not based on his/her writings. In order to accomplish this, during the test stage and in accordance with the pilot task definition, each subject's writings were divided into 10 chunks —thus each chunk contained 10% of the user's history. Then, classifiers were given the user's history one chunk at a time and, after each chunk submission, were asked to decide whether the subject was depressed, not depressed, or whether more chunks needed to be read.

4.2 Evaluation Metric

Standard classification measures such as the F-measure (F1), Precision (P) and Recall (R) are time-unaware. For that reason, the pilot task also used the measure proposed in (Losada & Crestani, 2016), called Early Risk Detection Error (ERDE), which is defined by:

ERDE_o(d, k) = c_fp           if d is positive and the subject is negative (false positive)
               c_fn           if d is negative and the subject is positive (false negative)
               lc_o(k) · c_tp if d is positive and the subject is positive (true positive)
               0              if d is negative and the subject is negative (true negative)

Where the sigmoid latency cost function, lc_o(k), is defined by:

lc_o(k) = 1 − 1 / (1 + e^(k − o))

The delay is measured by counting the number (k) of distinct textual items seen before making the binary decision (d), which could be positive or negative. The parameter o serves as the “deadline” for decision making, i.e. if a correct positive decision is made in time k > o, it will be taken by ERDE_o as if it were incorrect (a false positive). Additionally, in the pilot task, it was set c_fn = c_tp = 1 and c_fp = 0.1296. Note that c_fp was calculated as the number of depressed subjects divided by the total number of subjects in the test set.
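The per-subject ERDE computation can be sketched as follows (a hedged reading of the definition above; the cost values are the pilot task's settings):

```python
import math


def erde(decision, truth, k, o, c_fp=0.1296, c_fn=1.0, c_tp=1.0):
    """Early Risk Detection Error for one subject.

    `decision` and `truth` are 'pos' or 'neg'; k is the number of
    writings seen before deciding; o is the deadline parameter."""
    if decision == "pos" and truth == "neg":
        return c_fp                      # false positive: fixed cost
    if decision == "neg" and truth == "pos":
        return c_fn                      # false negative: fixed cost
    if decision == "pos" and truth == "pos":
        lc = 1.0 - 1.0 / (1.0 + math.exp(k - o))
        return lc * c_tp                 # true positive: latency cost
    return 0.0                           # true negative: no cost
```

Note how a correct positive decision made well before the deadline costs almost nothing, while one made well after it costs nearly as much as a miss.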

4.3 Implementation details

SS3 was manually coded in Python 2.7 using only built-in functions and data structures, e.g. a dict to store each category's dictionary, or the map and reduce functions to (locally) simulate a MapReduce pattern. Since this paper focuses on early detection, and not on distributed computing or large-scale classification, we did not develop a real MapReduce implementation. Moreover, since in subsubsection 4.4.2 we also report the computation time taken by all the other classifiers, and all of them must share the same type of implementation, implementing a MapReduce version would not have been fair. Thus, all these other models were also implemented in Python 2.7, using the sklearn library (https://scikit-learn.org/), version 0.17. Vectorization was done with the TfidfVectorizer class, with the standard English stop-words list. Additionally, terms having a document frequency lower than 20 were ignored. Finally, classifiers were coded using their corresponding sklearn built-in classes, e.g. LogisticRegression, KNeighborsClassifier, MultinomialNB, etc.

4.4 Experiments and Results

This subsection describes the experimental work, which was divided into two different scenarios. In the first one, we performed experiments in accordance with the original eRisk pilot task definition, using the described chunks. However, since this definition assumes, by using chunks, that the total number of a user's writings is known in advance (which is not true in a dynamic environment such as social media), we decided to also consider a second type of experiment, simulating a more realistic scenario in which each user's history was processed as a stream, one writing at a time.

4.4.1 Scenario 1 - original setting, incremental chunk-by-chunk classification

Since there were only two, barely overlapped, categories, we decided to start by fixing the SS3 framework's λ and ρ hyper-parameters to 1. In fact, we also carried out some tests with other values, which improved the precision (or recall) but worsened the ERDE measure. Model selection was done by 4-fold cross-validation on the training data, minimizing the ERDE measure while applying a grid search on the σ hyper-parameter. This grid search was carried out at three different levels of precision: in the first level, σ took values over a coarse grid; once the best value was found, a second-level grid search was performed over a finer grid around it; finally, a third, even finer, search was applied around the new best value.
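The coarse-to-fine search can be sketched as follows (ranges, step counts, and the toy objective are illustrative stand-ins for the cross-validated ERDE of the real model selection):

```python
def three_level_search(objective, lo=0.0, hi=1.0, steps=10, levels=3):
    """Coarse-to-fine grid search over a single hyper-parameter, in the
    spirit of the three-level search described above."""
    width = (hi - lo) / steps
    best = lo
    for _ in range(levels):
        grid = [lo + i * width for i in range(steps + 1)]
        best = min(grid, key=objective)
        # Zoom in around the current best value for the next level.
        lo, width = max(best - width, 0.0), width / steps
    return best


# Toy objective with its minimum at sigma = 0.47.
found = three_level_search(lambda s: (s - 0.47) ** 2)
```

Each level shrinks the grid spacing by an order of magnitude around the current best value, so three levels locate the optimum to roughly three decimal places at a fraction of the cost of one dense grid.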

    ERDE_5                          ERDE_50
NLPISA        15.59%        NLPISA        15.59%
CHEPEA        14.75%        LyRE          13.74%
GPLC          14.06%        CHEPEA        12.26%
LyRE          13.74%        GPLC          12.14%
UNSLA         13.66%        UQAMD         11.98%
UQAMD         13.23%        UArizonaD     10.23%
UArizonaB     13.07%        FHDO-BCSGA     9.69%
FHDO-BCSGB    12.70%        UNSLA          9.68%
SS3           12.70%        SS3            8.12%
SS3           12.60%        SS3            7.72%
Table 2: Results on the test set in accordance with the original eRisk pilot task (using chunks).
          ERDE_5   ERDE_10  ERDE_30  ERDE_50  ERDE_75  ERDE_100    F1     P      R      Time
LOGREG    11.7%    10.9%     9.4%     7.5%     6.3%     5.8%      0.53   0.41   0.75   71.3m
SVM       12.0%    10.9%     9.1%     7.2%     6.1%     6.0%      0.55   0.47   0.69   73.9m
MNB       10.6%    10.4%    10.4%    10.4%    10.1%    10.1%      0.24   0.14   1      17.5m
KNN       12.6%    10.4%     8.5%     8.2%     7.9%     7.7%      0.35   0.22   0.90  100.6m
SS3       11.0%     9.8%     8.0%     7.2%     5.8%     5.5%      0.54   0.42   0.77    3.7m
SS3       11.1%     9.9%     8.1%     7.3%     5.9%     5.6%      0.55   0.42   0.81    3.7m
Table 3: Results on the test set using a more realistic scenario in which writings are processed sequentially.

After the grid search, using the hyper-parameter configuration with the lowest ERDE_50 value, we finally trained our model on the whole training set and performed the classification of the subjects in the test set. Additionally, the classification of the test set was carried out applying two different classification policies, similar to what was intuitively introduced in subsubsection 3.1.1: the first one classified a subject as positive if the accumulated positive confidence value became greater than the negative one; the second one was more comprehensive and classified a subject as positive when the first condition was met, or when the change in the positive slope was at least four times greater than the negative one, i.e. the positive value increased at least 4 times faster (for readers interested in the implementation details for this scenario, the classification algorithm is given in the next section). The obtained results are shown in Table 2, compared against each institution's best ERDE_5 and ERDE_50 among all the 30 submissions (the full list is available at early.irlab.org/2017/task.html). It can be seen that our SS3 variants obtained both the best ERDE_5 (12.60%) and the best ERDE_50 (7.72%). Additionally, standard timeless measures (F1, precision and recall) were also computed for both variants: SS3 had the 7th best F1 value (0.54) out of the 30 contributions and was quite above the average, which is not bad taking into account that hyper-parameters were selected with the aim of minimizing ERDE, not the F1 measure.

4.4.2 Scenario 2 - modified setting, incremental post-by-post classification

As said earlier, each chunk contained 10% of the subject's writing history, a value that for some subjects could be just a single post while for others hundreds or even thousands of them. Furthermore, the use of chunks assumes we know all the subject's posts in advance, which is not the case in real-life scenarios, in which posts are created over time. Therefore, in this new (more realistic) scenario, subjects were processed one writing (post) at a time (in a stream-like way), not using chunks.

Given that we do not have previous results available from other participants under this new scenario, for comparison, we had to perform experiments not only with SS3 but also with other standard classifiers: Logistic Regression (LOGREG), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB) and k-Nearest Neighbors (k-NN). For all these standard methods, the policy to classify a stream as positive (depressed) was the same as the most effective policy used in (Losada & Crestani, 2016), that is, classify a subject as depressed when the classifier outputs a confidence value above 0.5.

As will be discussed in the next section, when classifying a subject in a streaming-like way, the execution cost of each classifier for each subject is quadratic with respect to the total number of the subject's writings —except for MNB and SS3, for which it is linear. Accordingly, if we had used cross-fold validation to find the parameters of each classifier that minimize the ERDE measure, it would have taken too much time: more than one hour for every single fold and for every single possible combination of parameter values (i.e. weeks or even months in total). Therefore, parameters were selected with the aim of optimizing, as usual, the standard F1 measure instead of ERDE.
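The quadratic-versus-linear gap has a simple arithmetic reading: a non-incremental classifier must re-process the whole history after each new writing (1 + 2 + … + n items in total), while an incremental one touches each writing exactly once. A minimal sketch:

```python
def cost_non_incremental(n):
    """Items processed by a classifier with no incremental support over
    a stream of n writings: it re-classifies the full history at each
    step, i.e. 1 + 2 + ... + n."""
    return n * (n + 1) // 2


def cost_incremental(n):
    """Items processed by an incremental classifier (like MNB or SS3
    here): each writing is processed exactly once."""
    return n
```

For a subject with 1,000 writings this is 500,500 versus 1,000 item evaluations, which is consistent with the large running-time gap reported in Table 3.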

Since the dataset was highly unbalanced, we optimized the penalty parameter, C, and the class-weight parameter for SVM and LOGREG; for MNB only the class weight was varied, while for k-NN we varied the k parameter. As in (Losada & Crestani, 2016), we set the majority-class (non-depressed) weight to 1 and varied the minority-class (depressed) weight. Also, following standard practice, we applied a grid search on the tuning parameters, with exponentially growing sequences for SVM, LOGREG and MNB; in the case of k-NN, k took values sequentially from 1 to 20.

Model selection was also done by 4-fold cross-validation on the training data, optimizing the F1 measure with respect to the minority class. The parameter configuration with the highest F1 was selected for each classifier: the best C and class weight for SVM (with L2 regularization); the best C and class weight for LOGREG (with L1 regularization); the best class weight for MNB; the best k for KNN; and the best σ, λ and ρ for SS3.

We trained the classifiers using the optimized parameters on the whole training dataset and then analyzed the incremental post-by-post classification. Writings were now processed sequentially, that is, early classification evaluation was carried out, as mentioned, one writing at a time. Additionally, we decided to compute the ERDE_o measure not only for o = 5 and 50 but also for o = 10, 30, 75 and 100 in order to have a wider view of how efficient classifiers are with respect to how early they classify subjects. The obtained results are shown in Table 3; the last column also includes the time each classifier required to classify all the subjects in the test set. As we can see, SS3 obtained the best F1 and ERDE values for all the considered o values except for ERDE_5. On the other hand, SS3 has a precision value (0.42) relatively similar to the best one (0.47), obtained by SVM. However, as we will discuss further in the next section, SS3 has a much more efficient computation time than the remaining algorithms. For instance, it took SVM more than one hour (73.9 min) to complete the classification of the test set, while it took SS3 a small fraction of that time (roughly 5.3%) to carry out the same task.

Figure 4: global value (green) in relation to the local value (orange) for the “depressed” category. The abscissa represents individual words arranged in order of frequency. Note that in the zone where stop words are located (close to 0 on the abscissa), the local value is very high (since they are highly frequent words) but the global value is almost 0, which is the desired behavior.
            F1     Precision   Recall
SS3        0.61      0.63       0.60
LOGREG     0.59      0.56       0.63
SVM        0.55      0.5        0.62
MNB        0.39      0.25       0.96
KNN        0.54      0.5        0.58
Table 4: Results on the test set using all of each subject's history as a single document, i.e. timeless classification.

It is interesting to notice that we also performed classification of the subjects on the test set using all of each subject's writings as if they were a single document (i.e. classical timeless classification); results are shown in Table 4. SS3 obtained the highest values for the F1 (0.61) and Precision (0.63) measures, possibly due to the flexibility its three hyper-parameters give it to discover important and discriminative terms. These results provide strong evidence that SS3 also achieves competitive performance when it is trained and tested to optimize standard (non-temporal) evaluation measures. Note that the best configuration of MNB obtained after the model selection stage, aimed at overcoming the unbalanced-dataset problem, tends to classify all subjects as depressed; that is the reason MNB had a Recall close to 1 (0.96) but a really poor Precision (0.25).

5 Analysis and Discussion

(a) Sized by global value
(b) Sized by raw frequency
Figure 5: Top-100 words selected by global value (GV) from the model trained for the eRisk pilot task using chunks. The font size is related to (a) GV and (b) raw frequency. The green color indicates the words selected only by GV, whereas the orange color indicates the words also selected by traditional Information Gain (IG).

From the experimental study of Subsection 4.4, we can conclude that the proposed framework shows remarkable performance in incremental classification for early depression detection tasks. It obtained the best results for the time-aware error measures specifically designed to combine classifier accuracy and a penalization for late classifications. In that context, it is important to notice that SS3 showed to be more effective than the other, more elaborate, approaches participating in the eRisk task, such as those based on Recurrent Neural Networks (like LSTM, GRU, etc.), graph-based models, ensembles of different classifiers, etc.

Regarding the support that SS3 provides for early classification, we can say that, even though the rules we used are very simple, they are more effective than the more elaborate and complex mechanisms used in the pilot task. For instance, some mechanisms to stop reading and classify a subject included complex decision procedures based on specific rules for different chunks (Villegas et al., 2017). These rules take into account the decisions of different classifiers, the probability that each classifier assigned to its prediction, “white lists” containing the words with the highest information gain, and other sources of information. Another approach that showed a good performance relied on hand-crafted rules specifically designed for this problem (Trotzek et al., 2017), of the form: “if the classifier's output exceeds a given threshold and the number of writings is below a given limit, then classify as positive”, “if the output is below another threshold and the number of writings exceeds a given limit, then classify as non-depressed”, etc.

As we can see, the two types of decision rules for early classification we used are much simpler than those mechanisms and, more importantly, they are problem-independent; yet, interestingly, they obtained better results in practice. It is true that more elaborate methods that simultaneously learn the classification model and the policy to stop reading could have been used, such as in (Dulac-Arnold et al., 2011; Yu et al., 2017). However, for the moment, it is clear that this very simple approach is effective enough to outperform the remaining methods, leaving the use of more elaborate approaches for future work.

In order to get a better understanding of the rationale behind the good behavior of our framework, it is important to go into more detail on the mechanisms used to weight words. In Figure 4 we can empirically corroborate that the global value correctly captures the significance and discriminating power of words since, as is well known, mid-frequency words in the distribution have both high significance and high discriminating power (as first hypothesized by Luhn (1958)), and the global values for these mid-frequency words are the highest.

This discriminating power of words can also be appreciated from a more qualitative point of view in the word clouds of the top-100 words selected by global value, shown in Figure 5. From this figure it is possible to observe that the most frequent terms, i.e. the biggest ones in (b), were also selected by IG (orange colored); however, most of the terms selected only by GV (green colored) are not so frequent, but highly discriminative. To highlight this point, note that GV included very general words (depression, suicidal, psychiatrist, anxiety, etc.) but, unlike IG, it also included many specific words. For instance: not only the word antidepressant was included but also well-known antidepressants such as Prozac and Zoloft (and, not included here, Lexapro at rank 125); not only general terms related to medicine or disorders (such as medication, meds, insomnia, panic, mania, etc.) but also more specific ones such as OCD (obsessive-compulsive disorder), PCOS (polycystic ovary syndrome), EDS (Ehlers-Danlos syndrome), CBT (cognitive behavioral therapy), serotonin, melatonin, Xanax, KP (Kaiser Permanente, a healthcare company), etc.; not only general words linked to diet, body or appearance (such as unattractive, skincare, makeup, acne, etc.) but also pimples, swelling, Keto (the ketogenic diet for depression), Stridex (an American acne treatment and prevention medicine), AHA (alpha hydroxy acids), BHA (beta hydroxy acid), moisturizer, NYX (a cosmetics company), Neutrogena (an American brand of skin care, hair care and cosmetics), etc. It is also worth mentioning that this is a vital and very relevant aspect: if we value these specific words, as is usual, only by their local probability or frequency (which is the case, for instance, with Multinomial Naive Bayes), as shown in (b), they will always have almost “no value” since, naturally, their probability of occurrence is extremely small compared to more general words (and even worse compared against stopword-like terms).
However, for instance, we intuitively know that the phrase “I'm taking antidepressants” has almost the same value as “I'm taking Prozac” when it comes to deciding whether the subject is depressed or not. Fortunately, this is correctly captured by the global value (note that, unlike in (b), the sizes of “antidepressants” and “Prozac” in (a), at the bottom and in the middle of it respectively, are quite similar, and not so different from the size of “depression”), since it was created to value terms, globally, according to how discriminative and relevant they are to each category.

Additionally, in order to better understand the good results obtained, another important aspect to analyze is how the early classification was actually carried out using the simplest policy to positively classify subjects. Figure 6 shows four subjects from the test set that illustrate four types of common classification behaviors we detected:

(a) subject 265 (labeled as non-depressed)
(b) subject 9306 (labeled as depressed)
(c) subject 9579 (labeled as depressed)
(d) subject 1914 (labeled as depressed)
Figure 6: Accumulated confidence values over time (chunk by chunk). Four typical behaviors are shown, represented by these four subjects from the test set.
  • from the first chunk on, the cumulative confidence value of one of the classes (negative in this case) stays above the other one, always growing faster. In this example, correctly, the subject was not classified as depressed after reading all of its chunks.

  • similar to the previous case, the value of one class (positive this time) always stays on top of the other one, but now both grow at a similar pace. The subject was correctly classified as depressed.

  • the accumulated negative confidence value starts out greater than the positive one, but as more chunks are read (specifically, starting after the 3rd chunk), the positive value begins growing and keeps doing so until it exceeds the negative one. In this case, the subject is classified as depressed after reading the 6th chunk.

  • this example behaves similarly to the previous one; however, the positive value, despite getting very close at chunk 8, never exceeds the negative one, which leads to subject 1914 being misclassified as negative.

With the aim of avoiding cases of misclassification like (d), we decided to implement the second classifier, whose policy also takes into account the changes in both slopes. As can be seen in Algorithm 3, and as mentioned before, this classifier additionally classifies a subject as positive if the positive slope changes at least four times faster than the negative one. Figure 7 shows subject 1914 again, this time including information about the changes in the slopes. Note that this subject was previously misclassified as not depressed because the accumulated positive value never exceeded the negative one; by adding this new extra policy, it is now correctly classified as positive after reading the 8th chunk (note the peak in the blue dotted line pointing out that, at this point, the positive value has grown around 11 times faster than the negative one).

Figure 7: subject 1914 (labeled as depressed). The ratio between the positive and the negative slope change is shown in blue (dotted line). This ratio was used by the policy.
function Classify-Subject()
     input: , a subject’s sequence of chunks
     local variables: , the subject confidence vector
                                , a chunk confidence vector
      where (negative, positive)
     for each in  do
          Classify-Chunk()
          
          if  or  then
               return subject is depressed
          else
               more evidence is needed                
     return
end function
Algorithm 3: SS3 classification algorithm, where Classify-Chunk() is actually Classify-At-Level(, 4).
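Algorithm 3 can be sketched in plain Python as follows. This is a minimal illustration, not the SS3 implementation: `classify_chunk` and the interpretation of the per-chunk confidence values as slope proxies are assumptions, and the threshold of 4 comes from the slope-ratio policy described above.

```python
# Hypothetical sketch of Algorithm 3. classify_chunk(chunk) is assumed
# to return a (negative, positive) confidence pair for one chunk, as
# Classify-Chunk() does in the pseudocode.

def classify_subject(chunks, classify_chunk, ratio=4.0):
    """Early classification: flag the subject as depressed as soon as
    (a) the accumulated positive value exceeds the negative one, or
    (b) the positive slope changes at least `ratio` times faster than
    the negative one. Otherwise, keep reading (more evidence needed)."""
    acc_neg, acc_pos = 0.0, 0.0
    for chunk in chunks:
        d_neg, d_pos = classify_chunk(chunk)  # per-chunk increments (slopes)
        acc_neg += d_neg
        acc_pos += d_pos
        if acc_pos > acc_neg or d_pos >= ratio * d_neg:
            return "depressed"
        # else: more evidence is needed, continue with the next chunk
    return "non-depressed"
```

Note that, as discussed in the error analysis below, condition (b) alone can fire on tiny changes; requiring the positive change to also exceed a small constant would make the sketch more robust.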
(a) subject 834 (non-depressed).
(b) subject 1345 (depressed).
(c) subject 2673 (depressed).
(d) subject 748 (non-depressed).
Figure 8: Accumulated confidence values over time (writing by writing). Four common error cases, represented by these four subjects.

From the previous analysis, it is clear that useful information can be obtained from the study of those cases where our approach was not able to correctly predict a class. With this goal in mind, we also carried out an error analysis and identified four common error cases which could be divided into two groups: those that arise from bad labeling of the test set and those that arise from bad classifier performance. In Figure 8 we exemplify each case with one subject from the test set, described in more detail below:

  • (a) the subject is misclassified as positive since the accumulated positive value exceeded the negative one. When we manually analyzed cases like these, we often found that the classifier was correctly accumulating positive evidence, since the users were, in fact, apparently depressed.

  • (b) in cases like this one, subjects were misclassified as negative since SS3 did not accumulate any (or accumulated very little) positive evidence. Manually analyzing the writings, we often could not find any positive evidence either, since the subjects were talking about topics not related to depression (sports, music, etc.).

  • (c) there were cases, like this subject, in which SS3 failed to predict “depression” because the accumulated positive value was not able to exceed the negative one, even though in some cases it got very close. Note that the positive value gets really close to the negative one at around the 100th writing (perhaps a finer tuning of hyper-parameters would overcome this problem).

  • (d) this type of error occurred only due to the addition of the slope-ratio policy. In some cases, SS3 misclassified subjects as positive because, while it was true that the positive value changed at least 4 times faster than the negative one, the condition held mainly because the negative change was very small. For instance, if the change in the negative confidence value was 0.01, a really small positive change of at least 0.04 would be enough to trigger the “classify as positive” decision (perhaps this could be fixed by also requiring the positive or negative change to be larger than a fixed constant, say 1, before applying the policy). This problem can be detected in this subject by the blue dotted peak at around the 60th writing, indicating that “the positive slope changed around five times faster than the negative one” there, and therefore the subject was misclassified as positive. However, note that this positive change was in fact really small (less than 1).

Finally, we believe it is appropriate to highlight another highly desirable aspect of our framework: its descriptive capacity. As mentioned previously, most standard and state-of-the-art classifiers act as black boxes (i.e. the classification process is not self-explainable) and therefore humans are not able to naturally interpret the reasons behind a classification. However, this is a vital aspect, especially when the task involves sensitive or risky decisions in which, usually, people are involved. Figure 9 shows an example of a piece of what could be a visual description of the classification process for subject 9579 (note that this is the same subject used in the example shown in Figure 2, in subsubsection 3.1.1; interested readers can see the relation between the green/positive curve there and the color intensity of each writing shown in (a)). In this example, (a) shows a painted piece of the subject’s writing history that system users could use to identify which writings were involved, and to what degree, in the decision-making (classification) process. If the user wanted to further analyze, let us say, writing 60 in more detail, the same process could be applied at two lower levels, as shown in (b) and (c) for sentences and words, respectively. It is worth mentioning that, since this “visual explanation” process can be easily automated, we have developed an online live demo, specially built for this purpose, available at http://tworld.io/ss3. There, users can try out a version of SS3 trained on tweets for topic classification that, along with the classification result, gives a visual explanation.


  Writing 54 I’m going to agree with everyone else and say you definitely need a lawyer. Get in touch with the[…]
  Writing 55 You don’t mention what the fertility issue is (and you don’t have to) but his feelings may stem fr[…]
 
  Writing 59 Thankfully I was able to realize that I was in a bad place and get help. My sister has been awesom[…]
  Writing 60 I have been seeing a therapist which I think is helping a little. Fact is, I was feeling really depressed[…]
  Writing 61 My Wife Wants a Divorce . This will be long, sorry in advance. My wife told me shortly after the[…]
  Writing 62 the Earth Arena coming up I have: Zelnite x2 Dilma Ophelia For my last spot, should I use Miku […]
 

(a) Subject 9579’s history - writing level

I have been seeing a therapist which I think is helping a little. Fact is, I was feeling really depressed and wanting to kill myself. I spent basically all of Feb in the hospital[…]

(b) Writing 60 - sentence level

I have been seeing a therapist which I think is helping a little. Fact is, I was feeling really depressed and wanting to kill myself. I spent basically all of Feb in the hospital[…]

(c) Writing 60 - word level
Figure 9: This figure shows how a visual description of the decision process could be given in this depression detection task. As we mentioned before, our framework allows us to analyze the reasons behind its classification decision, at different levels: (a) writings, (b) sentences and (c) words, etc. Each one of these blocks is painted proportionally to the real positive confidence values we obtained after the experiments.
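The word-level painting in Figure 9 could be automated along these lines. This is a toy sketch under assumptions: the `(word, confidence)` pairs, the helper name, and the green color are all hypothetical, chosen only to illustrate mapping confidence values to highlight intensity.

```python
# Hypothetical sketch: wrap each word in an HTML span whose background
# opacity is proportional to its positive confidence value, producing a
# "painted" rendering like the one in Figure 9 (c).

def highlight(words_with_confidence):
    spans = []
    for word, conf in words_with_confidence:
        spans.append(
            '<span style="background: rgba(0,128,0,%.2f)">%s</span>'
            % (conf, word)  # higher confidence -> more intense green
        )
    return " ".join(spans)
```

The same idea applies at the sentence and writing levels by aggregating word confidences before painting.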

5.1 Computational Complexity

As shown in Table 3, SS3 is an efficient method in terms of computation time. This is due to the fact that, unlike most state-of-the-art classifiers, SS3 does not necessarily “see” the input as an atomic document vector that must be computed entirely before making a prediction. Consequently, when working with a sequence of documents, classifiers such as SVM, LOGREG, and KNN must re-compute the input vector each time new content is added to the sequence.

Formally, if n is the length of the sequence, when working with classifiers like SS3 or MNB, the cost of the early classification algorithm for every subject, in terms of the number of processed documents, is equal to n (since each document needs to be processed only once). On the other hand, for classifiers like SVM, LOGREG, KNN or (non-recurrent) Neural Networks, this cost is equal to n + (n−1) + (n−2) + ⋯ + 1 = n(n+1)/2 (since the first document needs to be processed n times, the second n−1 times, the third n−2 times, and so on). Therefore, using Big O notation, MNB and SS3 belong to O(n) whereas the other classifiers belong to O(n²). Finally, it is worth mentioning that, as pointed out in the previous section, this cost affects not only the classification stage but also severely affects previous stages such as hyper-parameter and model optimization, since they need to classify the validation set several times (paying the cost every time).
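The gap between the two costs can be made concrete with two toy counters (hypothetical names; they count processed documents, nothing more): an incremental classifier touches each newly arrived document once, while a non-incremental one must re-process the whole sequence at every step.

```python
# Illustration of the O(n) vs O(n^2) cost gap, counting how many
# document-processing operations each kind of classifier performs
# over a sequence of n documents.

def incremental_cost(n):
    # SS3/MNB-like: each document is processed exactly once
    return sum(1 for _ in range(n))          # = n

def non_incremental_cost(n):
    # SVM/LOGREG/KNN-like: at step i the first i documents are re-processed
    return sum(i for i in range(1, n + 1))   # = n(n+1)/2
```

For a user with 100 writings this is 100 operations versus 5050, and the gap widens quadratically as the stream grows.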

It is worth noting that the difference in terms of space complexity is also very significant. For classifiers supporting incremental classification, like SS3 or MNB, only a small vector needs to be stored for each user. For instance, when using SS3 we only need to store the confidence vector of every user (in the case of EDD, a 2-dimensional vector) and then simply update it as more content is created. However, when working with classifiers that do not support incremental classification, for every user we need to store either all her/his writings, to build the document-term matrix, or the already-computed document-term matrix, to update it as new content is added. Note that storing either all the documents or a d × v document-term matrix, where d is the number of documents and v the vocabulary size, takes up much more space than a small 2-dimensional vector.
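The per-user state needed for incremental classification can be sketched as follows (hypothetical names; the real SS3 state is richer): one small confidence vector per user, updated in place as new writings arrive, with no document-term matrix kept anywhere.

```python
# Sketch of incremental per-user storage: a (negative, positive)
# confidence vector per user, updated as new content is created.

confidence = {}  # user id -> [negative, positive] accumulated values

def update_user(user, writing_confidence):
    """Accumulate the confidence values of a newly arrived writing."""
    acc = confidence.setdefault(user, [0.0, 0.0])
    acc[0] += writing_confidence[0]
    acc[1] += writing_confidence[1]
    return acc
```

The memory footprint is two floats per user, independent of how many writings each user has produced.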

Finally, since online social media platforms typically have thousands or millions of users, paying a quadratic cost to process each one, while having to store either all the writings or a large matrix for every user, renders classifiers that do not support incremental classification unscalable.

5.2 Implications and Clinical Considerations

As stated in (Guntuku et al., 2017): “Automatic detection methods may help to identify depressed or otherwise at-risk individuals through the large-scale passive monitoring of social media, and in the future may complement existing screening procedures”. In that context, our proposal is a potential tool with which systems could be built in the future for large-scale passive monitoring of social media, helping to detect early traces of depression by analyzing users’ linguistic patterns; for instance, filtering users and presenting possible candidates, along with rich and interactive visual information, for mental health professionals to analyze manually. The “large-scale passive monitoring” aspect would be supported by the incremental (only one small vector, the confidence vector, needs to be stored for each user) and highly parallelizable nature of SS3, while the “rich and interactive visual information” aspect would be supported by its white-box nature.

It is clear that this work does not pursue the goal of autonomous diagnosis but rather aims to be a complementary tool to other well-established methods of mental health care. As a matter of fact, several ethical and legal questions about data ownership and protection, and about how to effectively integrate this type of approach into systems of care, are still open research problems (Guntuku et al., 2017).

The dataset used in this task had the advantage of being publicly available and played an important role in determining how the use of language is related to the EDD problem. However, it exhibits some limitations from a methodological/clinical point of view. Beyond the potential “noise” introduced by the method used to assess the “depressed”/“non-depressed” condition, it lacks some extra information that could be very valuable for the EDD problem. For instance, in other datasets, such as the one used in (De Choudhury et al., 2013a) for the detection of depression in social media (Twitter, in this case), in addition to the text of the interactions (tweets), other extremely valuable information for this type of pathology was also available, such as the scores obtained in different depression tests (CES-D and BDI) and information about the user’s network of contacts and interaction behavior (such as an insomnia index and posting patterns), among others.

It is clear that, if we had had this additional information available, it would have been possible to obtain, among other things, a more reliable assessment of depressive people and their severity levels of depression, and also to detect some mediating factors, like environmental changes, that may not be directly available in the users’ posts. Besides, this information could also be used to train other models and integrate their predictions with the ones obtained using only textual information, for instance, via some late-fusion ensemble approach.

Finally, although the clinical interpretability of the results was only addressed collaterally in our work, it is important to clarify some important points. First of all, it was interesting to observe that most of the top-100 words relevant to the “depression” class, as identified by our model, fit perfectly with the usual themes identified in other, more clinical, studies on depression (De Choudhury et al., 2013a), such as “symptoms”, “disclosure”, “treatment” and “relationships-life”. Interestingly, we also noticed what might be a new group of words: those linked to multiplayer online video games (as can be seen in Figure 5, words linked to the popular video game “Dota”, such as “Dota”, “MMR”, “Wards”, “Mana”, “Rune”, “Gank”, “Heroes” and “Viper”). However, a reliable analysis of this requires multidisciplinary work with mental health professionals that is out of the scope of the present work. On the other hand, the graphs of accumulated confidence values over time (chunk-by-chunk or writing-by-writing) shown in Figures 6, 7 and 8 are intended to show how lexical evidence (learned from the training data and given by the global value) is accumulated over time for each class, and how it is used to decide when there is enough evidence to identify a subject as “depressed”. These figures should not be (mis)interpreted as trying to capture mood shifts or other typical behaviors of depressive people.

6 Conclusions and Future Work

In this article, we proposed SS3, a novel text classifier that can be used as a framework to build systems for early risk detection (ERD). SS3’s design aims to deal, in an integrated manner, with three key challenging aspects of ERD: incremental classification of sequential data, support for early classification, and explainability. In this context, we focused here on the first two aspects, with SS3 achieving a remarkable performance (lowest error measure) in the experimental work using a very simple criterion for early classification. SS3 showed better results than state-of-the-art methods, with a more computationally efficient (O(n)) incremental classification process, in two different scenarios: incremental chunk-by-chunk and incremental post-by-post classification. An additional interesting aspect was that it relied neither on (domain-specific) hand-crafted features nor on complex and difficult-to-understand mechanisms for early classification. SS3’s virtue of being domain-independent contrasts with other effective algorithms for EDD, which would require a costly process to adapt them to different problems. Beyond that, we also showed, with some intuitive examples, that the incremental/hierarchical nature of SS3 offers interesting support for explaining its rationale.

SS3 is a general and flexible framework that opens many research lines for future work. However, for the sake of clarity, we will focus here only on the more direct/evident ones.

We believe that extending the predictive model by incorporating information related to non-linear aspects of human behavior, such as mood shifts, could help to capture when depression symptoms “wax and wane”. This, for example, could help to detect when symptoms worsen as a means to prevent possible suicide or, if the subject is already diagnosed, to detect when applied therapy is not working. Having access to a dataset with this type of behavioral information would allow us in the future to integrate it into our EDD framework through, for example, a late-fusion ensemble approach.

Besides the limitations described in Subsection 5.2, e.g. those caused by not using information other than text for classification, another limitation of the present work is that we used words as the basic building blocks (i.e. each writing was processed as a Bag of Words) on which our approach builds higher-level blocks (like sentences and paragraphs). However, different types of terms could have been used instead. For instance, word n-grams could have helped us to detect important expressions (or collocations) that are impossible to identify as separate words, such as the “kill myself” in (c). Thus, in the future, we will measure how SS3 performs using other types of terms as well.
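The idea of using word n-grams as terms can be sketched in a few lines (plain Python, with the helper name being an assumption): sliding a window of n tokens over the text yields terms like “kill myself” that single-word tokenization would split apart.

```python
# Minimal sketch of word n-gram extraction: returns the list of
# n-token expressions occurring in the text, lowercased.

def word_ngrams(text, n=2):
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
```

With n=2, a writing containing “wanting to kill myself” produces the single term “kill myself”, which a model can then weight as one unit of lexical evidence.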

In the section “Analysis and Discussion” we observed that the global value was a good estimator of word relevance for each category. We believe that this ability of the global value to weight words could also play an important role as a feature selection method and, therefore, we will compare it against well-known feature selection approaches such as information gain and chi-square (χ²), among others.

Additionally, the framework’s flexibility and incremental nature allow SS3 to be extended in very different ways. Some possible alternatives could be the implementation of more elaborate summary operators and more effective early stopping criteria. Besides, with the aim of helping users interpret the reasons behind a classification more easily, for instance, mental health professionals not familiar with the underlying computational aspects, we plan to continue working on better visualization tools.

Finally, the “domain-independent” characteristic of SS3 makes the framework amenable to other similar ERD tasks, such as anorexia, rumor or pedophile detection, among others. However, there is no impediment to using SS3 in other general author-profiling tasks (such as gender, age or personality prediction) or even in standard text categorization tasks like, for instance, topic categorization.

References


  • Almeida et al. (2017) Almeida, H., Briand, A., & Meurs, M.-J. (2017). Detecting early risk of depression from social media user-generated content. In Proceedings Conference and Labs of the Evaluation Forum CLEF.
  • American Psychiatric Association (2013) American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Pub.
  • De Choudhury et al. (2013a) De Choudhury, M., Counts, S., & Horvitz, E. (2013a). Social media as a measurement tool of depression in populations. In Proceedings of the 5th Annual ACM Web Science Conference (pp. 47–56). ACM.
  • De Choudhury et al. (2013b) De Choudhury, M., Gamon, M., Counts, S., & Horvitz, E. (2013b). Predicting depression via social media. ICWSM, 13, 1–10.
  • Dean & Ghemawat (2008) Dean, J., & Ghemawat, S. (2008). Mapreduce: Simplified data processing on large clusters. Communications of the ACM, 51, 107–113.
  • Dulac-Arnold et al. (2011) Dulac-Arnold, G., Denoyer, L., & Gallinari, P. (2011). Text classification: a sequential reading approach. In European Conference on Information Retrieval (pp. 411–423). Springer.
  • Escalante et al. (2016) Escalante, H. J., Montes-y-Gómez, M., Villaseñor-Pineda, L., & Errecalde, M. L. (2016). Early text classification: a Naive solution. In Proceedings of NAACL-HLT (pp. 91–99). Association for Computational Linguistics.
  • Escalante et al. (2017) Escalante, H. J., Villatoro-Tello, E., Garza, S. E., López-Monroy, A. P., Montes-y Gómez, M., & Villaseñor-Pineda, L. (2017). Early detection of deception and aggressiveness using profile-based representations. Expert Systems with Applications, 89, 99–111.
  • Farías-Anzaldúa et al. (2017) Farías-Anzaldúa, A. A., Montes-y Gómez, M., López-Monroy, A. P., & González-Gurrola, L. C. (2017). UACH-INAOE participation at eRisk2017. In Proceedings Conference and Labs of the Evaluation Forum CLEF.
  • Guntuku et al. (2017) Guntuku, S. C., Yaden, D. B., Kern, M. L., Ungar, L. H., & Eichstaedt, J. C. (2017). Detecting depression and mental illness on social media: an integrative review. Current Opinion in Behavioral Sciences, 18, 43 – 49. Big data in the behavioural sciences.
  • Iskandar (2017) Iskandar, B. S. (2017). Terrorism detection based on sentiment analysis using machine learning. Journal of Engineering and Applied Sciences, 12, 691–698.
  • Kwon et al. (2017) Kwon, S., Cha, M., & Jung, K. (2017). Rumor detection over varying time windows. PloS one, 12, e0168344.
  • López-Monroy et al. (2018) López-Monroy, A. P., González, F., Montes-y Gómez, M., Escalante, H. J., & Solorio, T. (2018). Early text classification using multi-resolution concept representations. In The 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. NAACL HLT.
  • Losada & Crestani (2016) Losada, D. E., & Crestani, F. (2016). A test collection for research on depression and language use. In International Conference of the Cross-Language Evaluation Forum for European Languages (pp. 28–39). Springer.
  • Losada et al. (2017) Losada, D. E., Crestani, F., & Parapar, J. (2017). eRisk 2017: CLEF lab on early risk prediction on the internet: Experimental foundations. In International Conference of the Cross-Language Evaluation Forum for European Languages (pp. 346–360). Springer.
  • Loyola et al. (2018) Loyola, J. M., Errecalde, M. L., Escalante, H. J., & y Gomez, M. M. (2018). Learning when to classify for early text classification. Revised Selected Papers. Communications in Computer and Information Science (CCIS), Springer, 790, 24–34.
  • Luhn (1958) Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of research and development, 2, 159–165.
  • Ma et al. (2016) Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, B. J., Wong, K.-F., & Cha, M. (2016). Detecting rumors from microblogs with recurrent neural networks. In IJCAI (pp. 3818–3824).
  • Ma et al. (2015) Ma, J., Gao, W., Wei, Z., Lu, Y., & Wong, K.-F. (2015). Detect rumors using time series of social context information on microblogging websites. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (pp. 1751–1754). ACM.
  • Malam et al. (2017) Malam, I. A., Arziki, M., Bellazrak, M. N., Benamara, F., El Kaidi, A., Es-Saghir, B., He, Z., Housni, M., Moriceau, V., Mothe, J. et al. (2017). Irit at e-risk. In Proceedings Conference and Labs of the Evaluation Forum CLEF.
  • Mariñelarena-Dondena et al. (2017) Mariñelarena-Dondena, L., Ferretti, E., Maragoudakis, M., Sapino, M., & Errecalde, M. L. (2017). Predicting depression: a comparative study of machine learning approaches based on language usage. Panamerican Journal of Neuropsychology, 11.
  • National Center for Health Statistics (2019) National Center for Health Statistics (2019). Mortality in the United States, 2017. https://www.cdc.gov/nchs/products/databriefs/db328.htm. [Online; accessed 13-April-2019].
  • Powers (1998) Powers, D. M. (1998). Applications and explanations of zipf’s law. In Proceedings of the joint conferences on new methods in language processing and computational natural language learning (pp. 151–160). Association for Computational Linguistics.
  • Rude et al. (2004) Rude, S., Gortner, E.-M., & Pennebaker, J. (2004). Language use of depressed and depression-vulnerable college students. Cognition & Emotion, 18, 1121–1133.
  • Sadeque et al. (2017) Sadeque, F., Xu, D., & Bethard, S. (2017). Uarizona at the clef erisk 2017 pilot task: Linear and recurrent models for early depression detection. In Proceedings Conference and Labs of the Evaluation Forum CLEF.
  • Schwartz & Ungar (2015) Schwartz, H. A., & Ungar, L. H. (2015). Data-driven content analysis of social media: a systematic overview of automated methods. The ANNALS of the American Academy of Political and Social Science, 659, 78–94.
  • Shen et al. (2017) Shen, Y., Huang, P.-S., Gao, J., & Chen, W. (2017). Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1047–1055). ACM.
  • Stirman & Pennebaker (2001) Stirman, S. W., & Pennebaker, J. W. (2001). Word use in the poetry of suicidal and nonsuicidal poets. Psychosomatic medicine, 63, 517–522.
  • Trotzek et al. (2017) Trotzek, M., Koitka, S., & Friedrich, C. M. (2017). Linguistic metadata augmented classifiers at the clef 2017 task for early detection of depression. In Proceedings Conference and Labs of the Evaluation Forum CLEF.
  • Tsugawa et al. (2015) Tsugawa, S., Kikuchi, Y., Kishino, F., Nakajima, K., Itoh, Y., & Ohsaki, H. (2015). Recognizing depression from twitter activity. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI 2015, Seoul, Republic of Korea, April 18-23, 2015 (pp. 3187–3196).
  • Villatoro-Tello et al. (2017) Villatoro-Tello, E., Ramírez-de-la Rosa, G., & Jiménez-Salazar, H. (2017). UAM’s participation at CLEF eRisk 2017 task: Towards modelling depressed bloggers. In Proceedings Conference and Labs of the Evaluation Forum CLEF.
  • Villegas et al. (2017) Villegas, M. P., Funez, D. G., Garciarena Ucelay, M. J., Cagnina, L. C., & Errecalde, M. L. (2017). Lidic - unsl’s participation at erisk 2017: Pilot task on early detection of depression. In Proceedings Conference and Labs of the Evaluation Forum CLEF.
  • World Health Organization (2014) World Health Organization (2014). Preventing suicide: a global imperative. WHO.
  • World Health Organization (2017) World Health Organization (2017). Depression and other common mental disorders: global health estimates. WHO.
  • Xing et al. (2010) Xing, Z., Pei, J., & Keogh, E. (2010). A brief survey on sequence classification. ACM Sigkdd Explorations Newsletter, 12, 40–48.
  • Yu et al. (2017) Yu, A. W., Lee, H., & Le, Q. V. (2017). Learning to Skim Text. ArXiv e-prints, . arXiv:1704.06877.
  • Yu et al. (2018) Yu, K., Liu, Y., Schwing, A. G., & Peng, J. (2018). Fast and accurate text classification: Skimming, rereading and early stopping. In ICLR 2018 Workshop. URL: https://openreview.net/forum?id=ryZ8sz-Ab.
  • Zipf (1949) Zipf, G. K. (1949). Human Behaviour and the Principle of Least Effort. Addison-Wesley.