How Epidemic Psychology Works on Social Media: Evolution of responses to the COVID-19 pandemic

07/26/2020 ∙ by Luca Maria Aiello, et al.

Disruptions resulting from an epidemic might often appear to amount to chaos but, in reality, can be understood in a systematic way through the lens of "epidemic psychology". According to the father of this research field, Philip Strong, an epidemic is not only biological; it also carries the potential for three social epidemics: of fear, moralization, and action. This work is the first study to empirically test Strong's model at scale. It does so by studying the use of language in 39M social media posts in the US about the COVID-19 pandemic, which is the first pandemic to spread this quickly not only on a global scale but also online. We identified three distinct phases, which parallel Kübler-Ross's stages of grief. Each of them is characterized by different regimes of the three social epidemics: in the refusal phase, people refused to accept reality despite the increasing numbers of deaths in other countries; in the suspended reality phase (which started after the announcement of the first death in the country), people's fear translated into anger about the looming feeling that things were about to change; finally, in the acceptance phase (which started after the authorities imposed physical-distancing measures), people found a "new normal" for their daily activities. Our real-time operationalization of Strong's model makes it possible to embed epidemic psychology in any real-time model (e.g., epidemiological and mobility models).


Introduction

In our daily lives, our dominant perception is of order. But every now and then chaos threatens that order: epidemics dramatically break out, revolutions erupt, empires suddenly fall, and stock markets crash. Epidemics, in particular, present not only collective health hazards but also special challenges to mental health and public order that need to be addressed by the social and behavioral sciences [1]. Almost 30 years ago, in the wake of the AIDS epidemic, Philip Strong, the founder of the sociological study of epidemic infectious diseases, reflected: “the human origin of epidemic psychology lies not so much in our unruly passions as in the threat of epidemic disease to our everyday assumptions.” [2] In the recent COVID-19 pandemic [3] (an ongoing pandemic of a coronavirus disease), it has been shown that the main source of uncertainty and anxiety has indeed come from the disruption of what Alfred Schutz called the “routines and recipes” of daily life [4] (e.g., every simple act, from eating at work to visiting our parents, takes on new meanings).

Yet, the chaos resulting from an epidemic turns out to be more predictable than one would initially expect. Philip Strong observed that any new health epidemic results in three social epidemics: of fear, moralization, and action. The epidemic of fear represents the fear of catching the disease, which comes with suspicion of alleged disease carriers and, in turn, may spark panic and irrational behavior. The epidemic of moralization is characterized by moral responses both to the viral epidemic itself and to the epidemic of fear, which may result in either positive reactions (e.g., cooperation) or negative ones (e.g., stigmatization). The epidemic of action accounts for the rational or irrational changes of daily habits that people make in response to the disease or as a result of the two other social epidemics. Strong was writing in the wake of the AIDS/HIV crisis, but he based his model on studies that went back to Europe’s Black Death in the 14th century. Importantly, he showed that these three social epidemics are created by language and incrementally fed through it: language transmits the fear that the infection is an existential threat to humanity and that we are all going to die; language depicts the epidemic as a verdict on human failings and as a divine moral judgment on minorities; and language shapes the means through which people collectively intend to act, however pointless, against the threat.

Hitherto, there has never been any large-scale empirical study of whether the use of language during an epidemic reflects Strong’s model, not least because of the lack of data. COVID-19 has recently changed that: it has been the first epidemic in history in which people around the world have been collectively expressing their thoughts and concerns on social media. As such, researchers have had an unprecedented opportunity to study this epidemic in new ways: social media posts have been analyzed in terms of content and behavioral markers [5, 6] and used to track the diffusion of COVID-related information [7] and misinformation [8, 9, 10, 11]. Search queries have suggested specific information-seeking responses to the pandemic [12]. Psychological responses to COVID-19, however, have been studied mostly through surveys [13, 14]; there has not yet been any large-scale empirical study of real-time psycho-linguistic responses to the COVID-19 pandemic in the United States.

With this opportunity at hand, we set out to test, for the first time, whether Strong’s model held during COVID-19, and did so by studying the use of language on social media at the unprecedented scale of an entire country: the United States. After operationalizing Strong’s model using lexicons for psycholinguistic text analysis, and upon collecting 39M tweets about the epidemic posted from February to April, we conducted a quantitative analysis of the differences in language style and a thematic analysis of the actual social media posts. The period of analysis starts from the first stages of the pandemic and ends on the day the federal government announced a plan for reopening the country. During this time, we discovered three distinct phases, which parallel Kübler-Ross’s stages of grief [15]. Each phase is characterized by different regimes of the three social epidemics. In the first phase (the refusal phase), the social epidemic of fear began. Despite increasing numbers of deaths in other countries, people in the US refused to accept reality: they feared the uncertainty created by the disruption of what was considered to be “normal”; focused their moral concerns on others, distancing themselves from them; yet, despite all this, refused to change the normal course of action. After the announcement of the first death in the country, the second phase (the suspended reality phase) began: the social epidemic of fear intensified, while the epidemics of morality and action kicked off abruptly. People expressed more anger than fear about the looming feeling that things were about to change; focused their moral concerns on themselves, reckoning with what was happening; and suspended their daily activities. After the authorities imposed physical-distancing measures, the third phase (the acceptance phase) took over: the epidemic of fear started to fade away, while the epidemics of morality and action turned into more constructive and forward-looking social processes. People expressed more sadness than anger or fear; focused their moral concerns on the collective, promoting pro-social behavior; and found a “new normal” for their daily activities, which ended up being their “normal” activities but physically restricted to their homes and neighborhoods.

The ability to systematically characterize the three social epidemics from the use of language on social media makes it possible to embed epidemic psychology into models currently used to tackle epidemics such as mobility models [16]. To see how, consider that, in digital epidemiology [17, 18], some parameters of epidemic models are initialized or adjusted based on a variety of digital data to account for co-determinants of the spreading process that are hard to quantify with traditional data sources, especially in the first stages of the outbreak. This is particularly useful when modeling social and psychological processes such as risk perception [19, 20]. As a result, in addition to managing the health risk posed by an epidemic, with our operationalization at hand, scientists and, more generally, countries should be equally able to manage potential plagues of fear, morality, and pointless actions.

Results

Epidemic | Keywords | Supporting literature | Language category (lexicon) | Max. peak 1 | Max. peak 2 | Max. peak 3
Fear | emotional maelstrom | These LIWC categories have been used to analyze complex emotional responses to traumatic events (PTSD) and to characterize the language of people suffering from mental health conditions [21] | swear (liwc) | .03 | .54 | .14
 | | | anger (liwc) | .07 | .14 | .16
 | | | negemo (liwc) | .09 | .17 | .02
 | | | sadness (liwc) | -.09 | .04 | .19
Fear | fear | Fear-related words like the ones included in Emolex have often been used to measure fear of both tangible and intangible threats [22, 23] | fear (emolex) | .07 | .05 | .03
 | | | death (liwc) | .45 | .16 | .03
Fear | anxiety, panic | The anxiety category of LIWC has been used to study different forms of anxiety in social media [24] | anxiety (liwc) | .31 | .46 | -.15
Fear | disorientation | By definition, the tentative category of LIWC expresses uncertainty [25] | tentative (liwc) | .02 | .10 | .03
Fear | suspicion | Suspicion is often formalized as lack of trust [26] | trust (liwc) | -.04 | .03 | .12
Fear | irrationality | The negate category of LIWC has been used to measure cognitive distortions and irrational interpretations of reality [27] | negate (liwc) | .08 | .15 | .07
Fear | religion | Religious expressions from LIWC have been used to study how people appeal to religious entities during moments of hardship [28] | religion (liwc) | .12 | .17 | .22
Fear | contagion | These LIWC categories were used to study the perception of diseases in several types of communities: cancer support groups, people affected by eating disorders, and alcoholics [29, 30, 31] | body (liwc) | .01 | .27 | .13
 | | | feel (liwc) | -.04 | .34 | .03
Moralization | warn, risk avoidance, risk perception | A LIWC category used to model risk perception connected to epidemics [5] | risk (liwc) | .04 | .15 | .08
Moralization | polarization, segregation | Different personal pronouns have been used to study in- and out-group dynamics and to characterize language markers of racism [32]; personal pronouns and markers of differentiation have been considered in studies on racist language [33] | I (liwc) | -.10 | .49 | .26
 | | | we (liwc) | -.09 | .24 | .22
 | | | they (liwc) | .03 | .02 | .10
 | | | differ (liwc) | .01 | .08 | .05
Moralization | stigmatization, blame, abuse | Pronouns I and they were used to quantify blame in personal [34] and political contexts [35]. Hate speech is associated with they-oriented statements [36] | (same categories as previous row) | | |
Moralization | cooperation, coordination, collective consciousness | The moral value of care expresses the will of protecting versus hurting others [37]. Cooperation is often verbalized by referencing the in-group and by expressing affiliation, or sense of belonging [38] | affiliation (liwc) | -.16 | .25 | .22
 | | | care (moral virtue) | -.09 | .12 | .19
 | | | prosocial (prosocial) | -.10 | .07 | .25
Moralization | faith in authority | The moral value of authority expresses the will of playing by the rules of a hierarchy versus challenging it [38] | authority (moral virtue) | -.04 | .11 | .13
Moralization | authority enforcement | The power category of LIWC expresses exertion of dominance [25] | power (liwc) | -.01 | .02 | .14
Action | restrictions, travel, privacy | Daily habits concern mainly people’s experience of home, work, leisure, and movement between them [39] | motion (liwc) | .02 | .04 | .06
 | | | home (liwc) | -.15 | .38 | .20
 | | | work (liwc) | -.05 | .04 | .22
 | | | social (liwc) | -.06 | .09 | .08
 | | | leisure (liwc) | .05 | .16 | .10
Table 1: Operationalization of Strong’s epidemic psychology theoretical framework. From Strong’s paper, three annotators extracted keywords that characterize the three social epidemics and mapped them to relevant language categories from existing language lexicons used in psychometric studies. Category names are followed by the name of their corresponding lexicon in parentheses. We support the association between keywords and language categories with examples of supporting literature. To summarize how the use of the language categories varies across the three temporal states, we computed the peak values of the different language categories (days when their standardized fractions reached their maximum) and reported the percentage increase at peak compared to the average over the whole time period; in each row, the maximum value is highlighted in bold.

Coding Strong’s model

Back in the 1990s, Philip Strong was able not only to describe the psychological impact of epidemics on social order but also to model it. He observed that the early reaction to major fatal epidemics is a distinctive psycho-social form that can be modeled along three main dimensions: fear, morality, and action. During a large-scale epidemic, basic assumptions about social interaction and, more generally, about social order are disrupted, more specifically, by the fear of others, by competing moralities, and by the responses to the epidemic. Crucially, all three of these elements are created, transmitted, and mediated by language: language transmits fears, elaborates on the stigmatization of minorities, and shapes the means through which people collectively respond to the epidemic [2, 40, 41].

We operationalized Strong’s epidemic psychology theoretical framework in two steps. First, three authors hand-coded Strong’s seminal paper [2] using line-by-line coding [42] to identify keywords that characterize the three social epidemics. For each of the three social epidemics, the three authors generated independent lists of keywords that were conservatively combined by intersecting them. The words that were left out by the intersection were mostly synonyms (e.g., “catching disease” as a synonym for “contagion”), so we did not discard any important concept. According to Strong, the three social epidemics are intertwined and, as such, the concepts that define one specific social epidemic might be relevant to the remaining two as well. For example, suspicion is an element of the epidemic of fear but is tightly related to stigmatization as well, a phenomenon that Strong describes as typical of the epidemic of moralization. In our coding exercise, we adhered as much as possible to the description in Strong’s paper and obtained a strict partition of keywords across social epidemics. In the second step, the same three authors mapped each of these keywords to language categories, namely sets of words that reflect how these concepts are expressed in natural language (e.g., words expressing anger or trust). We took these categories from existing language lexicons widely used in psychometric studies: the Linguistic Inquiry Word Count (LIWC) [25], Emolex [43], the Moral Foundation Lexicon [37], and the Prosocial Behavior Lexicon [44]. The three authors grouped similar keywords together and mapped groups of keywords to one or more language categories. This grouping and mapping procedure was informed by previous studies that investigated how these keywords are expressed through language. These studies are listed in Table 1.
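The conservative combination step can be illustrated with a toy example, where each annotator's keyword list is a set and only keywords proposed by all three survive. The lists below are invented placeholders, not the study's actual codes:

```python
# Hypothetical keyword lists from three annotators for the epidemic of fear;
# the study's actual lists are not reproduced here.
fear_annotator_1 = {"fear", "panic", "suspicion", "contagion"}
fear_annotator_2 = {"fear", "panic", "suspicion", "catching disease"}
fear_annotator_3 = {"fear", "panic", "suspicion", "irrationality"}

# Conservative combination: keep only the keywords proposed by all three.
fear_keywords = fear_annotator_1 & fear_annotator_2 & fear_annotator_3
```

Words dropped by the intersection (here, "contagion" and its synonym "catching disease") are the near-duplicates mentioned above, so no important concept is lost.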

Temporal analysis

To find occurrences of these language categories in our Twitter data, we matched them against the text of each tweet. We considered that a tweet contains a language category if at least one of the tweet’s words (or word stems) belongs to that category. For each day, we computed the fraction of users who posted at least one tweet containing a given language category over the total number of users who tweeted during that day. We experimentally checked that each day had a number of data points sufficient to obtain valid metrics (i.e., the minimum number of distinct users per day is above 72K across the whole period of study). To allow for a fair comparison across categories, we z-standardized each fraction by computing the number of standard deviations from the fraction’s whole-period average.
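As an illustrative sketch of this metric, assuming a toy word set standing in for a LIWC category and simple whitespace tokenization (the real lexicons also match word stems), the daily standardized fractions could be computed as follows:

```python
from collections import defaultdict

ANXIETY = {"worried", "nervous", "afraid"}  # toy stand-in for a LIWC category

def daily_standardized_fractions(tweets, lexicon):
    """tweets: iterable of (day, user, text) triples.
    Returns {day: z-standardized fraction of users using the category}."""
    users_total = defaultdict(set)
    users_match = defaultdict(set)
    for day, user, text in tweets:
        users_total[day].add(user)
        words = set(text.lower().split())
        if words & lexicon:  # tweet contains at least one category word
            users_match[day].add(user)
    # fraction of that day's active users who used the category at least once
    fracs = {d: len(users_match[d]) / len(users_total[d]) for d in users_total}
    # z-standardize: deviations from the whole-period average
    mean = sum(fracs.values()) / len(fracs)
    var = sum((f - mean) ** 2 for f in fracs.values()) / len(fracs)
    std = var ** 0.5 or 1.0  # guard against a constant series
    return {d: (f - mean) / std for d, f in fracs.items()}
```

This is a minimal sketch, not the authors' pipeline; at the paper's scale the same computation would be done with a dataframe library over per-day groupings.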

Figure 1 (A-C) shows how the standardized fractions of all the language categories changed over time. The cell color encodes values higher than the average in red, and lower values in blue. We partitioned the language categories according to the three social epidemics. To identify phases characterized by different combinations of the language categories, we determined change-points: periods in which the standardized fractions vary considerably across all categories at once. To quantify such variations, we computed the daily average squared gradient of the standardized fractions of all the language categories. The squared gradient is a measure of the rate of instantaneous change (increase or decrease) at a given point in a time series [45]. Figure 1 D shows the value of the average squared gradient over time; peaks in the curve represent days of high local variation. We marked the peaks above one standard deviation from the mean as change-points. We found two change-points that coincide with two key events: the announcement of the first infection in the country (in February) and the announcement of the ‘stay at home’ orders (in March). These change-points identify three phases, which are described next by dwelling on the peaks of the different language categories (days when their standardized fractions reached their maximum) and reporting the percentage increase at peak (the increase is compared to the average over the whole period of study, and its peak is denoted by ‘max peak’ in Table 1). The first phase (the refusal phase) was characterized by anxiety and fear. Death was frequently mentioned, with a peak on February 11 of +45% compared to its average during the whole time period. The pronoun they was used in this phase more than average, which suggests that the focus of discussion was on the implications of the viral epidemic for ‘others’, as this was when no infection had been discovered in the US yet. All other language categories exhibited no significant variations, reflecting an overall situation of ‘business-as-usual.’
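The change-point rule above can be sketched in a few lines, assuming plain Python lists of z-scored daily values and a central-difference gradient (the paper cites [45] for the squared-gradient measure; this is an illustration, not the authors' implementation):

```python
def change_points(series_list):
    """series_list: list of equally long z-scored time series (lists of floats).
    Returns the day indices flagged as change-points."""
    n = len(series_list[0])
    # central-difference squared gradient at each interior day,
    # averaged over all language-category series
    avg_sq_grad = []
    for t in range(1, n - 1):
        grads = [((s[t + 1] - s[t - 1]) / 2.0) ** 2 for s in series_list]
        avg_sq_grad.append(sum(grads) / len(grads))
    mean = sum(avg_sq_grad) / len(avg_sq_grad)
    std = (sum((v - mean) ** 2 for v in avg_sq_grad) / len(avg_sq_grad)) ** 0.5
    # flag peaks above one standard deviation from the mean
    # (+1 restores the original day index after clipping the boundaries)
    return [t + 1 for t, v in enumerate(avg_sq_grad) if v > mean + std]
```

A sharp level shift in the input series produces a burst of high squared gradient, so the flagged days cluster around the shift, mirroring the two change-points found in Figure 1 D.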

The second phase (the suspended reality phase) began in February with an outburst of negative emotions (predominantly anger), right after the first COVID-19 contagion in the US was announced. The abstract fear of death was replaced by expressions of concrete health concerns, such as words expressing risk and mentions of how body parts felt. In March, the federal government announced the state of national emergency, followed by the enforcement of state-level ‘stay at home’ orders. During those days, we observed a sharp increase in the use of the pronoun I and of swear words (with a peak of +54% in March), which hints at a climate of discussion characterized by conflict and polarization. At the same time, we observed an increase in the use of words related to the daily habits affected by the impending restriction policies, such as motion, social activities, and leisure. Mentions of words related to home peaked in March (+38%), on the day the federal government announced social distancing guidelines to be in place for at least two weeks.

The third phase (the acceptance phase) started in March, the day after the first physical-distancing measures were imposed by law. The increased use of words of power and authority likely reflected the emergence of discussion around the new policies enforced by government officials and public agencies. As the death toll rose steadily, hitting the mark of 1,000 people in March, expressions of conflict faded away, and words of sadness became predominant. In those days of hardship, a sentiment of care for others and expressions of prosocial behavior became more frequent (+19% and +25%, respectively). Last, mentions of work-related activities peaked as many people either lost their jobs or were compelled to work from home as a result of the lockdown.

Figure 1: The Epidemic Psychology on Twitter. (A-C) Evolution of the use of different language categories over time in tweets related to COVID-19. Each row in the heatmaps represents a language category (e.g., words expressing anxiety) that our manual coding associated with one of the three social epidemics. The cell color represents the daily standardized fraction of people who used words related to that category: values higher than the average are red, and lower values are blue. Categories are partitioned into three groups according to the type of social epidemic they model: Fear, Morality, and Action. (D) Average gradient (i.e., instantaneous variation) of all the language categories; the peaks of the gradient identify change-points, dates around which a considerable change in the use of multiple language categories happened at once. The dashed vertical lines that cross all the plots represent these change-points. (E-H) Temporal evolution of four families of indicators we used to corroborate the validity of the trends identified by the language categories. We checked internal validity by comparing the language categories with a custom keyword-search approach and with two deep-learning NLP tools that extract types of social interactions and mentions of medical symptoms. We checked external validity by looking at mobility patterns in different venue categories as estimated by the GPS geo-localization service of the Foursquare mobile app. The timeline at the bottom of the figure marks some of the key events of the COVID-19 pandemic in the US, such as the announcement of the first recorded COVID-19 infection.

Thematic analysis

The language categories capture broad concepts related to Strong’s epidemic psychology theory, but they do not allow for an analysis of the fine-grained topics within each category. To study these topics, for each of the 87 combinations of language category and phase (29 language categories across 3 phases), we listed the 100 most retweeted tweets (e.g., the most popular tweets containing anxiety words posted in the refusal phase). To identify overarching themes, we followed two steps that are commonly adopted in thematic analysis [46, 47]. We first applied open coding to identify key concepts that emerged across multiple tweets; specifically, one of the authors read all the tweets and marked them with keywords that reflected the key concepts expressed in the text. We then used axial coding to identify relationships between the most frequent keywords and to summarize them in semantically cohesive themes. Themes were reviewed in a recursive rather than linear manner, re-evaluating and adjusting them as new tweets were parsed. Table 2 summarizes the most recurring themes, together with some of their representative tweets. The thematic analysis revealed that the topics discussed in the three phases resemble the five stages of grief [15]: the refusal phase was characterized by denial, the suspended reality phase by anger mixed with bargaining, and the acceptance phase by sadness together with forbearance. More specifically, in the refusal phase, statements of skepticism were re-tweeted widely (Table 2, row 1). The epidemic was frequently depicted as a “foreign” problem (r. 2), and all activities continued business as usual (r. 3).

In the suspended reality phase, the discussion was characterized by outrage against three main categories: foreigners (r. 4), political opponents (r. 5), and people who adopted different behavioral responses to the outbreak (r. 6). This level of conflict corroborates Strong’s postulate of the “war against each other”. Science and religion were two prominent topics of discussion. A lively debate raged around the validity of scientists’ recommendations (r. 7). Some social groups put their hopes on God rather than on science (r. 8). Mentions of people self-isolating at home became very frequent, and highlighted the contrast between judicious individuals and careless crowds (r. 9).

Finally, during the acceptance phase, the outburst of anger gave way to the sorrow caused by the mourning of thousands of people (r. 10). By accepting the real threat of the virus, people were more open to finding collective solutions to the problem and overcoming fear with hope (r. 11). Although a positive attitude towards the authorities seemed prevalent, some people expressed disappointment with the imposed restrictions (r. 12). Those who were isolated at home started imagining a life beyond the isolation, especially in relation to reopening businesses (r. 13).

Theme Example tweets

The refusal phase
1 denial “Less than 2% of all cases result in death. Approximately equivalent to seasonal flu. Relax people.”
2 they-focus “We will continue to call it the #WuhanVirus, which is exactly what it is.”
3 business as usual “Agriculture specialists at Dulles airport continue to protect our nation’s vital agricultural resources.”
The suspended reality phase
4 anger vs. foreigners “Is there anything you won’t use to stir up hatred against the foreigner? #COVID19 is a global pandemic.”
5 anger vs. political opponents “A new level of sickness has entered the body politic. The son of the monster mouthing off grotesque lies about Dems cheering #coronavirus and Wall Street crashing because we want an end to his father’s winning streak.”
6 anger vs. each other “Coronavirus or not, if you are ill, stay the f**k home. You’re not a hero for going to work when you are unwell.”
7 science debate “When it comes to how to fight #CoronavirusPandemic, I’m making my decisions based on healthcare professionals like Dr. Fauci and others, not political punditry”
8 religion “no problem is too big for God to handle […] with God’s help, we will overcome this threat.”
9 I-focus, home “People get upset and annoyed at me when I tweet about the coronavirus, when I urge people to stay in and avoid crowds”, “I am in the high risk category for coronavirus so do me a favor […] beg others to stay at home”
The acceptance phase
10 sadness “We deeply mourn the 758 New Yorkers we lost yesterday to COVID-19. New York is not numb. We know this is not just a number—it is real lives lost forever.”
11 we-focus, hope “We are thankful for Japan’s friendship and cooperation as we stand together to defeat the #COVID19 pandemic.”, “During tough times, real friends stick together. The U.S. is thankful to #Taiwan for donating 2 million face masks to support our healthcare ”, “Now more than ever, we need to choose hope over fear. We will beat COVID-19. We will overcome this. Together.”
12 authority “You can’t go to church, buy seeds or paint, operate your business, run on a beach, or take your kids to the park. You do have to obey all new ‘laws’, wear face masks in public, pay your taxes. Hopefully this is over by the 4th of July so we can celebrate our freedom.”
13 resuming work “We need to help as many working families and small businesses as possible. Workers who have lost their jobs or seen their hours slashed and families who are struggling to pay rent and put food on the table need help immediately. There’s no time to waste.”
Table 2: Recurring themes in the three phases, found by means of thematic analysis of tweets. Themes are paired with examples of popular tweets.

Comparison with other behavioral markers

To assess the validity of our approach, we compared the previous results with the output of alternative text-mining techniques applied to the same data (internal validity), and with people’s mobility in the real world (external validity).

Marker family | Marker | Most correlated language categories | Correlation with refusal | Correlation with suspended reality | Correlation with acceptance
Custom words | Alcohol | body (0.70), feel (0.62), home (0.58) | -0.43 | 0.46 | -0.12
Custom words | Economic | anxiety (0.73), negemo (0.68), negate (0.56) | -0.12 | 0.37 | -0.53
Custom words | Exercising | affiliation (0.95), posemo (0.93), we (0.92) | -0.62 | 0.31 | 0.89
Interactions | Conflict | anxiety (0.88), death (0.57), negemo (0.54) | 0.58 | -0.24 | -0.92
Interactions | Support | affiliation (0.98), posemo (0.96), we (0.94) | -0.68 | 0.37 | 0.90
Interactions | Power | prosocial (0.95), care (0.94), authority (0.94) | -0.48 | 0.18 | 0.88
Medical | Physical health | swear (0.83), feel (0.77), negate (0.67) | -0.66 | 0.81 | -0.32
Medical | Mental health | affiliation (0.91), we (0.88), posemo (0.85) | -0.65 | 0.36 | 0.85
Mobility | Travel | death (0.59), anxiety (0.58) | 0.62 | -0.32 | -0.82
Mobility | Grocery | I (0.80), leisure (0.72), home (0.64) | -0.77 | 0.70 | 0.29
Mobility | Outdoors | sad (0.68), posemo (0.65), affiliation (0.59) | -0.62 | 0.39 | 0.72
Table 3: (Left) Correlation of our language categories with behavioral markers computed with alternative techniques and datasets. For each marker, the three categories with the strongest correlations are reported, together with their Pearson correlation values in parentheses. (Right) Pearson correlation between the values of our behavioral markers and being in a given phase or not. Values in bold indicate the highest value for each marker across the three phases. All reported correlations are statistically significant.

Comparison with other text mining techniques

We processed the very same social media posts with three alternative text-mining techniques (Figure 1 E-G). In Table 3, we reported the three language categories with the strongest correlations with each behavioral marker.

First, to allow for interpretable and explainable results, we applied a simple word-matching method that relies on a custom lexicon containing three categories of words reflecting alcohol consumption, physical exercise, and economic concerns, as these aspects have been found to characterize the COVID-19 pandemic [48]. We measured the daily fraction of users mentioning words in each of those categories (Figure 1 E). In the refusal phase, the frequency of these words did not significantly increase. In the suspended reality phase, the frequency of words related to the economy peaked, and that of words related to alcohol consumption peaked shortly after. Table 3 shows that economy-related words were most highly correlated with the use of anxiety words (0.73), which is in line with studies indicating that the degree of apprehension about the declining economy was comparable to that of health-hazard concerns [49, 50]. Words of alcohol consumption were most correlated with the language categories of body (0.70), feel (0.62), and home (0.58); in the period when health concerns were at their peak, home isolation caused a rising tide of alcohol use [51, 52]. Finally, in the acceptance phase, the frequency of words related to physical exercise increased significantly; this happened at the same time as the use of positive words expressing togetherness was at its highest: affiliation (0.95), posemo (0.93), we (0.92). All these results match our previous interpretations of the peaks of our language categories.
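The associations reported here and in Table 3 are Pearson correlations between pairs of daily time series. For reference, a minimal implementation of the coefficient:

```python
def pearson(x, y):
    """Pearson correlation between two equally long sequences of floats."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Applied to, say, the daily fraction of users mentioning economy-related words and the daily standardized fraction of the anxiety category, this yields the kind of coefficient listed in Table 3.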

Second, since it is unclear whether a standard word-count analytic system would allow for the distinction among the three different types of social epidemics, we used a deep-learning Natural Language Processing tool that mines conversations according to how humans understand them in the real world [53]. The tool can classify any textual message according to types of interaction that are close to human-level understanding. In particular, we studied over time the three types most frequently found: expressions of conflict (expressions of contrast or diverging views), social support (emotional aid and companionship), and power (expressions that denote or describe a person’s power over the behavior and outcomes of another). Figure 1 F shows the min-max normalized scores of the fraction of people posting tweets labeled with each of these three interaction types. In the refusal phase, conflict increased; this is when anxiety and blaming foreigners were recurring themes on Twitter. In the suspended reality phase, conflict peaked (correlating with anxiety words at 0.88); yet, since this is when the first lockdown measures were announced, initial expressions of power and of social support gradually increased as well. Finally, in the acceptance phase, social support peaked. Support was most correlated with the categories of affiliation (0.98), positive emotions (0.96), and we (0.94) (Table 3); power was most correlated with prosocial (0.95), care (0.94), and authority (0.94). Again, our previous interpretations concerning the existence of a phase of conflict followed by a phase of social support were further confirmed by the deep-learning tool, which, as opposed to our dictionary-based approaches, does not rely on word matching.

Third, we used a deep-learning tool that extracts mentions of medical entities from text [54]. When applied to a tweet, the tool accurately extracts medical symptoms in the form of n-grams from the tweet's text (e.g., "cough", "feeling sick"). Out of all the entities extracted, we focused on the 100 most frequently mentioned and grouped them into two families of symptoms: those related to physical health (e.g., "fever", "cough", "sick") and those related to mental health (e.g., "depression", "stress") [3]. The min-max normalized fractions of people posting tweets containing mentions of these symptoms are shown in Figure 1 G. In the refusal phase, the frequency of symptom mentions did not change. In the suspended reality phase, instead, physical symptoms started to be mentioned, and they were correlated with the language categories expressing panic and physical health concerns: swear (), feel (), and negate (). In the acceptance phase, mentions of mental symptoms became most frequent. Interestingly, mental symptoms peaked when the Twitter discourse was characterized by positive feelings and prosocial interactions: affiliation (), we (), and posemo (); this is in line with recent studies that found that the psychological toll of COVID-19 has traits similar to post-traumatic stress disorders and that its symptoms might lag several weeks behind the period of initial panic and forced isolation [55, 56, 57].

Comparison with mobility traces

To test for the external validity of our language categories, we compared their temporal trends with mobility data. We used the data collection that Foursquare made publicly available in response to the COVID-19 crisis through the visitdata.org website. The data consists of the daily number of people in the US visiting each of 35 venue types, as estimated by the GPS geo-localization service of the Foursquare mobile app. We picked three venue categories (Grocery shops, Travel & Transport, and Outdoors & Recreation) to reflect three different types of fundamental human needs [58]: the primary need of getting food supplies, the secondary need of moving around freely (or to limit mobility for safety), and the higher-level need of being entertained. In Figure 1 H, we show the min-max normalized number of visits over time. The periods of highest variation of the normalized number of visits match the transitions between the three phases. In the refusal phase, people's mobility did not change. In the suspended reality phase, instead, travel started to drop, and grocery shopping peaked, supporting the interpretation of a phase characterized by a wave of panic-induced stockpiling and a compulsion to save oneself rather than to help others; it co-occurred with the peak use of the pronoun I (). Finally, in the acceptance phase, the panic around grocery shopping faded away, and the number of visits to parks and outdoor spaces increased.

Embedding epidemic psychology in real-time models

To embed our operationalization of epidemic psychology into real-time models (e.g., epidemiological models, urban mobility models), our measures need to work at any point in time during a new pandemic, yet, given their current definitions, they do not: that is because they are normalized values over the whole period of study (Figure 1 A-C). To fix that, we designed a new composite measure that does not rely on full temporal knowledge, and a corresponding detection method that determines which of the three phases one is in at any given point in time.

For each phase, this parsimonious measure combines the language dimensions that most positively and most negatively characterize the phase. More specifically, it is composed of two dimensions: the dimension most positively associated with the phase (expressed in percent change) minus the one most negatively associated with it (e.g., (death - I) for the refusal phase).

To identify such dimensions, we trained three logistic regression binary classifiers (one per phase) that use the percent changes of all the language dimensions at time t to estimate the probability that t belongs to a given phase. On average, the classifiers were able to identify the correct phase for 98% of the days. The regression coefficients were then used to rank the language categories by their predictive power. Table 4 shows the top three positive beta coefficients and the bottom three negative ones for each of the three phases. For each phase, we subtracted the bottom category from the top category without considering their beta coefficients, as these would require, again, full temporal knowledge. The top and bottom categories of all phases belong to the LIWC lexicon.

The resulting composite measure has change-points (Figure 2) similar to the full-knowledge measure's (Figure 1), suggesting that the real-time and parsimonious computation does not compromise the original trends. In a real-time scenario, transitions between phases are captured by changes of the dominant measure; for example, when the refusal curve is overtaken by the suspended reality curve. In addition, we correlated the composite measures with each of the behavioral markers we used for validation (Figure 1 E-H) to find which markers are most typical of each of the phases. We reported the correlations in Table 3. During the refusal phase, conflictual interactions were frequent () and long-range mobility was common (); during the suspended reality phase, as mobility reduced [59, 60], people hoarded groceries and alcohol [51, 52] and expressed concerns for their physical health () and for the economy [49, 50]; last, during the acceptance phase, people ventured outdoors, started exercising more, and expressed a stronger will to support each other (), in the wake of a rising tide of deaths and mental health symptoms [55, 56, 57].
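The dominant-measure rule described above can be sketched as follows (a minimal illustration with made-up daily scores; the function names are ours, not the paper's):

```python
# Sketch of real-time phase detection: on each day, the current phase is the
# one whose composite measure dominates; a transition is simply the day on
# which one curve overtakes another. Scores below are toy values.

def current_phase(scores):
    """scores: dict mapping phase name -> composite value for one day."""
    return max(scores, key=scores.get)

def phase_timeline(daily_scores):
    """daily_scores: list of per-day dicts; returns the detected phase per day."""
    return [current_phase(day) for day in daily_scores]

days = [
    {"refusal": 1.2, "suspended reality": 0.3, "acceptance": 0.1},
    {"refusal": 0.8, "suspended reality": 1.1, "acceptance": 0.2},  # overtaken
    {"refusal": 0.2, "suspended reality": 0.9, "acceptance": 1.5},
]
print(phase_timeline(days))  # -> ['refusal', 'suspended reality', 'acceptance']
```

Because the rule needs only the current day's three scores, it runs online, with no knowledge of the full time period.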

Figure 2: Evolution of three measures representing the relative frequency of a selected subset of language categories associated with each of the three phases of refusal, suspended reality, and acceptance.
Phase             | Top positive                                     | Top negative
------------------|--------------------------------------------------|--------------------------------------------------
Refusal           | death (0.66), they (0.06), fear (0.04)           | I (-1.51), we (-1.27), home (-1.22)
Suspended reality | swear (2.17), feel (1.51), anxiety (1.46)        | death (-0.70), sadness (-0.51), prosocial (-0.38)
Acceptance        | sad (1.35), affiliation (1.19), prosocial (1.17) | anxiety (-1.62), swear (-1.36), I (-0.34)

Table 4: Top three positive and bottom three negative beta coefficients of the logistic regression models for the three phases. The top positive and top negative category of each phase are those included in our composite temporal score.

Discussion

Implications

New infectious diseases break out abruptly, and public health agencies try to rely on detailed planning yet often find themselves improvising around their playbook. They are constantly confronting not only the health epidemic but also the three social epidemics. Measuring the effects of epidemics on societal dynamics and population mental health has long been an open research problem, and multidisciplinary approaches have been called for [61]. As our method is easy to use and can be applied to any public stream of data, it has a direct practical implication: improving the ability to monitor whether people's behavior and perceptions align with or diverge from the expectations and recommendations of governments and experts, thus informing the design of more effective interventions [1]. Since our language categories are not tailored to a specific epidemic (e.g., they do not reflect any specific symptom an epidemic is associated with), our approach can be applied to a future epidemic, provided that the set of relevant hashtags associated with the epidemic is known; this is a reasonable assumption to make, considering that consensus on Twitter hashtags is reached quickly [62] and that several epidemics that occurred in the last decade sparked discussions on Twitter from their early days [63, 64, 65]. Our method could complement the numerous cross-sectional studies on the negative psychological impact of health epidemics [66, 3]. Those studies are usually conducted on a small to medium scale and are costly to carry out; our approach could integrate them with real-time insights from large-scale data. For computer science researchers, our method could provide a starting point for developing more sophisticated tools for monitoring social epidemics. Furthermore, from the theoretical standpoint, our work provides the first operationalization of Strong's theoretical model of epidemic psychology and shows its applicability to social media data.
Furthermore, starting from Strong's epidemic psychology, our analysis showed the emergence of phases that parallel Kuebler-Ross's stages of grief. This demonstrates the centrality of psychological responses to major life trauma, in parallel with any potential physical danger. Thus, future research could integrate and apply the two perspectives not just to pandemics but also to large-scale disasters and other tragedies. Finally, and more importantly, our real-time operationalization of Strong's model makes it possible, for the first time, to embed epidemic psychology in any real-time model.

Limitations

Future work could improve our work in five main aspects. First, we focused only on one viral epidemic, without being able to compare it to others. That is mainly because no other epidemic had an online scale comparable to COVID-19. Yet, if one were to obtain past social media data during the outbreaks of diseases like Zika [63], Ebola [64], and the H1N1 influenza [65], one could apply our methodology in those contexts as well, and identify similarities and differences. For example, one could study how mortality rates or speed of spreading influence the representation of Strong’s epidemic psychology on social media.

Second, our geographical focus was the entire United States and, as such, was coarse and limited in scope. Our collected data did not provide sufficient coverage for each individual US state. If we were to obtain such high-coverage data, we could relate differences between states to large-scale events (e.g., a governor's decisions, prevalence of cases, media landscape, and residents' cultural traits). In particular, recent studies suggested that the public reaction to COVID-19 varied across US states depending on their political leaning [67, 68]. One could also apply our methodology to other English-speaking countries to investigate how cultural dimensions [69] and cross-cultural personality trait variations [70] might influence the three social epidemics.

Third, the period of study is limited yet proved to be sufficient to discover a clear sequence of collective psychological phases. Future work could explore longer periods to ultimately assess the social epidemics’ long-term effects.

Fourth, our study is limited to Twitter, mainly because Twitter is the largest open stream of real-time social media data. The practice of using Twitter as a way of modeling the psychological state of a country carries its own limitations. Despite having a rather high penetration in the US (around 20% of adults, according to the latest estimates [71]), its user base is not representative of the general population [72]. Additionally, Twitter is notoriously populated by bots [73, 74], automated accounts that are often used to amplify specific topics or viewpoints. Bots have played an important role in steering the discussion on several events of broad public interest [75, 76], and it is reasonable to expect that they play a role in COVID-related discussions too, as some recent studies suggest [11]. To partly discount their impact, since bots tend to have anomalous levels of activity (especially retweeting [75]), we performed two tests. First, we computed all our measures at user level rather than tweet level, which counters anomalous levels of activity. Second, we replicated our temporal analysis excluding retweets and obtained very similar results. In the future, one could attempt to adapt our framework to different sources of online data, for example to web search queries, which have proven useful to identify different phases of the public reactions to the COVID-19 pandemic [77].

Last, as Strong himself acknowledged in his seminal paper: “any sharp separation between different types of epidemic psychology is a dubious business.” Our work has operationalized each social epidemic independently. In the future, modeling the relationships among the three epidemics might identify hitherto hidden emergent properties.

Methods

Twitter data collection

We collected tweets related to COVID-19 from two sources. First, from an existing dataset of 129,911,732 COVID-related tweets [78], we gathered 57,287,490 English tweets posted between February and April by 11,318,634 unique users. We augmented this dataset with our own collection of tweets obtained by querying the Twitter Streaming API continuously from March until April using a list of keywords aligned with the previous data collection [78]: coronavirus, covid19, covid_19, coronaviruslockdown, coronavirusoutbreak, herd immunity, herdimmunity. The Streaming API returns a sample of up to 1% of all tweets. This second crawl got us 96,576,543 English tweets. By combining the two collections, we obtained 143,325,623 unique English tweets posted by 17,862,493 users. As we shall discuss in the remainder of this section, we normalized all our measures so that they are not influenced by the fluctuating volume of tweets over time.

We focused our analysis on the United States, the country where Twitter penetration is highest. To identify Twitter users living in it, we parsed the free-text location description of their user profiles (e.g., "San Francisco, CA"). We did so by using a set of custom regular expressions that match variations of the expression "United States of America", as well as the names of 333 US cities and 51 US states (and their combinations). Albeit not always accurate, matching location strings against known location names is a tested approach that yields good results for coarse-grained localization at state or country level [79]. Overall, we located 3,710,489 unique users in the US who posted 38,950,828 tweets; this is the final dataset we used for the analysis.
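The location-matching step can be illustrated with a toy version of such regular expressions (the patterns below are a small hypothetical subset, not the paper's full list of 333 cities and 51 states):

```python
import re

# Illustrative sketch of coarse user localization: match free-text profile
# locations against known US names. The pattern list here is a tiny assumed
# subset for demonstration only.
US_PATTERNS = [
    r"\bunited states( of america)?\b", r"\busa?\b",
    r"\bcalifornia\b", r"\bnew york\b", r"\btexas\b",
    r",\s*(ca|ny|tx)\b",  # "City, ST" abbreviations
]
US_RE = re.compile("|".join(US_PATTERNS), re.IGNORECASE)

def is_us_location(profile_location):
    """True if the free-text profile location matches a known US name."""
    return bool(US_RE.search(profile_location))

print(is_us_location("San Francisco, CA"))  # True
print(is_us_location("London, UK"))         # False
```

As the paper notes, this kind of string matching is only approximate, but it is sufficient for country-level localization.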

The number of active users per day varies from a minimum of 72k in February to a maximum of 1.84M in March, with an average of 437k. The median number of tweets per user during the whole period is 2. A small number of accounts tweeted a disproportionately high number of times, reaching a maximum of 15,823 tweets; those were clearly automated accounts, which were discarded by our approach.

Language lexicons

We selected our language categories from four lexicons:

Linguistic Inquiry Word Count (LIWC) [25]. A lexicon of words and word stems grouped into over 125 categories reflecting emotions, social processes, and basic functions, among others. The LIWC lexicon is based on the premise that the words people use to communicate can provide clues to their psychological states [25]. It allows written passages to be analyzed syntactically (how the words are used together to form phrases or sentences) and semantically (an analysis of the meaning of the words or phrases).

Emolex [43]. A lexicon that classifies 6k+ words and stems into the eight primary emotions of Plutchik’s psychoevolutionary theory [80].

Moral Foundation Lexicon [37]. A lexicon of 318 words and stems, grouped into 5 categories of moral foundations [81]: harm, fairness, in-group, authority, and purity, each of which is further split into expressions of virtue or vice.

Pro-social behavior [44]. A lexicon of 146 pro-social words and stems, which have been found to be frequently used when people describe pro-social goals [44].

Language categories over time

We considered that a tweet contained a language category if at least one of the tweet's words or stems belonged to that category. The tweet-category association is binary and disregards the number of matching words within the same tweet. That is mainly because, in short snippets of text (tweets are limited to 280 characters), multiple occurrences are rare and do not necessarily reflect the intensity of a category [82]. For each language category c, we counted the number of users n_c(t) who posted at least one tweet at time t containing that category. We then obtained the fraction f_c(t) of users who mentioned category c by dividing n_c(t) by the total number of users N(t) who tweeted at time t:

f_c(t) = n_c(t) / N(t)    (1)

Computing the fraction of users rather than the fraction of tweets prevents biases introduced by exceptionally active users, thus capturing more faithfully the prevalence of different language categories in our Twitter population. This also helps discounting the impact of social bots, which tend to have anomalous levels of activity (especially retweeting [75]).
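A minimal sketch of this per-category counting (Equation 1), with a toy lexicon and toy tweets rather than the actual lexicons used in the paper:

```python
from collections import defaultdict

# Sketch of Equation (1): the fraction of active users per day whose tweets
# match at least one word of a lexicon category. A user is counted at most
# once per day per category (binary match), mirroring the paper's choice.
LEXICON = {"anxiety": {"worried", "nervous"}, "death": {"die", "dying", "deaths"}}

def daily_fractions(tweets):
    """tweets: list of (day, user, text). Returns {day: {category: fraction}}."""
    users_by_day = defaultdict(set)
    users_by_day_cat = defaultdict(lambda: defaultdict(set))
    for day, user, text in tweets:
        words = set(text.lower().split())
        users_by_day[day].add(user)
        for cat, vocab in LEXICON.items():
            if words & vocab:  # at least one matching word
                users_by_day_cat[day][cat].add(user)
    return {
        day: {cat: len(users_by_day_cat[day][cat]) / len(users)
              for cat in LEXICON}
        for day, users in users_by_day.items()
    }

tweets = [
    ("d1", "u1", "so worried about this"),
    ("d1", "u2", "nice weather"),
    ("d2", "u1", "deaths are rising"),
    ("d2", "u2", "worried and nervous"),
]
print(daily_fractions(tweets))
```

Counting users rather than tweets is what makes the measure robust to hyperactive accounts, as the paragraph above explains.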

Different categories might be verbalized with considerably different frequencies. For example, the language category "I" (first-person pronoun) from the LIWC lexicon naturally occurred much more frequently than the category "death" from the same lexicon. To enable a comparison across categories, we standardized all the fractions:

z_c(t) = (f_c(t) - μ_c) / σ_c    (2)

where μ_c and σ_c represent the mean and standard deviation of the f_c(t) scores over the whole time period (February to April). These z-scores also ease the interpretation of the results, as they represent the relative variation of a category's prevalence compared to its average: they take on values higher (lower) than zero when the original value is higher (lower) than the average.
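The standardization step can be sketched as follows (toy values; each category's z-scores are computed against the mean and standard deviation of its own full series, as in Equation 2):

```python
from statistics import mean, pstdev

# Sketch of Equation (2): turn a category's daily fractions into z-scores
# relative to the category's own mean and (population) standard deviation
# over the whole study period.
def standardize(series):
    mu, sigma = mean(series), pstdev(series)
    return [(x - mu) / sigma for x in series]

fractions = [0.10, 0.20, 0.30]      # toy daily fractions for one category
z = standardize(fractions)
print([round(v, 2) for v in z])     # -> [-1.22, 0.0, 1.22]
```

After this step, a value of 1.0 reads the same for every category: one standard deviation above that category's own average prevalence.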

Comparison with interaction types

We compared the results obtained via word-matching with a state-of-the-art deep learning tool for Natural Language Processing designed to capture fundamental types of social interactions from conversational language [53]. This tool uses Long Short-Term Memory neural networks (LSTMs) [83] that take as input a 300-dimensional GloVe representation of words [84] and output a series of confidence scores in the range [0, 1] that estimate the likelihood that the text expresses certain types of social interactions. The classifiers exhibited very high classification performance, up to an Area Under the ROC Curve (AUC) of 0.98. AUC is a performance metric that measures the ability of the model to assign higher confidence scores to positive examples (i.e., text characterized by the type of interaction of interest) than to negative examples, independent of any fixed decision threshold; the expected value for random classification is 0.5, whereas an AUC of 1 indicates perfect classification.

Out of the ten interaction types that the tool can classify [85], only three were detected frequently and with high likelihood in our Twitter data: conflict (expressions of contrast or diverging views [86]), social support (giving emotional or practical aid and companionship [87]), and power (expressions that mark a person's power over the behavior and outcomes of another [88]).

Given a tweet's textual message m and an interaction type i, we used the classifier to compute the likelihood score s_i(m) that the message contains that interaction type. We then binarized the confidence scores using a threshold-based indicator function:

I_i(m) = 1 if s_i(m) ≥ θ_i, and 0 otherwise    (3)

Following the original approach [53], we used a different threshold θ_i for each interaction type, as the distributions of their likelihood scores tend to vary considerably. We thus picked θ_i conservatively, as a high percentile of the distribution of the confidence scores s_i, thereby favoring precision over recall. Last, similar to how we constructed temporal signals for the language categories, we counted the number of users n_i(t) who posted at least one tweet at time t that contains interaction type i. We then obtained the fraction f_i(t) of users who used interaction type i by dividing n_i(t) by the total number of users N(t) who tweeted at time t:

f_i(t) = n_i(t) / N(t)    (4)

Last, we min-max normalized these fractions, considering the minimum and maximum values during the whole time period:

f'_i(t) = (f_i(t) - min_t f_i) / (max_t f_i - min_t f_i)    (5)
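A sketch of the per-type thresholding and binarization (the percentile value below is an assumed illustration, since the paper picks its own conservative percentile per interaction type):

```python
# Sketch of Equation (3): each interaction type gets its own threshold,
# chosen as a high percentile of its confidence-score distribution, favoring
# precision over recall. q=90 is an assumed value for illustration only.
def percentile(values, q):
    """Nearest-rank percentile, q in [0, 100]."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, int(round(q / 100 * (len(s) - 1)))))
    return s[idx]

def binarize(scores, q=90):
    theta = percentile(scores, q)  # per-type threshold
    return [1 if s >= theta else 0 for s in scores]

conflict_scores = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50, 0.60, 0.80, 0.95, 0.99]
labels = binarize(conflict_scores)
print(sum(labels), "of", len(labels), "tweets labeled as conflict")
```

Tying the threshold to each type's own score distribution is what keeps the three binary signals comparable even though their raw likelihoods differ considerably.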

Comparison with mentions of medical entities

To identify medical symptoms on Twitter in relation to COVID-19, we resorted to a state-of-the-art deep learning method for medical entity extraction [54]. When applied to tweets, the method extracts n-grams representing medical symptoms (e.g., "feeling sick"). The method is based on the Bi-LSTM sequence-tagging architecture introduced by Huang et al. [89], in combination with GloVe word embeddings [84] and RoBERTa contextual embeddings [90]. To optimize the entity-extraction performance on noisy textual data from social media, we trained its sequence-tagging architecture on the Micromed database [91], a collection of tweets manually labeled with medical entities. During training, we gradually halved the learning rate whenever there was no performance improvement after a fixed number of epochs, and stopped either after a maximum number of epochs or once the learning rate became too small. The final model achieved an F1-score of on Micromed. The F1-score is a performance measure that combines precision (the fraction of extracted entities that are actually medical entities) and recall (the fraction of medical entities present in the text that the method is able to retrieve). We based our implementation on Flair [92] and PyTorch [93], two popular deep learning libraries in Python.

For each unique medical entity e, we counted the number of users n_e(t) who posted at least one tweet at time t that mentioned that entity. We then obtained the fraction f_e(t) of users who mentioned entity e by dividing n_e(t) by the total number of users N(t) who tweeted at time t:

f_e(t) = n_e(t) / N(t)    (6)

Last, we min-max normalized these fractions, considering the minimum and maximum values during the whole time period:

f'_e(t) = (f_e(t) - min_t f_e) / (max_t f_e - min_t f_e)    (7)

Comparison with mobility traces

Foursquare is a local search and discovery mobile application that relies on users' past mobility records to recommend places they might like. The application uses GPS geo-localization to estimate the user's position and to infer the places they visited. In response to the COVID-19 crisis, Foursquare made publicly available the data gathered from a pool of 13 million US users. These users were "always-on" during the period of data collection, meaning that they allowed the application to gather geo-location data at all times, even when the application was not in use. The data (published through the visitdata.org website) consists of the daily number of people V_{v,s}(t) visiting any venue of type v in state s, starting from February to the present day (e.g., 419,256 people visiting Schools in Indiana on a given day in February). Overall, 35 distinct location categories are provided. To obtain country-wide temporal indicators, we first applied a min-max normalization to the values:

V'_{v,s}(t) = (V_{v,s}(t) - min_t V_{v,s}) / (max_t V_{v,s} - min_t V_{v,s})    (8)

We then averaged the values across all states:

V'_v(t) = (1/S) Σ_s V'_{v,s}(t)    (9)

where S is the total number of states. By weighting each state equally, we obtained a measure that is more representative of the whole US territory, rather than being biased towards high-density regions.
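The two normalization steps (Equations 8 and 9) can be sketched as follows, with toy visit counts for two illustrative states:

```python
# Sketch of Equations (8)-(9): per-state visit counts for a venue type are
# min-max normalized over time, then averaged with equal weight per state.
# "IN" and "CA" and the counts below are toy values for illustration.
def min_max(series):
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

def country_trend(visits_by_state):
    """visits_by_state: {state: daily visit counts}. Equal weight per state."""
    normalized = {s: min_max(v) for s, v in visits_by_state.items()}
    n_days = len(next(iter(normalized.values())))
    n_states = len(normalized)
    return [sum(normalized[s][t] for s in normalized) / n_states
            for t in range(n_days)]

grocery = {"IN": [100, 300, 200], "CA": [1000, 4000, 2500]}
print([round(v, 2) for v in country_trend(grocery)])  # -> [0.0, 1.0, 0.5]
```

Normalizing each state before averaging is what prevents populous states from dominating the country-wide trend.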

Time series smoothing

All our temporal indicators are affected by large day-to-day fluctuations. To extract more consistent trends out of our time series, we applied a smoothing function, a common practice when analyzing temporal data extracted from social media [94]. Given a time-varying signal y(t), we apply a "boxcar" moving average over a window of the previous k days:

ỹ(t) = (1/k) Σ_{i=0}^{k-1} y(t - i)    (10)

We selected a window of one week (k = 7). Weekly time windows are typically used to smooth out both day-to-day variations and weekly periodicities [94]. We applied the smoothing to all the time series: the language categories, the mentions of medical entities, the interaction types, and the Foursquare visits.
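A minimal sketch of the boxcar smoothing (Equation 10), with a shorter toy window than the weekly one used in the paper:

```python
# Sketch of Equation (10): each day's value is replaced by the mean over a
# trailing window of the previous k days (truncated at the start of the
# series). The paper uses k=7; k=3 here for a short toy series.
def boxcar(series, k):
    out = []
    for t in range(len(series)):
        window = series[max(0, t - k + 1): t + 1]
        out.append(sum(window) / len(window))
    return out

noisy = [1, 5, 3, 7, 5, 9]
print(boxcar(noisy, k=3))  # -> [1.0, 3.0, 3.0, 5.0, 5.0, 7.0]
```

A trailing (rather than centered) window is the natural choice here, since a real-time indicator can only use past days.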

Change-point detection

To identify phases characterized by different combinations of the language categories, we identified change-points: periods in which the values of all categories varied considerably at once. To quantify such variations, for each language category c, we computed the daily average squared gradient g_c(t) of the smoothed standardized fractions of that category [95]. To calculate the gradient, we used the Python function numpy.gradient. The gradient provides a measure of the rate of increase or decrease of the signal; we consider its absolute value, to account for the magnitude of change rather than the direction of change. To identify periods of consistent change as opposed to quick instantaneous shifts, we applied temporal smoothing (Equation 10) also to the time series of gradients, and we denote the smoothed squared gradients with g̃_c(t). Last, we averaged the gradients of all language categories to obtain the overall gradient over time:

g(t) = (1/|C|) Σ_{c∈C} g̃_c(t)    (11)

where C is the set of all language categories. Peaks in the g(t) time series represent the days of highest variation, and we marked them as change-points. Using the Python function scipy.signal.find_peaks, we identified peaks as the local maxima whose value is higher than the average plus one standard deviation, as is common practice [96].
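A pure-Python sketch of this change-point detection; the paper uses numpy.gradient and scipy.signal.find_peaks, and this toy version reimplements the same idea (flagging all days above the mean plus one standard deviation, rather than scipy's full peak-finding):

```python
from statistics import mean, pstdev

# Sketch of the change-point detection: per-category absolute gradients are
# averaged across categories (Equation 11) and days exceeding mean + 1 std
# are flagged. Toy signal below: flat, then a jump.
def abs_gradient(series):
    # central differences, like numpy.gradient, then absolute value
    g = [series[1] - series[0]]
    g += [(series[i + 1] - series[i - 1]) / 2 for i in range(1, len(series) - 1)]
    g.append(series[-1] - series[-2])
    return [abs(x) for x in g]

def change_points(category_series):
    """category_series: list of per-category time series of equal length."""
    grads = [abs_gradient(s) for s in category_series]
    avg = [mean(col) for col in zip(*grads)]          # average across categories
    threshold = mean(avg) + pstdev(avg)
    return [t for t, v in enumerate(avg) if v > threshold]

flat_then_jump = [0, 0, 0, 0, 5, 5, 5]
print(change_points([flat_then_jump]))  # -> [3, 4]
```

On real data one would also smooth the gradient series (Equation 10) before thresholding, as the paragraph above describes.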

Real-time indicators

For each language category c, we first computed the average value of f_c(t) during the first day of the epidemic. During the first day, 86k users tweeted. We experimented with longer periods (up to a week and 0.4M users) and obtained qualitatively similar results. We used the averages computed on this initial period as reference values for later measurements. The assumption behind this approach is that the modeler would know the set of relevant hashtags in the initial stages of the pandemic, which is reasonable considering that this was the case for all the major pandemics that occurred in the last decade [63, 64, 65].

Starting from the second day, we then calculated the percent change of the f_c(t) values compared to the historical average f̄_c:

Δ_c(t) = 100 · (f_c(t) - f̄_c) / f̄_c    (12)

Finally, we combined the Δ_c(t) values of the selected categories to create measures that capture the average relative change of the prevalence of verbal expressions typical of each of the three temporal phases:

refusal(t) = Δ_death(t) - Δ_I(t)    (13)
suspended reality(t) = Δ_swear(t) - Δ_death(t)    (14)
acceptance(t) = Δ_sad(t) - Δ_anxiety(t)    (15)

Those categories were selected among those that proved to be most predictive of a given phase. Specifically, we trained three logistic regression classifiers (one per phase). For each phase, we marked with label 1 all the days that were included in that phase and with 0 those that were not. Then, we trained a logistic regression classifier to predict the label of day t from the Δ_c(t) values of all categories. During training, the logistic regression classifier learned one coefficient for each of the categories. We included in Equations 13, 14, and 15 the categories with the top positive and top negative coefficients.
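Putting Equations 12-15 together, a sketch of the composite phase scores (category pairs taken from Table 4; the daily fractions below are made-up toy values):

```python
# Sketch of Equations (12)-(15): daily category fractions are expressed as
# percent change over a first-day reference, and each phase score is the
# difference between its most-positive and most-negative category.
def percent_change(value, reference):
    return 100.0 * (value - reference) / reference

def phase_scores(today, day_one):
    delta = {c: percent_change(today[c], day_one[c]) for c in today}
    return {
        "refusal": delta["death"] - delta["I"],
        "suspended reality": delta["swear"] - delta["death"],
        "acceptance": delta["sad"] - delta["anxiety"],
    }

day_one = {"death": 0.01, "I": 0.20, "swear": 0.05, "sad": 0.02, "anxiety": 0.03}
today = {"death": 0.03, "I": 0.18, "swear": 0.06, "sad": 0.02, "anxiety": 0.03}
scores = phase_scores(today, day_one)
print(max(scores, key=scores.get))  # -> 'refusal'
```

Because only the first day's averages are needed as a reference, these scores can be computed on every new day of an unfolding epidemic, which is what makes the operationalization real-time.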

Acknowledgments

We thank Sarah Konrath, Rosta Farzan, and Licia Capra for their useful feedback on the manuscript.

References

  • [1] Van Bavel, J. J. et al. Using social and behavioural science to support covid-19 pandemic response. Nature Human Behaviour 1–12 (2020).
  • [2] Strong, P. Epidemic psychology: a model. Sociology of Health & Illness 12, 249–259 (1990).
  • [3] Brooks, S. K. et al. The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet (2020).
  • [4] Schutz, A., Luckmann, T., Zaner, R. & Engelhardt, J. The Structures of the Life-world. No. v. 1 in Northwestern University studies in phenomenology & existential philosophy (Northwestern University Press, 1973). URL https://books.google.co.uk/books?id=LGXBxI0Xsh8C.
  • [5] Hou, Z., Du, F., Jiang, H., Zhou, X. & Lin, L. Assessment of public attention, risk perception, emotional and behavioural responses to the covid-19 outbreak: social media surveillance in china. Risk Perception, Emotional and Behavioural Responses to the COVID-19 Outbreak: Social Media Surveillance in China (3/6/2020) (2020).
  • [6] Li, S., Wang, Y., Xue, J., Zhao, N. & Zhu, T. The impact of covid-19 epidemic declaration on psychological consequences: a study on active weibo users. International journal of environmental research and public health 17, 2032 (2020).
  • [7] Cinelli, M. et al. The covid-19 social media infodemic. arXiv preprint arXiv:2003.05004 (2020).
  • [8] Pulido, C. M., Villarejo-Carballido, B., Redondo-Sama, G. & Gómez, A. Covid-19 infodemic: More retweets for science-based information on coronavirus than for false information. International Sociology 0268580920914755 (2020).
  • [9] Ferrara, E. #covid-19 on twitter: Bots, conspiracies, and social media activism. arXiv preprint arXiv:2004.09531 (2020).
  • [10] Kouzy, R. et al. Coronavirus goes viral: quantifying the covid-19 misinformation epidemic on twitter. Cureus 12 (2020).
  • [11] Yang, K.-C., Torres-Lugo, C. & Menczer, F. Prevalence of low-credibility information on twitter during the covid-19 outbreak. arXiv preprint arXiv:2004.14484 (2020).
  • [12] Bento, A. I. et al. Evidence from internet search data shows information-seeking responses to news of local covid-19 cases. Proceedings of the National Academy of Sciences (2020).
  • [13] Wang, C. et al. Immediate psychological responses and associated factors during the initial stage of the 2019 coronavirus disease (covid-19) epidemic among the general population in china. International journal of environmental research and public health 17, 1729 (2020).
  • [14] Qiu, J. et al. A nationwide survey of psychological distress among chinese people in the covid-19 epidemic: implications and policy recommendations. General psychiatry 33 (2020).
  • [15] Kübler-Ross, E., Wessler, S. & Avioli, L. V. On death and dying. Jama 221, 174–179 (1972).
  • [16] Bansal, S., Chowell, G., Simonsen, L., Vespignani, A. & Viboud, C. Big data for infectious disease surveillance and modeling. The Journal of infectious diseases 214, S375–S379 (2016).
  • [17] Salathe, M. et al. Digital epidemiology. PLoS Comput Biol 8, e1002616 (2012).
  • [18] Bauch, C. T. & Galvani, A. P. Social factors in epidemiology. Science 342, 47–49 (2013).
  • [19] Bagnoli, F., Lio, P. & Sguanci, L. Risk perception in epidemic modeling. Physical Review E 76, 061904 (2007).
  • [20] Moinet, A., Pastor-Satorras, R. & Barrat, A. Effect of risk perception on epidemic spreading in temporal networks. Physical Review E 97, 012313 (2018).
  • [21] Coppersmith, G., Dredze, M. & Harman, C. Quantifying mental health signals in twitter. In Proceedings of the workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality, 51–60 (2014).
  • [22] Kahn, J. H., Tobin, R. M., Massey, A. E. & Anderson, J. A. Measuring emotional expression with the linguistic inquiry and word count. The American journal of psychology 263–286 (2007).
  • [23] Gill, A. J., French, R. M., Gergle, D. & Oberlander, J. The language of emotion in short blog texts. In Proceedings of the 2008 ACM conference on Computer supported cooperative work, 299–302 (2008).
  • [24] Shen, J. H. & Rudzicz, F. Detecting anxiety through reddit. In Proceedings of the Fourth Workshop on Computational Linguistics and Clinical Psychology—From Linguistic Signal to Clinical Reality, 58–65 (2017).
  • [25] Tausczik, Y. R. & Pennebaker, J. W. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology 29, 24–54 (2010).
  • [26] Deutsch, M. Trust and suspicion. Journal of conflict resolution 2, 265–279 (1958).
  • [27] Simms, T. et al. Detecting cognitive distortions through machine learning text analytics. In 2017 IEEE international conference on healthcare informatics (ICHI), 508–512 (IEEE, 2017).
  • [28] Shaw, B. et al. Effects of prayer and religious expression within computer support groups on women with breast cancer. Psycho-Oncology: Journal of the Psychological, Social and Behavioral Dimensions of Cancer 16, 676–687 (2007).
  • [29] Alpers, G. W. et al. Evaluation of computerized text analysis in an internet breast cancer support group. Computers in Human Behavior 21, 361–376 (2005).
  • [30] Wolf, M., Theis, F. & Kordy, H. Language use in eating disorder blogs: Psychological implications of social online activity. Journal of Language and Social Psychology 32, 212–226 (2013).
  • [31] Kornfield, R., Toma, C. L., Shah, D. V., Moon, T. J. & Gustafson, D. H. What do you say before you relapse? how language use in a peer-to-peer online discussion forum predicts risky drinking among those in recovery. Health communication 33, 1184–1193 (2018).
  • [32] Arguello, J. et al. Talk to me: foundations for successful individual-group interactions in online communities. In Proceedings of the SIGCHI conference on Human Factors in computing systems, 959–968 (2006).
  • [33] Figea, L., Kaati, L. & Scrivens, R. Measuring online affects in a white supremacy forum. In 2016 IEEE conference on intelligence and security informatics (ISI), 85–90 (IEEE, 2016).
  • [34] Borelli, J. L. & Sbarra, D. A. Trauma history and linguistic self-focus moderate the course of psychological adjustment to divorce. Journal of Social and Clinical Psychology 30, 667–698 (2011).
  • [35] Windsor, L. C., Dowell, N. & Graesser, A. The language of autocrats: Leaders’ language in natural disaster crises. Risk, Hazards & Crisis in Public Policy 5, 446–467 (2014).
  • [36] ElSherief, M., Kulkarni, V., Nguyen, D., Wang, W. Y. & Belding, E. Hate lingo: A target-based linguistic analysis of hate speech in social media. In Twelfth International AAAI Conference on Web and Social Media (2018).
  • [37] Graham, J., Haidt, J. & Nosek, B. A. Liberals and conservatives rely on different sets of moral foundations. Journal of personality and social psychology 96, 1029 (2009).
  • [38] Rezapour, R., Shah, S. H. & Diesner, J. Enhancing the measurement of social effects by capturing morality. In Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 35–45 (2019).
  • [39] Gonzalez, M. C., Hidalgo, C. A. & Barabasi, A.-L. Understanding individual human mobility patterns. Nature 453, 779–782 (2008).
  • [40] Goffman, E. Stigma: Notes on the management of spoiled identity (Simon and Schuster, 2009).
  • [41] Cap, P. The language of fear: Communicating threat in public discourse (Springer, 2016).
  • [42] Gibbs, G. R. Thematic coding and categorizing. Analyzing qualitative data. London: Sage 38–56 (2007).
  • [43] Mohammad, S. M. & Turney, P. D. Crowdsourcing a word–emotion association lexicon. Computational Intelligence 29, 436–465 (2013).
  • [44] Frimer, J. A., Schaefer, N. K. & Oakes, H. Moral actor, selfish agent. Journal of personality and social psychology 106, 790 (2014).
  • [45] Han, J., Pei, J. & Kamber, M. Data mining: concepts and techniques (Elsevier, 2011).
  • [46] Braun, V. & Clarke, V. Using thematic analysis in psychology. Qualitative research in psychology 3, 77–101 (2006).
  • [47] Smith, J. A. & Shinebourne, P. Interpretative phenomenological analysis (American Psychological Association, 2012).
  • [48] With millions stuck at home, the online wellness industry is booming. The Economist (2020). URL https://www.economist.com/international/2020/04/04/with-millions-stuck-at-home-the-online-wellness-industry-is-booming.
  • [49] Fetzer, T., Hensel, L., Hermle, J. & Roth, C. Coronavirus perceptions and economic anxiety. Review of Economics and Statistics 1–36 (2020).
  • [50] Bareket-Bojmel, L., Shahar, G. & Margalit, M. COVID-19-related economic anxiety is as high as health anxiety: Findings from the USA, the UK, and Israel. International Journal of Cognitive Therapy 1 (2020).
  • [51] Da, B. L., Im, G. Y. & Schiano, T. D. COVID-19 hangover: A rising tide of alcohol use disorder and alcohol-associated liver disease. Hepatology (2020).
  • [52] Finlay, I. & Gilmore, I. COVID-19 and alcohol—a dangerous cocktail (2020).
  • [53] Choi, M., Aiello, L. M., Varga, K. Z. & Quercia, D. Ten social dimensions of conversations and relationships. In Proceedings of The Web Conference, WWW (ACM, 2020).
  • [54] Scepanovic, S., Martin-Lopez, E., Quercia, D. & Baykaner, K. Extracting medical entities from social media. In Proceedings of the ACM Conference on Health, Inference, and Learning, 170–181 (2020).
  • [55] Galea, S., Merchant, R. M. & Lurie, N. The mental health consequences of COVID-19 and physical distancing: The need for prevention and early intervention. JAMA Internal Medicine 180, 817–818 (2020).
  • [56] Liang, L. et al. The effect of COVID-19 on youth mental health. Psychiatric Quarterly 1–12 (2020).
  • [57] Dutheil, F., Mondillon, L. & Navel, V. PTSD as the second tsunami of the SARS-CoV-2 pandemic. Psychological Medicine 1–2 (2020).
  • [58] Maslow, A. H. A theory of human motivation. Psychological review 50, 370 (1943).
  • [59] Engle, S., Stromme, J. & Zhou, A. Staying at home: Mobility effects of COVID-19. Available at SSRN (2020).
  • [60] Gao, S., Rao, J., Kang, Y., Liang, Y. & Kruse, J. Mapping county-level mobility pattern changes in the United States in response to COVID-19. SIGSPATIAL Special 12, 16–26 (2020).
  • [61] Holmes, E. A. et al. Multidisciplinary research priorities for the COVID-19 pandemic: A call for action for mental health science. The Lancet Psychiatry (2020).
  • [62] Baronchelli, A. The emergence of consensus: a primer. Royal Society open science 5, 172189 (2018).
  • [63] Fu, K.-W. et al. How people react to Zika virus outbreaks on Twitter? A computational content analysis. American Journal of Infection Control 44, 1700–1702 (2016).
  • [64] Oyeyemi, S. O., Gabarron, E. & Wynn, R. Ebola, Twitter, and misinformation: A dangerous combination? BMJ 349, g6178 (2014).
  • [65] Chew, C. & Eysenbach, G. Pandemics in the age of Twitter: Content analysis of tweets during the 2009 H1N1 outbreak. PLoS ONE 5 (2010).
  • [66] Shultz, J. M., Baingana, F. & Neria, Y. The 2014 Ebola outbreak and mental health: Current status and recommended response. JAMA 313, 567–568 (2015).
  • [67] Painter, M. & Qiu, T. Political beliefs affect compliance with COVID-19 social distancing orders. Available at SSRN 3569098 (2020).
  • [68] Grossman, G., Kim, S., Rexer, J. & Thirumurthy, H. Political partisanship influences behavioral responses to governors’ recommendations for COVID-19 prevention in the United States. Available at SSRN 3578695 (2020).
  • [69] Hofstede, G. H., Hofstede, G. J. & Minkov, M. Cultures and organizations: Software of the mind, vol. 2 (McGraw-Hill, New York, 2005).
  • [70] Bleidorn, W. et al. Personality maturation around the world: A cross-cultural examination of social-investment theory. Psychological science 24, 2530–2540 (2013).
  • [71] Perrin, A. & Anderson, M. Share of U.S. adults using social media, including Facebook, is mostly unchanged since 2018. Pew Research Center (2019). URL https://www.pewresearch.org/fact-tank/2019/04/10/share-of-u-s-adults-using-social-media-including-facebook-is-mostly-unchanged-since-2018/.
  • [72] Li, L., Goodchild, M. F. & Xu, B. Spatial, temporal, and socioeconomic patterns in the use of Twitter and Flickr. Cartography and Geographic Information Science 40, 61–77 (2013).
  • [73] Ferrara, E., Varol, O., Davis, C., Menczer, F. & Flammini, A. The rise of social bots. Communications of the ACM 59, 96–104 (2016).
  • [74] Varol, O., Ferrara, E., Davis, C. A., Menczer, F. & Flammini, A. Online human-bot interactions: Detection, estimation, and characterization. In Eleventh international AAAI conference on web and social media (2017).
  • [75] Bessi, A. & Ferrara, E. Social bots distort the 2016 US presidential election online discussion. First Monday 21 (2016).
  • [76] Broniatowski, D. A. et al. Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health 108, 1378–1384 (2018).
  • [77] Husnayain, A., Fuad, A. & Su, E. C.-Y. Applications of Google search trends for risk communication in infectious disease management: A case study of COVID-19 outbreak in Taiwan. International Journal of Infectious Diseases (2020).
  • [78] Chen, E., Lerman, K. & Ferrara, E. Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health Surveillance 6, e19273 (2020). URL http://publichealth.jmir.org/2020/2/e19273/. DOI 10.2196/19273.
  • [79] Dredze, M., Paul, M. J., Bergsma, S. & Tran, H. Carmen: A Twitter geolocation system with applications to public health. In Workshops at the Twenty-Seventh AAAI Conference on Artificial Intelligence (2013).
  • [80] Plutchik, R. The emotions (University Press of America, 1991).
  • [81] Graham, J. et al. Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology, vol. 47, 55–130 (Elsevier, 2013).
  • [82] Russell, M. A. Mining the social web: Data mining Facebook, Twitter, LinkedIn, Google+, GitHub, and more (O’Reilly Media, Inc., 2013).
  • [83] Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural computation 9, 1735–1780 (1997).
  • [84] Pennington, J., Socher, R. & Manning, C. GloVe: Global vectors for word representation. In Proceedings of the conference on empirical methods in natural language processing, 1532–1543 (Association for Computational Linguistics, 2014).
  • [85] Deri, S., Rappaz, J., Aiello, L. M. & Quercia, D. Coloring in the links: Capturing social ties as they are perceived. In Proceedings of the ACM conference on Computer Supported Cooperative Work and Social Computing, CSCW, 1–18 (ACM, 2018).
  • [86] Tajfel, H., Turner, J. C., Austin, W. G. & Worchel, S. An integrative theory of intergroup conflict. Organizational Identity (1979).
  • [87] Fiske, S. T., Cuddy, A. J. & Glick, P. Universal Dimensions of Social Cognition: Warmth and Competence. Trends in cognitive sciences 11, 77–83 (2007).
  • [88] Blau, P. M. Exchange and Power in Social Life (Transaction Publishers, 1964).
  • [89] Huang, Z., Xu, W. & Yu, K. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 (2015).
  • [90] Liu, Y. et al. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
  • [91] Jimeno-Yepes, A., MacKinlay, A., Han, B. & Chen, Q. Identifying diseases, drugs, and symptoms in Twitter. Studies in Health Technology and Informatics 216, 643 (2015).
  • [92] Akbik, A. et al. FLAIR: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 54–59 (2019).
  • [93] Paszke, A. et al. Automatic differentiation in PyTorch. In Proceedings of the Advances in Neural Information Processing Systems Autodiff Workshop (2017).
  • [94] O’Connor, B., Balasubramanyan, R., Routledge, B. R. & Smith, N. A. From tweets to polls: Linking text sentiment to public opinion time series. In Fourth international AAAI conference on weblogs and social media (2010).
  • [95] Lütkepohl, H. New introduction to multiple time series analysis (Springer Science & Business Media, 2005).
  • [96] Palshikar, G. et al. Simple algorithms for peak detection in time-series. In Proc. 1st Int. Conf. Advanced Data Analysis, Business Analytics and Intelligence, vol. 122 (2009).