A Python Package to Detect Anti-Vaccine Users on Twitter

Vaccine hesitancy has a long history but has recently been driven by anti-vaccine narratives shared online, which significantly degrade the efficacy of vaccination strategies, such as those for COVID-19. Despite broad agreement in the medical community about the safety and efficacy of available vaccines, a large number of social media users continue to be inundated with false information about vaccines and, partly because of this, have become indecisive or unwilling to be vaccinated. The goal of this study is to better understand anti-vaccine sentiment, and to work to reduce its impact, by developing a system capable of automatically identifying the users responsible for spreading anti-vaccine narratives. We introduce a publicly available Python package capable of analyzing Twitter profiles to assess how likely a profile is to spread anti-vaccine sentiment in the future. The software package is built using text embedding methods, neural networks, and automated dataset generation, and is trained on over one hundred thousand accounts and several million tweets. This model will help researchers and policy-makers understand anti-vaccine discussion and misinformation strategies, which can in turn help tailor targeted campaigns that seek to inform and debunk the harmful anti-vaccination myths currently being spread. Additionally, we leverage the data on such users to understand the moral and emotional characteristics of anti-vaccine spreaders.


1 Introduction

Anti-science, and especially anti-vaccine, attitudes are present within a large and recently active minority [Germani2020, Murphy2021]. Anti-vaccine protesters are partly responsible for a significant resurgence of measles and other diseases for which vaccines have existed for decades [Smith2017]. Vaccine hesitancy is especially problematic with COVID-19, which remains an epidemic, particularly within the United States, due in part to individuals not socially distancing, not wearing masks, and not getting vaccinated, despite the advice of the medical community. The rapid spread of anti-science conspiracy theories and polarization online is one reason behind these attitudes [Druckman2020, Rao2020]. In this study, we create an algorithm, AVAXTAR, that automatically evaluates how prone a Twitter account is to spreading anti-vaccine narratives. More specifically, our method can evaluate whether a Twitter account will spread specific anti-vaccine hashtags up to a year in advance. The code is freely available as a Python package: https://github.com/Matheus-Schmitz/avaxtar. It takes a Twitter screen name as input, retrieves that account's recent activity, then computes and returns the probability that the user will display anti-vaccine sentiment in the future. A Sent2Vec text embedding model [Pagliardini2017], pre-trained on Wikipedia unigrams, is used to generate a feature vector for each user. Users are split into two groups, labeled "anti-vaccination" and "not anti-vaccination" (users who do not suggest they are anti-vaccine). These data are used to train a neural network that learns how social media messages are associated with each type of user. Twitter is studied in this paper because it is a popular social media website with an ongoing problem of anti-science rhetoric [Rao2020].
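As an illustration of the embedding step, the following toy sketch mimics how a Sent2Vec-style unigram model turns a user's merged tweets into a single fixed-length feature vector. The random word vectors are placeholders for trained weights (the actual package loads a pre-trained wiki-unigram Sent2Vec model); only the shape of the pipeline is meant to be accurate.

```python
import numpy as np

# Toy stand-in for a pre-trained Sent2Vec model: sentence embeddings are
# (roughly) averages of learned unigram vectors. Here the "learned" vectors
# are random, fixed by a seed, purely to illustrate the pipeline shape.
rng = np.random.default_rng(0)
DIM = 600  # the paper uses 600-dimensional wiki-unigram embeddings
vocab = {}

def word_vector(word: str) -> np.ndarray:
    # Assign a stable random vector per word (stand-in for trained weights).
    if word not in vocab:
        vocab[word] = rng.standard_normal(DIM)
    return vocab[word]

def embed_document(text: str) -> np.ndarray:
    # Average unigram vectors, as a unigram-only Sent2Vec model does.
    words = text.lower().split()
    if not words:
        return np.zeros(DIM)
    return np.mean([word_vector(w) for w in words], axis=0)

# One feature vector per user: all of a user's tweets in a time window
# are merged into a single document before embedding.
user_tweets = ["vaccines are safe", "get your shot", "trust the science"]
features = embed_document(" ".join(user_tweets))
print(features.shape)  # (600,)
```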

Additionally, we leverage the dataset gathered for model training to explore the textual differences between tweets posted by users who spread anti-vaccine narratives and tweets posted by users who do not engage in anti-vaccine rhetoric. This analysis provides clues about the underlying reasons for anti-vaccine sentiment, as well as the rhetorical devices people use to spread misinformation. Overall, the presented work provides a new way to understand the recent uptick in anti-vaccine sentiment by identifying the users prone to disseminating such messages. This work also offers a way for researchers and public policy experts to devise information campaigns targeted precisely at the users driving a continuation of the pandemic.

2 Methods

2.1 Data Collection

The AVAXTAR classifier is trained on a comprehensive labeled dataset that contains the historical tweets of approximately 130K Twitter accounts. Each account in the dataset was assigned one of two labels: positive for accounts that actively spread the anti-vaccination narrative and negative for accounts that do not. By leveraging Twitter's Academic Research Product Track, we were able to access the full archival search and overcome the standard API's limit of 3,200 historical tweets per account. This way we collected almost all historical tweets of most queried accounts. (For a small fraction of accounts that are highly active, and most likely automated, we interrupted the collection prematurely due to Twitter's API limitations.) Sample tweets from users belonging to each class are shown in Table 1.

Account 1 (anti-vaccine probability: 0.0706)
- Tweet 1: As first runner-up to my esteemed @StarTrek colleague @levarburton [when we appeared on #TheWeakestLink], I would be honored to try my hand as #Jeopardy guest host. My experience as a science presenter for @exploreplanets emboldens me to #boldlygo!
- Tweet 2: @SpaceX is daring some mighty things. To the stars!
- Tweet 3: Congratulations to all at Blue Origin. Nicely done!
- Tweet 4: We visited Virgin Galactic back in 2018. Flew the simulator. Looked like it was going to fly well. And it did. Congratulations to All!

Account 2 (anti-vaccine probability: 0.9988)
- Tweet 1: Even with the inflated (for scaremongering purposes I can only assume) figure of 126k people who died WITH (not OF remember) covid19, that would mean that in a whole year this "killer virus" hasnt even managed to kill 0.19% of almost 68 million people in the UK. "pandemic"
- Tweet 2: #NoVaccinePassportsAnywhere #NoVaccinePassportAnywhere #NoVaccinePassports #NoVaccinePassport #novaccinatingthechildren
- Tweet 3: It's a new week, and no better a time to remind @nadhimzahawi that he's a disgusting, two-faced parasite whose name will forever be synonymous with lies, corruption and bloodshed. Please help him get the message
- Tweet 4: #NoVaccinePassportsAnywhere #NoVaccinePassports #MedicalApartheid #wedonotconsent Really handy website to contact your MP directly…

Table 1: Sample tweets from each class, with the model's predicted anti-vaccine probability for each account.

Collecting positive samples. The samples labeled positive come from an existing dataset of anti-vaccine Twitter accounts and their respective tweets, published by Muric et al. [Muric2021]. The authors first used a snowball method to identify a set of hashtags and keywords associated with the anti-vaccination movement, and then queried the Twitter API to collect the historical tweets of accounts that used any of the identified keywords. In this way, more than 135 million tweets from more than 70 thousand accounts were collected.

Collecting negative samples. To collect the negative samples, we first followed an approach similar to Muric et al. [Muric2021] and queried the Twitter API for the historical tweets of accounts that did not use any of the predefined keywords and hashtags. In this way we collected the tweets of accounts that do not spread anti-vaccination narratives and/or are impartial about the topic. However, if the negative class consisted only of such samples, which most likely represent average Twitter users, we would risk training a model that differentiates the two groups solely on the basis of the topics or vocabulary they use. To avoid that, we enlarged the number of negative samples by gathering the tweets of accounts that are likely proponents of vaccination. We identified vaccine proponents in the following way. First, we identified a set of twenty of the most prominent doctors and health experts active on Twitter. Then, we manually collected the URLs of the Twitter Lists those health experts had created. (Twitter Lists allow users to customize, organize, and prioritize the tweets they see in their timelines; users can also join Lists created by others.) We specifically searched for lists with names such as "coronavirus experts" or "epidemiologists". From those lists, we collected approximately one thousand Twitter handles of prominent experts and doctors who tweet about the coronavirus and the pandemic. Next, we went through their latest 200 tweets and collected the Twitter handles of users who retweeted them; these users became our pool of pro-vaccine candidates. Users who retweeted many distinct experts were more likely to be included than users who retweeted only a few. Finally, we collected the historical tweets of the users in the pro-vaccine pool. In this way we collected more than 50 million tweets from more than 30 thousand accounts that are most likely pro-vaccine; in total, more than 100 million tweets were gathered from accounts with a negative label.
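The expert-weighted sampling step described above can be sketched as follows. The retweet records and user names are hypothetical, and the real pipeline queries the Twitter API rather than an in-memory list; the sketch only shows how retweeting many distinct experts raises a user's chance of entering the pro-vaccine pool.

```python
import random

# Hypothetical retweet records: (retweeting_user, expert_account) pairs,
# e.g. harvested from the latest 200 tweets of each expert.
retweets = [
    ("alice", "dr_epi"), ("alice", "dr_virus"), ("alice", "dr_vax"),
    ("bob", "dr_epi"),
    ("carol", "dr_vax"), ("carol", "dr_virus"),
]

# Count the distinct experts each user retweeted.
distinct_experts = {}
for user, expert in retweets:
    distinct_experts.setdefault(user, set()).add(expert)

# Users who retweet many distinct experts are more likely to be sampled
# into the pro-vaccine pool than users who retweet only one.
users = sorted(distinct_experts)
weights = [len(distinct_experts[u]) for u in users]

random.seed(42)
pool = random.choices(users, weights=weights, k=10)
print(pool)
```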

2.2 Classification System

Generating Training Dataset. For each account labeled positive, we identify its labeling date as the first date on which the account published a tweet containing one of the predefined anti-vaccination hashtags from Muric et al. [Muric2021]. For the negative user group, the labeling date is the date of their most recent tweet. All tweets from the 15 months prior to that date were considered, with samples created using sliding 90-day time windows. For each user we construct up to seven samples using the following time windows, measured in days prior to the labeling date: [0-90), [60-150), [120-210), [180-270), [240-330), [300-390), [360-450). Time windows in which the user published fewer than 100 tweets are ignored, to avoid generating noisy samples that could hamper model training. For each time window, all tweets from a given user were merged into a single document. The samples were then fed to a pre-trained Sent2Vec [Pagliardini2017] sentence embedding model, and a 600-dimensional feature vector was obtained for each sample.
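A minimal sketch of the window-generation procedure, assuming tweets are already annotated with the number of days before the labeling date (the 100-tweet cutoff and window arithmetic follow the description above):

```python
# Windows measured in days before the labeling date: seven 90-day spans,
# each shifted 60 days from the previous one.
WINDOWS = [(s, s + 90) for s in range(0, 420, 60)]  # [0,90), [60,150), ... [360,450)
MIN_TWEETS = 100  # windows with fewer tweets are discarded as too noisy

def make_samples(tweets):
    """tweets: list of (days_before_label, text). Returns one merged
    document per window that has at least MIN_TWEETS tweets."""
    samples = []
    for start, end in WINDOWS:
        in_window = [text for days, text in tweets if start <= days < end]
        if len(in_window) >= MIN_TWEETS:
            samples.append(" ".join(in_window))
    return samples

# Synthetic user: 2 tweets/day for the first 120 days, silent afterwards.
tweets = [(day, f"tweet {day}") for day in range(120) for _ in range(2)]
samples = make_samples(tweets)
print(len(samples))  # only windows [0,90) and [60,150) reach 100 tweets
```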

Feature Extraction. The Twitter API provides a standard output containing a variety of data and metadata for each tweet. Many potentially useful tweet features are therefore available, which we used to generate several engineered features in an attempt to improve the predictive model. To construct engineered features, we considered factors such as the count and share of tweets, retweets, replies, and quotes; the median number of favorites, retweets, replies, and quotes that a user's publications receive; the number of days on which the user made a publication; whether the user's account is verified; the average sentiment (positive or negative) of the user's posts, obtained with the Python package vaderSentiment [Hutto2014]; the number and percentage of a user's total tweets that are retweets of prominent anti-vaccination users; and lastly, the number of times a user shared a URL to websites considered "Conspiracy Pseudoscience," "Questionable Sources," or "Pro Science" according to Media Bias/Fact Check (mediabiasfactcheck.com), a website that rates media outlets on their factual accuracy and political leaning. These features were generated using the same sliding-window procedure described above. The model was then trained with embeddings plus engineered features, embeddings only, and engineered features only. Performance analysis revealed that the engineered features have negligible impact on accuracy, F1-score, ROC-AUC, and PRC-AUC when used alongside the 600-dimensional embeddings. Based on those results, the engineered features were dropped, and the final model uses only the textual embeddings.
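A few of the engineered features can be sketched as below. The field names are illustrative rather than the exact Twitter API schema, and the sentiment- and URL-based features are omitted for brevity.

```python
from statistics import median

# Hypothetical per-tweet metadata (illustrative fields, not the API schema).
tweets = [
    {"kind": "tweet",   "favorites": 3, "day": "2021-03-01"},
    {"kind": "retweet", "favorites": 0, "day": "2021-03-01"},
    {"kind": "reply",   "favorites": 1, "day": "2021-03-02"},
    {"kind": "tweet",   "favorites": 7, "day": "2021-03-04"},
]

def engineered_features(tweets):
    n = len(tweets)
    feats = {}
    for kind in ("tweet", "retweet", "reply", "quote"):
        count = sum(t["kind"] == kind for t in tweets)
        feats[f"n_{kind}"] = count
        feats[f"share_{kind}"] = count / n          # share of activity by type
    feats["median_favorites"] = median(t["favorites"] for t in tweets)
    feats["active_days"] = len({t["day"] for t in tweets})  # days with a post
    return feats

feats = engineered_features(tweets)
print(feats["share_tweet"], feats["median_favorites"], feats["active_days"])
```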

Training a Classifier. The resulting training dataset, with each sample embedded as a 600-dimensional feature vector, was used to train a feed-forward neural network. After fine-tuning the architecture and hyper-parameters, the final neural network consists of three layers: 1) a fully connected 600-neuron layer, 2) a fully connected 300-neuron layer, and 3) a fully connected 150-neuron layer. A 40% dropout rate was applied between layers. We used hyperbolic tangent activations between the layers and a softmax activation to generate prediction confidences. The batch size was 128, binary cross-entropy was used as the loss function, and the optimizer was Adaptive Moment Estimation with Weight Decay (AdamW) [AdamW].
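One plausible reading of this architecture (a 600-dimensional embedding fed through the three described fully connected layers, plus an assumed 2-unit softmax head) can be sketched as a NumPy forward pass. The weights here are random placeholders for trained parameters, and the 40% dropout applies only during training, so it is omitted from this inference-time sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # Random weights standing in for trained parameters.
    return rng.standard_normal((n_in, n_out)) * 0.01, np.zeros(n_out)

# Layer sizes as described: 600 -> 600 -> 300 -> 150, then a 2-way head
# (the exact wiring and head size are assumptions, not stated in the text).
layers = [dense(600, 600), dense(600, 300), dense(300, 150), dense(150, 2)]

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    # Inference pass: tanh between layers, softmax at the output.
    h = x
    for w, b in layers[:-1]:
        h = np.tanh(h @ w + b)
    w, b = layers[-1]
    return softmax(h @ w + b)  # complementary class probabilities

probs = forward(rng.standard_normal((1, 600)))
print(probs.shape)
```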

Metric Negative class Positive class
Accuracy 0.8680 0.8680
ROC-AUC 0.9270 0.9270
PRC-AUC 0.8427 0.9677
Precision 0.8675 0.8675
Recall 0.8680 0.8680
F1 0.8677 0.8678
Table 2: Classifier evaluation scores on a test set

After model training, we identify the optimal classification threshold by maximizing the F1-score on the validation set. A confusion matrix and F1-score analysis are shown in Figure 1. We find that a threshold of 0.5938 yields the best F1-score, and we thus recommend using that threshold instead of the default of 0.5. Using the optimized threshold, the resulting model was evaluated on a test set of users, achieving the reasonable scores shown in Table 2.
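The threshold search is straightforward to reproduce: scan candidate thresholds on held-out predictions and keep the F1-maximizing one. The sketch below uses synthetic validation scores, so the selected threshold will not match the paper's 0.5938.

```python
import numpy as np

def f1_at(threshold, probs, labels):
    # F1 of the positive class when predicting 1 for scores >= threshold.
    preds = (probs >= threshold).astype(int)
    tp = int(np.sum((preds == 1) & (labels == 1)))
    fp = int(np.sum((preds == 1) & (labels == 0)))
    fn = int(np.sum((preds == 0) & (labels == 1)))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(probs, labels, grid=np.linspace(0.01, 0.99, 99)):
    # Scan a grid of candidate thresholds and keep the F1-maximizing one.
    return max(grid, key=lambda t: f1_at(t, probs, labels))

# Toy validation set: positives cluster at higher scores, with overlap.
rng = np.random.default_rng(1)
labels = np.array([0] * 500 + [1] * 500)
probs = np.clip(np.concatenate([rng.normal(0.35, 0.15, 500),
                                rng.normal(0.75, 0.15, 500)]), 0, 1)
t = best_threshold(probs, labels)
print(round(float(t), 2))
```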

Figure 1: Upper: Confusion Matrix before and after threshold optimization; Lower: Relation between classification threshold and F1-Score. Optimal threshold for the highest F1-Score is 0.5938.

Python Package. The trained neural network was bundled with a script that automates fetching the relevant data from the Twitter API. The code was then packaged alongside auxiliary scripts and published under the acronym AVAXTAR: Anti-VAXx Tweet AnalyzeR 1.0, accessible on GitHub: https://github.com/Matheus-Schmitz/avaxtar. The package abstracts away all feature generation and data manipulation, requiring the user to enter only their Twitter credentials (required for fetching data) alongside a target account's screen name or user id. The output consists of a pair of complementary probabilities of the account belonging to the "not anti-vaccine" class (0) and to the "anti-vaccine" class (1).

3 Data Analysis

A model based on sentence embeddings does not necessarily provide insights into the differences between anti-vaccine users and all others. To understand what sets these users apart, we analyze the differences in their text. We first analyze the relative popularity of words used by members of each group, shown in Figure 2; axes are in log scale and we plot the most common words in each class. We find that, at least among the highest-frequency words, the main topic of discourse among anti-vaccine users is not vaccination itself but rather politics in general, with both Trump and Biden, as well as "democrat", "fraud", and "patriot", among the words whose usage skews most heavily towards the anti-vaccine group. The not-anti-vaccine group, on the other hand, does have COVID-19 and vaccination-related words among its most frequently used words. This is possibly a result of how the not-anti-vaccine group is formed, with half of its samples being random Twitter users and the other half being those who interact with pro-vaccination experts, the latter group being more prone to actively engaging in conversation on the vaccination topic.

Figure 2: Most frequently used words.
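The word-popularity comparison can be approximated with a smoothed log-ratio of per-group word frequencies, sketched here on toy corpora (the real inputs would be the merged tweets of each group):

```python
from collections import Counter
import math

# Toy corpora standing in for the two user groups' merged tweets.
anti = "trump biden fraud patriot democrat fraud trump vote".split()
other = "vaccine covid dose trial health vaccine covid science".split()

anti_counts, other_counts = Counter(anti), Counter(other)
n_anti, n_other = sum(anti_counts.values()), sum(other_counts.values())

def log_ratio(word, smoothing=1.0):
    # Positive values: the word skews toward the anti-vaccine group;
    # negative: toward everyone else. Add-one smoothing avoids log(0).
    p_anti = (anti_counts[word] + smoothing) / (n_anti + smoothing)
    p_other = (other_counts[word] + smoothing) / (n_other + smoothing)
    return math.log(p_anti / p_other)

for word in ("fraud", "vaccine"):
    print(word, round(log_ratio(word), 2))
```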

Using the NRC Lexicon [MohammadT13], a sentiment and emotion lexicon, the average sentiment and emotion of tweets published by each class is displayed in Figure 3. The anti-vaccination users lean towards negative emotion, displaying greater anger, disgust, fear, surprise, and trust. These users simultaneously show slightly lower sadness, anticipation, and joy, and lower positive sentiment. A Mann-Whitney U test revealed that all features, for both emotions and sentiments, have a statistically significant between-class difference in their distributions (p-value ).

Figure 3: Emotion and Sentiment of tweets from Anti-Vaccination and Non-Anti-Vaccination accounts, based on NRC Lexicon
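The per-class emotion averaging can be sketched with a tiny stand-in lexicon; the real NRC Emotion Lexicon maps roughly 14 thousand words to eight emotions and two sentiments, and the scoring convention here (share of lexicon hits per tweet, averaged over tweets) is one reasonable choice, not necessarily the exact one used.

```python
# Tiny toy lexicon in the spirit of the NRC Emotion Lexicon: each word maps
# to the emotions and sentiments it is associated with.
LEXICON = {
    "fraud": {"anger", "disgust", "negative"},
    "killer": {"fear", "negative"},
    "congratulations": {"joy", "positive"},
    "safe": {"trust", "positive"},
}
EMOTIONS = ["anger", "disgust", "fear", "joy", "trust", "negative", "positive"]

def emotion_profile(tweets):
    # Average, over tweets, of the share of lexicon hits per emotion.
    totals = {e: 0.0 for e in EMOTIONS}
    for tweet in tweets:
        hits = [LEXICON[w] for w in tweet.lower().split() if w in LEXICON]
        for e in EMOTIONS:
            if hits:
                totals[e] += sum(e in h for h in hits) / len(hits)
    n = len(tweets)
    return {e: totals[e] / n for e in EMOTIONS}

profile = emotion_profile(["this killer virus is a fraud", "stay safe"])
print(round(profile["negative"], 2), round(profile["positive"], 2))
```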

An analysis of the moral framing associated with each group, based on Moral Foundations Theory, was performed using the Moral Foundations FrameAxis tool (https://github.com/negar-mokhberian/Moral_Foundation_FrameAxis) [Mokhberian2020] and is presented in Figure 4. The moral foundations are five dimensions, loyalty, care, sanctity, authority, and fairness, used across cultures to determine morality [MoralFoundations]. Determining the morals expressed in text is difficult; one method to address this uses word embeddings [Mokhberian2020] and is based on FrameAxis [FrameAxis]. We analyze two metrics from this method, bias and intensity. Bias tells us whether words tend to be associated with the positive or the negative aspect of a moral dimension. For example, a highly positive loyalty bias means that a user is using words more likely associated with being loyal than with being rebellious. Intensity tells us how prominently a particular moral dimension features. A low intensity suggests words are not strongly associated with a particular moral dimension, while a high intensity suggests words are strongly associated (either positively or negatively) with it. For each metric and moral dimension, Mann-Whitney U tests reveal that, apart from loyalty intensity, the median values of each metric differ in a statistically significant way between anti-vaccine and regular users (p-value ). More specifically, we find that anti-vaccine users have a lower positive bias and intensity in the loyalty, care, authority, and fairness dimensions, but a slightly higher sanctity bias and a similarly more intense sanctity dimension (Figure 4). This overall points to a lower focus by anti-vaccine users on positive morals, and less focus on most morals, which provides some support for anti-vaccine users being more anti-authority and anti-loyalty, and focusing less on care, while focusing on sanctity, which comprises morals like "purity," "immaculate," and "clean."

Figure 4: Moral foundations of anti-vaccine and regular users [MoralFoundations]. (a) Bias (the tendency of words to promote positive or negative aspects of each moral foundation) is typically less positive for anti-vaccine users. (b) Intensity (the focus of words on particular moral foundations) is also typically lower for anti-vaccine users.
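A rough sketch of a FrameAxis-style bias and intensity computation, using random toy word vectors and a single pole word per end of the loyalty axis (the real method uses pre-trained embeddings and averaged sets of pole words per foundation; the exact intensity formula here is an assumption in that spirit):

```python
import numpy as np

rng = np.random.default_rng(0)
vec = {w: rng.standard_normal(50) for w in
       ("loyal", "rebel", "honored", "traitor", "the", "and")}

def unit(v):
    return v / np.linalg.norm(v)

# A moral axis is the difference between a virtue-pole and a vice-pole
# word vector (real FrameAxis averages many pole words per foundation).
loyalty_axis = unit(vec["loyal"] - vec["rebel"])

def cos(w):
    # Cosine similarity between a word vector and the moral axis.
    return float(unit(vec[w]) @ loyalty_axis)

def bias_and_intensity(words, baseline):
    # Bias: mean projection of the document's words on the axis.
    # Intensity: mean squared deviation from a corpus-wide baseline bias,
    # i.e. how strongly the document engages the foundation at all.
    sims = [cos(w) for w in words if w in vec]
    bias = float(np.mean(sims))
    intensity = float(np.mean([(s - baseline) ** 2 for s in sims]))
    return bias, intensity

corpus_baseline = float(np.mean([cos(w) for w in vec]))
b, i = bias_and_intensity("the honored and loyal".split(), corpus_baseline)
print(round(b, 3), round(i, 3))
```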

4 Discussion

Overall, AVAXTAR delivers a fast and accurate method to classify users by their vaccination attitudes, which in turn provides insights into the differences between these users. Namely, we find that anti-vaccine users are more negative and angrier, and show a greater focus on politics (with common words like "ballot" and "trump"). Finally, analysis of user morals, based on Moral Foundations Theory, shows a less positive pro-authority and pro-loyalty bias, and a greater focus on sanctity and purity, perhaps because these users associate vaccination with uncleanliness.

4.1 Limitations

The present algorithm uses the dataset by Muric et al. [Muric2021] for its positive class samples. These data consist of automatically annotated anti-vaccine labels on Twitter accounts, assigned via the hashtags used in published tweets. This method can generate both false positives and false negatives, which subsequently impact the accuracy of any model trained on the mislabeled data points. False positives can occur when a user does not display anti-vaccine sentiment but made a publication including one of the associated hashtags, whether due to typos, irony, or other reasons. False negatives can occur when a user clearly displays anti-vaccination sentiment but happens not to use any of the hashtags employed in filtering for vaccine-hesitant accounts. Since that same set of hashtags is used as a negative filter for the negative class samples, such a user could end up incorrectly included as a "not anti-vaccine" user.

4.2 Future Work

Correctly identifying which social media users propagate anti-vaccination sentiment is but one of the steps necessary to halt the current misinformation surge. It is therefore important to devise a science-based information campaign that targets vaccine-hesitant users, with the goal of halting the spread of misinformation. Another important area of research is predicting which users are susceptible to anti-vaccine misinformation. This could be accomplished using data on the social media content a given user views and interacts with, along with the existing data on user posts; combined, these data would allow researchers to understand what media consumption habits precede a user joining the anti-vaccine movement.

4.3 Acknowledgements

Funding for this work is provided through the ISI Keston Exploratory Research Award.

4.4 Conflicts of Interest

The authors declare no conflicts of interest.

References