Analyzing COVID-19 Tweets with Transformer-based Language Models

by Philip Feldman, et al.

This paper describes a method for using Transformer-based Language Models (TLMs) to understand public opinion from social media posts. In this approach, we train a set of GPT models on several COVID-19 tweet corpora that reflect populations of users with distinctive views. We then use prompt-based queries to probe these models and reveal insights into the biases and opinions of the users. We demonstrate how this approach can produce results that resemble polling the public on diverse social, political, and public health issues. The results on the COVID-19 tweet data show that transformer language models are promising tools that can help us understand public opinion on social media at scale.
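The core probing loop described above can be sketched as follows. This is a minimal illustration, not the authors' code: `poll_model`, `stub_generate`, and the example prompt are hypothetical names, and a deterministic stub stands in for a GPT model fine-tuned on a tweet corpus. The idea is to sample many completions for a prompt and tally the leading answer tokens into poll-like percentages.

```python
import random
import re
from collections import Counter

def poll_model(generate, prompt, n_samples=100):
    """Probe a language model with a prompt-based query.

    `generate` is any callable mapping a prompt string to a completion
    string; in the paper's setup this would be a GPT model fine-tuned on
    a corpus of tweets from a particular user population. Returns a dict
    mapping each first-word "answer" to its percentage of samples,
    giving a poll-like summary of the model's (and hence the corpus's)
    leanings on the prompted issue.
    """
    counts = Counter()
    for _ in range(n_samples):
        completion = generate(prompt).strip().lower()
        # Reduce each completion to its first word as the "answer".
        answer = re.split(r"\W+", completion)[0] if completion else ""
        counts[answer] += 1
    total = sum(counts.values())
    return {ans: 100.0 * c / total for ans, c in counts.items()}

# Stub standing in for a fine-tuned GPT model (hypothetical outputs).
_rng = random.Random(0)
def stub_generate(prompt):
    return _rng.choice(["yes, absolutely", "yes", "no way"])

results = poll_model(stub_generate, "Masks slow the spread of COVID-19:",
                     n_samples=200)
```

In the paper's actual setup, each fine-tuned model would be probed this way, and differences in the resulting answer distributions across models would reflect differences in opinion between the underlying user populations.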






