Exposure to Social Engagement Metrics Increases Vulnerability to Misinformation

05/10/2020
by Mihai Avram, et al.

News feeds in virtually all social media platforms include engagement metrics, such as the number of times each post is liked and shared. We find that exposure to these social engagement signals increases the vulnerability of users to misinformation. This finding has important implications for the design of social media interactions in the misinformation age. To reduce the spread of misinformation, we call for technology platforms to rethink the display of social engagement metrics. Further research is needed to investigate whether and how engagement metrics can be presented without amplifying the spread of low-credibility information.



Research questions

  • What is the effect of exposure to social engagement metrics on people’s propensity to share content?

  • Does exposure to high engagement metrics increase the chances that people will like and share misinformation and/or make it less likely that people will engage in fact checking of low-credibility sources?

Essay summary

Figure 1: A news post in the social media feed simulated by the game.
  • We investigated the effect of social engagement metrics on the spread of misinformation using Fakey (https://fakey.iuni.iu.edu/), a news literacy game that simulates a social media feed (Figure 1). The game presents users with actual current news articles from mainstream and low-credibility media sources. A randomly generated social engagement metric is displayed with each article. Users are instructed to share, like, fact check, or skip articles.

  • From a 19-month deployment of the game, we extracted 8,606 unique user sessions, mostly from the US, involving approximately 120,000 articles, half from low-credibility sources.

  • Our findings show that displayed engagement metrics can strongly influence interaction with misinformation. The higher the shown engagement, the more prone people were to share misinformation and the less prone they were to fact check it.

  • These findings imply that social media platforms must rethink whether and how engagement metrics should be displayed such that they do not influence the spread of misinformation. Further research is needed to guard against malicious tampering with engagement metrics at an early stage and to design educational interventions that teach users to prioritize trustworthiness of news sources over social engagement metrics.

Implications

Online misinformation is a critical societal threat in the current digital age, and social media platforms are a major vehicle used to spread it [guess2019less; lazer2018thescience; hameleers2020picture]. As an illustration, the International Fact Checking Network found more than 3,500 false claims related to the coronavirus in less than 3.5 months (https://poynter.org/coronavirusfactsalliance). Viral misinformation can cause serious societal harm in multiple ways: affecting public health [sharma2020coronavirus], influencing public policy [lazer2018thescience], instigating violence [arif2018acting; starbird2014rumors], spreading conspiracies [samory2018conspiracies], reducing overall trust in authorities [gupta2014tweetcred; shin2017partisan; vosoughi2018the], and increasing polarization and conflict [stewart2018examining].

The growing societal impact of misinformation has driven research on technical solutions to detect and, in some cases (depending on platform policies), stop actors that generate and spread such content. These techniques have leveraged network analytics [Truthy_icwsm2011class; jin2013epidemiological], supervised models of automated behavior [socialbots-CACM; botornot_icwsm17; yang2019arming; Yang2020botometer-lite; hui2019botslayer], time series analysis to detect promoted campaigns [campaigns2017], and natural language processing for flagging factually incorrect content [prezrosas2017automatic; kumar2016disinformation]. On the user interface (UI) side, researchers have explored the use of credibility indicators to flag misinformation and alert users [clayton2019real]. Such credibility indicators can lead to a reduction in sharing of the flagged content [yaqub2020effects; pennycook2019implied; pennycook2020fighting; nyhan2019taking].

However, there has been little empirical research on the effects of current elements of social media feeds on the spread of misinformation [hameleers2020picture; shen2019fake]. To address this gap, we empirically investigated how misinformation spread is affected by exposure to typical social engagement metrics, i.e., the numbers of likes and shares shown for a news article. We found a strong relationship between displayed social engagement metrics and user actions related to information from low-credibility sources. The higher the level of displayed engagement, the higher the chances that users liked/shared low-credibility articles and the lower the chances that they flagged those articles for fact checking. Our main contribution is the evidence that exposure to engagement metrics in social media feeds increases vulnerability to misinformation.

To interpret these findings, consider that the probability of sharing a piece of information grows with the number of times one is exposed to it, a phenomenon called complex contagion [romero2011differences; monsted2017evidence]. Engagement metrics are proxies for multiple exposures; they are therefore intended to provide signals about the importance, relevance, and reliability of information, all of which contribute to people’s decisions to consume and share the information. In other words, being presented with high engagement metrics for an article mimics being exposed to the article multiple times: the brain is likely to assess that the article must be worthy of attention because many independent sources have validated it by liking or sharing it.

A key weakness in the cognitive processing of engagement metrics is the assumption of independence: an entity can trick people by maliciously boosting engagement metrics, creating the false perception that many independent users interacted with an article. In fact, most disinformation campaigns rely on inauthentic social media accounts to tamper with engagement metrics, creating an initial appearance of virality that becomes reality once enough humans are deceived [shao2018spread]. To prevent misinformation amplified by fake accounts from going viral, we need sophisticated algorithms capable of early-stage detection of coordinated behaviors that tamper with social engagement metrics [hui2019botslayer; Yang2020botometer-lite; pacheco2020uncovering].

Our findings hold important implications for the design of social media platforms. Further research is needed to investigate how alternative designs of social engagement metrics could reduce their effect on misinformation sharing (e.g., by hiding or making engagement less visible for certain posts), without negatively impacting the sharing of legitimate and reliable content. A good trade-off between these two conflicting needs will require a systematic investigation of news properties that can help determine differential display of engagement metrics. Such properties may include the type of sources (e.g., whether claims originate from unknown/distrusted accounts or low-credibility sources) and the type of topics (e.g., highly sensitive or polarizing topics with a significant impact on society).

Further research is also needed to design literacy campaigns, such as Fakey (https://fakey.iuni.iu.edu/), that teach users to prioritize trustworthiness of sources over engagement signals when consuming content on social media. Studies could investigate the possibility of introducing intermediary pauses when consuming news through a social media feed [fazio2020pausing] and limiting automated or high-speed sharing. A comprehensive literacy approach to reduce the vulnerability of social media users to misinformation may require combining these interventions with others, such as inoculation theory [roozenbeek2020prebunking; roozenbeek2019fake; roozenbeek2019the; basol2020good], civic online reasoning [mcgrew2020learning], critical thinking [lutzke2019priming], and evaluation of news feeds [nygren2019diversity].

Findings

Finding 1: High levels of social engagement result in lower fact checking and higher liking/sharing, especially for misinformation.

For each article shown in the game, the user is presented with a photo, headline, description, and a randomly generated social engagement level. Based on this information, the user can share, like, or fact check the article (Figure 1). To earn points in the game, the user must share or like articles from mainstream sources and/or fact check articles from low-credibility sources. The social engagement level shown with the article provides an opportunity to investigate its effect on behaviors that result in the spread of misinformation.
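As a minimal sketch, the scoring rule just described can be expressed as follows (the function and field names are ours for illustration, not the game’s actual code):

```python
def score(action: str, low_credibility: bool) -> int:
    """Award a point for the 'correct' response: fact checking a
    low-credibility article, or liking/sharing a mainstream one."""
    if low_credibility:
        return 1 if action == "fact_check" else 0
    return 1 if action in ("like", "share") else 0
```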

We measured the correlation between the social engagement metric displayed to users and the rates at which the corresponding articles from low-credibility sources were liked/shared or fact checked by the users. Given the realistically skewed distribution of engagement values, we sorted the data into logarithmic bins based on the shown social engagement levels. For each bin, we calculated the liking/sharing and fact checking rates across articles and users. We measured correlation using the non-parametric Spearman test, as the data is not normally distributed. We found a significant positive correlation between social engagement level and liking/sharing and a significant negative correlation between social engagement level and fact checking.
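The following is a minimal Python sketch of this binned correlation analysis. The toy data stands in for the game logs, and the variable names and number of bins are illustrative assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

# Toy stand-ins for the game logs: displayed engagement per interaction,
# and the action the user took on that article.
rng = np.random.default_rng(0)
engagement = rng.lognormal(3, 2.5, size=10_000).astype(int) + 1
actions = rng.choice(["like", "share", "fact_check", "skip"], size=10_000)

def binned_rates(engagement, actions, n_bins=20):
    """Group interactions into log-spaced bins of displayed engagement
    and compute per-bin liking/sharing and fact checking rates."""
    edges = np.logspace(0, np.log10(engagement.max() + 1), n_bins + 1)
    bins = np.digitize(engagement, edges)
    levels, share_rates, check_rates = [], [], []
    for b in np.unique(bins):
        mask = bins == b
        levels.append(engagement[mask].mean())
        share_rates.append(np.isin(actions[mask], ["like", "share"]).mean())
        check_rates.append((actions[mask] == "fact_check").mean())
    return np.array(levels), np.array(share_rates), np.array(check_rates)

levels, share_rates, check_rates = binned_rates(engagement, actions)
print(spearmanr(levels, share_rates))  # paper reports a positive correlation
print(spearmanr(levels, check_rates))  # paper reports a negative correlation
```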

Based on Cohen’s standard [cohen2014applied], we interpret our results as showing that social engagement metrics are doubly problematic for misinformation spread: high levels of engagement make it less likely that people will be careful about fact checking potential misinformation, while at the same time making it more likely that they will like or share it.

We found similar relationships between social engagement levels and user behaviors for mainstream news articles as well; however, the correlations were weaker for both liking/sharing and fact checking.

Finding 2: People are more vulnerable to misinformation with high social engagement.

Figure 2: Mean rates of liking/sharing and fact checking low-credibility articles, categorized by social engagement level (see text). Error bars represent the standard error.

The previous finding is at the population level, aggregating across users. To delve further into the effect of social engagement exposure on individual users, we analyzed whether different social engagement levels influenced each user’s liking/sharing and fact checking rates for articles from low-credibility sources. For this analysis, we treated each user as an independent entity and categorized engagement into three levels: low, medium, and high. For each user, we determined the total number of articles with which they interacted along with the proportion of those articles that were from low-credibility sources, and examined the corresponding like/share/fact check actions. We computed the misinformation liking/sharing rate by dividing the number of low-credibility articles the user shared or liked within a given social engagement bin by the total number of low-credibility articles the user saw with engagement values in that bin. We used an analogous computation to calculate the misinformation fact checking rate for the user. Figure 2 plots the liking/sharing and fact checking rates for low-credibility articles. Although users were more likely to fact check than like or share misinformation, Figure 2 shows that the trends observed at the population level held at the individual level as well.
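A sketch of this per-user rate computation, assuming one record per article interaction with illustrative field names:

```python
from collections import defaultdict

LEVELS = ("low", "medium", "high")

def user_misinfo_rates(interactions, level_of):
    """Per-user rates of liking/sharing and fact checking low-credibility
    articles, split by the engagement level shown with each article.
    `interactions` holds one user's records; `level_of` maps a displayed
    engagement value to 'low', 'medium', or 'high'."""
    seen = defaultdict(int)
    liked_shared = defaultdict(int)
    checked = defaultdict(int)
    for rec in interactions:
        if not rec["low_credibility"]:
            continue  # this analysis covers low-credibility articles only
        lv = level_of(rec["engagement"])
        seen[lv] += 1
        if rec["action"] in ("like", "share"):
            liked_shared[lv] += 1
        elif rec["action"] == "fact_check":
            checked[lv] += 1
    share_rate = {lv: liked_shared[lv] / seen[lv] for lv in LEVELS if seen[lv]}
    check_rate = {lv: checked[lv] / seen[lv] for lv in LEVELS if seen[lv]}
    return share_rate, check_rate
```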

Since the data is not normally distributed (Shapiro-Wilk test), we used the Kruskal-Wallis test to compare differences between the three bins of social engagement levels. The test revealed a statistically significant effect of social engagement levels: fact checking and liking/sharing rates for low-credibility articles differed across the bins. To determine which levels of social engagement impacted the rates at which low-credibility articles were liked/shared or fact checked, we conducted post-hoc Mann-Whitney tests with Bonferroni correction for all pairs of social engagement bins and found that liking/sharing as well as fact checking rates were statistically significantly different across all pairings.
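This test sequence can be sketched with scipy as follows; the toy samples stand in for the per-user rates, and the data layout is our assumption:

```python
from itertools import combinations
from scipy.stats import shapiro, kruskal, mannwhitneyu
import numpy as np

# Toy stand-ins for the per-user rate samples in each engagement bin.
rng = np.random.default_rng(0)
rates = {lv: rng.beta(2, 5, size=200) for lv in ("low", "medium", "high")}

# Normality check that motivates the non-parametric tests.
for lv, sample in rates.items():
    print(lv, "Shapiro-Wilk p =", shapiro(sample).pvalue)

# Omnibus comparison across the three bins.
H, p = kruskal(*rates.values())
print("Kruskal-Wallis:", H, p)

# Post-hoc pairwise comparisons with Bonferroni correction.
pairs = list(combinations(rates, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    _, p_pair = mannwhitneyu(rates[a], rates[b], alternative="two-sided")
    print(a, "vs", b, "p =", p_pair,
          "(significant)" if p_pair < alpha else "(n.s.)")
```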

We employed the same approach to examine liking/sharing and fact checking rates for mainstream articles across the three bins of social engagement levels. As with misinformation, the Kruskal-Wallis test revealed a statistically significant effect of social engagement level on liking/sharing and fact checking rates for mainstream articles.

In summary, higher values of displayed social engagement made users less likely to fact check content and more likely to like/share it. This finding shows that exposure to high levels of social engagement makes social media users more prone to spreading content without verifying its veracity.

Methods

Social media simulation

To conduct our experiment investigating the effect of exposure to social engagement metrics on susceptibility to misinformation, we developed and deployed Fakey (https://fakey.iuni.iu.edu/), an online news literacy game that simulates fact checking on a social media feed. The UI of the game mimics the appearance of Facebook or Twitter feeds for players who log into the game through those platforms. The game provides users with batches of ten news articles in the form of a news feed, as shown in Figure 1. Each article consists of elements that are typically displayed by popular social media platforms: photo, headline, description, and social engagement metrics.

For each article, the game displays a single social engagement metric representing the combined number of shares and likes. Not having separate metrics for shares and likes decreases the cognitive workload for game players and simplifies analysis. Engagement values are randomly drawn from an approximately log-normal distribution with a fixed maximum (cutoff) value. The distribution is such that roughly 69% of the articles display small engagement values and roughly 3% display values near the cutoff. Although the simulated engagement in the game is not drawn from empirical data, the displayed numbers have a heavy tail similar to those typically observed on social media [vosoughi2018the].
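A minimal sketch of such a generator; the log-normal parameters and the cutoff below are illustrative assumptions, not the game’s actual values:

```python
import numpy as np

rng = np.random.default_rng()

def random_engagement(cutoff=1_000_000, mu=3.0, sigma=2.5):
    """Draw a displayed engagement value from an approximately log-normal
    distribution, truncated at `cutoff` by resampling.
    `cutoff`, `mu`, and `sigma` are assumed values for illustration."""
    while True:
        value = int(rng.lognormal(mean=mu, sigma=sigma))
        if value <= cutoff:
            return value
```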

Below each article is a set of action buttons to share, like, fact check, or skip the article, or to use a hint. Before playing the game, users are instructed that clicking Share is equivalent to endorsing an article and sharing it with the world, clicking Like is equivalent to endorsing the article, and clicking Fact Check signals that the article is not trusted. After playing one round of ten articles, users have the option to play another round or to check a leaderboard comparing their skill with that of other players.

Content selection

Each article in the game is selected from one of two types of news sources: mainstream and low-credibility. For mainstream news, we manually selected 32 sources with a balance of moderate liberal, centrist, and moderate conservative views; examples include The New York Times and The Wall Street Journal. Current articles are provided by the News API (https://newsapi.org). The set of low-credibility sources was selected based on flagging by various reputable news and fact checking organizations [shao2018spread; Shao2018anatomy]. These sources tend to publish fake news, conspiracy stories, clickbait, rumors, junk science, and other types of misinformation. Their articles are provided by the Hoaxy API (http://rapidapi.com/truthy/api/hoaxy).

For each round, the game randomly selects five articles each from mainstream and low-credibility sources, as sketched below. This even mix is a limitation of the experiment, because it is not representative of the proportion of misinformation to which social media users are exposed in the real world. Another limitation is that the fact checking setting of the game primes users to expect misinformation, potentially making it easier to spot.
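A minimal sketch of this round-assembly step; the pool variables are illustrative, and shuffling the feed order is our assumption:

```python
import random

def build_round(mainstream_pool, low_credibility_pool, per_source=5):
    """Draw an even 5+5 mix of articles for one round. Interleaving the
    two sources by shuffling is an assumption of this sketch."""
    feed = (random.sample(mainstream_pool, per_source)
            + random.sample(low_credibility_pool, per_source))
    random.shuffle(feed)
    return feed
```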

Data collection

The game is available online through a standard web interface and as a mobile app via the Google Play Store and the Apple App Store. The mobile app is available in English-speaking countries: United States, Canada, United Kingdom, and Australia. People from other countries can still play the game through the web interface.

The present analysis is based on data from a 19-month deployment of the game between May 2018 and November 2019. During this period, we advertised the game through several channels, including social media (Twitter and Facebook), press releases, conferences, keynote presentations, and word of mouth. We recorded 8,606 unique user sessions involving approximately 120,000 news articles, approximately half of which were from low-credibility sources. We did not collect demographic information, but we collected analytics from Google Analytics embedded within the game’s hosting service. These analytics indicate that participants originated from the United States (78%), Australia (8%), the UK (4%), Canada (3%), Germany (3%), and Bulgaria (2%).

Acknowledgments

We are grateful to Alessandro Flammini for inspiring the social engagement exposure experiment and Chengcheng Shao and Clayton A. Davis for technical support during platform development.


Funding

M.A. and F.M. were supported in part by the Democracy Fund, Craig Newmark Philanthropies, and Knight Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests

The authors have no competing interests to declare.

Ethics

The mechanisms and procedures reported in this article were reviewed and approved by the Institutional Review Board (IRB) of the authors’ institution.