Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa
In this paper, we analyzed the perceived accuracy of COVID-19 vaccine Tweets when they were spoken back by a third-party Amazon Alexa skill. We mimicked the soft moderation that Twitter applies to COVID-19 misinformation content in both of its forms, warning covers and warning tags, to investigate whether the third-party skill could affect how and when users heed these warnings. The results from a 304-participant study suggest that spoken-back warning covers may not work as intended, even when converted from text to speech. We controlled for COVID-19 vaccination hesitancy and political leanings and found that vaccination-hesitant Alexa users ignored any type of warning as long as the Tweets aligned with their personal beliefs. Politically independent users trusted Alexa less than their politically affiliated counterparts, which helped them accurately perceive truthful COVID-19 information. We discuss adaptations of soft moderation for voice assistants to achieve the intended effect of curbing COVID-19 misinformation.