XAI in Automated Fact-Checking? The Benefits Are Modest And There's No One-Explanation-Fits-All
Fact-checking is a popular countermeasure against misinformation, but the massive volume of information online has spurred active research into automating the task. As with expert fact-checking, it is not enough for an automated fact-checker to be accurate; it must also be able to inform and convince the user of the validity of its prediction. This becomes viable with explainable artificial intelligence (XAI). In this work, we conduct a study of XAI fact-checkers involving 180 participants to determine how XAI affects users' actions towards news and their attitudes towards explanations. Our results suggest that XAI has limited effects on users' agreement with the veracity prediction of the automated fact-checker and on their intent to share news. However, XAI does nudge users towards forming uniform judgments of news veracity, signaling a reliance on the explanations. We also find polarizing preferences towards XAI, which raise several design considerations.