The State of Human-centered NLP Technology for Fact-checking

by Anubrata Das, et al.

Misinformation threatens modern society by promoting distrust in science, changing narratives in public health, heightening social polarization, and disrupting democratic elections and financial markets, among a myriad of other societal harms. To address this, a growing cadre of professional fact-checkers and journalists provides high-quality investigations into purported facts. However, these largely manual efforts have struggled to match the enormous scale of the problem. In response, a growing body of Natural Language Processing (NLP) technologies has been proposed for more scalable fact-checking. Despite tremendous growth in such research, however, practical adoption of NLP technologies for fact-checking remains in its infancy today. In this work, we review the capabilities and limitations of current NLP technologies for fact-checking. Our particular focus is to further chart the design space for how these technologies can be harnessed and refined to better meet the needs of human fact-checkers. To do so, we review key aspects of NLP-based fact-checking: task formulation, dataset construction, modeling, and human-centered strategies, such as explainable models and human-in-the-loop approaches. Next, we review the efficacy of applying NLP-based fact-checking tools to assist human fact-checkers. We recommend that future research involve fact-checker stakeholders early in the NLP research process and incorporate human-centered design practices into model development, in order to better guide technology toward human use and practical adoption. Finally, we advocate for more research on benchmark development supporting extrinsic evaluation of human-centered fact-checking technologies.



