Generating Fact Checking Briefs

11/10/2020
by Angela Fan, et al.

Fact checking at scale is difficult – while the number of active fact checking websites is growing, it remains too small for the needs of the contemporary media ecosystem. However, despite good intentions, contributions from volunteers are often error-prone, and thus in practice restricted to claim detection. We investigate how to increase the accuracy and efficiency of fact checking by providing information about the claim before performing the check, in the form of natural language briefs. We investigate passage-based briefs, containing a relevant passage from Wikipedia; entity-centric ones, consisting of Wikipedia pages of mentioned entities; and Question-Answering Briefs, with questions decomposing the claim and their answers. To produce QABriefs, we develop QABriefer, a model that generates a set of questions conditioned on the claim, searches the web for evidence, and generates answers. To train its components, we introduce QABriefDataset, which we collected via crowdsourcing. We show that fact checking with briefs – in particular QABriefs – increases the accuracy of crowdworkers by 10%. For volunteer (unpaid) fact checkers, QABriefs slightly increase accuracy and reduce the time required by around 20%.
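To make the three-stage structure of QABriefer concrete, the sketch below lays out the pipeline the abstract describes (question generation conditioned on the claim, web search for evidence, answer generation). It is a minimal illustration only: the function names, the placeholder logic, and the `QABrief` container are hypothetical and do not reflect the authors' released code or models.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class QABrief:
    """A fact-checking brief: decomposition questions with evidence-backed answers."""
    claim: str
    questions: List[str]
    answers: List[str]


def generate_questions(claim: str) -> List[str]:
    """Stage 1 (hypothetical): generate questions that decompose the claim.

    A real system would call a trained sequence-to-sequence question-generation
    model conditioned on the claim text.
    """
    return [f"What evidence supports or refutes the claim: '{claim}'?"]


def search_web(question: str) -> str:
    """Stage 2 (hypothetical): retrieve evidence for a question.

    A real system would query a search engine and select relevant passages.
    """
    return "retrieved evidence passage"


def generate_answer(question: str, evidence: str) -> str:
    """Stage 3 (hypothetical): answer the question from the retrieved evidence.

    A real system would run a reading-comprehension / QA model here.
    """
    return f"answer to '{question}' derived from: {evidence}"


def build_qabrief(claim: str) -> QABrief:
    """Compose the three stages into a brief for a single claim."""
    questions = generate_questions(claim)
    answers = [generate_answer(q, search_web(q)) for q in questions]
    return QABrief(claim=claim, questions=questions, answers=answers)


if __name__ == "__main__":
    print(build_qabrief("The Eiffel Tower was completed in 1889."))
```

The brief produced this way would then be shown to a fact checker alongside the claim, which is the setting the crowdworker and volunteer evaluations above refer to.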
