CODA-19: Reliably Annotating Research Aspects on 10,000+ CORD-19 Abstracts Using Non-Expert Crowd
This paper introduces CODA-19, a human-annotated dataset that labels the Background, Purpose, Method, Finding/Contribution, and Other aspects of 10,966 English abstracts in the COVID-19 Open Research Dataset (CORD-19). CODA-19 was created by 248 crowd workers from Amazon Mechanical Turk collectively within ten days. Each abstract was annotated by nine different workers, and the final labels were obtained by majority voting. CODA-19's labels have an accuracy of 82% and an inter-annotator agreement (Cohen's kappa) of 0.74 when compared against expert labels on 129 abstracts. Reliable human annotations help scientists understand the rapidly accelerating coronavirus literature and also serve as a foundation for AI/NLP research. While obtaining expert annotations can be slow, CODA-19 demonstrates that a non-expert crowd can be employed rapidly and at scale to join the fight against COVID-19.
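The label-aggregation step described above (nine workers per abstract, resolved by majority voting) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code; the tie-breaking rule (preferring the aspect listed first) is an assumption for completeness.

```python
from collections import Counter

# The five research aspects annotated in CODA-19.
ASPECTS = ["Background", "Purpose", "Method", "Finding/Contribution", "Other"]

def majority_vote(worker_labels):
    """Aggregate one text segment's labels from multiple crowd workers.

    Returns the most frequent label. Ties are broken by the order of
    ASPECTS above (an assumption; the paper does not specify a rule).
    """
    counts = Counter(worker_labels)
    return max(counts, key=lambda a: (counts[a], -ASPECTS.index(a)))

# Example: nine workers label the same segment of an abstract.
votes = ["Method", "Method", "Finding/Contribution", "Method", "Background",
         "Method", "Method", "Finding/Contribution", "Method"]
print(majority_vote(votes))  # -> Method
```

With nine annotators per segment, a clear majority usually emerges; the reported 82% accuracy and kappa of 0.74 against expert labels suggest this simple aggregation is effective despite individual workers being non-experts.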