Due to the global COVID-19 pandemic and its rapid evolution, citizens are highly interested in the latest news, which spans many domains: directly related news such as treatment and sanitation policies, as well as side effects on education, the economy, and so on. Citizens also pay extra attention to international news, not only because the pandemic has brought the planet together, but also because reports from other countries provide first-hand information to learn from. For example, the outbreak in Korea began about one month earlier than in Japan; Japanese citizens could have prepared better had they obtained more information from Korea. Citizens could learn about the efficiency of masks from Asian countries before local official guidance appeared, and universities could learn how to arrange virtual courses from the experience of other countries. Thus, a citizen-friendly international news system with topic detection would be helpful.
There are three challenges in building such a system compared with systems that focus on one language and one topic (dong2020interactive; thorlund2020real):
The reliability of news sources.
Translation quality into the local language.
Topic classification for efficient searching.
The interface and the construction process of the worldwide COVID-19 information aggregation system are shown in Figure 1. We first build a robust multilingual pipeline that collects reliable websites via crowdsourcing with native workers. We crawl news articles from these websites and filter out irrelevant pages. A high-quality machine translation system then translates the articles into the local language (i.e., Japanese). The translated articles are grouped into their corresponding topics by a BERT-based topic classifier, which achieves an F-score of 0.84 when deciding whether an article is about COVID-19 and substantially outperforms a keyword-based baseline. Finally, all translated, topic-labeled news is presented via a user-friendly web interface.
We present the pipeline for building the worldwide COVID-19 information aggregation system, focusing on our solutions to the three challenges.
| Website | Country | Primary source | Reason | Topics |
|---|---|---|---|---|
| www.cdc.gov | US | True | The site is a government website, specifically the Center for Disease Control. | infection status; prevention and emergency declaration; symptoms, medical treatment and tests |
| www.covid19-yamanaka.com | Japan | False | Shinya Yamanaka is a famous medical researcher and his insights about COVID-19 are reliable. | prevention and emergency declaration |
| www.internazionale.it | Italy | False | This website collects and translates articles from news agencies and magazines from all over the world. It has up-to-date news, but also long-form analysis articles. Most of my deeper information comes from here. | infection status; economics and welfare; prevention and emergency declaration; school and online classes |
| covid.saude.gov.br | Brazil | True | This site is the government website. | infection status |
2.1 Reliable Website Collection
To avoid rumors and obtain high-quality, reliable information, it is essential to limit the information sources. Since we aim to create a multilingual system, the first challenge is to obtain a list of reliable information providers from different countries and in different languages.
Crowdsourcing is known to be efficient for creating high-quality datasets (behnke2018improving). To collect the list of reliable websites of a specific country, we use multiple crowdsourcing services (e.g., Crowd4U (https://crowd4u.org/), Amazon Mechanical Turk (https://www.mturk.com/), Yahoo! Crowdsourcing (https://crowdsourcing.yahoo.co.jp), and Tencent wenjuan (https://wj.qq.com)) and restrict workers by nationality, assuming that local citizens know which websites in their country are reliable. Workers not only suggest websites they consider reliable but must also justify their choices and list the topics those websites address, similar to constructing support for rumor detection (gorrell-etal-2019-semeval; derczynski-etal-2017-semeval).
We selected eight countries of interest: India, the United States, Italy, Japan, Spain, France, Germany, and Brazil. For other countries or regions, such as China and Korea, reliable websites are provided by international students from these areas.
We treat official news from governments as primary information sources and reliable newspapers as secondary information sources. Counting how many times each website was mentioned by the crowdworkers, we found that primary information sources tend to rank in the top three in each country, so we mainly crawl articles from primary sources.
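The ranking step above can be sketched as a simple mention count; the worker responses below are hypothetical examples, not data from the actual survey:

```python
from collections import Counter

def rank_websites(mentions):
    """Rank websites by how many crowdworkers mentioned them."""
    counts = Counter(mentions)
    return [site for site, _ in counts.most_common()]

# Hypothetical worker responses for one country.
mentions = ["www.cdc.gov", "www.cdc.gov", "www.nytimes.com",
            "www.cdc.gov", "www.nytimes.com", "blog.example.com"]
ranking = rank_websites(mentions)  # most-mentioned sites first
```

Sites mentioned by more workers appear earlier in the ranking, so frequently cited primary sources rise to the top.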
Table 1 shows examples of the crowdsourcing results. For each website, the workers indicate whether it is a primary or a secondary source, the reasons for choosing it, and the topics it addresses. These topics are selected from a list of eight (e.g., Infection status, Economics and welfare, School and online classes).
2.2 Crawl, Filter and Translation for Information Localization
We crawl articles from the 35 most reliable websites every day by accessing each entry page and recursively following the URLs inside it. The number of crawled pages exceeds our translation capacity, so we keep only the most relevant pages by filtering with keywords such as COVID, which focuses the pipeline on pages with a higher probability of being COVID-19 related.
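A minimal sketch of the crawl-and-filter step, where the `links` map stands in for real HTTP fetching and the keyword list is an illustrative English subset (the deployed system filters with keywords such as COVID):

```python
from collections import deque

# Illustrative keyword subset; the deployed filter uses keywords such as "COVID".
KEYWORDS = ["covid", "coronavirus", "pandemic"]

def crawl(entry_url, links, max_pages=1000):
    """Breadth-first traversal starting from a site's entry page.
    `links` maps each URL to the URLs found on that page."""
    seen, queue, order = {entry_url}, deque([entry_url]), []
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)
        for nxt in links.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def is_relevant(text):
    """Keep only pages whose text mentions at least one keyword."""
    lowered = text.lower()
    return any(k in lowered for k in KEYWORDS)
```

The `seen` set prevents revisiting pages, and `max_pages` bounds the crawl per site; the relevance filter is then applied to each fetched page's text before translation.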
We use the neural machine translation system TexTra (https://mt-auto-minhon-mlt.ucri.jgn-x.jp/), which is based on the self-attention mechanism (bahdanau2014neural; vaswani2017attention). The system provides high-quality translation of news articles from multiple languages into Japanese, with a capacity of approximately 1,000 articles per day.
2.3 Topic Classification
To perform topic classification, we first collect a dataset via crowdsourcing: topic labels are annotated on a subset of the articles. We then train a topic-classification model to label further articles automatically.
2.3.1 Crowdsourcing Annotation for Topic Classification
Since all articles are in Japanese after the translation stage, we use crowdsourcing to annotate them with topics. As shown in Figure 2, the crowdworkers first check the content of a page and then provide four annotations: whether the article is related to COVID-19, whether it is helpful, whether the translated Japanese is fluent, and which topics it covers.
Each article is assigned to 10 crowdworkers from Yahoo! Crowdsourcing, and we apply a majority threshold for each binary question, i.e., if more than 5 workers think the article is related to COVID-19, we label the article as related. We post this crowdsourcing task twice a week and obtain 20K article-topic pairs each time.
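The aggregation rule can be sketched as a simple majority vote over the 10 binary judgments (a minimal illustration, not the production code):

```python
def majority_label(votes, threshold=5):
    """Label an article positive only if more than `threshold` of the
    (10) workers answered the binary question positively."""
    return sum(votes) > threshold

# Six of ten workers say the article is COVID-19 related -> labeled related.
related = majority_label([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
```

The same rule is applied independently to each binary question (relatedness, helpfulness, fluency).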
2.3.2 Automatic Topic Classifier
The pretrained language model BERT (DBLP:journals/corr/abs-1810-04805) shows strong performance on many NLP tasks with limited annotated data, including document classification (adhikari2019docbert; sun2019fine). We use a pretrained BERT model in a feature-based manner (lee2019would), keeping the encoder weights frozen, and train a classifier on the articles labeled via crowdsourcing. The BERT-based topic classifier can then label the remaining pages automatically.
We also compare it with a keyword-based baseline that assigns a topic whenever one of the topic's predefined keywords appears verbatim in the article.
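A sketch of this baseline, with a hypothetical English keyword map standing in for the 76 keywords used by the real system:

```python
# Hypothetical topic -> keyword map; the real system uses 76 keywords in total.
TOPIC_KEYWORDS = {
    "Infection status": ["infection", "cases", "outbreak"],
    "Economics and welfare": ["economy", "unemployment", "subsidy"],
}

def keyword_topics(text):
    """Assign every topic one of whose keywords appears verbatim in the text."""
    lowered = text.lower()
    return [topic for topic, kws in TOPIC_KEYWORDS.items()
            if any(k in lowered for k in kws)]
```

Exact matching is cheap but brittle: it misses paraphrases and fires on incidental mentions, which is reflected in the baseline's low precision in Table 4.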
| Task | Keyword P | Keyword R | Keyword F1 | BERT P | BERT R | BERT F1 |
|---|---|---|---|---|---|---|
| Is about COVID-19 | 0.36 | 1.00 | 0.54 | 0.82 | 0.87 | 0.84 |
| Topic: Infection status | 0.09 | 0.53 | 0.16 | 0.43 | 0.81 | 0.56 |
| Topic: Medical information | 0.17 | 0.70 | 0.27 | 0.27 | 0.91 | 0.41 |
| Topic: Art and Sport | 0.06 | 0.41 | 0.10 | 0.08 | 0.94 | 0.14 |
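The F1 columns in Table 4 are the harmonic means of the corresponding precision and recall columns (small differences come from rounding the reported precision and recall); for example, for the BERT-based COVID-19 relevance classifier:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# BERT-based model, "Is about COVID-19": P = 0.82, R = 0.87
score = f1(0.82, 0.87)  # approximately 0.84
```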
In this section, we present the topic classification results and statistics of the system.
3.1 Reliable Website Collection
As shown in Table 2, we received 908 questionnaire results from 8 countries, covering 550 websites in total. Rumors are rampant in this era; the reliable-website dataset can help people protect themselves from COVID-19 and avoid trusting rumors about it.
3.2 Topic Classification
We compared the BERT-based model with the keyword-based baseline on the topic classification task.
For the keyword-based method, we selected 76 keywords in total across the different topics, such as COVID, Remote work, and Social distance.
For the BERT-based method, we use the pre-trained BERT-LARGE model with Whole Word Masking (WWM) (http://nlp.ist.i.kyoto-u.ac.jp/). We add one linear layer after the BERT encoder without fine-tuning the encoder. For every article, we take the hidden state of each sentence's end-of-sentence symbol as the sentence embedding and apply mean and max pooling over all sentence embeddings. The input of the linear layer is the concatenation of the mean- and max-pooled embeddings, and the output is a binary label. We randomly select part of the crowdsourced labeled data shown in Table 3 as the training set and use the remainder as the test set.
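The classifier head described above can be sketched with NumPy, assuming sentence embeddings have already been extracted from the frozen encoder (the sizes and random weights below are illustrative; BERT-LARGE uses a hidden size of 1024):

```python
import numpy as np

def pool_article(sentence_embeddings):
    """Concatenate mean and max pooling over one article's sentence
    embeddings (shape: [num_sentences, hidden_size])."""
    mean = sentence_embeddings.mean(axis=0)
    mx = sentence_embeddings.max(axis=0)
    return np.concatenate([mean, mx])  # shape: [2 * hidden_size]

def classify(features, weights, bias):
    """Single linear layer with a sigmoid, producing a binary label."""
    logit = features @ weights + bias
    return (1.0 / (1.0 + np.exp(-logit))) > 0.5

# Illustrative run with random vectors in place of frozen-BERT features.
rng = np.random.default_rng(0)
hidden = 1024
sentences = rng.normal(size=(7, hidden))  # 7 sentence embeddings of one article
features = pool_article(sentences)        # shape: (2048,)
label = classify(features, rng.normal(size=2 * hidden), 0.0)
```

Only the linear layer's weights are trained; the pooling is parameter-free, which keeps the classifier cheap to train on the limited crowdsourced labels.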
As shown in Table 4, the BERT-based model outperforms the baseline model on almost all tasks. Our system can reliably classify which articles are related to COVID-19, so the interface can show related news to our users. Meanwhile, for some topics such as Art and Sport and Education, the performance of the current system is still limited, which could be improved in future work.
3.3 Statistics of the System
Details of the system database are shown in Table 5. There are 1.05M web pages in total, 110K of which have been translated into Japanese and 18K of which have topic labels. The dataset is still growing by approximately 11K pages per day.
We built a system for worldwide COVID-19 information aggregation that combines crowdsourcing, crawling, machine translation, and a BERT-based topic classifier, providing reliable, comprehensive, and up-to-date information from around the world.