Detecting fake news for the new coronavirus by reasoning on the Covid-19 ontology

04/26/2020 ∙ by Adrian Groza, et al. ∙ UTCluj 0

In the context of the Covid-19 pandemic, many were quick to spread deceptive information. I investigate here how reasoning in Description Logics (DLs) can detect inconsistencies between trusted medical sources and untrusted ones. The untrusted information comes in natural language (e.g. "Covid-19 affects only the elderly"). To convert it automatically into DL, I used the FRED converter. Reasoning in Description Logics is then performed with the Racer tool.







1 Introduction

In the context of the Covid-19 pandemic, many were quick to spread deceptive information [7]. Fighting misinformation requires tools from various domains, such as law, education, and information technology [21, 16].

Since a lot of trusted medical knowledge is already formalised, I investigate here how an ontology on Covid-19 could be used to signal fake news.

I investigate here how reasoning in description logic can detect inconsistencies between a trusted medical source and untrusted ones. The untrusted information comes in natural language (e.g. "Covid-19 affects only the elderly"). To convert it automatically into description logic (DL), I used the FRED converter [12]. Reasoning in Description Logics is then performed with the Racer reasoner [15]. The Python sources and the formalisation in Description Logics (KRSS syntax) are available online.

The rest of the paper is organised as follows: Section 2 succinctly introduces the syntax of description logic and shows how inconsistency can be detected by reasoning. Section 3 analyses misconceptions with the Covid-19 ontology. Section 4 analyses FRED translations for the Covid-19 myths. Section 5 illustrates how to formalise knowledge patterns for automatic conflict detection. Section 6 browses related work, while Section 7 concludes the paper.

2 Finding inconsistencies using Description Logics

2.1 Description Logics

In Description Logics, concepts are built using the set of constructors formed by negation, conjunction, disjunction, value restriction, and existential restriction [4] (Table 1). Here, C and D represent concept descriptions, while r is a role name. The semantics is defined based on an interpretation I = (Δ^I, ·^I), where the domain Δ^I is a non-empty set of individuals, and the interpretation function ·^I maps each concept name C to a set of individuals C^I ⊆ Δ^I and each role r to a binary relation r^I ⊆ Δ^I × Δ^I. The last column of Table 1 shows the extension of ·^I for non-atomic concepts.

Constructor Syntax Semantics
existential restriction ∃r.C {x | ∃y. (x, y) ∈ r^I and y ∈ C^I}
value restriction ∀r.C {x | ∀y. (x, y) ∈ r^I implies y ∈ C^I}
individual assertion C(a) a^I ∈ C^I
role assertion r(a, b) (a^I, b^I) ∈ r^I
Table 1: Syntax and semantics of DL

A terminology (TBox) is a finite set of terminological axioms of the forms C ≡ D or C ⊑ D.

Example 1 (Terminological box)

”Coronavirus disease (Covid-19) is an infectious disease caused by a newly discovered coronavirus” can be formalised as:


Here the concept Covid-19 is equivalent to the coronavirus disease concept. We know that an infectious disease is a disease (i.e. the former concept is included in the more general one). We also learn from (3) that the coronavirus disease is included in the intersection of two sets: the set of infectious diseases and the set of individuals for which all the causing roles point towards instances of the coronavirus concept.

An assertional box (ABox) is a finite set of concept assertions C(i) or role assertions r(i,j), where C designates a concept, r a role, and i and j two individuals.

Example 2 (Assertional Box)

The first assertion says that the individual SARS-CoV-2 is an instance of the virus concept. The second formalises the information that SARS-CoV-2 comes from bats. Here the role relates two individuals: SARS-CoV-2 and an individual that is an instance of the mammals concept.

A concept C is satisfiable if there exists an interpretation I such that C^I is non-empty. The concept D subsumes the concept C (written C ⊑ D) if C^I ⊆ D^I for all interpretations I. Constraints on concepts (e.g. disjointness) or on roles (domain, range, inverse roles, or transitivity) can be specified in more expressive description logics. (I provide only some basic terminology of description logics to keep the paper self-contained; for a detailed account of the families of Description Logics, the reader is referred to [4].) By reasoning on these mathematical constraints, one can detect inconsistencies among different pieces of knowledge, as illustrated by the following inconsistency patterns.
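Satisfiability and subsumption can be illustrated with a toy finite interpretation in Python. This is a sketch only: the concept and individual names are invented, and it checks a single fixed interpretation, whereas real satisfiability and subsumption quantify over all interpretations.

```python
# Toy model of one DL interpretation: each concept name maps to a set of
# individuals drawn from a finite domain. Names are invented for illustration.
domain = {"sars_cov_2", "influenza_a", "e_coli"}

interpretation = {  # the mapping .^I restricted to concept names
    "Virus": {"sars_cov_2", "influenza_a"},
    "Bacterium": {"e_coli"},
    "Microorganism": {"sars_cov_2", "influenza_a", "e_coli"},
}
# Sanity check: every extension lies inside the domain.
assert all(ext <= domain for ext in interpretation.values())

def satisfiable(concept):
    # A concept is satisfiable in this interpretation iff its extension is non-empty.
    return bool(interpretation[concept])

def subsumes(sup, sub):
    # sub is subsumed by sup in this interpretation iff sub^I is a subset of sup^I.
    return interpretation[sub] <= interpretation[sup]

print(satisfiable("Virus"), subsumes("Microorganism", "Virus"))
# True True
```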

2.2 Inconsistency patterns

An ontology O is incoherent iff there exists an unsatisfiable concept in O.

Example 3 (Incoherent ontology)

The ontology is incoherent because one of its concepts is unsatisfiable: it is included in two disjoint sets.

In most cases, reasoning is required to signal that a concept is included in two disjoint concepts.

Example 4 (Reasoning to detect incoherence)

From axioms 6 and 7, one can deduce that Covid-19 is included in a given concept. From axiom 8, one learns the opposite: Covid-19 lies outside the same set. A Description Logics reasoner will signal the incoherence.

An ontology is inconsistent when an unsatisfiable concept is instantiated. For instance, inconsistency occurs when the same individual is an instance of two disjoint concepts.

Example 5 (Inconsistent ontology)

We learn that SARS-CoV-2 is an instance of both the virus and the bacterium concepts. Axiom (8) states that viruses are disjoint from bacteria. A Description Logics reasoner will signal the inconsistency.
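The disjointness check behind Example 5 can be sketched in a few lines of Python. This is not a DL reasoner, only the special case where an individual is directly asserted into two disjoint concepts; a real reasoner such as Racer also finds conflicts that require inference.

```python
# Minimal disjointness check: an ontology is inconsistent when one individual
# is asserted into two concepts declared disjoint.
disjoint_axioms = [("Virus", "Bacterium")]  # Virus and Bacterium are disjoint

abox = [("Virus", "SARS-CoV-2"),
        ("Bacterium", "SARS-CoV-2"),
        ("Virus", "influenza_a")]

def find_inconsistencies(abox, disjoint_axioms):
    # Collect, for each individual, the concepts it is asserted to belong to.
    members = {}
    for concept, individual in abox:
        members.setdefault(individual, set()).add(concept)
    # Report every individual that falls into a disjoint pair.
    conflicts = []
    for individual, concepts in members.items():
        for c, d in disjoint_axioms:
            if c in concepts and d in concepts:
                conflicts.append((individual, c, d))
    return conflicts

print(find_inconsistencies(abox, disjoint_axioms))
# [('SARS-CoV-2', 'Virus', 'Bacterium')]
```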

Two more examples of such antipatterns are given below (there are more antipatterns [18] that trigger both incoherence and inconsistency):

Antipattern 1 (Onlyness Is Loneliness - OIL)

Here, a concept C1 can only be linked with the role r to a concept C2. Next, C1 can only be linked with the same role r to a concept C3, disjoint with C2.

Example 6 (OIL antipattern)
Antipattern 2 (Universal Existence - UE)

The first axiom adds an existential restriction for the concept, conflicting with a universal restriction over the same role for the same concept in the second axiom.

Example 7 (UE antipattern)

Assume that two of these axioms come from a trusted source, while the third comes from the social web. By combining all three axioms, a reasoner will signal the inconsistency or incoherence. The technical difficulty is that information from the social web comes in natural language.
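The UE antipattern can be checked syntactically, as the following hedged sketch shows. The TBox encoding and the concept and role names (Myth, spreadsVia, MobileNetwork, Droplet) are illustrative placeholders, not the paper's actual ontology.

```python
# Sketch of detecting the Universal Existence (UE) antipattern:
# C subsumed by (exists r.D) together with C subsumed by (forall r.E),
# where D and E are declared disjoint.
tbox = [
    ("exists", "Myth", "spreadsVia", "MobileNetwork"),  # Myth ⊑ ∃spreadsVia.MobileNetwork
    ("forall", "Myth", "spreadsVia", "Droplet"),        # Myth ⊑ ∀spreadsVia.Droplet
]
disjoint = {frozenset({"MobileNetwork", "Droplet"})}

def detect_ue(tbox, disjoint):
    """Flag pairs C ⊑ ∃r.D and C ⊑ ∀r.E where D and E are disjoint."""
    hits = []
    for kind1, c1, r1, d1 in tbox:
        for kind2, c2, r2, d2 in tbox:
            if ((kind1, kind2) == ("exists", "forall")
                    and c1 == c2 and r1 == r2
                    and frozenset({d1, d2}) in disjoint):
                hits.append((c1, r1, d1, d2))
    return hits

print(detect_ue(tbox, disjoint))
# [('Myth', 'spreadsVia', 'MobileNetwork', 'Droplet')]
```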

3 Analysing misconceptions with the Covid-19 ontology

Myth Fact
5G mobile networks spread Covid-19 Viruses can not travel on radio waves/mobile networks
Exposing yourself to the sun or to temperatures higher than 25°C prevents the coronavirus disease You can catch Covid-19 , no matter how sunny or hot the weather is
You can not recover from the coronavirus infection Most of the people who catch Covid-19 can recover and eliminate the virus from their bodies.
Covid-19 can not be transmitted in areas with hot and humid climates Covid-19 can be transmitted in all areas
Drinking excessive amounts of water can flush out the virus Drinking excessive amounts of water can not flush out the virus
Regularly rinsing your nose with saline help prevent infection with Covid-19 There is no evidence that regularly rinsing the nose with saline has protected people from infection with the new coronavirus
Eating raw ginger counters the coronavirus There is no evidence that eating garlic has protected people from the new coronavirus
The new coronavirus can be spread by Chinese food The new coronavirus can not be transmitted through food
Hand dryers are effective in killing the new coronavirus Hand dryers are not effective in killing the 2019-nCoV
Cold weather and snow can kill the new coronavirus Cold weather and snow can not kill the new coronavirus
Taking a hot bath prevents the new coronavirus disease Taking a hot bath will not prevent from catching Covid-19
Ultraviolet disinfection lamp kills the new coronavirus Ultraviolet lamps should not be used to sterilize hands or other areas of skin as UV radiation can cause skin irritation
Spraying alcohol or chlorine all over your body kills the new coronavirus Spraying alcohol or chlorine all over your body will not kill viruses that have already entered your body
Vaccines against pneumonia protect against the new coronavirus Vaccines against pneumonia, such as pneumococcal vaccine and Haemophilus influenza type B (Hib) vaccine, do not provide protection against the new coronavirus
Antibiotics are effective in preventing and treating the new coronavirus Antibiotics do not work against viruses, only bacteria.
High dose of Vitamin C heals Covid-19 No supplement cures or prevents disease
The pets transmit the Coronavirus to humans There are currently no reported cases of people catching the coronavirus from animals
If you can’t hold your breath for 10 seconds, you have a coronavirus disease You can not confirm coronavirus disease with a breathing exercise
Drinking alcohol prevents Covid-19 Drinking alcohol does not protect against Covid-19 and can be dangerous
Eating raw lemon counters coronavirus No food cures or prevents disease
Zinc supplements can lower the risk of contracting Covid-19 No supplement cures or prevents disease
Vaccines against flu protect against the new coronavirus Vaccines against flu do not protect against the new coronavirus
The new coronavirus can be transmitted through mosquito The new coronavirus can not be transmitted through mosquito
Covid-19 can affect elderly only Covid-19 can affect anyone
Table 2: Sample of myths versus facts on Covid-19

Sample medical misconceptions on Covid-19 are collected in Table 2. Organisations such as the WHO provide facts countering some of these myths (denoted in the table).

Consider for instance the first myth, with the formalisation:


Assume the following formalisation for the corresponding fact:


The following line of reasoning signals that the ontology is inconsistent:


Here we need the subsumption relation between roles. The reasoner finds that the individual which is a mobile network (by axiom (24)) and which spreads an individual that is a virus (by axiom (25)) is in conflict with axiom (30).
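The role-subsumption step of this argument can be imitated in plain Python. This is a toy sketch with invented role and concept names (spreads, transmits, MobileNetwork, Virus); the actual system delegates such reasoning to Racer.

```python
# Sketch of the 5G-myth check: the myth asserts spreads(5G, Covid-19); the
# trusted fact forbids any MobileNetwork from transmitting a Virus; and the
# role hierarchy says spreads is subsumed by transmits.
concepts = {"5G": {"MobileNetwork"}, "Covid-19": {"Virus"}}
role_hierarchy = {"spreads": "transmits"}            # spreads ⊑ transmits
assertions = [("spreads", "5G", "Covid-19")]         # from the myth
forbidden = ("transmits", "MobileNetwork", "Virus")  # from the trusted fact

def entailed(assertions, hierarchy):
    """Close the role assertions under the role hierarchy (r ⊑ s)."""
    closed = set(assertions)
    changed = True
    while changed:
        changed = False
        for role, s, o in list(closed):
            sup = hierarchy.get(role)
            if sup and (sup, s, o) not in closed:
                closed.add((sup, s, o))
                changed = True
    return closed

def conflicts(assertions, hierarchy, concepts, forbidden):
    role, subj_c, obj_c = forbidden
    return [(s, o) for r, s, o in entailed(assertions, hierarchy)
            if r == role and subj_c in concepts.get(s, set())
            and obj_c in concepts.get(o, set())]

print(conflicts(assertions, role_hierarchy, concepts, forbidden))
# [('5G', 'Covid-19')]
```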

As a second example, consider the following myth from Table 2:


The corresponding fact states:


The inconsistency will be detected on an ABox containing an individual affected by Covid-19 who is not elderly:


We also need some background knowledge:


Based on the definition of the elderly concept and on jon's age, the reasoner learns that jon does not belong to that concept. From the inverse roles, one learns that the virus Covid-19 affects jon. Since the concept Covid-19 includes only the individual with the same name (defined with the constructor for nominals), the reasoner is able to detect the inconsistency.

Note that we need some background knowledge (such as the definition of the elderly) to signal the conflict. Note also the need for a trusted Covid-19 ontology.

There is ongoing work on formalising knowledge about Covid-19. First, there is the Coronavirus Infectious Disease Ontology (CIDO). Second, the Semantics for Covid-19 Discovery adds semantic annotations to the CORD-19 dataset, which was obtained by automatically analysing publications on Covid-19.

Note also that the above formalisation was obtained manually. Yet, in most cases we need automatic translation from natural language to description logic.

4 Automatic conversion of Covid-19 myths into Description Logic with FRED

Transforming unstructured text into a formal representation is an important task for the Semantic Web. Several tools contribute towards this aim: FRED [12], OpenIE [17], approaches based on controlled languages (e.g. ACE), Framester [11], or KNEWS [2]. We use here the FRED tool, which takes a text in natural language and outputs a formalisation in description logic.

FRED is a machine reader for the Semantic Web that relies on Discourse Representation Theory, frame semantics, and ontology design patterns [8, 12]. FRED leverages multiple natural language processing (NLP) components by integrating their outputs into a unified result, which is formalised as an RDF/OWL graph. FRED relies on several NLP knowledge resources (see Table 3). VerbNet [19] contains semantic roles and patterns that are structured into a taxonomy. FrameNet [5] introduces frames to describe a situation, state, or action. The elements of a frame include agent, patient, time, and location. A frame is usually expressed by verbs or other linguistic constructions, hence all occurrences of frames are formalised as OWL n-ary relations, all being instances of some type of event or situation.

We exemplify next how FRED handles linked data, compositional semantics, plurals, modality, and negation with examples related to Covid-19:

4.1 Linked Data and compositional semantics

Ontology Prefix Name Space
Covid-19 myths covid19.m: /covid-19-myths.owl#
VerbNet roles vn.role:
VerbNet concepts
FrameNet frame ff:
FrameNet element fe:
DOLCE+DnS Ultra Light dul:
WordNet wn30:
Boxer boxer:
Boxing boxing:
DBpedia dbpedia: schemaorg:
Quantity q:
Table 3: FRED’s knowledge resources and their prefixes used for the Covid-19 myths ontology
Figure 1: Translating the myth ”Hand dryers are effective in killing the new coronavirus” into description logic

Consider the myth ”Hand dryers are effective in killing the new coronavirus”, whose automatic translation into DL appears in Figure 1. FRED creates an individual for the dryer. A role from the boxing ontology is used to relate it with the situation instance:


Note that it is an instance of the corresponding concept from DBpedia. The plural is formalised by a role from the ontology:


The information that hand dryers are effective is modelled with a quality role from the ontology:


Note also that the instance is related to the instance with the role :


The killing instance is identified as an instance of the corresponding verb from VerbNet and also as an instance of the event concept from the ontology:


FRED creates a new complex concept that is a subclass of the coronavirus concept from DBpedia and has the quality new:


Here the concept is identified as a subclass of the concept from Dolce.

Note that FRED has successfully linked the information from the myth with relevant concepts from the DBpedia, VerbNet, and Dolce ontologies. It also nicely formalises the plural of ”dryers” and uses compositional semantics for ”hand dryers” and ”new coronavirus”.

Here, the killing instance has the dryer object as patient. (Note that the patient role has the semantics from the VerbNet ontology; there is no connection with a patient as a person suffering from a disease.) Also, the situation instance has as agent something that the killing is in:


The resulting meaning would be: ”The situation involving hand dryers is in something that kills the new coronavirus”.

One flaw in the automatic translation from Figure 1 is that hand dryers are identified as the same individual as the coronavirus:


This might be caused by the term ”are” in the myth (”Hand dryers are …”), which signals a possible definition or equivalence. This flaw requires post-processing: for instance, we can automatically remove all such sameness relations from the generated ABox.
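Such post-processing is a simple triple filter. The sketch below works on a plain Python list of triples; the instance names (dryer_1, coronavirus_1) are hypothetical, and a real implementation would operate on the RDF graph returned by FRED.

```python
# Post-processing sketch: drop the spurious sameness links FRED sometimes
# produces (e.g. equating a hand dryer with the coronavirus), keeping all
# other triples intact. Instance names are invented for illustration.
triples = [
    ("covid19.m:dryer_1", "owl:sameAs", "covid19.m:coronavirus_1"),
    ("covid19.m:dryer_1", "rdf:type", "covid19.m:HandDryer"),
    ("covid19.m:coronavirus_1", "rdf:type", "covid19.m:Coronavirus"),
]

def drop_same_as(triples):
    # Keep every triple whose predicate is not owl:sameAs.
    return [t for t in triples if t[1] != "owl:sameAs"]

cleaned = drop_same_as(triples)
print(len(cleaned))  # 2
```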

Actually, the information encapsulated in the given sentence is: ”Hand dryers kill coronavirus”. Given this simplified version of the myth, FRED outputs the translation in Figure 2. Here the killing individual is correctly linked with the corresponding verb from VerbNet and also identified as an event in Dolce. The instance has the dryer as agent and the coronavirus as patient. This corresponds to the intended semantics: hand dryers kill coronavirus.

Figure 2: Translating the simplified sentence: ”Hand dryers kill coronavirus”

4.2 Modalities and disambiguation

Deceptive information makes extensive use of modalities.

Since OWL lacks formal constructs to express modality, FRED uses the Modality class from the Boxing ontology:

  • boxing:Necessary: e.g., will, must, should

  • boxing:Possible: e.g. may, might

where both are subclasses of the Modality class.
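The mapping from modal auxiliaries to the two boxing classes can be sketched as a simple lookup. This is an illustrative approximation of the behaviour described above, not FRED's actual implementation.

```python
# Sketch: map modal auxiliaries to the boxing ontology's modality classes,
# following the grouping listed above (will/must/should vs may/might).
MODALITY = {
    "will": "boxing:Necessary",
    "must": "boxing:Necessary",
    "should": "boxing:Necessary",
    "may": "boxing:Possible",
    "might": "boxing:Possible",
}

def modality_of(sentence):
    # Return the modality of the first modal auxiliary found, if any.
    for token in sentence.lower().split():
        if token in MODALITY:
            return MODALITY[token]
    return None

print(modality_of("You should take vitamin C"))  # boxing:Necessary
```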

Figure 3: Translating myths with modalities: ”You should take vitamin C”

Consider the following myth related to Covid-19: ”You should take vitamin C” (Figure 3). The frame is formalised around the take instance, which is related to the corresponding verb from VerbNet and also identified as an event from the Dolce ontology. The agent of the take verb is a person, and the event has the modality boxing:Necessary. The individual is an instance of the vitamin C concept.

Although the above formalisation is correct, the following axioms are wrong. First, FRED links the concept Vitamin from the Covid-19 ontology with the singer Vitamin C from DBpedia. Second, the concept Person from the Covid-19 ontology is linked with the Hybrid Theory album from DBpedia, instead of the person concept. By performing word sense disambiguation (see Figure 4), FRED correctly links the vitamin C concept with the corresponding noun from WordNet, which is a subclass of the vitamin concept in WordNet and also of a concept from Dolce.

Figure 4: Word sense disambiguation for: ”You should take vitamin C”

4.3 Handling negation

Figure 5: Formalising negations: ”You can not recover from the coronavirus infection”

Most of the myths are in positive form; only a few myths in Table 2 include negation. Consider the translation of one such myth in Figure 5. The frame is built around the recovering event. Indeed, FRED signals that the event:

  • has truth value false (axiom 51)

  • has modality ”possible” (axiom 52)

  • has agent a person (axiom 53)

  • has source an infection of type coronavirus (axiom 54)


However, FRED does not make any assumption on the impact of negation over logical quantification and scope. The truth value is the only element that one can use to signal a conflict between positive and negated information.

5 Fake news detection by reasoning in Description Logics

Given a possible myth automatically translated by FRED into an ontology, we tackle the fake news detection task with two approaches:

  1. signalling a conflict between the myth and scientific facts also automatically translated by FRED

  2. signalling a conflict between the myth and the Covid-19 ontology designed by a human agent

5.1 Detecting conflicts between automatic translation of myths and facts

  1. Translate the myth into DL using FRED

  2. Translate the fact into DL using FRED

  3. Merge the two ontologies

  4. Check the coherence and consistency of the merged ontology

  5. If a conflict is detected, verbalise explanations for the inconsistency

  6. If no conflict is detected, import relevant knowledge that may signal the conflict
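These steps can be sketched as a small pipeline. The functions translate_with_fred and check_consistency below are stand-ins for calls to the FRED service and the Racer reasoner, not real APIs; steps 5 and 6 (explanation and knowledge import) are omitted.

```python
# Hedged sketch of the detection pipeline; the two helpers are placeholders
# for the external FRED and Racer components.
def translate_with_fred(text):
    # Placeholder: the real system sends `text` to the FRED converter and
    # parses the returned OWL graph into a set of axioms.
    return {f"axiom({text!r})"}

def check_consistency(ontology):
    # Placeholder: the real system hands the merged axioms to Racer.
    return "conflict" if len(ontology) > 1 else "consistent"

def detect_fake(myth, fact):
    o_myth = translate_with_fred(myth)   # step 1
    o_fact = translate_with_fred(fact)   # step 2
    merged = o_myth | o_fact             # step 3
    return check_consistency(merged)     # step 4

print(detect_fake("Covid-19 can affect elderly only",
                  "Covid-19 can affect anyone"))
# conflict
```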

Consider the pair:

Myth : Covid-19 can affect elderly only.
Fact : Covid-19 can affect anyone.
Figure 6: Step 1: Automatically translating the myth into DL: “Covid-19 can affect elderly only”
Figure 7: Step 2: Automatically translating the fact into DL: “Covid-19 can affect anyone”
Figure 8: Step 3: Sample of knowledge from the merged ontology

Figure 8 shows the relevant knowledge used to detect the conflict (note that the prefix for the Covid-19-Myths ontology has been removed). The FRED tool has detected the modality for the affect individual. The same instance has the quality Only. However, the affect role relates this instance with two distinct individuals.

Figure 9: Step 4: Conflict detection based on the pattern

The axioms in Figure 9 state that an elderly individual is a person and that the asserted individual is not elderly. The SWRL rule defining the conflict detection pattern states that for each individual with the quality Only that is related via a role to two distinct individuals, where the first is an instance of a given concept, the second individual is also an instance of that concept.

The conflict comes from the fact that the individual is not an instance of the elderly concept, but he/she is still affected by Covid-19.
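The effect of this SWRL-style pattern can be sketched directly in Python. The data below (the quality Only on Covid-19, the affects assertion, and the individual jon known not to be elderly) mirrors the example above; the encoding itself is an illustrative assumption, not the paper's actual rule syntax.

```python
# Sketch of the "elderly only" conflict pattern: if Covid-19 carries the
# quality Only and affects an individual known not to be elderly, the myth
# is contradicted by the ABox.
quality = {"Covid-19": "Only"}
affects = [("Covid-19", "jon")]
not_elderly = {"jon"}  # derived from jon's age via the background knowledge

def conflict(quality, affects, not_elderly):
    # Report every (subject, object) pair where an "only" restriction is
    # violated by an object outside the restricting concept.
    return [(s, o) for s, o in affects
            if quality.get(s) == "Only" and o in not_elderly]

print(conflict(quality, affects, not_elderly))
# [('Covid-19', 'jon')]
```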




Figure 10: A Covid-19 ontology is enriched using FRED with trusted facts and medical myths. The Racer reasoner is used to detect inconsistencies in the enriched ontology, based on patterns manually formalised in Description Logics or SWRL

The system architecture appears in Figure 10. We start with a core ontology for Covid-19. This ontology is enriched with trusted facts on Covid-19 using the FRED converter. Information from untrusted sources is also formalised in DL using FRED. The merged axioms are given to Racer, which is able to signal conflicts.

To help the user understand which knowledge from the ontology is causing incoherences, we use Racer's explanation capabilities. RacerPro provides explanations for unsatisfiable concepts, for subsumption relationships, and for unsatisfiable ABoxes through the commands (check-abox-coherence), (check-tbox-coherence), and (check-ontology) or (retrieve-with-explanation). These explanations are given to an ontology verbaliser in order to generate natural language explanations of the conflict.

We aim to collect a corpus of common misconceptions spread in online media, to analyse these misconceptions and build evidence-based counter-arguments to each piece of deceptive information, and to annotate this corpus with concepts and roles from trusted medical ontologies.

6 Discussion and related work

Our topic is related to the more general issue of fake news [9]. In the medical domain in particular, there has been continuous concern about the reliability of online health information [1]. In this line, Waszak et al. have recently investigated the spread of fake medical news in social media [22]. Amith and Tao have formalised the Vaccine Misinformation Ontology (VAXMO) [3]. VAXMO extends the Misinformation Ontology, aiming to support vaccine misinformation detection and analysis.

Teymourlouie et al. have recently analysed the importance of contextual knowledge in detecting ontology conflicts. The added contextual knowledge is applied in [20] to the task of debugging ontologies. In our case, the contextual ontology is represented by the patterns for conflict detection between two merged ontologies. The output of FRED is given to the Racer reasoner, which detects conflicts based on a trusted medical source and the conflict detection patterns.

The FiB system [10] labels news as verified or non-verified. It crawls the Web for news similar to the current one and summarises them. The user reads the summaries and figures out which information from the initial news might be fake. We aim a step further, towards automatically identifying possible inconsistencies between a given piece of news and verified medical content.

The MERGILO tool reconciles knowledge graphs extracted from text, using graph alignment and word similarity [2]. One application area is detecting knowledge evolution across document versions. To obtain the formalisation of events, MERGILO uses both FRED and Framester. Instead of using metrics to compute graph similarity, I used here knowledge patterns to detect conflicts.

Enriching ontologies with complex axioms has been given some consideration in the literature [13, 14]. The aim would be to bridge the gap between a document-centric and a model-centric view of information [14]. Gyawali et al. translate text in the SIDP format (i.e. System Installation Design Principle) into axioms in description logic. The proposed system combines an automatically derived lexicon with a hand-written grammar to automatically generate axioms. Here, the core Covid-19 ontology is enriched with axioms generated by FRED fed with facts in natural language. Instead of a grammar, I formalised knowledge patterns (e.g. axioms in DL or SWRL rules) to detect conflicts.

Conflict detection depends heavily on the performance of the FRED translator. One can replace FRED with related tools such as Framester [11] or KNEWS [6]. Framester is a large RDF knowledge graph (about 30 million RDF triples) acting as an umbrella for FrameNet, WordNet, VerbNet, BabelNet, and the Predicate Matrix. In contrast to FRED, KNEWS (Knowledge Extraction With Semantics) can be configured to use different external modules as input, but also different output modes (i.e. frame instances, word-aligned semantics, or first-order logic). The frame representation outputs RDF tuples in line with the FrameBase model. First-order logic formulae are in a syntax similar to TPTP and include WordNet synsets and DBpedia ids as symbols [6].

7 Conclusion

Even if fake news in the health domain is old hat, many technical challenges remain in the effective fight against medical myths. This is preliminary work on combining two heavy machineries, natural language processing and ontology reasoning, aiming to signal fake information related to Covid-19.

The ongoing work includes: i) system evaluation and ii) verbalising explanations for each identified conflict.


  • [1] S. A. Adams (2010) Revisiting the online health information reliability debate in the wake of web 2.0: an inter-disciplinary literature and website review. International Journal of Medical Informatics 79 (6), pp. 391 – 400. Note: Special Issue: Information Technology in Health Care: Socio-technical Approaches External Links: ISSN 1386-5056, Document, Link Cited by: §6.
  • [2] M. Alam, D. R. Recupero, M. Mongiovi, A. Gangemi, and P. Ristoski (2017) Event-based knowledge reconciliation using frame embeddings and frame similarity. Knowledge-Based Systems 135, pp. 192–203. Cited by: §4, §6.
  • [3] M. Amith and C. Tao (2018) Representing vaccine misinformation using ontologies. Journal of biomedical semantics 9 (1), pp. 22. Cited by: §6.
  • [4] F. Baader, D. Calvanese, D. McGuinness, P. Patel-Schneider, D. Nardi, et al. (2003) The description logic handbook: theory, implementation and applications. Cambridge university press. Cited by: §2.1, footnote 2.
  • [5] C. F. Baker, C. J. Fillmore, and J. B. Lowe (1998) The berkeley framenet project. In Proceedings of the 17th international conference on Computational linguistics-Volume 1, pp. 86–90. Cited by: §4.
  • [6] V. Basile, E. Cabrio, and C. Schon (2016) KNEWS: using logical and lexical semantics to extract knowledge from natural language. Cited by: §6.
  • [7] M. Cinelli, W. Quattrociocchi, A. Galeazzi, C. M. Valensise, E. Brugnoli, A. L. Schmidt, P. Zola, F. Zollo, and A. Scala (2020) The covid-19 social media infodemic. arXiv preprint arXiv:2003.05004. Cited by: Detecting fake news for the new coronavirus by reasoning on the Covid-19 ontology, §1.
  • [8] F. Draicchio, A. Gangemi, V. Presutti, and A. G. Nuzzolese (2013) Fred: from natural language text to rdf and owl in one click. In Extended Semantic Web Conference, pp. 263–267. Cited by: §4.
  • [9] A. Figueira and L. Oliveira (2017) The current state of fake news: challenges and opportunities. Procedia Computer Science 121, pp. 817 – 825. External Links: ISSN 1877-0509, Document, Link Cited by: §6.
  • [10] A. Figueira and L. Oliveira (2017) The current state of fake news: challenges and opportunities. Procedia Computer Science 121, pp. 817–825. Cited by: §6.
  • [11] A. Gangemi, M. Alam, L. Asprino, V. Presutti, and D. R. Recupero (2016) Framester: a wide coverage linguistic linked data hub. In European Knowledge Acquisition Workshop, pp. 239–254. Cited by: §4, §6.
  • [12] A. Gangemi, V. Presutti, D. Reforgiato Recupero, A. G. Nuzzolese, F. Draicchio, and M. Mongiovì (2017) Semantic web machine reading with fred. Semantic Web 8 (6), pp. 873–893. Cited by: Detecting fake news for the new coronavirus by reasoning on the Covid-19 ontology, §1, §4, §4.
  • [13] M. Georgiu and A. Groza (2011) Ontology enrichment using semantic wikis and design patterns. Studia Universitatis Babes-Bolyai, Informatica 56 (2), pp. 31. Cited by: §6.
  • [14] B. Gyawali, A. Shimorina, C. Gardent, S. Cruz-Lara, and M. Mahfoudh (2017) Mapping natural language to description logic. In European Semantic Web Conference, pp. 273–288. Cited by: §6.
  • [15] V. Haarslev, K. Hidde, R. Möller, and M. Wessel (2012) The racerpro knowledge representation and reasoning system. Semantic Web 3 (3), pp. 267–277. Cited by: Detecting fake news for the new coronavirus by reasoning on the Covid-19 ontology, §1.
  • [16] D. M. Lazer, M. A. Baum, Y. Benkler, A. J. Berinsky, K. M. Greenhill, F. Menczer, M. J. Metzger, B. Nyhan, G. Pennycook, D. Rothschild, et al. (2018) The science of fake news. Science 359 (6380), pp. 1094–1096. Cited by: §1.
  • [17] J. L. Martinez-Rodriguez, I. Lopez-Arevalo, and A. B. Rios-Alvarado (2018) Openie-based approach for knowledge graph construction from text. Expert Systems with Applications 113, pp. 339–355. Cited by: §4.
  • [18] C. Roussey and A. Zamazal (2013) Antipattern detection: how to debug an ontology without a reasoner. Cited by: footnote 3.
  • [19] K. K. Schuler (2005) VerbNet: a broad-coverage, comprehensive verb lexicon. Cited by: §4.
  • [20] M. Teymourlouie, A. Zaeri, M. Nematbakhsh, M. Thimm, and S. Staab (2018) Detecting hidden errors in an ontology using contextual knowledge. Expert Systems with Applications 95, pp. 312–323. Cited by: §6.
  • [21] S. Vosoughi, D. Roy, and S. Aral (2018) The spread of true and false news online. Science 359 (6380), pp. 1146–1151. Cited by: §1.
  • [22] P. M. Waszak, W. Kasprzycka-Waszak, and A. Kubanek (2018) The spread of medical fake news in social media–the pilot quantitative study. Health Policy and Technology. Cited by: §6.