Mining Arguments from Cancer Documents Using Natural Language Processing and Ontologies

07/27/2016 ∙ by Adrian Groza, et al. ∙ UTCluj

In the medical domain, the continuous stream of scientific research contains contradictory results supported by arguments and counter-arguments. As medical expertise occurs at different levels, some human agents have difficulty facing the huge number of studies, and also understanding the reasons and pieces of evidence claimed by the proponents and the opponents of the debated topic. To better understand the supporting arguments for new findings related to the current state of the art in the medical domain, we need tools able to identify arguments in scientific papers. Our work here aims to fill this technological gap. Quite aware of the difficulty of this task, we embark on this road by relying on the well-known interleaving of domain knowledge with natural language processing. To formalise the existing medical knowledge, we rely on ontologies. To structure the argumentation model, we also use the expressivity and reasoning capabilities of Description Logics. To perform argumentation mining, we formalise various linguistic patterns in a rule-based language. We tested our solution against a corpus of scientific papers related to breast cancer. The experiments show an F-measure between 0.71 and 0.86 for identifying the conclusions of an argument and between 0.65 and 0.86 for identifying the premises of an argument.


I Introduction

Consider the recent contradictory results on cancer published in the distinguished journals Science and Nature. On the one hand, we have the advocates of the so-called ”bad luck of cancer”. The study in [14] supports the idea that random mutations in healthy cells may explain two-thirds of cancers. These results suggest that most cancer cases cannot be prevented. One positive side of this randomness of cancer is that it helps cancer patients to know that it is not their fault [2]. The intriguing correlations discovered by Tomasetti and Vogelstein contradict the older landmark paper [4] of Doll and Peto, which argues that most cancers could be prevented by changing various lifestyle factors. On the other hand, the advocates of risk factors of cancer provide a set of counter-arguments against [14] through the voices of Wodarz and Zauber [15].

The above example is a good instantiation of the problems arising from a continuous stream of scientific research that contains contradictory results supported by arguments and counter-arguments. As medical expertise occurs at different levels, some human agents have difficulty facing the huge number of studies, and also understanding the reasons and pieces of evidence claimed by the proponents and the opponents of the debated topic. To better understand the supporting arguments for new findings related to the current state of the art in the medical domain, we need tools able to identify arguments in scientific papers. Our work here aims to fill this technological gap.

Quite aware of the difficulty of this task, we embark on this road by relying on the well-known interleaving of domain knowledge with natural language processing. To formalise the existing medical knowledge, we rely on ontologies. To structure the argumentation model, we also use the expressivity and reasoning capabilities of Description Logics. To perform argumentation mining, we formalise various linguistic patterns in a rule-based language. We tested our solution against a corpus of scientific papers related to breast cancer.

This is particularly pressing in the breast cancer domain, where more and more articles appear every month, this being a disease widely spread among women in many countries. The recent proliferation of the on-line publication of medical research articles has created a critical need for information access tools that help stakeholders in the medical domain.

Because of the amount of information about a particular subject, data mining brings a set of tools and techniques that can be applied to this processed data to discover hidden patterns, providing healthcare professionals with an additional source of knowledge for making decisions. Current limitations and challenges of data mining in healthcare include information coming from heterogeneous sources, as well as missing values, noise, and outliers.

We propose argumentation as the underlying technological instrumentation for supporting decision making by healthcare professionals. This research focuses on understanding texts by generating cognitive maps or argumentation graphs. Argumentation is the process where arguments are structured and evaluated based on their interactions with each other [9, 10]. An argument consists of a set of premises offered with the purpose of supporting a claim. Argumentation may also involve chains of reasoning, where claims are used as premises for deriving further claims. Argumentation mining [11, 12] is a new research area that combines Natural Language Processing (NLP) with argumentation theories and question answering. Argumentation mining aims to automatically detect arguments in text documents, including the structure of each argument and the relationships between arguments.

The remainder of this paper is structured as follows: Section II introduces the technical instrumentation used throughout the paper. Section III details the architecture of the system. Section IV details the experiments. Section V discusses related work and Section VI concludes the paper.

II Argumentation model

This section formalises in Description Logic (DL) the argumentation model. First, we introduce the basic terminology of DLs. Second, we detail the argumentation model used for the argumentation mining task.

II-A Description logics

In Description Logics (DLs), concepts are built using the set of constructors formed by negation, conjunction, disjunction, value restriction, and existential restriction [1] (Table I). Here, C and D represent concepts and r is a role. The semantics is defined based on an interpretation I = (Δ^I, ·^I), where the domain Δ^I contains a non-empty set of individuals, and the interpretation function ·^I maps each concept C to a set of individuals C^I ⊆ Δ^I and each role r to a binary relation r^I ⊆ Δ^I × Δ^I. The last column of Table I shows the extension of ·^I for non-atomic concepts.

Constructor | Syntax | Semantics
negation | ¬C | Δ^I \ C^I
conjunction | C ⊓ D | C^I ∩ D^I
disjunction | C ⊔ D | C^I ∪ D^I
existential restriction | ∃r.C | {x ∈ Δ^I | ∃y. (x, y) ∈ r^I ∧ y ∈ C^I}
value restriction | ∀r.C | {x ∈ Δ^I | ∀y. (x, y) ∈ r^I → y ∈ C^I}
individual assertion | a : C | a^I ∈ C^I
role assertion | (a, b) : r | (a^I, b^I) ∈ r^I
TABLE I: Syntax and semantics of ALC.

A knowledge base K = ⟨T, A⟩ is formed by a terminological box T (Tbox) and an assertional box A (Abox). The Tbox contains terminological axioms of the forms C ⊑ D or C ≡ D. The Abox represents a finite set of concept assertions a : C or role assertions (a, b) : r, where C is a concept, r a role, and a and b are two individuals. A concept C is satisfiable if there exists an interpretation I such that C^I ≠ ∅. The concept C is subsumed by the concept D, represented by C ⊑ D, if C^I ⊆ D^I for all interpretations I. Constraints on concepts (i.e., disjointness) or on roles (domain, range, inverse role, or transitive properties) can be specified in more expressive description logics. (We provide only some basic terminology of description logics to make this paper self-contained; for a detailed explanation of the families of description logics, the reader is referred to [1].)
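To make the subsumption relation concrete, consider a minimal sketch (a hand-rolled hierarchy, not a DL reasoner such as RacerPro, and with illustrative concept names) that checks C ⊑ D over a Tbox of atomic axioms by reachability:

```python
# Sketch: subsumption checking over a toy Tbox of atomic axioms C ⊑ D,
# computed as reachability in the subsumption graph. Concept names are
# illustrative; a real DL reasoner handles the full constructor set.
TBOX = {
    "ClinicalArgument": {"Argument"},
    "BreastCancer": {"Cancer"},
    "Cancer": {"Disease"},
}

def subsumed_by(c, d):
    """True if c ⊑ d follows from the declared axioms (reflexive, transitive)."""
    if c == d:
        return True
    return any(subsumed_by(parent, d) for parent in TBOX.get(c, ()))
```

With these axioms, `subsumed_by("BreastCancer", "Disease")` holds by transitivity, while the converse does not.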

ClinicalArgument ⊑ Argument (1)
∃supports.⊤ ⊑ Argument (2)
⊤ ⊑ ∀supports.Argument (3)
supports ∘ supports ⊑ supports (4)
∃attacks.⊤ ⊑ Argument (5)
⊤ ⊑ ∀attacks.Argument (6)
Fig. 1: TBox example in the argumentation domain.

The Tbox in Fig. 1 introduces the subconcept ClinicalArgument, which is a particular type of argument (line 1). An argument can support another argument via the role supports, which has as domain the concept Argument (line 2) and also as range the concept Argument (line 3). Line 4 specifies that the role supports is transitive. The attack relationship between two arguments is modeled by the role attacks, which has as domain and range the set of arguments (lines 5 and 6). The following Abox contains an individual a of type ClinicalArgument which has the premise p and the claim c: a : ClinicalArgument, (a, p) : hasPremise, (a, c) : hasConclusion.

II-B Breast cancer ontology

We are interested in breast cancer ontologies. The Breast Cancer Grading Ontology (BCGO) assigns a grade to a tumour starting from the three criteria of the Nottingham Grading System (NGS), being part of the Biological Process category.

The Tbox in Fig. 2 introduces concepts like Cancer, which is a particular type of Disease, and BreastCancer, which is a particular type of Cancer (axioms 7, 8). A disease has symptoms, represented by the role manifestedSymptom that has as range the concept Symptom (axiom 9). One or more treatments can be recommended via the role appliedTreatment that has as range the concept Treatment (axiom 10). Breast cancer heavily affects all fields of human life; this is modeled by the role affectedDomain, which has as range the concept Domain (axiom 11).

People, like doctors and patients, are also involved; this is represented by the impliedPerson role, which has as range the concept Person (axiom 12). Breast cancer has characteristics, modeled by the role haveCharacteristic with range in the concept Characteristic (axiom 13). In some cases the people involved in or affected by this disease are counted, and for this the role haveQuantifier is used, with domain People and range Quantifier (axioms 14, 15).

Cancer ⊑ Disease (7)
BreastCancer ⊑ Cancer (8)
⊤ ⊑ ∀manifestedSymptom.Symptom (9)
⊤ ⊑ ∀appliedTreatment.Treatment (10)
⊤ ⊑ ∀affectedDomain.Domain (11)
⊤ ⊑ ∀impliedPerson.Person (12)
⊤ ⊑ ∀haveCharacteristic.Characteristic (13)
∃haveQuantifier.⊤ ⊑ People (14)
⊤ ⊑ ∀haveQuantifier.Quantifier (15)
Fig. 2: TBox example in the cancer domain.

The Abox in Fig. 3 contains the individual a of type BreastCancer, instantiated with ”Angiosarcoma”. This individual manifests the symptom ”Skin irritation or dimpling” and involves the persons ”Doctors” and ”Woman”. The treatment applied for this instance is the individual ”Chemotherapy”. Also, this disease affects the domain ”Family history” and has as characteristics ”Hormone receptivity” and ”High levels of HER2”.

a : BreastCancer (16)
(a, ”Skin irritation or dimpling”) : manifestedSymptom (17)
(a, ”Doctors”) : impliedPerson (18)
(a, ”Woman”) : impliedPerson (19)
(a, ”Chemotherapy”) : appliedTreatment (20)
(a, ”Family history”) : affectedDomain (21)
(a, ”Hormone receptivity”) : haveCharacteristic (22)
(a, ”High levels of HER2”) : haveCharacteristic (23)
Fig. 3: Abox with an instance of breast cancer disease. Here a is a shortcut of ”Angiosarcoma”.

The system uses the cancer ontology to build more specific lists of words. The terminology is written to text files such as cancerRelatedWords.lst for terms relating to the cancer domain and peopleInvolved.lst for terms that may indicate people involved in or affected by this disease. The lists are used by a gazetteer that associates the terms with a majorType such as ”CancerRelatedWords” or ”PeopleInvolved”. JAPE rules convert these to annotations that can be visualised and queried.

For example, suppose a text contains the term ”breast cancer” and GATE has a gazetteer list with ”breast cancer” on it. GATE (see [3]) finds the string on the list, then annotates the token with the majorType ”CancerRelatedWords”; we convert this into an annotation that can be visualised or searched, such as CancerRelatedWords. A range of terms that may indicate people involved are all annotated with ”PeopleInvolved”.

The tool can also create annotations for complex concepts out of lower level annotations. In this way, the gazetteer provides a cover concept for related terms that can be queried or used by subsequent annotation processes. The advantage of using ontologies for making the lists specified above is that they can help build more powerful and more interoperable information systems in healthcare.
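Outside GATE, the gazetteer step described above amounts to matching terms from the ontology-derived lists and tagging each match with its majorType. A minimal sketch (the list contents below are illustrative, not the actual .lst files):

```python
# Minimal sketch of the gazetteer lookup: terms drawn from ontology-derived
# lists (contents here are illustrative) are matched in the text and tagged
# with a majorType, mirroring the GATE gazetteer + JAPE conversion step.
GAZETTEER = {
    "CancerRelatedWords": ["breast cancer", "tumor", "chemotherapy"],
    "PeopleInvolved": ["doctors", "patients", "survivors", "women"],
}

def annotate(text):
    """Return (start, end, majorType, term) tuples for every list term found."""
    lowered = text.lower()
    annotations = []
    for major_type, terms in GAZETTEER.items():
        for term in terms:
            start = lowered.find(term)
            while start != -1:
                annotations.append((start, start + len(term), major_type, term))
                start = lowered.find(term, start + 1)
    return sorted(annotations)

anns = annotate("Doctors recommend chemotherapy for breast cancer.")
```

Each tuple plays the role of a GATE Lookup annotation, which the JAPE rules then promote to typed annotations.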

II-C Argumentation model

An argument contains exactly one conclusion and a set of supporting premises. The definition in DL follows:

Argument ≡ (= 1 hasConclusion.Claim) ⊓ ∃hasPremise.Premise (24)

We assume that claims and premises have textual descriptors and are signaled by specific lexical indicators.

Claim ⊑ ∃hasText ⊓ ∃hasClaimIndicator.ClaimIndicator (25)
Premise ⊑ ∃hasText ⊓ ∃hasPremiseIndicator.PremiseIndicator (26)

We rely on textual indicators classified in several concepts:

Indicator ≡ ClaimIndicator ⊔ PremiseIndicator ⊔ MacroIndicator (27)

Inheritance between roles has been enacted for the subroles hasClaimIndicator and hasPremiseIndicator:

hasClaimIndicator ⊑ hasIndicator (28)
hasPremiseIndicator ⊑ hasIndicator (29)
hasMacroIndicator ⊑ hasIndicator (30)
Example 1 (Sample of claim indicators).

The following lexical indicators usually signal a claim and they are instances of the concept ClaimIndicator (Ci): ”consequently”, ”therefore”, ”thus”, ”so”, ”hence”, ”accordingly”, ”we can conclude that”, ”it follows that”, ”we may infer that”, ”this means that”, ”it leads us to believe that”, ”this bears out the point that”, ”which proves/implies that”, ”as a result”

Example 2 (Sample of premise indicators).

The following expressions usually signal a premise and they are instances of the concept PremiseIndicator (Pi): ”since”, ”because”, ”for”, ”whereas”, ”in as much as”, ”for the reasons that”, ”in view of the fact”, ”as evidenced by”, ”given that”, ”seeing that”, ”as shown by”, ”assuming that”, ”in particular”.

A MacroIndicator is formed by several words, one of them being a verb related to a claim or a premise:

MacroIndicator ⊑ ∃hasWord.(VerbRelatedToClaim ⊔ VerbRelatedToPremise) (31)
Example 3 (Verbs related to claim).

The following verbs usually signal a claim and they are instances of the concept ConclusionVerbs (Vc): ”to report”, ”to believe”, ”to assess”, ”to identify”, ”to highlight”, ”to be essential”, ”to confirm”, ”to estimate”, ”to provide”, ”to express”, ”to experience”, ”to recall”, ”to accept”, ”to reflect”, ”to categorize”, ”to indicate”, ”to exemplify”, ”to define”, ”to show”, ”to qualify”.

Example 4 (Verbs related to premise).

The following verbs usually signal a premise and they are instances of the concept PremiseVerbs (Vp): ”to note”, ”to subdivide”, ”to contain”, ”to result”, ”to observe”, ”to accord”, ”to regard”, ”to feel”, ”to show”, ”to receive”, ”to examine”, ”to report”, ”to transcribe”, ”to encompass”.
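A minimal sketch of indicator-based tagging using abbreviated lists from Examples 1–4 (the actual system encodes the full lists as JAPE rules in GATE, not as Python):

```python
# Sketch of indicator-based sentence tagging with abbreviated indicator
# lists from Examples 1-4; the real system uses JAPE rules over the full
# ontology-generated lists.
CLAIM_INDICATORS = ["therefore", "thus", "hence", "we can conclude that", "as a result"]
PREMISE_INDICATORS = ["since", "because", "given that", "as shown by", "in particular"]

def tag_sentence(sentence):
    """Classify a sentence as 'Claim', 'Premise', or None by lexical indicators."""
    s = sentence.lower()
    if any(ind in s for ind in PREMISE_INDICATORS):
        return "Premise"
    if any(ind in s for ind in CLAIM_INDICATORS):
        return "Claim"
    return None
```

On the sentence from Example 5, the indicator ”in particular” yields the tag Premise.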

Note that the premise indicators might appear after the conclusion was stated, as example 5 illustrates.

Example 5 (Premise indicator before the premise).

Consider the phrase:

”[Spirituality was highlighted as a fundamental component of the healing process]. [[In particular], survivors noted that their faith in God’s direction over the doctors healed them.]”

The text is annotated with the claim c and the premise p, which has a premise indicator (PI), namely ”In particular”. The argument a has its claim in the first position and one premise following the claim. The premise indicator precedes its premise. The corresponding Abox follows:

(a, c) : hasConclusion (32)
(a, p) : hasPremise (33)
(pi, p) : before (34)

Based on the above identified information, the system classifies the argument as a ClaimPremiseArgument.

Consider the medical argument in Example 6:

Example 6 (Argument example).

[Key informants highlighted] spirituality as a very important component of many women’s cancer experience. These communities, particularly African American, Asian and Latina, hold firm religious and spiritual beliefs and practices. [[In particular], many have an unshakable belief in the power of prayer, putting more importance on spirituality, their religious beliefs than on health care providers.]

The argument structure is formalised by:

(a, c) : hasConclusion (35)
(a, p) : hasPremise (36)
(pi, p) : before (37)

There are arguments in which the premise precedes the conclusion, but also arguments in which the premise appears after the claim:

PCArgument ≡ Argument ⊓ ∃hasPremise.(∃before.Claim) (38)
CPArgument ≡ Argument ⊓ ∃hasConclusion.(∃before.Premise) (39)

The roles before and after are inverse and transitive roles, with both the domain and the range represented by sentences:

before ≡ after⁻ (40)
before ∘ before ⊑ before (41)
after ∘ after ⊑ after (42)
∃before.⊤ ⊑ Sentence (43)
⊤ ⊑ ∀before.Sentence (44)

Instances of PCArgument and CPArgument are illustrated in Examples 7 and 8.

Example 7 (PCArgument).

As the premise appears before the claim, the identified argument is classified as a PCArgument:

”[For women with non-proliferative findings, no family history, or a weak family history of breast cancer], [doctors reported no increased risk].”

Example 8 (CPArgument).

Consider the text:

”[[Patients report] on the risk of breast cancer] [according to histologic findings, the age at diagnosis of benign breast disease, and the strength of the family history].”

This sentence presents first the claim part, introduced by the macro identifier ”Patients report”, followed by the premise part. Hence, this argument is classified as a CPArgument.

III System architecture

The developed argumentation mining system in Fig. 4 has four components: the GATE editor, the text processing component, the argument identification modules, and the knowledge module.

The first layer consists of the GATE Editor [3] and a query interface for the updated ontology. The second layer is composed of the text processing component, which performs the Natural Language Processing transformations required for extracting arguments. The argument processing modules aim to identify argumentative sentences in the text. Using the TBox and the ABox, the system can save the structure of the new arguments in the ontology. The cancer ontology is used for creating the lists of words used inside the JAPE rules for identifying the argument structure. The TBox related to arguments stores the definition of an Argument formed by a Claim and one or more Premises. The ABox related to arguments contains the instances the application found in the text documents. This TBox is used to generate the lists of ClaimIndicator, PremiseIndicator, VerbRelatedToClaim and VerbRelatedToPremise.

Fig. 4: Architecture of the system

We apply the tool to the detection of arguments from different articles within the breast cancer domain.

Racer [8] was used to perform reasoning in DL and to query the system. Using Racer, the system saves the newly detected arguments from the breast cancer documents into an Abox. The resulting ontology is used for query answering.

III-A JAPE rules

For identifying the claim and the premises, we use JAPE (Java Annotation Patterns Engine) rules [3]. A JAPE grammar consists of a set of phases, each of which consists of a set of pattern/action rules. The left-hand-side (LHS) of the rules consists of an annotation pattern description. The right-hand-side (RHS) consists of annotation manipulation statements. Annotations matched on the LHS of a rule may be referred to on the RHS by means of labels that are attached to pattern elements.

Input: O - breast cancer ontology; T - argumentation model (arg Tbox);
D - corpus of medical documents
Output: A, Abox containing mined arguments
1 foreach document d ∈ D do
2       foreach sentence s ∈ d do
3             tokenise s
4             if s contains a coordinating conjunction cc then
5               if s contains a ClaimMacro then
6                 if offset(ClaimMacro) < offset(cc) then
7                   annotate the Claim before cc
8                   if s contains a PremiseMacro after cc then
9                     annotate the Premise after cc
10                else
11                  if s contains a PremiseMacro before cc then
12                    annotate the Premise before cc
13                    annotate the Claim after cc
14            else
15              if s contains a ClaimMacro then
16                annotate s as Claim
17              else
18                if s contains a PremiseMacro then
19                  annotate s as Premise
Algorithm 1 Argumentation mining with patterns.

The top-level approach for detecting arguments is formalised in Algorithm 1. First, the system analyses every document from the corpus of available medical documents (line 1). Each document is tokenised (line 3), and for each sentence the system verifies whether a coordinating conjunction (cc) exists in the sentence (line 4). If one is found, the algorithm searches for an instance of the concept ClaimMacro, as a possible indicator of an argument claim (line 5). If the ClaimMacro precedes the conjunction, the tool looks after the conjunction and, if a PremiseMacro is present there, the Premise is identified (lines 8 and 9). If the offset of the ClaimMacro is higher than the cc offset, then the system looks for the Claim and the Premise inside the sentence accordingly (lines 11, 12 and 13).

If the sentence does not contain a coordinating conjunction (line 14), then an instance of ClaimMacro or PremiseMacro is searched for. Depending on its presence, the tool determines whether the sentence can be annotated as a Claim or a Premise (lines 15–19). If the sentence contains neither a coordinating conjunction nor a ClaimMacro or PremiseMacro, the system analyses the next sentence in the set of sentences.
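The control flow above can be sketched as follows. The matcher stubs (find_first and the term lists) are illustrative stand-ins for the JAPE-based macros, returning the character offset of the first match or -1 if absent:

```python
# Sketch of Algorithm 1's control flow for one sentence. The term lists
# stand in for the JAPE-based CC / ClaimMacro / PremiseMacro matchers.
def find_first(sentence, terms):
    """Offset of the earliest match of any term, or -1 if none occurs."""
    hits = [h for h in (sentence.lower().find(t) for t in terms) if h != -1]
    return min(hits) if hits else -1

def mine_sentence(sentence):
    """Return ('Claim'/'Premise', offset) annotations mined from one sentence."""
    cc = find_first(sentence, [" and ", " but ", " or "])  # coordinating conjunction
    claim = find_first(sentence, ["we can conclude", "reported", "highlighted"])
    premise = find_first(sentence, ["because", "in particular", "according to"])
    annotations = []
    if cc != -1:
        # A CC splits the sentence: claim and premise macros are searched
        # on both sides of it, their offsets deciding the annotation spans.
        if claim != -1:
            annotations.append(("Claim", claim))
        if premise != -1:
            annotations.append(("Premise", premise))
    else:
        if claim != -1:
            annotations.append(("Claim", claim))
        elif premise != -1:
            annotations.append(("Premise", premise))
    return annotations
```

A sentence such as ”Doctors reported no increased risk because of a weak family history and no findings” yields both a Claim and a Premise annotation.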

The system uses macros and indicators to decide whether a sentence or a part of it is a potential candidate for a Claim or a Premise.

The following templates for claims were used: i) Ci: claim indicator; ii) CibPe: claim indicator followed by people involved; iii) CibPebVc: claim indicator followed by people involved and verbs specific to conclusions; iv) CibVc: claim indicator followed by verbs specific to claims; v) ElOfCnbCw: one or more words expressing elements of cancer followed by a word that refers to cancer, optionally followed by a verb that refers to a claim; vi) QbPebVc: quantifiers before people, representing their number. We rely on textual indicators classified as follows: claim indicators (Ci), people involved (Pe), verbs related to claims (Vc), elements of cancer (ElOfCn), cancer words (Cw), and quantifiers (Q).

The macro ClaimIndicator before People (MI_CibPe) contains a ClaimIndicator succeeded by people involved in the medical domain of breast cancer:

MI: ClaimIndicator before People≑MI_CibPe ci:ClaimIndicator
pe:Person
(ci,pe):before
(sentence,ci):hasToken
(sentence,pe):hasToken Ex_1: [”we may [infer] that [woman]”]
Ex_2: [”this [bears out] the point that [doctors]”]
Ex_3: [”it [follows] that [patients]”]

The sentence is formalised as an instance of MI_CibPe in Example 9.

Example 9.

The CibPe macro contains the individual ci of type ClaimIndicator and the individual pe of type Person. Within the sentence (lines 47, 48), ci is located before pe (line 46). The text of both individuals is presented via the role hasText (line 49).

ci : ClaimIndicator, pe : Person (45)
(ci, pe) : before (46)
(sentence, ci) : hasToken (47)
(sentence, pe) : hasToken (48)
(ci, ”we may infer that”) : hasText, (pe, ”woman”) : hasText (49)

The macro ClaimIndicator before People before Verb related to Claim (MI_CibPebVc) is a generalisation of the macro ClaimIndicator before People (MI_CibPe) plus verbs related to conclusion (recall Example 3):

MI: ClaimIndicator before People before Verb≑MI_CibPebVc ci:ClaimIndicator
pe:Person
vc:VerbRelatedToClaim
(ci,pe):before
(pe,vc):before
(sentence,ci):hasToken
(sentence,pe):hasToken
(sentence,vc):hasToken Ex_1: [”we can [conclude] that [doctors] [identified]”]
Ex_2: [”[so] the [key informants] [provides]”]
Ex_3: [”it [follows] that [people] [estimated]”]

The macro ClaimIndicator before Verb related to Claim (MI_CibVc) contains expressions that are instances of the concept ClaimIndicator followed by verbs related to conclusions (recall Example 3):

MI: ClaimIndicator before Verb≑MI_CibVc ci:ClaimIndicator
vc:VerbRelatedToClaim
(ci,vc):before
(sentence,ci):hasToken
(sentence,vc):hasToken Ex_1: [”[therefore] [exemplifies]”]
Ex_2: [”[so] [highlighted]”]
Ex_3: [”[thus] [accepted]”]

The expression is formalised as an instance of MI_CibVc, as Example 10 illustrates.

Example 10.

The CibVc macro contains the individual ci of type ClaimIndicator and the individual vc of type VerbRelatedToClaim. Inside the sentence (line 52), ci is located before vc (line 51). The text of both individuals is presented via the role hasText (line 53).

ci : ClaimIndicator, vc : VerbRelatedToClaim (50)
(ci, vc) : before (51)
(sentence, ci) : hasToken, (sentence, vc) : hasToken (52)
(ci, ”therefore”) : hasText, (vc, ”exemplifies”) : hasText (53)

The macro Elements of Cancer before Cancer related words (MI_ElOfCnbCw) contains expressions composed of one or more words that are instances of the concept ElementsOfCancer, plus a word from the cancer domain, optionally succeeded by a verb related to a claim (recall Example 3):

MI: Elements of Cancer before Cancer words≑MI_ElOfCnbCw elOfCn:ElementsOfCancer
cw:CancerWords
(elOfCn,cw):before
(sentence,elOfCn):hasToken
(sentence,cw):hasToken Ex_1: [”the [risk] of [breast cancer]”]
Ex_2: [”these [factors] of [cancer] [were equaled]”]

The macro Qualifiers before People before Verb (MI_QbPebVc) contains qualifiers before instances of the concept PeopleInvolved, meaning that several people are involved:

MI: Qualifiers before People before Verb≑MI_QbPebVc q:Qualifier
pe:Person
vc:VerbRelatedToClaim
(q,pe):before
(pe,vc):before
(sentence,q):hasToken
(sentence,pe):hasToken
(sentence,vc):hasToken Ex_1: [”[many] [woman] [provides]”]
Ex_2: [”[many] [survivors] [accepted]”]

The following templates for premises were searched: i) Pi: premise indicator; ii) PibPe: premise indicator followed by people involved; iii) PibPebVp: premise indicator followed by people involved and verbs specific to premises; iv) PibVp: premise indicator followed by verbs specific to premises; v) ElOfCnbCw: one or more words expressing elements of cancer followed by a word that refers to cancer, optionally followed by a verb that refers to a premise; vi) DbVp: words that express domains affected by breast cancer followed by verbs specific to premises. We rely on textual indicators classified as follows: premise indicators (Pi), people involved (Pe), verbs related to premises (Vp), elements of cancer (ElOfCn), cancer words (Cw), and affected domains (D).

The macro PremiseIndicator before People (MI_PibPe) contains expressions in which an instance of the concept PremiseIndicator is followed by people involved in the medical domain of breast cancer:

MI: PremiseIndicator before People≑MI_PibPe pi:PremiseIndicator
pe:Person
(pi,pe):before
(sentence,pi):hasToken
(sentence,pe):hasToken Ex_1: [”[in view of the fact] that [woman]”]
Ex_2: [”[as shown] by [doctors]”]
Ex_3: [”[since] [patients]”]

The macro PremiseIndicator before People before Verb related to premise (MI_PibPebVp) is a generalisation of the macro PremiseIndicator before People (MI_PibPe) plus verbs related to premise (recall Example 4):

MI: PremiseIndicator before People before Verb≑MI_PibPebVp pi:PremiseIndicator
pe:People
vp:VerbRelatedToPremise
(pi,pe):before, (pe,vp):before
(sentence,pi):hasToken
(sentence,vp):hasToken
(sentence,pe):hasToken Ex_1: [”as [evidenced] by [people] [received]”]
Ex_2: [”[assuming] that [doctors] [observed]”]
Ex_3: [”[because] the [key informants] [were noted]”]

The expression is formalised as an instance of the MI_PibPebVp macro indicator, as Example 11 illustrates.

Example 11.

The PibPebVp macro contains the individuals pi of type PremiseIndicator, pe of type People and vp of type VerbRelatedToPremise. pi is located before pe and pe is located before vp (line 55).

pi : PremiseIndicator, pe : People, vp : VerbRelatedToPremise (54)
(pi, pe) : before, (pe, vp) : before (55)
(sentence, pi) : hasToken (56)
(sentence, pe) : hasToken (57)
(sentence, vp) : hasToken (58)
(pi, ”assuming that”) : hasText, (pe, ”doctors”) : hasText, (vp, ”observed”) : hasText (59)

The macro PremiseIndicator before Verb (MI_PibVp) contains expressions that are instances of the concept PremiseIndicator succeeded by verbs related to premise: MI: PremiseIndicator before Verb≑MI_PibVp pi:PremiseIndicator
vp:VerbRelatedToPremise
(pi,vp):before
(sentence,pi):hasToken
(sentence,vp):hasToken Ex_1: [”[since] [according]”]
Ex_2: [”[given] that [noted]”]
Ex_3: [”[seeing] that [served]”]

The macro Elements of Cancer before Cancer related words (MI_ElOfCnbCw) contains the expressions that are composed by one or more words that are instances of the concept ElementsOfCancer plus a word from the cancer domain and optional can be succeeded by a verb related to premise (recall Example 4):

MI: Elements of Cancer before Cancer words≑MI_ElOfCnbCw elOfCn:ElementsOfCancer
cw:CancerWords
(elOfCn,cw):before
(sentence,elOfCn):hasToken
(sentence,cw):hasToken Ex_1: ”[the [risk] of [breast cancer] [was noted]”]

The macro Domains affected before Verb (MI_DbVp) contains expressions that are composed by domains affected by breast cancer followed by a verb related to premise.

MI: Domains affected before Verb≑MI_DbVp d:Domains
vp:VerbRelatedToPremise
(d,vp):before
(sentence,d):hasToken
(sentence,vp):hasToken Ex_1: [”[Family history] [regarding]”]
Ex_2: [”[physical changes] [resulting]”]

The expression is formalised as an instance of MI_DbVp as example 12 illustrates.

Example 12.

The DbVp macro contains the individual d of type Domains and the individual vp of type VerbRelatedToPremise. Inside the sentence (line 61), d is located before vp (line 60). The text of the individuals is presented via the role hasText (line 62).

(d, vp) : before (60)
(sentence, d) : hasToken, (sentence, vp) : hasToken (61)
(d, ”Family history”) : hasText, (vp, ”regarding”) : hasText (62)

IV Running experiments

To identify text fragments that can be used to instantiate the argumentation schemes, we use ANNIE to investigate the entire corpus. Figure 5 shows the result of a search for Claim and Premise annotations, where the ”Context” represents a sentence from a document. Based on the JAPE rules, the system needs to know whether a coordinating conjunction and a claim or premise macro are present inside the sentence.

We can also look at annotations inside a text. Figure 6 shows one paragraph of a document in which different annotation types are highlighted; from this text the tool extracts words related to cancer, people involved, domains affected by breast cancer, and qualifiers.

Fig. 5: Sample output from an ANNIE search
Fig. 6: An annotated text from breast cancer articles.

The arguments were identified through a corpus formed of six text documents related to breast cancer. In every text document, between five and ten arguments are identified.

The quantitative evaluation is based on the Precision, Recall and F-measure metrics. The system was evaluated on a manually annotated corpus containing six documents with different breast cancer articles. The quantitative metrics were obtained with the Diff plugin integrated in GATE, applied to each document. The percentages obtained for Claim and for Premise identification appear in Table II. The results obtained by the system are influenced by the performance of the correct identification of the different parts of speech. Arguments are identified by the application according to lists of words and parts of speech obtained by the MiniPar parser included in GATE. If the annotations are not correctly identified, this limits the performance of the system.

Claims:
# Recall Precision F
1 0.875 0.715 0.775
2 0.815 0.665 0.725
3 0.75 0.675 0.71
4 0.75 0.9 0.86
5 0.9 0.65 0.78
6 0.88 0.715 0.775

Premises:
# Recall Precision F
1 0.69 0.75 0.65
2 0.85 0.58 0.7
3 0.91 0.68 0.75
4 0.75 0.9 0.86
5 0.9 0.75 0.83
6 0.96 0.76 0.83
TABLE II: Identifying claims of arguments (top) and premises of arguments (bottom).
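For reference, the three metrics are computed from true positive, false positive, and false negative counts in the standard way (a sketch with illustrative counts; in the paper the values are produced by GATE's Diff plugin rather than computed by hand):

```python
# Sketch of the evaluation metrics; the paper obtains them via the Diff
# plugin integrated in GATE rather than computing them directly.
def metrics(tp, fp, fn):
    """Precision, recall, and F-measure from annotation match counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f

# Illustrative counts: 6 correct, 2 spurious, and 2 missed claim annotations.
p, r, f = metrics(tp=6, fp=2, fn=2)
```

With these counts, precision, recall, and F all equal 0.75.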

V Related work and discussion

There are several tools for identifying arguments inside texts using natural language processing. Rules have been extracted from scientific papers using SWRL in Controlled English (SR-CE) in FluentEditor [16]. Green [7] has proposed a specification of ten causal argumentation schemes used to detect arguments for scientific claims in genetics research journal articles. The specifications and some of the examples from which they were derived were used to create an initial draft of guidelines for annotation of a corpus. Feng and Hirst [6] have investigated argumentation scheme recognition using the Araucaria corpus, which contains annotated arguments from newspaper articles, parliamentary records, magazines, and on-line discussion boards (Reed et al. 2010). Taking premises and conclusion as given, Feng and Hirst addressed the problem of recognizing the name of the argumentation scheme for the five most frequently occurring schemes of Walton [5] in the corpus: Argument from example, Argument from cause to effect, Practical reasoning, Argument from Consequences, and Argument from Verbal Classification.

Other applications [13] have used annotations made by hand: there is no automatic detection of annotations, and the discourse indicators as well as the user, domain, and sentiment terminology are identified manually. The difference between our system and this tool lies in this identification: our application uses JAPE rules implemented in GATE for identifying the claims and premises. Other researchers [17] discuss the architecture and development of an Argument Workbench, which is an interactive, integrated, modular tool set to extract, reconstruct, and visualise arguments. The Argument Workbench supports an argument engineer in reconstructing arguments from textual sources, using information processed at one stage as input to a subsequent stage of analysis, and then building an argument graph. The tool harvests and preprocesses comments; highlights argument indicators, speech act and epistemic terminology; models topics; and identifies domain terminology. The argument engineer analyses the output, and the extracted result is fed into the DebateGraph visualisation tool.

VI Conclusion

Here we integrated ontologies and NLP for identifying arguments in breast cancer articles. The contributions of this paper are as follows. Firstly, we formalised an argumentation model in description logics. Hence, the arguments can be automatically classified, the reasoning services of DL can be used on the model, and the arguments can be retrieved by querying the ontology. Secondly, we developed a tool able to perform argumentation mining. During mining, the tool uses concepts and roles from a breast cancer ontology. By changing the domain ontology, the tool can be applied to a different domain.

References

  • [1] F. Baader, The Description Logic Handbook: Theory, Implementation, and Applications.   Cambridge University Press, 2003.
  • [2] J. Couzin-Frankel, “The bad luck of cancer,” Science, vol. 347, no. 6217, pp. 12–12, 2015.
  • [3] H. Cunningham, “GATE, a general architecture for text engineering,” Computers and the Humanities, vol. 36, no. 2, pp. 223–254, 2002.
  • [4] R. Doll and R. Peto, “The causes of cancer: quantitative estimates of avoidable risks of cancer in the united states today,” Journal of the National Cancer Institute, vol. 66, no. 6, pp. 1192–1308, 1981.
  • [5] D. Walton, C. Reed, and F. Macagno, Argumentation Schemes.   Cambridge University Press, 2008.
  • [6] V. W. Feng and G. Hirst, “Classifying arguments by scheme,” in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1.   Association for Computational Linguistics, 2011, pp. 987–996.
  • [7] N. L. Green, “Identifying argumentation schemes in genetics research articles,” NAACL HLT 2015, p. 12, 2015.
  • [8] V. Haarslev, K. Hidde, R. Möller, and M. Wessel, “The RacerPro knowledge representation and reasoning system,” Semantic Web Journal, vol. 3, no. 3, pp. 267–277, 2012.
  • [9] I. A. Letia and A. Groza, Agreeing on Defeasible Commitments.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 156–173. [Online]. Available: http://dx.doi.org/10.1007/11961536_11
  • [10] ——, Arguing with Justifications between Collaborating Agents.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 102–116. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-33152-7_7
  • [11] R. Mochales and M.-F. Moens, “Argumentation mining,” Artificial Intelligence and Law, vol. 19, no. 1, pp. 1–22, 2011.
  • [12] A. Peldszus and M. Stede, “From argument diagrams to argumentation mining in texts: A survey,” International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), vol. 7, no. 1, pp. 1–31, 2013.
  • [13] J. Schneider and A. Wyner, “Identifying consumers’ arguments in text.” in SWAIE, 2012, pp. 31–42.
  • [14] C. Tomasetti and B. Vogelstein, “Variation in cancer risk among tissues can be explained by the number of stem cell divisions,” Science, vol. 347, no. 6217, pp. 78–81, 2015.
  • [15] D. Wodarz and A. G. Zauber, “Cancer: Risk factors and random chances,” Nature, vol. 517, no. 7536, pp. 563–564, 2015.
  • [16] A. Wróblewska, P. Kaplanski, P. Zarzycki, and I. Lugowska, “Semantic rules representation in controlled natural language in FluentEditor,” in Human System Interaction (HSI), 2013 The 6th International Conference on.   IEEE, 2013, pp. 90–96.
  • [17] A. Wyner, W. Peters, and D. Price, “Argument discovery and extraction with the argument workbench,” NAACL HLT 2015, p. 78, 2015.