Experiment Segmentation in Scientific Discourse as Clause-level Structured Prediction using Recurrent Neural Networks

02/17/2017 ∙ by Pradeep Dasigi, et al. ∙ 0

We propose a deep learning model for identifying structure within experiment narratives in scientific literature. We take a sequence labeling approach to this problem, and label clauses within experiment narratives to identify the different parts of the experiment. Our dataset consists of paragraphs taken from open access PubMed papers labeled with rhetorical information as a result of our pilot annotation. Our model is a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) cells that labels clauses. The clause representations are computed by combining word representations using a novel attention mechanism that involves a separate RNN. We compare this model against LSTMs whose input layer has simple or no attention, and against a feature-rich CRF model. Furthermore, we describe how our work could be useful for information extraction from scientific literature.




1 Introduction

An important part of science is communicating results. There are well established rhetorical guidelines (Alley, 1996) for scientific writing that are used across disciplines, and consequently, narratives describing evidence within a scientific investigation are expected to have a certain structure. Typically, the description begins with background information that has already been established, followed by motivating hypotheses to introduce the experiment, the methods used, the results obtained, and the inferences made based on those results. Understanding this structure is important since it enables the higher-level construction of the general argument of the paper. The reader assembles the pieces in order to understand what was done, why it was done, what prior knowledge it builds upon and/or refutes, and with what certainty the final conclusions should be accepted. Without such an overall model of the experiment, the reader has nothing but basic assertions.

In this work, our aim is to identify these discourse elements given an experiment narrative. We view the task at hand as a sequence labeling problem: given a sequence of clauses from a paragraph describing an experiment, we seek to label the clauses with their discourse type. There exist several proposals for experiment discourse models (Liakata, 2010; Nawaz, Thompson, and Ananiadou, 2010; Mizuta and Collier, 2004; Nwogu, 1997). We adopt the discourse type taxonomy for biological papers suggested by De Waard and Pander Maat (2012), and define our problem as identifying the discourse type of each clause in a given experiment description. The taxonomy contains seven types (Table 1), and Figure 1 shows an example paragraph (from Angers-Loustau et al. (1999), "Protein tyrosine phosphatase-PEST regulates focal adhesion disassembly, migration, and cytokinesis in fibroblasts", J. Cell Biol. 144:1019-31) broken down into clauses and tagged with discourse types.

While there has been some variation in the level of granularity of text in prior discourse processing work, for our task the appropriate level of processing is clearly at the clause-level. As shown in Figure 1, there are many sentences in our data containing clauses of different kinds. For example, a pattern we observe frequently is when an author writes “To understand phenomenon X, we performed experiment Y” yielding a ‘goal’ followed by a ‘method’ clause in a single sentence. Using the main and subordinate clauses from a Stanford parse provided good segregation of this structure.
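To make this segmentation step concrete, here is a small stdlib-only sketch that splits a bracketed constituency parse (of the kind the Stanford parser emits) into subordinate clauses and the remaining main-clause words. The minimal s-expression walker and the choice of subordinate labels are simplifications for illustration, not the paper's actual pipeline.

```python
def parse_sexp(s):
    """Parse a Penn-Treebank-style bracketing into nested (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def walk(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = walk(i)
                children.append(child)
            else:
                children.append(tokens[i])
                i += 1
        return (label, children), i + 1

    tree, _ = walk(0)
    return tree

def leaves(node):
    """Collect the terminal words under a node."""
    if isinstance(node, str):
        return [node]
    return [w for c in node[1] for w in leaves(c)]

def split_clauses(parse_str, sub_labels=("SBAR", "S")):
    """Return (subordinate clauses as strings, remaining main-clause words).

    Any non-root subtree labeled S or SBAR is treated as a subordinate
    clause; this is a rough approximation of clause-level segmentation.
    """
    tree = parse_sexp(parse_str)
    subs, main = [], []

    def visit(node, depth):
        if isinstance(node, str):
            main.append(node)
            return
        label, children = node
        if depth > 0 and label in sub_labels:
            subs.append(" ".join(leaves(node)))
            return
        for c in children:
            visit(c, depth + 1)

    visit(tree, 0)
    return subs, main
```

Applied to a parse of "To understand X, we performed Y", this yields the purpose clause ("To understand X") separately from the main clause ("we performed Y"), mirroring the 'goal' / 'method' split described above.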

For this work, we focus on Systems Biology (SB) papers concerning signaling pathways in cancer cells. Typically, researchers in this field use a number of small-scale experimental assays to investigate molecular events, see Voit (2012) and Svoboda and Reenstra (2002) for textbook and review introductions. There can easily be as many as 20-30 separate small experiments in any study that each provide evidence for the interpretive assertions being made. Our goal is to partition the text of SB papers to identify small-scale passages that describe the goals, methods, results and implications of each experiment. By convention, subfigures denoted by ‘1A’, ‘1B’, ‘1C’ etc. each describe data from a separate experiment and are directly referenced in the narrative (see Figure 1).

Figure 1: Example of an experiment description tagged with discourse types.

2 Related Work

Identifying structure within scientific papers

There is a significant amount of prior work aimed at scientific discourse processing. Teufel and Moens (2002) and Teufel and Moens (1999) describe argumentative zoning (AZ), a way of classifying scientific papers into zones at the sentence level, thus extracting the structure of entire papers.

Hirohata et al. (2008) use a 4-way classification scheme for abstracts of scientific papers to identify objectives, methods, results and conclusions. Liakata (2010) described a finer-grained, three-layer annotation scheme for sentence-level annotation (with 11 separate categorical labels) for identifying the core scientific concepts of papers. Classification performance for machine learning systems that automatically tag scientific sentences was an F-score of 0.51 for LibSVM classifiers (Liakata et al., 2012). There is extensive overlap between leaf elements of the CoreSC schema and our simpler discourse type model ('Hypothesis', 'Goal', 'Method', and 'Result' are shared between both annotation sets, and tags like 'Background' and 'Conclusion' map to our 'Fact' and 'Implication' tags).

Guo et al. (2010) used SVM and Naïve Bayes classifiers to compare the three schemes described above. Gupta and Manning (2011) also studied the problem of extracting focus, techniques and the domain of research papers to identify the influence of research communities over each other.

In these studies, the focus of research is largely centered on modeling the discourse being used to construct a scientific argument, driving towards understanding "sentiment expressed towards cited work, ownership of ideas, and speech acts which express rhetorical statements typical for scientific argumentation" (Teufel, 2000). These are driven by human-to-human communication processes of the scientific literature rather than using discourse elements to support machine reading of a semantic representation of scientific findings from primary experimental research papers. Our focus is specifically on attempting to identify text pertaining to experimental evidence for scientific IE, rather than focusing on authors' interpretations of those findings.

Deep Learning for structured prediction and text classification

There is a great amount of work on classification and structured prediction over text and other modalities that uses deep learning. Particularly in sequence labeling tasks over text (Collobert et al., 2011), words are represented as vectors and used as features to train a tagger. One advantage of this approach is the reusability of pre-trained word vectors (Mikolov et al., 2014; Pennington, Socher, and Manning, 2014) as features in various tasks. In our task, the sequences being labeled are clauses instead of words. We obtain vector representations of clauses by summarizing those of the words in them.
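As a toy illustration of this summarization (with hypothetical two-dimensional embeddings standing in for the 200-dimensional pretrained vectors), the unweighted baseline simply averages the word vectors of a clause:

```python
import numpy as np

# Toy pretrained embeddings; the words and dimensionality here are
# hypothetical stand-ins for the biomedical word2vec vectors in the paper.
emb = {"we": np.array([1.0, 0.0]),
       "performed": np.array([0.0, 1.0]),
       "assays": np.array([1.0, 1.0])}

def clause_vector(words, emb):
    """Unweighted baseline: represent a clause as the mean of its word vectors."""
    return np.mean([emb[w] for w in words], axis=0)

v = clause_vector(["we", "performed", "assays"], emb)
```

The attention mechanisms introduced in Section 3 replace this uniform average with learned, per-word weights.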

Attention has been used for complex tasks like question answering (Hermann et al., 2015) and machine translation (Bahdanau, Cho, and Bengio, 2014). In sequence-to-sequence learning problems like machine translation (Bahdanau, Cho, and Bengio, 2014), parsing (Vinyals et al., 2015) and image caption generation (Xu et al., 2015), one network is used to encode the input modality, and a different network to decode into the output modality, with the decoder using attention to learn parts of the input to attend to for generating a given part of the output sequence. While our work does use two different models, one for encoding clause representations as a function of word representations and another for decoding clause labels from clause representations, the two models operate at different granularities.

Comparison with RST based discourse parsing

General domain discourse parsing is a well-studied problem. While there are many discourse theories (see Marcu (2000), chapter 2 for an overview), Rhetorical Structure Theory (RST) by Mann and Thompson (1988) received a lot of attention. It is generally accepted that relations between non-overlapping chunks of text need to be considered to account for the overall meaning (Marcu, 2000). Accordingly, rhetorical relations are central in RST for marking the structure. In contrast, the taxonomy we use applies to the clauses themselves, instead of the relations between them. This is made possible by the specificity of our domain: in the general case, it may not be possible to identify the type of a clause in isolation. However, it has to be noted that the information conveyed by our clause-centric formalism may also be expressed using a relation-centric discourse formalism like RST. Figure 2 shows one possible RST tree for the text shown in Figure 1.

Figure 2: Tree using RST relations for the text in Figure 1. The numbers indicate the clauses from the text shown in Figure 1.
Type: Definition
Goal: Research goal
Fact: A known fact, a statement taken to be true by the author
Result: The outcome of an experiment
Hypothesis: A claim proposed by the author
Method: Experimental method
Problem: An unresolved or contradictory issue
Implication: An interpretation of the results
Table 1: Seven-label taxonomy from De Waard and Pander Maat (2012)

3 Approach

Figure 3: Our Scientific Discourse Tagging (SciDT) pipeline. The input is a list of clauses, which is embedded to get the tensor C, containing one vector per word. Summarization converts C into a matrix H that has one vector per clause, which is then passed to an LSTM-RNN for labeling. The lower part of the figure shows the two ways of summarizing the clauses.

We call our system Scientific Discourse Tagger (SciDT). Our pipeline is shown in Figure 3. The input to the tagger is a set of clauses from a paragraph. They are first embedded to obtain a tensor C of shape (p, n, d), where p is the number of clauses, n is the number of words in each clause, and each word is represented as a d-dimensional vector. The tensors are zero-padded along the clause and word dimensions if needed. The next step is to summarize the clause representations to obtain H, a p x d matrix corresponding to the entire paragraph, with one vector per clause. Finally, H is fed to a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) cells (Hochreiter and Schmidhuber, 1997) to label the clauses. We propose two ways of summarizing the clause representations below. Both variants use attention to learn the weights of words within a clause based on their importance for the labeling task, and compute a weighted average of the word representations using those weights to get the clause representation. The attention component and the LSTM-RNN are trained jointly. We use pretrained representations for words and fix them during training.
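The embedding-and-padding step can be sketched as follows, assuming a simple word-to-vector lookup; the vocabulary, the dimensions, and the zero-vector treatment of out-of-vocabulary words are toy simplifications:

```python
import numpy as np

def pad_paragraph(clauses, n, d, emb):
    """Embed a paragraph of tokenized clauses into a zero-padded tensor
    of shape (p, n, d): p clauses, at most n words each, d-dim vectors.
    Out-of-vocabulary words map to the zero vector (a simplification)."""
    p = len(clauses)
    C = np.zeros((p, n, d))
    for i, clause in enumerate(clauses):
        for j, w in enumerate(clause[:n]):
            C[i, j] = emb.get(w, np.zeros(d))
    return C

# Toy embeddings; the paper uses 200-dimensional biomedical word vectors.
emb = {"cells": np.ones(2), "were": np.array([0.5, 0.5])}
C = pad_paragraph([["cells", "were"], ["cells"]], n=3, d=2, emb=emb)
```

Short clauses and short paragraphs are padded with zero rows so that every paragraph yields a tensor of the same per-clause width.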

3.1 Attention with and without context

Both variants take as input the tensor C. The output in both cases is a p x n matrix A, which contains the attention weights of all the words in the paragraph. We first project the input words to a lower-dimensional space in both cases using a projection operator P:

    D = P(C)    (1)

The low-dimensional representations D are then scored differently by each variant.

Out of context

This model defines a simple scoring operator s that scores each word based only on its low-dimensional representation. The scoring is out of context because each word is scored in isolation.

    d^i = D[i]    (2)
    a^i = softmax(s(d^i))    (3)
    A = [a^1; a^2; ...; a^p]    (4)

Equation 2 corresponds to selecting all the words in the i-th clause of the paragraph. Equation 3 shows the computation of attention scores for all the words in the i-th clause, and equation 4 simply stacks the clause-level attention vectors to get the paragraph-level attention values.
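A NumPy sketch of this out-of-context variant, with a linear map and a dot-product scoring vector standing in for the learned projection and scoring operators (the exact parameterization is an assumption made for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
p, n, d, d_low = 3, 4, 8, 2          # toy sizes
C = rng.standard_normal((p, n, d))   # embedded paragraph tensor

P = rng.standard_normal((d, d_low))  # projection operator (learned in practice)
s = rng.standard_normal(d_low)       # scoring vector (learned in practice)

D = C @ P                     # low-dimensional word representations, (p, n, d_low)
scores = D @ s                # one score per word, independent of its neighbors
A = softmax(scores, axis=1)   # normalize over the words of each clause
```

Each word's score depends only on its own projected vector, which is what makes this variant "out of context".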

Clause context

In this variant, we score words in a clause in the context of the other words that occur in the clause. Concretely, as shown in the equations below, this is a recurrent scoring mechanism that uses an RNN to score each word in a clause as a function of its low-dimensional representation and its previous context in the clause, given by the hidden layer of the RNN. It has to be noted that the recurrence in this scoring model is over the words in a clause, while that in the LSTM described previously is over clauses.

    d^i = D[i]    (5)
    h^i_j = tanh(W_x d^i_j + W_h h^i_{j-1})    (6)
    a^i = softmax(s_c(h^i))    (7)
    A = [a^1; a^2; ...; a^p]    (8)

Equation 5, equation 7 and equation 8 are similar to equation 2, equation 3 and equation 4 respectively. The operator s_c is similar to s from simple attention. In equation 6, we apply the standard RNN recurrence to update the hidden state, using the parameter W_x, operating on the input word d^i_j at the current timestep, and W_h, operating on the hidden state h^i_{j-1} from the previous timestep.
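The recurrent scoring for one clause can be sketched in NumPy as follows; the tanh recurrence and the dot-product scorer are assumptions about the parameterization, shown for shape-level clarity rather than as the paper's exact implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
n, d_low, h_dim = 5, 3, 4
D_i = rng.standard_normal((n, d_low))            # low-dim words of clause i

W_x = 0.1 * rng.standard_normal((d_low, h_dim))  # input-to-hidden parameters
W_h = 0.1 * rng.standard_normal((h_dim, h_dim))  # hidden-to-hidden parameters
v = rng.standard_normal(h_dim)                   # scoring vector

h = np.zeros(h_dim)
scores = np.empty(n)
for j in range(n):
    # hidden state summarizes the within-clause context seen so far
    h = np.tanh(D_i[j] @ W_x + h @ W_h)
    scores[j] = h @ v   # score word j in the context of words 1..j

a_i = softmax(scores)   # attention weights for clause i
```

Because the score of word j flows through the hidden state, it depends on all preceding words of the clause, unlike the out-of-context scorer.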

3.2 Input to LSTM

A weighted sum of the input tensor C is computed, with the weights coming from the attention model, and the result is fed to the LSTM:

    H[i] = sum_j A[i][j] C[i][j]    (9)

The above equation shows the composed representation of the i-th clause stored as the i-th row of H.
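This weighted sum can be written as a single einsum over the paragraph tensor; the explicit loop below confirms that it matches the per-clause sum:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, d = 2, 3, 4
C = rng.standard_normal((p, n, d))   # embedded paragraph tensor
A = rng.random((p, n))
A /= A.sum(axis=1, keepdims=True)    # attention weights sum to 1 per clause

H = np.einsum("pn,pnd->pd", A, C)    # row i holds the attended clause vector

# the same computation as an explicit per-clause weighted sum
H_loop = np.stack([sum(A[i, j] * C[i, j] for j in range(n)) for i in range(p)])
assert np.allclose(H, H_loop)
```

With uniform weights A[i][j] = 1/n this reduces to plain averaging, i.e., the no-attention baseline compared against in Section 4.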

4 Experiments

4.1 Implementation Details

We used the 200-dimensional vectors trained on PubMed Central data by Pyysalo et al. (2013) as input representations, and projected them down to 50 dimensions to keep the parameter space of the entire model under control. The projection operator is also trained along with the rest of the pipeline. LSTMs were implemented using Keras (Chollet, 2015) and attention using Theano (Bergstra et al., 2010). (Code is publicly available at https://github.com/edvisees/sciDT.) We trained for at most 100 epochs, while monitoring accuracy on held-out validation data for early stopping. We used ADAM (Kingma and Ba, 2014) as the optimization algorithm. Dropout was used on the input to the attention layer.
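The early-stopping schedule can be sketched as a generic loop. Here val_accuracy is a hypothetical callback standing in for one epoch of training followed by evaluation on the validation data, and the patience value is illustrative (the paper does not report one):

```python
def train_with_early_stopping(epochs, val_accuracy, patience=2):
    """Stop training once held-out accuracy fails to improve for more than
    `patience` consecutive epochs; return the best epoch and its accuracy."""
    best, best_epoch, waited = -1.0, -1, 0
    for epoch in range(epochs):
        acc = val_accuracy(epoch)
        if acc > best:
            best, best_epoch, waited = acc, epoch, 0
        else:
            waited += 1
            if waited > patience:
                break
    return best_epoch, best

# toy run: accuracy peaks at epoch 2 and then degrades, so training halts early
accs = [0.50, 0.60, 0.70, 0.65, 0.64, 0.63, 0.90]
best_epoch, best_acc = train_with_early_stopping(len(accs), lambda e: accs[e])
```

Note that the loop never reaches the last epoch in this toy run; that is precisely the behavior early stopping trades for shorter training.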

4.2 Data Preprocessing, Annotations and Pipelines

We created a dataset annotated with scientific discourse types from 75 papers in the area of intercellular cancer pathways, taken from the Open Access subset of PubMed Central (http://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). Using a multithreaded preprocessing pipeline, we extracted the Results sections of each of those papers, and parsed all the sentences using the Stanford Parser (Socher et al., 2013). This process separated the main and subordinate clauses of each sentence, which we process as a sequence over separate paragraphs. We asked domain experts to label each of those clauses using the seven-label taxonomy suggested by De Waard and Pander Maat (2012). We also added a None label for those clauses that do not fall under any category. Each sequence in the dataset corresponds to the clauses extracted from a paragraph, so we make the assumption that paragraphs are minimal experiment narratives. On the whole, our dataset consists of 392 experiment descriptions with a total of 4497 clauses. (Please contact the authors if you would like to use this dataset for your research.) This is an ongoing annotation effort, and we intend to release a bigger dataset in the future.
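One way to prepare such annotations for a sequence tagger is to map each paragraph's clause labels to fixed-length id sequences. The label ordering and the padding convention below are illustrative choices, not the paper's:

```python
# Seven discourse types plus the extra None label
LABELS = ["goal", "fact", "result", "hypothesis", "method",
          "problem", "implication", "none"]
label_to_id = {label: i for i, label in enumerate(LABELS)}

def encode_paragraph(clause_labels, max_len, pad_id=-1):
    """Turn one annotated paragraph (a sequence of clause labels) into a
    fixed-length id sequence for the tagger; pad_id marks padding positions."""
    ids = [label_to_id[l] for l in clause_labels]
    return ids + [pad_id] * (max_len - len(ids))
```

Padding positions can then be masked out of the loss so that short paragraphs do not bias training.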

Figure 4: Chart showing the probability of each discourse type at various positions in a paragraph. Each paragraph is broken into five parts.

Figure 4 shows the probabilities of discourse types at various positions in a paragraph, estimated from the entire annotated dataset. It can be seen that goal, fact, problem and hypothesis are more likely at the beginning of a paragraph than at other locations, whereas method peaks before the middle, result at the middle, and implication clearly towards the end of a paragraph. This trend supports the expected narrative structure described in Section 1.
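Positional statistics of this kind can be estimated by binning each paragraph into five equal parts and accumulating label counts per bin; a minimal sketch:

```python
from collections import Counter

def position_distribution(paragraph_labels, bins=5):
    """Estimate P(discourse type | position bin): each paragraph is split
    into `bins` equal parts and label counts are accumulated per part."""
    counts = [Counter() for _ in range(bins)]
    for labels in paragraph_labels:
        n = len(labels)
        for j, label in enumerate(labels):
            b = min(j * bins // n, bins - 1)
            counts[b][label] += 1
    totals = [sum(c.values()) for c in counts]
    return [{l: c[l] / t for l in c} if t else {}
            for c, t in zip(counts, totals)]

# one toy paragraph whose clauses follow the expected narrative order
dist = position_distribution([["goal", "method", "result",
                               "result", "implication"]])
```

Run over the full annotated corpus, the per-bin distributions give the curves plotted in Figure 4.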

Model Attention Accuracy F-score
CRF - 0.6942 0.6818
SciDT None 0.6912 0.6749
SciDT Simple 0.7379 0.7261
SciDT Recurrent 0.7502 0.7405
Table 2: Accuracies and F-scores from 5-fold cross validation of SciDT in various settings and a CRF. Simple attention corresponds to the out of context variant, and recurrent to the clause context variant.

4.3 Results and Analysis

A baseline model we compare against is a Conditional Random Field (Lafferty, McCallum, and Pereira, 2001) that uses as features part-of-speech tags, the identities of verbs and adverbs, the presence of figure references and citations, and hand-crafted lexicon features that indicate specific discourse types (for example, words like demonstrate and suggest indicate implication; phrases like data not shown indicate result). In addition, we also test a variant of our model that does not use attention in the input layer, where the clause vectors are obtained simply as an average of the vectors of the words in them. Table 2 shows accuracy scores and weighted averages of F-scores (per-class F-scores averaged, weighted by the number of data points in each class in the gold standard) from 5-fold cross validation of the two baseline models and the two attention-based models. The performance of the averaged-input SciDT is comparable to the CRF model, whereas the two attention models perform better. The performance of the recurrent-attention SciDT model shows the importance of modeling context in attention.
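The weighted F-score used above is a support-weighted average of per-class F-scores; a minimal sketch with toy numbers (not the paper's):

```python
def weighted_f1(f_scores, support):
    """Average per-class F-scores weighted by gold-standard class frequency."""
    total = sum(support)
    return sum(f * s for f, s in zip(f_scores, support)) / total
```

For example, a class with F = 1.0 and 3 gold instances combined with a class with F = 0.5 and 1 gold instance yields (3.0 + 0.5) / 4 = 0.875.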

Figure 5: Examples of attention probabilities assigned by the recurrent model to words in parts of clauses and their correctly predicted labels. Darker shades show higher attention weights.

On closer examination of the attention weights assigned to words in unseen paragraphs, we noticed stronger trends in the recurrent-attention model. In particular, the main verbs of the sentences received the highest attention in many cases in the recurrent model. De Waard and Pander Maat (2012) showed that verb form is an important indicator of discourse type.

Figure 5 shows examples of parts of clauses and the attention weights assigned to the words. These indicate the general trend of higher attention being given to words relevant to the discourse classes: investigate whether indicates Goal, analysis is a Method word, and strongly suggest is a phrase expected in Implication. Some of the tagging errors made by the LSTM may be attributed to the model itself, but there were also some exceptions to the assumption that clauses are the smallest units of discourse: in some infrequent cases, clauses had components of multiple discourse types in them. Moreover, the syntactic parser we used to separate clauses was sometimes inaccurate, resulting in incorrect clause splits.

5 Discussion and Conclusion

We introduced a sequence labeling approach for identifying the discourse elements within experiment descriptions. Our model uses an attention mechanism over word representations to obtain clause representations. The results show that our attention based composition mechanism used to encode clauses adds value to the LSTM model. Visualizations show that the clause context model does indeed learn to attend to words important for the final tagging decision. In the future, we shall extend the idea of contextual attention to attend to words based on context at the paragraph level.

Our system can complement existing IE tools that operate on scientific literature, and provide useful epistemic and contextual features. Identifying the structure of experiments provides additional context information that can help various downstream tasks. For event co-reference, one can use the structure to accurately resolve anaphora links. For example, a reference made in an implication statement is likely to refer to some entity in a result statement that it follows. It has to be noted that the taxonomy we used also provides epistemic information that is helpful in information extraction (IE): IE systems need not process clauses labeled as hypothesis or goal, since they do not contain events that actually occurred. Going forward, our goal is to read, assemble and model mechanisms describing complex phenomena from collections of relevant scientific documents.

The application of our methods can reveal a small-scale discourse structure to contextualize, report and interpret evidence from individual experiments in a fine-grained context. This could be used to support cyclic models of scientific reasoning where data from individual experiments can be placed in an appropriate interpretive context within an informatics system that synthesizes knowledge across many papers. Beyond the scope of direct applications to IE, this work may be applied to Semantic Web representations of scientific knowledge and biocuration pipelines to accelerate knowledge acquisition.


  • Alley (1996) Alley, M. 1996. The craft of scientific writing. Springer Science & Business Media.
  • Bahdanau, Cho, and Bengio (2014) Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
  • Bergstra et al. (2010) Bergstra, J.; Breuleux, O.; Bastien, F.; Lamblin, P.; Pascanu, R.; Desjardins, G.; Turian, J.; Warde-Farley, D.; and Bengio, Y. 2010. Theano: A CPU and GPU math compiler in Python. In van der Walt, S., and Millman, J., eds., Proceedings of the 9th Python in Science Conference, 3–10.
  • Chollet (2015) Chollet, F. 2015. Keras. https://github.com/fchollet/keras.
  • Collobert et al. (2011) Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; and Kuksa, P. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537.
  • De Waard and Pander Maat (2012) De Waard, A., and Pander Maat, H. 2012. Verb form indicates discourse segment type in biological research papers: Experimental evidence. Journal of English for academic purposes 11(4):357–366.
  • Guo et al. (2010) Guo, Y.; Korhonen, A.; Liakata, M.; Karolinska, I. S.; Sun, L.; and Stenius, U. 2010. Identifying the information structure of scientific abstracts: an investigation of three different schemes. In Proceedings of the 2010 Workshop on Biomedical Natural Language Processing, 99–107. Association for Computational Linguistics.
  • Gupta and Manning (2011) Gupta, S., and Manning, C. 2011. Analyzing the dynamics of research by extracting key aspects of scientific papers. In Proceedings of 5th International Joint Conference on Natural Language Processing, 1–9. Chiang Mai, Thailand: Asian Federation of Natural Language Processing.
  • Hermann et al. (2015) Hermann, K. M.; Kocisky, T.; Grefenstette, E.; Espeholt, L.; Kay, W.; Suleyman, M.; and Blunsom, P. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, 1684–1692.
  • Hirohata et al. (2008) Hirohata, K.; Okazaki, N.; Ananiadou, S.; Ishizuka, M.; and Biocentre, M. I. 2008. Identifying sections in scientific abstracts using conditional random fields.
  • Hochreiter and Schmidhuber (1997) Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
  • Kingma and Ba (2014) Kingma, D., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Lafferty, McCallum, and Pereira (2001) Lafferty, J.; McCallum, A.; and Pereira, F. C. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
  • Liakata et al. (2012) Liakata, M.; Saha, S.; Dobnik, S.; Batchelor, C.; and Rebholz-Schuhmann, D. 2012. Automatic recognition of conceptualization zones in scientific articles and two life science applications. Bioinformatics (Oxford, England) 28(7):991–1000.
  • Liakata (2010) Liakata, M. 2010. Zones of conceptualisation in scientific papers: a window to negative and speculative statements. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, 1–4. Association for Computational Linguistics.
  • Mann and Thompson (1988) Mann, W. C., and Thompson, S. A. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse 8(3):243–281.
  • Marcu (2000) Marcu, D. 2000. The theory and practice of discourse parsing and summarisation.
  • Mikolov et al. (2014) Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2014. word2vec.
  • Mizuta and Collier (2004) Mizuta, Y., and Collier, N. 2004. An annotation scheme for a rhetorical analysis of biology articles. In LREC, 1737–1740.
  • Nawaz, Thompson, and Ananiadou (2010) Nawaz, R.; Thompson, P.; and Ananiadou, S. 2010. Evaluating a meta-knowledge annotation scheme for bio-events. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, 69–77. Association for Computational Linguistics.
  • Nwogu (1997) Nwogu, K. N. 1997. The medical research paper: Structure and functions. English for specific purposes 16(2):119–138.
  • Pennington, Socher, and Manning (2014) Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, 1532–43.
  • Pyysalo et al. (2013) Pyysalo, S.; Ginter, F.; Moen, H.; Salakoski, T.; and Ananiadou, S. 2013. Distributional semantics resources for biomedical text processing. Proceedings of Languages in Biology and Medicine.
  • Socher et al. (2013) Socher, R.; Bauer, J.; Manning, C. D.; and Ng, A. Y. 2013. Parsing with compositional vector grammars. In Proceedings of the ACL Conference.
  • Svoboda and Reenstra (2002) Svoboda, K. K. H., and Reenstra, W. R. 2002. Approaches to studying cellular signaling: a primer for morphologists. The Anatomical record 269(2):123–139.
  • Teufel and Moens (1999) Teufel, S., and Moens, M. 1999. Discourse-level argumentation in scientific articles: human and automatic annotation. In Towards Standards and Tools for Discourse Tagging, 84–93.
  • Teufel and Moens (2002) Teufel, S., and Moens, M. 2002. Summarizing scientific articles: experiments with relevance and rhetorical status. Computational linguistics 28(4):409–445.
  • Teufel (2000) Teufel, S. 2000. Argumentative Zoning: Information Extraction from Scientific Text. Ph.D. Dissertation, School of Cognitive Science, University of Edinburgh, Edinburgh.
  • Vinyals et al. (2015) Vinyals, O.; Kaiser, Ł.; Koo, T.; Petrov, S.; Sutskever, I.; and Hinton, G. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, 2755–2763.
  • Voit (2012) Voit, E. 2012. A First Course in Systems Biology. Garland Science, 1st edition.
  • Xu et al. (2015) Xu, K.; Ba, J.; Kiros, R.; Courville, A.; Salakhutdinov, R.; Zemel, R.; and Bengio, Y. 2015. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044.