Annotating and Extracting Synthesis Process of All-Solid-State Batteries from Scientific Literature

02/18/2020 · by Fusataka Kuniyoshi, et al.

The synthesis process is essential for achieving computational experiment design in the field of inorganic materials chemistry. In this work, we present a novel corpus of synthesis processes for all-solid-state batteries and an automated machine-reading system for extracting the synthesis processes buried in the scientific literature. We define the representation of synthesis processes using flow graphs, and create a corpus from the experimental sections of 243 papers. The automated machine-reading system combines a deep learning-based sequence tagger with a simple heuristic rule-based relation extractor. Our experimental results demonstrate that the sequence tagger with the optimal setting detects entities with a macro-averaged F1 score of 0.826, while the rule-based relation extractor achieves high performance with a macro-averaged F1 score of 0.887.




1 Introduction

With the rapid progress in the field of inorganic materials, such as the development of all-solid-state batteries (ASSBs) and solar cells, materials researchers have noted the importance of reducing the overall discovery and development time by means of computational experiment design using the knowledge in published scientific literature [2, 8, 37]. To achieve this, automated machine-reading systems that can comprehensively investigate the synthesis processes buried in the scientific literature are necessary.

In the field of organic chemistry, corpora have been proposed in which chemical substances, drug names, and their relations are structurally annotated in documents such as papers, patents, and medical documents, and in which compound names are annotated in the abstracts of molecular biology papers [18]. Linguistic resources are available in abundance, such as the GENIA corpus [16] of biomedical events in biomedical texts and an annotated corpus [19] of liquid-phase experimental processes in biological papers. In biomedical text mining, the detection of semantic relations is actively researched as a central task [22, 30, 6, 29, 28, 7]. However, the relations in biomedical text mining represent the cause and effect of physical phenomena among two or more biochemical reactions, which differs from the procedure of synthesizing materials.

In the field of inorganic chemistry, only a few corpora have been proposed in recent years. A general-purpose corpus of material synthesis has been built for inorganic materials by aligning the phrases extracted by a trained sequence-tagging model [17]. However, this corpus does not include relations between operations, making it difficult to extract the step-by-step synthesis process. In contrast, an annotated corpus with relations between operations has been created for the synthesis processes of general materials such as solar cells and thermoelectric materials [24]. However, it hardly includes the synthesis processes of ASSBs, whose operations, operation sequences, and conditions differ from those of other material categories owing to the characteristics of each material's synthesis process.

In this study, we took the first step towards developing a framework for extracting the synthesis processes of ASSBs. We designed our annotation scheme to treat a synthesis process as a synthesis flow graph, and annotated the experimental sections of 243 papers on the synthesis of ASSBs. The reliability of our corpus was evaluated by calculating inter-annotator agreement. We also propose an automatic synthesis process extraction framework for our corpus, combining a deep learning-based sequence tagger and a simple heuristic rule-based relation extractor. A web application of our synthesis process extraction framework is available on our project page. We hope that our work will aid the challenging domain of scholarly text mining in inorganic materials science.

The contributions of our study are summarized as follows:

  • We designed and built a novel corpus on synthesis processes of ASSBs named SynthASSBs, which annotates a synthesis process as a flow graph and consists of 243 papers.

  • We propose an automatic synthesis process extraction framework that combines a deep learning-based sequence tagger and a rule-based relation extractor. The sequence tagger with the best setting detects entities with a macro-averaged F1 score of 0.826, and the rule-based relation extractor achieves high performance with a macro-averaged F1 score of 0.887.

2 Annotated Corpus

In this section, we present an overview of our annotation schema and annotated corpus, which we named the SynthASSBs corpus.

2.1 Synthesis Graph Representation

We used flow graphs to represent the step-by-step operations with their corresponding materials in the synthesis processes. Flow graphs allow us to represent links that are not explicitly mentioned in the text. In the inorganic materials field, there is an existing representation of the synthesis process as a flow graph, with annotation labels defined over experimental paragraphs [14]. Our annotation scheme follows their definition, with three improvements: (1) the property of an operation is treated as a single phrase, not as a combination of numbers and units; (2) each label is modified to capture the conditions necessary to synthesize ASSBs; and (3) a relation label for coreferent phrases is included to capture anaphoric relations. A flow graph for the ASSB synthesis process is represented by a directed acyclic graph G = (V, E), where V is a set of vertices and E is a set of edges. We provide an example section from the paper [4] in Figure 1, and the graph extracted from the sentences in that section in Figure 2.
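As an illustration, the flow-graph representation can be sketched in code. This is a hypothetical minimal data structure, not the authors' implementation; the vertex and edge labels follow the label set defined below, and the acyclicity check reflects the DAG requirement:

```python
from dataclasses import dataclass, field

@dataclass
class SynthesisGraph:
    vertices: dict = field(default_factory=dict)  # vid -> (label, text)
    edges: list = field(default_factory=list)     # (src_vid, dst_vid, label)

    def add_vertex(self, vid, label, text):
        self.vertices[vid] = (label, text)

    def add_edge(self, src, dst, label):
        self.edges.append((src, dst, label))

    def is_acyclic(self):
        """Kahn's algorithm: a valid synthesis graph must be a DAG."""
        indeg = {v: 0 for v in self.vertices}
        for _, dst, _ in self.edges:
            indeg[dst] += 1
        queue = [v for v, d in indeg.items() if d == 0]
        seen = 0
        while queue:
            v = queue.pop()
            seen += 1
            for src, dst, _ in self.edges:
                if src == v:
                    indeg[dst] -= 1
                    if indeg[dst] == 0:
                        queue.append(dst)
        return seen == len(self.vertices)

# The ball-milling step from Figure 1, encoded as a tiny graph:
g = SynthesisGraph()
g.add_vertex(1, "Material-Start", "Li2CO3")
g.add_vertex(2, "Material-Start", "TiO2")
g.add_vertex(3, "Operation", "ball-milled")
g.add_vertex(4, "Operation", "calcined")
g.add_vertex(5, "Property-Time", "4 h")
g.add_edge(1, 3, "Next")       # raw materials feed the operation
g.add_edge(2, 3, "Next")
g.add_edge(3, 4, "Next")       # operation order
g.add_edge(5, 3, "Condition")  # "for 4 h" conditions the milling
print(g.is_acyclic())  # True
```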

2.2 Label Set

The label set comprises vertex labels and edge labels for the synthesis graph representation, described in Sections 2.2.1 and 2.2.2, respectively.

2.2.1 Vertices

The pure Li4Ti5O12 material, denoted LTO, was obtained from Li2CO3 (99.99 %, Aladdin) and anatase TiO2 (99.8 %, Aladdin) precursors, mixed in a 4:5 molar ratio of Li:Ti. The precursors, dispersed in deionized water, were ball-milled for 4 h at a grinding speed of 350 rpm, and then calcined at 800 °C for 12 h after drying.

Figure 1: Example of a synthesis process. The underlined phrases relate to the material synthesis process.
Figure 2: Example of the synthesis graph generated from Figure 1.
Figure 3: Screenshot of brat interface annotating synthesis process in Figure 1.

The following vertex labels were defined to annotate spans of text, which correspond to vertices in the synthesis graph. The labels represent materials, operations, and properties. For the material labels, we labeled all phrases that represent materials in the text, whereas operation and property labels were added only to phrases related to the synthesis process. We segmented the roles of materials into five categories and introduced multiple property types for analyzing the structure of the synthesis process. The defined labels and their examples, mostly taken from Fig. 1, are explained in the following.

Material-Start is a raw material used to synthesize the final material; for example, Li2CO3 or TiO2.

Material-Intermedium indicates an intermediate material produced during the synthesis process; for example, “then, LixMoO was obtained from a mixture of InMoO and LiI.”

Material-Final represents the final material (or products) of the material synthesis process; for example, Li4Ti5O12.

Material-Solvent is liquid that is used to dissolve substances and create solutions; for example, deionized water, ethanol, or methanol.

Material-Others represents other materials that are not related to the synthesis process, such as compounds for thin films or catalysts; for example, “… and then purified with activated carbon and acid alumina.”

Operation represents an individual action performed by the experimenters. It is often represented by verbs; for example, “… were ball-milled for 4 h …”

Property-Time represents a time condition associated with an operation; for example, “… were ball-milled for 4 h…”

Property-Temp represents a temperature condition associated with an operation; for example, “… and then calcined at 800 °C …”

Property-Rot indicates a rotational speed condition associated with an operation; for example, “… at a grinding speed of 350 rpm …”

Property-Press represents a pressure condition associated with an operation; for example, “The powder was uniaxially cold pressed at 300 MPa.”

Property-Atmosphere represents an atmosphere condition associated with an operation; for example, “… was conducted in Ar atmosphere for 3 h.”

Property-Others represents other conditions associated with an operation or the manufacturer names and purity associated with a material; for example, “MgO (purity 99.999%),” “… pressed into pellets (10 mm diameter, 1 mm thick),” and “the starting materials in the 1/4 molar ratio.”

2.2.2 Edges

We defined the following three edge labels, which represent the relations between vertices.

Condition indicates the conditions of an operation (for example, the temperature, time, and atmosphere under which it is performed). This label is also used to express the relation between a raw material and its manufacturer name or purity.

Next represents the order of an operation sequence and indicates the input or output relations between a material and an operation.

Coreference is a link that associates two or more phrases when these phrases refer to the same material.

3 Annotation Details and Evaluation

In this section, we explain the annotation details, including the text preparation, preprocessing, and annotation settings; thereafter we present the settings and results of the inter-annotator agreement experiments.

3.1 Annotation Details

We constructed a corpus including the experimental sections of 243 papers on material synthesis processes in the following manner.

We collected papers on experimental processes from online journals. To limit the annotation target to ASSBs, which are synthesized using the “solid phase method” or “liquid phase method”, we set the search queries to identify papers containing “solid electrolyte” or “ionic conductivity”, but not “poly”, “SEI”, or “solid electrolyte interphase”, in the titles, abstracts, and keywords. Four experts in materials science were involved in choosing the journal sources and selecting the keywords.

Thereafter, we manually selected 243 papers confirmed to include the synthesis process in the “Experimental”, “Preparation”, or “Method” sections, because synthesis processes often appear in these sections. We applied a PDF parser to extract text from the downloaded PDF papers. We extracted the texts of the above sections, manually corrected several typos, and unified certain orthographical variants in composition formulae and quantitative expressions. For example, “°C” was replaced with the token “degC”.

Finally, we annotated the synthesis graph on the obtained texts. Three annotators, who were master’s course students in materials science, were involved in the annotation. Annotator A tagged 77 papers, annotator B tagged 68 papers, and annotator C tagged 98 papers. One professional in materials science then verified the annotations of the three student annotators and corrected the annotation errors. We used the brat annotation toolkit [32] for manual annotation. Figure 3 illustrates the brat annotation interface.
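Since the corpus was annotated with brat, each document's annotations are stored in brat's standoff format: text-bound entity lines (T…) and relation lines (R…). A sketch of reading such a file is shown below; the example lines and character offsets are invented for illustration, but the T/R line layout is brat's documented format:

```python
def parse_ann(lines):
    """Parse brat standoff lines into entities and relations."""
    entities, relations = {}, []
    for line in lines:
        line = line.strip()
        if line.startswith("T"):  # text-bound: id \t "label start end" \t span text
            tid, type_span, text = line.split("\t")
            label, start, end = type_span.split(" ")
            entities[tid] = (label, int(start), int(end), text)
        elif line.startswith("R"):  # relation: id \t "label Arg1:Tx Arg2:Ty"
            rid, body = line.split("\t")
            label, arg1, arg2 = body.split(" ")
            relations.append((label,
                              arg1.split(":")[1],
                              arg2.split(":")[1]))
    return entities, relations

# Invented annotation lines in the spirit of Figure 3:
ann = [
    "T1\tMaterial-Start 52 58\tLi2CO3",
    "T2\tOperation 130 141\tball-milled",
    "T3\tProperty-Time 146 149\t4 h",
    "R1\tNext Arg1:T1 Arg2:T2",
    "R2\tCondition Arg1:T3 Arg2:T2",
]
entities, relations = parse_ann(ann)
print(entities["T2"][0])   # Operation
print(relations[0])        # ('Next', 'T1', 'T2')
```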

3.2 Inter-Annotator Agreement

The agreement calculations were based on whether the label spans matched precisely among the three annotators, using 30 randomly selected synthesis processes from the SynthASSBs corpus. We calculated the agreements using Cohen’s kappa. For each pair of annotators selected from the three annotators A, B, and C, the agreement score was calculated by regarding the labels identified by one annotator as gold and the labels of the other annotator as the prediction, and the average of the scores in the two directions was taken. For the vertices, we calculated two agreement scores: the agreement on both spans and types (All), and the agreement on types over the spans annotated by both annotators (Type). For the edges, we likewise calculated two agreement scores over the vertices annotated by both annotators: one comparing the existence of edges and their types (All), and the other comparing types over the edges annotated by both annotators (Type). The inter-annotator agreement results are presented in Table 1.

Vertices Edges
Annotators All Type All Type
A–B 0.637 1.000 0.705 0.990
B–C 0.667 1.000 0.671 0.991
A–C 0.608 1.000 0.651 0.990
Table 1: Inter-annotator agreement results using Cohen’s kappa.

We confirmed that the types (Type) of vertices and edges matched almost perfectly among the annotators (both kappa coefficients were over 0.99), while the spans and types (All) showed substantial agreement. This demonstrates that the annotation scheme for vertex and edge types was clear. However, the kappa coefficients in the All setting were lower than those in the Type setting, indicating that ambiguity arose when deciding which phrases should be involved in the synthesis process. We leave improvements to the annotation guidelines to reduce this ambiguity for future work.
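For reference, Cohen's kappa over a shared set of candidate spans (the Type setting) can be computed as follows. The label lists are invented for illustration; note that Cohen's kappa is symmetric in the two annotators, so averaging the two directions yields the same value as a single computation:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two parallel label sequences."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Two annotators' type decisions on four shared spans (illustrative):
ann_a = ["Operation", "Material-Start", "Property-Time", "Operation"]
ann_b = ["Operation", "Material-Start", "Property-Temp", "Operation"]
print(round(cohens_kappa(ann_a, ann_b), 3))  # 0.636
```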

3.3 Statistics

Several key statistics of the SynthASSBs corpus, such as the numbers of documents, sentences, tokens, and entities, are summarized in Table 2. The number of vertices or edges per type is indicated in Table 3. For these statistics, we used scispaCy [26] to split sentences, perform tokenization, and extract entities.

Item Count
Documents 243
Sentences 2,877
Tokens 46,477
Entities 10,995
Vertex types 12
Edge types 3
Avg. sentences/document 12
Avg. tokens/document 191
Avg. entities/document 45
Table 2: SynthASSBs corpus statistics.
Vertex / Edge types Count
Material 2,749
Material-Start 1,319
Material-Intermedium 138
Material-Final 532
Material-Solvent 212
Material-Others 548
Operation 1,680
Property 3,994
Property-Temp 704
Property-Time 642
Property-Rot 66
Property-Press 81
Property-Atmosphere 275
Property-Others 2,226
Condition 4,139
Next 3,018
Coreference 759
TOTAL 23,082
Table 3: Statistics of vertices and edges annotated in SynthASSBs corpus.

4 Synthesis Process Extraction

Our framework extracted synthesis processes in a pipeline manner, using two modules: a deep learning-based sequence tagger for extracting the phrases defined as vertices, and a rule-based relation extractor (RE) for connecting the edges between pairs of extracted phrases. As illustrated in Figure 4, our framework first performed sequence tagging (a) to extract the phrases related to the material synthesis process. Thereafter, the relations between entities were extracted by the rule-based RE (b).

Figure 4: Overview of synthesis process extraction. The red phrases and circles indicate terms related to materials, green indicates operations, and yellow indicates properties. The solid and broken arrows represent the next and condition edges, respectively.

4.1 Sequence Tagging

To identify the spans of the vertices, we employed a bidirectional long short-term memory network with a conditional random field layer (BiLSTM-CRF) as the sequence-tagging model. We used six different word representations in the neural sequence tagger: character-level embedding (CE) [38]; byte pair encoding (BPE) [31]; two word embeddings for inorganic materials science, Mat-WE [15] and mat2vec [35]; Mat-ELMo [15], an Embeddings from Language Models (ELMo) [27] model pretrained on materials science texts; and SciBERT [5], a Bidirectional Encoder Representations from Transformers (BERT) [10] model pretrained on biomedical and computer science texts. These representations were fine-tuned during training on the sequence-tagging task.
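Sequence taggers such as BiLSTM-CRF consume token-level BIO tags rather than character-level spans, so the annotated spans must first be projected onto tokens. A minimal sketch of this conversion, using naive whitespace tokenization for illustration (the actual pipeline tokenizes with scispaCy):

```python
def to_bio(text, spans):
    """Convert (start, end, label) character spans to token-level BIO tags."""
    tokens, tags, pos = [], [], 0
    for tok in text.split():
        start = text.index(tok, pos)
        end = start + len(tok)
        pos = end
        tag = "O"
        for s, e, label in spans:
            if start >= s and end <= e:
                # B- on the first token of a span, I- on continuations
                tag = ("B-" if start == s else "I-") + label
                break
        tokens.append(tok)
        tags.append(tag)
    return list(zip(tokens, tags))

text = "The precursors were ball-milled for 4 h"
spans = [(20, 31, "Operation"), (36, 39, "Property-Time")]
print(to_bio(text, spans))
```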

4.2 Relation Extraction

We developed the following five rules using the training portion of the SynthASSBs corpus. The illustrations following the rule descriptions are provided for visualization. The circles in the figures represent sequential tokens; the red, green, yellow, and white circles correspond to Material, Operation, Property, and other words/phrases, respectively. A bounding box around circles represents a sentence. A solid arrow represents a Next edge, while a broken arrow represents a Condition edge.

Rule of Operation to Operation (O-O):

An Operation phrase is connected to the next Operation phrase in the same sentence or in the next sentences.

Figure 5: Illustration of O-O.

Rule of Material to Operation (M-O):

When an Operation phrase appears in brackets, a Material-Start or Material-Solvent phrase before the left bracket is connected to the Operation. In the example sentence “Samples were prepared from H3BO3, Al2O3, SiO2 and either Li2CO3 (dried at 200 degC),” the Operation phrase “dried” written in brackets is connected to its preceding Material-Start phrase “Li2CO3”, and not to “H3BO3”, “Al2O3”, or “SiO2”.

For other Material-Start or Material-Solvent phrases, we applied the following rules, ignoring the Operation phrases in brackets. A Material-Start or Material-Solvent phrase is connected to its closest Operation phrase in a sentence. If two candidates exist within the same distance, the previous candidate is selected. If no Operation phrase exists in a sentence, the phrase is connected to the next-closest Operation phrase beyond the sentence boundaries.

Figure 6: Illustration of M-O.

Rule of Operation to Material (O-M):

An Operation phrase that appears at the end of the operation sequence is connected to all Material-Final phrases in the text.

Figure 7: Illustration of O-M.

Rule of Property-Others to Operation or Material (Po-OM):

When a Property-Others phrase appears in brackets, the phrase is connected to the closest previous Material-Start phrase. In the example phrase “TiO2, GeO2 and NH4H2PO4 (purity 99.999 %),” “purity 99.999 %” is connected to its closest previous Material-Start phrase, namely “NH4H2PO4”, and not “TiO2” or “GeO2”.

A Property-Others phrase is connected to the closest phrase of Material-Start, Material-Final, Material-Intermedium, Material-Solvent, Material-Others, or Operation. If two candidates exist with the same distance, the previous candidate is selected.

Figure 8: Illustration of Po-OM.

Rule of Property to Operation (P-O):

A Property-Time, Property-Temp, Property-Rot, Property-Press, or Property-Atmosphere (that is, properties other than Property-Others) phrase is connected to its closest previous Operation phrase in the sentence or before it.

Figure 9: Illustration of P-O.
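The M-O, Po-OM, and P-O rules share the same distance-based matching: connect a phrase to its closest Operation, preferring the earlier candidate on ties. A hypothetical sketch of that core step (the token indices and entity representation are our assumptions, not the authors' code):

```python
def closest_operation(entity_idx, entities, previous_only=False):
    """Return the text of the Operation nearest to entity_idx by token distance.

    entities: list of (token_index, label, text).
    previous_only: restrict candidates to Operations at or before entity_idx,
    as in the P-O rule; the strict '<' keeps the earlier candidate on ties.
    """
    best = None
    for idx, label, text in entities:
        if label != "Operation":
            continue
        if previous_only and idx > entity_idx:
            continue
        dist = abs(idx - entity_idx)
        if best is None or dist < best[0]:
            best = (dist, text)
    return best and best[1]

# Entities from the Figure 1 example, with illustrative token positions:
entities = [
    (0, "Material-Start", "Li2CO3"),
    (4, "Operation", "ball-milled"),
    (7, "Property-Time", "4 h"),
    (12, "Operation", "calcined"),
]
print(closest_operation(0, entities))                      # ball-milled
print(closest_operation(7, entities, previous_only=True))  # ball-milled
```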

5 Evaluation

Material Operation Property ALL
Model F1 P R F1 P R F1 P R F1
CE [38] 0.686 0.644 0.733 0.741 0.779 0.708 0.571 0.673 0.496 0.666
BPE [31] 0.860 0.837 0.883 0.799 0.785 0.814 0.706 0.726 0.686 0.788
mat2vec [35] 0.841 0.826 0.858 0.804 0.742 0.877 0.697 0.727 0.668 0.781
Mat-WE [15] 0.834 0.816 0.854 0.797 0.769 0.827 0.702 0.703 0.701 0.778
Mat-ELMo [15] 0.917 0.897 0.938 0.823 0.768 0.887 0.739 0.761 0.718 0.826
SciBERT [5] 0.879 0.866 0.893 0.839 0.798 0.884 0.709 0.749 0.673 0.809
Table 4: F1 scores of sequence-labeling models with different base representations on development dataset. Macro-averaged F1 scores were calculated using all three coarse-grained types (ALL). The highest and second-highest for each metric are indicated in bold and underline, respectively.

5.1 Evaluation Settings

We evaluated the sequence tagger and rule-based RE individually. The sequence tagger was implemented using Flair [3], a multilingual neural sequence-labeling framework for state-of-the-art natural language processing. When training the sequence tagger, we set the number of training epochs to 200 and used the default hyper-parameters of Flair.

The sequence tagger was evaluated using two settings of type sets. In the first setting, we extracted three coarse-grained distinct types of vertices in the flow graph: the Material, Operation, and Property vertices. In the second setting, we extracted all 12 fine-grained types of vertices in the flow graph: Material-Start, Material-Intermedium, Material-Final, Material-Solvent, Material-Others, Operation, Property-Time, Property-Temp, Property-Rot, Property-Press, Property-Atmosphere, and Property-Others.

We divided the SynthASSBs corpus into three subsets: 145 sections for training, 49 for development, and 49 for testing. We used the F1 score as the primary evaluation metric. For the first setting, we report the macro-averaged F1 score of the three coarse-grained types (ALL); for the second setting, we report the micro-averaged F1 scores for the three coarse-grained types (Material, Operation, and Property) and the macro-averaged F1 score of these three types (ALL).

We also plot the changes in F1 score as the training set is increased in increments of 5%, to examine whether the corpus is large enough to train the sequence tagger. This evaluation was performed on the fine-grained types, with scores calculated on the development set. We show the micro-averaged F1 scores for the three coarse-grained types and their macro-averaged F1 score (ALL) in the plot.

For the rule-based RE, we used the 145 sections (used for training in sequence tagging) to design the rules described in Sec. 4.2, and the remaining 98 sections (used for development and testing in sequence tagging) to evaluate them. To evaluate the RE, an F1 score based on exact matching was used as the primary evaluation metric. We used Coreference relations in the evaluation: phrase pairs with Coreference relations were treated as the same phrase. The performance of the rule-based RE was further analyzed by evaluating the contribution of the fine-grained labels in an ablation study, and by measuring the accuracy and coverage of each rule.
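The exact-match F1 used to score extracted relations can be sketched as follows: a predicted triple counts as correct only if the identical triple appears in the gold set. The example triples are invented for illustration:

```python
def prf(gold, pred):
    """Precision, recall, and F1 over sets of (head, tail, label) triples."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("Li2CO3", "ball-milled", "Next"),
        ("4 h", "ball-milled", "Condition"),
        ("ball-milled", "calcined", "Next")}
pred = {("Li2CO3", "ball-milled", "Next"),
        ("4 h", "calcined", "Condition"),   # wrong head attachment
        ("ball-milled", "calcined", "Next")}
p, r, f1 = prf(gold, pred)
print(round(f1, 3))  # 0.667
```

The macro-averaged ALL score then averages such F1 values over the Condition and Next relation types.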

5.2 Sequence-Tagging Results

Table 4 summarizes the sequence-labeling results for extracting the three coarse-grained vertex types over the six word representations described in Sec. 4.1. The results show reasonably high performance: Mat-ELMo achieved the highest performance, with an F1 score of 0.917 on Material and 0.826 on ALL, while SciBERT achieved the best score on Operation.

The performance of the sequence tagger with Mat-ELMo, evaluated on the fine-grained types, is presented in Table 5. Among the Material types, Material-Start achieved the highest F1 score of 0.887. The F1 score of Operation was 0.821, which was higher than the average. Among the Property types, Property-Time achieved the highest F1 score of 0.928. However, the F1 score of Material-Intermedium was only 0.105: of the 36 Material-Intermedium phrases, 11 were incorrectly detected as Material-Start and 21 as Material-Final. This may be because it is difficult to extract Material-Intermedium without understanding the whole structure of the synthesis process.

Types F1 P R
Material 0.661 0.692 0.665
Material-Start 0.887 0.885 0.888
Material-Intermedium 0.105 0.286 0.065
Material-Final 0.675 0.591 0.786
Material-Solvent 0.793 0.852 0.742
Material-Others 0.845 0.845 0.845
Operation 0.821 0.792 0.852
Property 0.780 0.778 0.784
Property-Temp 0.842 0.806 0.880
Property-Time 0.928 0.932 0.925
Property-Rot 0.889 0.857 0.923
Property-Press 0.605 0.619 0.591
Property-Atmosphere 0.775 0.775 0.775
Property-Others 0.641 0.676 0.609
ALL 0.754 0.754 0.767
Table 5: F1 scores of the sequence-labeling model with Mat-ELMo on the development dataset by type. Material and Property indicate the macro-averaged F1 scores over their fine-grained types. ALL is the macro-averaged score of the three coarse-grained types (i.e., Material, Operation, and Property in this table).

Changes in F1 score according to training set size are presented in Figure 10. We observe that the ALL curves remain almost flat after around 20% of the training set is used. Therefore, we conclude that the SynthASSBs corpus is large enough to train the sequence tagger. In detail, for Material, the F1 score gradually increases with training set size because material phrases often include unknown terms. The performance on Operation is flat after 5% of the training set is used because only a limited variety of Operation verbs appears in synthesis processes. Property also plateaus once 20% or more of the training set is used, suggesting that properties are described in a regular manner.

Figure 10: Changes in F1 score according to training set size, increased in increments of 5% until the training set reaches 145 sections. ALL shows the macro-averaged F1 score of the three coarse-grained types.

5.3 Relation Extraction Results

Table 6 displays the results of the rule-based model together with the ablation test results. The high performance, with a macro-averaged F1 score of 0.887, shows the effectiveness of the rules. To confirm the effectiveness of the fine-grained types, or sub-labels, we compared the F1 scores under three settings. In the first setting, we extracted the relations without the Material sub-labels (– Material sub-labels), applying the M-O rule to all Material types and ignoring the O-M rule. In the second setting, we extracted relations without the Property sub-labels (– Property sub-labels), applying the Po-OM rule to all Property types and ignoring the P-O rule. The final setting used neither the Material nor the Property sub-labels (– both sub-labels). According to the ablation tests, applying the sub-label rules improved the F1 scores by 7.8 points on Condition and 11.1 points on Next.

To analyze the effects of the rules in further detail, the coverage and accuracy for each rule were determined, and these are presented in Table 7. By comparing the rule coverage and accuracy, it could be observed that the rules of Po-OM and P-O, which exhibited wide coverage and high accuracy (over 25% and 85%, respectively), contributed significantly to the extraction performance. This indicates that the rules for extracting the relation between the Property and Material or Operation successfully mimicked the manner of reading a paper. Although the coverage of the rule O-M was extremely low and the accuracy was relatively low (4.6% and 48.9%, respectively), this rule was essential for constructing the synthesis graph and could not be omitted.

Condition Next ALL
Rule-based RE 0.914 0.860 0.887
– Material sub-labels 0.914 0.749 0.832
– Property sub-labels 0.836 0.860 0.848
– both sub-labels 0.836 0.749 0.793
Table 6: F1 scores of the rule-based system and ablation test results. Macro-averaged F1 scores were calculated using Condition and Next (ALL).
Rule Coverage Accuracy
O-O 0.219 0.811
M-O 0.160 0.811
O-M 0.046 0.489
Po-OM 0.322 0.853
P-O 0.254 0.951
Table 7: Coverage and accuracy of our rules applied to training data.

6 Qualitative Evaluation

We present a qualitative evaluation on a real-world scientific paper to demonstrate the efficacy of our framework. The prediction obtained by our framework and the corresponding synthesis graph are shown in Figures 11 and 12, respectively. Our framework could extract phrases related to material synthesis almost without error. In particular, relations across sentences were extracted correctly; for example, our framework created a Next edge between “mixed” in the first sentence and “dispersed” in the second sentence. Moreover, our framework succeeded in identifying the Material type even when a material was written in abbreviated form; for example, it detected that both “Li4Ti5O12” and “LTO” in the first sentence are Material-Final. However, the label type was wrong for “anatase” in the first sentence, and the Operation connection between “calcined” and “drying” in the second sentence differed from the annotator’s label. This is because our rule-based RE could not understand the meaning of “after drying”.

Figure 11: Synthesis process extraction results from the text in Figure 1
Figure 12: Synthesis graph of the extracted synthesis process in Figure 11.

7 Error Analysis

We analyzed 135 errors in the sequence-tagging results. Over-detection accounted for 49 cases, often Property-Others phrases that were not directly related to the synthesis process, such as vessel size or thickness and milling machine properties. Another 49 entities were missed, often Property types, owing to rare adverbs, adjectives, or units; for example, “naturally”, “constant”, “mm-thick”, and “micrometers”.

In the RE, we identified two major problems when analyzing the 129 errors. The first was caused by our definition of distance, which uses the number of words and ignores syntactic structure. For example, in the sentence “LiNO3 were weighed according to the stoichiometry of the Li3xLa2/3-xTiO3 and dissolved in ethylene,” our distance-based rule predicted that “Li3xLa2/3-xTiO3” qualifies “dissolved” instead of “weighed”. This type of problem accounted for 73 errors. The second problem was complex operation sequences. When two or more material synthesis processes were described in one document, a synthesis process indicated at the beginning was sometimes omitted in the second and subsequent explanations. In such cases, branching and merging of synthesis processes occurred. Our rules assumed that the operation sequence was described sequentially, so they could not identify these processes; this caused 28 errors. One means of addressing these problems is to incorporate additional rules; however, creating more rules manually is not realistic because the descriptions are sometimes ambiguous and cannot be resolved without understanding the content. We are therefore considering a deep learning-based extractor that can take syntactic structures into account.

8 Related Work

Process extraction from procedural texts has been studied in a wide range of fields. Such studies include an effort to extract liquid mixing procedures from text [20], an annotated corpus of photosynthesis and formation erosion processes [9], the extraction of response processes from guidance texts at the time of disaster occurrence [11], and several attempts to structure and extract series of cooking-related actions, such as baking and boiling, from cooking recipe sentences [23, 13, 21, 1]. Numerous language resources exist in the organic chemistry field [16, 18, 36, 19, 34], annotated with the experimental processes that appear in papers. Moreover, attempts have been made to extract processes by applying event extraction methods to realize machine-based reading of biomedical papers [22, 30, 6, 29, 28, 7]. In the inorganic chemistry field, several corpora are available for general-purpose materials [24, 17], and some studies are underway to extract synthesis processes from papers [25, 33]; however, no corpus or extraction system exists for the synthesis of ASSBs. Therefore, we have presented a domain-specific corpus of the synthesis process for ASSBs, and an automated machine-reading system for extracting the synthesis processes buried in the scientific literature.

9 Conclusion

This study has addressed the lack of labeled data, a major bottleneck in developing ASSBs. We constructed the novel SynthASSBs corpus from the experimental sections of 243 papers, annotated with synthesis graphs that represent the synthesis processes of ASSBs described in text. Moreover, we proposed an automatic synthesis process extraction framework over this corpus, combining a deep learning-based sequence tagger with a rule-based relation extractor that mimics human reading. The sequence tagger with the best setting detects entities with a macro-averaged F1 score of 0.826, while the rule-based RE achieves a macro-averaged F1 score of 0.887.
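Macro-averaged F1, the metric reported above, is the unweighted mean of the per-class F1 scores, so rare entity classes weigh as much as frequent ones. A minimal sketch (the per-class counts below are hypothetical, not the paper's data):

```python
def f1(tp, fp, fn):
    """F1 for one entity class: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Hypothetical per-class (tp, fp, fn) counts for three entity types.
per_class = [(90, 10, 10), (80, 20, 20), (70, 15, 30)]
macro_f1 = sum(f1(*counts) for counts in per_class) / len(per_class)
```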

In future work, we will develop a deep learning-based relation extractor that incorporates syntactic information to improve extraction performance. We will also apply our extraction framework to the existing literature and, using the abundant extracted knowledge, build a computational synthesis design framework for discovering novel materials.


  • [1] O. Abend, S. B. Cohen, and M. Steedman (2015) Lexical event ordering with an edge-factored model. In Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL), Cited by: §8.
  • [2] A. Agrawal and A. Choudhary (2016) Perspective: materials informatics and big data: realization of the “fourth paradigm” of science in materials science. APL Materials. Cited by: §1.
  • [3] A. Akbik, T. Bergmann, D. Blythe, K. Rasul, S. Schweter, and R. Vollgraf (2019) FLAIR: an easy-to-use framework for state-of-the-art NLP. In Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL Demonstrations), Cited by: §5.1.
  • [4] X. Bai, W. Li, A. Wei, X. Li, L. Zhang, and Z. Liu (2016) Preparation and electrochemical properties of Mg2+ and F- co-doped Li4Ti5O12 anode material for use in the lithium-ion batteries. Electrochimica Acta. Cited by: §2.1.
  • [5] I. Beltagy, K. Lo, and A. Cohan (2019) SciBERT: a pretrained language model for scientific text. In International Joint Conference on Natural Language Processing (IJCNLP), Cited by: §4.1, Table 4.
  • [6] J. Berant, V. Srikumar, P. Chen, A. V. Linden, B. Harding, B. Huang, P. Clark, and C. D. Manning (2014) Modeling biological processes for reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP), Cited by: §1, §8.
  • [7] J. Björne and T. Salakoski (2018) Biomedical event extraction using convolutional neural networks and dependency parsing. In Biomedical Natural Language Processing Workshop (BioNLP) in ACL 2018, Cited by: §1, §8.
  • [8] K. T. Butler, D. W. Davies, H. M. Cartwright, O. Isayev, and A. Walsh (2018) Machine learning for molecular and materials science. Nature. Cited by: §1.
  • [9] B. Dalvi, L. Huang, N. Tandon, W. Yih, and P. E. Clark (2018) Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Cited by: §8.
  • [10] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Cited by: §4.1.
  • [11] W. Guo, Q. Zeng, H. Duan, G. Yuan, W. Ni, and C. Liu (2018) Automatic extraction of emergency response process models from chinese plans. IEEE Access. Cited by: §8.
  • [12] Z. Huang, W. L. Xu, and K. Yu (2015) Bidirectional LSTM-CRF models for sequence tagging. In arXiv:1508.01991. Cited by: §4.1.
  • [13] C. Kiddon, G. T. Ponnuraj, L. S. Zettlemoyer, and Y. Choi (2015) Mise en place: unsupervised interpretation of instructional recipes. In Empirical Methods in Natural Language Processing (EMNLP), Cited by: §8.
  • [14] E. Kim, K. Huang, O. Kononova, G. Ceder, and E. Olivetti (2019) Distilling a materials synthesis ontology. Matter. Cited by: §2.1.
  • [15] E. Y. Kim, K. Huang, A. Tomala, S. L. Matthews, E. Strubell, A. R. Saunders, A. L. McCallum, and E. Olivetti (2017) Machine-learned and codified synthesis parameters of oxide materials. In Scientific data, Cited by: §4.1, Table 4.
  • [16] J. Kim, T. Ohta, Y. Tateisi, and J. Tsujii (2003) GENIA corpus - a semantically annotated corpus for bio-textmining. Bioinformatics. Cited by: §1, §8.
  • [17] O. Kononova, H. Huo, T. He, Z. Rong, T. Botari, W. Sun, V. Tshitoyan, and G. Ceder (2019) Text-mined dataset of inorganic materials synthesis recipes. In Scientific Data, Cited by: §1, §8.
  • [18] M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, and D. S. et al. (2015) The CHEMDNER corpus of chemicals and drugs and its annotation principles. Journal of Cheminformatics. Cited by: §1, §8.
  • [19] C. Kulkarni, W. Xu, A. Ritter, and R. Machiraju (2018) An annotated corpus for machine reading of instructions in wet lab protocols. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Cited by: §1, §8.
  • [20] R. Long, P. Pasupat, and P. Liang (2016) Simpler context-dependent logical forms via model projections. In Association for Computational Linguistics (ACL), Cited by: §8.
  • [21] H. Maeta, T. Sasada, and S. Mori (2015) A framework for procedural text understanding. In International Conference on Parsing Technologies (IWPT), Cited by: §8.
  • [22] M. Miwa, P. Thompson, and S. Ananiadou (2012) Boosting automatic event extraction from the literature using domain adaptation and coreference resolution. Bioinformatics. Cited by: §1, §8.
  • [23] S. Mori, H. Maeta, Y. Yamakata, and T. Sasada (2014) Flow graph corpus from recipe texts. In International Conference on Language Resources and Evaluation (LREC), Cited by: §8.
  • [24] S. Mysore, Z. Jensen, E. Kim, K. Huang, H. Chang, E. Strubell, J. Flanigan, A. McCallum, and E. Olivetti (2019) The materials science procedural text corpus: annotating materials synthesis procedures with shallow semantic structures. In Proceedings of the 13th Linguistic Annotation Workshop (LAW) at ACL 2019, Cited by: §1, §8.
  • [25] S. Mysore, E. Kim, E. Strubell, A. Liu, H. Chang, S. Kompella, K. Huang, A. McCallum, and E. Olivetti (2017) Automatically extracting action graphs from materials science synthesis procedures. Workshop on Machine Learning for Molecules and Materials in NeurIPS 2017. Cited by: §8.
  • [26] M. Neumann, D. King, I. Beltagy, and W. Ammar (2019) ScispaCy: fast and robust models for biomedical natural language processing. In Biomedical Natural Language Processing Workshop (BioNLP) in ACL 2019, Cited by: §3.3.
  • [27] M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Cited by: §4.1.
  • [28] P. V. S. S. Rahul, S. K. Sahu, and A. Anand (2017) Biomedical event trigger identification using bidirectional recurrent neural network based models. In Biomedical Natural Language Processing Workshop (BioNLP) in ACL 2017, Cited by: §1, §8.
  • [29] S. Rao, D. Marcu, K. Knight, and H. Daumé (2017) Biomedical event extraction using abstract meaning representation. In Biomedical Natural Language Processing Workshop (BioNLP) in ACL 2017, Cited by: §1, §8.
  • [30] A. T. Scaria, J. Berant, M. Wang, P. Clark, J. Lewis, B. Harding, and C. D. Manning (2013) Learning biological processes with global constraints. In Empirical Methods in Natural Language Processing (EMNLP), Cited by: §1, §8.
  • [31] R. Sennrich, B. Haddow, and A. Birch (2016) Neural machine translation of rare words with subword units. In Association for Computational Linguistics (ACL), Cited by: §4.1, Table 4.
  • [32] P. Stenetorp, S. Pyysalo, G. Topic, T. Ohta, S. Ananiadou, and J. Tsujii (2012) Brat: a web-based tool for nlp-assisted text annotation. In European Chapter of the Association for Computational Linguistics (EACL), Cited by: §3.1.
  • [33] R. Tamari, H. Shindo, D. Shahaf, and Y. Matsumoto (2019) Playing by the book: an interactive game approach for action graph extraction from text. In Workshop on extracting structured knowledge from scientific publications in NAACL 2019, Cited by: §8.
  • [34] K. Tanaka, T. Iwakura, Y. Koyanagi, N. Ikeda, H. Shindo, and Y. Matsumoto (2018) Chemical compounds knowledge visualization with natural language processing and linked data. In International Conference on Language Resources and Evaluation (LREC), Cited by: §8.
  • [35] V. Tshitoyan, J. Dagdelen, L. Weston, A. Dunn, Z. Rong, O. Kononova, K. A. Persson, G. Ceder, and A. Jain (2019) Unsupervised word embeddings capture latent knowledge from materials science literature. Nature. Cited by: §4.1, Table 4.
  • [36] M. Tsubaki, M. Shimbo, and Y. Matsumoto (2017) Protein fold recognition with representation learning and long short-term memory. IPSJ Transactions on Bioinformatics. Cited by: §8.
  • [37] J. Wei, X. Chu, X. Sun, K. Xu, H. Deng, J. Chen, Z. Wei, and M. Lei (2019) Machine learning in materials science. InfoMat. Cited by: §1.
  • [38] X. Zhang, J. J. Zhao, and Y. LeCun (2015) Character-level convolutional networks for text classification. In Neural Information Processing Systems (NeurIPS), Cited by: §4.1, Table 4.