
A Hybrid Model of Classification and Generation for Spatial Relation Extraction

08/15/2022
by Feng Wang, Peifeng Li, et al.

Extracting spatial relations from texts is a fundamental task for natural language understanding, and previous studies only regard it as a classification task, ignoring those spatial relations with null roles due to their lack of information. To address this issue, we first view spatial relation extraction as a generation task and propose a novel hybrid model, HMCGR, for this task. HMCGR contains a generation model and a classification model, where the former can generate those null-role relations and the latter can extract those non-null-role relations, so that they complement each other. Moreover, a reflexivity evaluation mechanism is applied to further improve the accuracy based on the reflexivity principle of spatial relations. Experimental results on SpaceEval show that HMCGR outperforms the SOTA baselines significantly.

1 Introduction

Spatial relation extraction focuses on identifying the relationship between two geographical entities in natural language texts. Currently, only a few studies have focused on this task in the NLP community, while most studies have addressed other relation extraction tasks, such as temporal and causal relation extraction. However, spatial information is critical for natural language understanding and can benefit downstream NLP applications, such as spatial domain query Zhang et al. (2020), spatial reference Yang et al. (2020) and data forecasting Song et al. (2020).

Figure 1: An example in SpaceEval with null-role and non-null-role.

Various kinds of schemes have been proposed to represent spatial relations. As one of the SemEval evaluation tasks, SpaceEval Pustejovsky et al. (2015) proposes an annotation scheme adapted from ISO-Space Pustejovsky et al. (2011), and its goals include identifying and classifying items from an inventory of spatial concepts, such as topological relations, orientational relations, and motion. Commonly, this task needs to extract the spatial elements and classify static and dynamic spatial relations into three types: the move link (MOVELINK), the qualitative spatial link (QSLINK), and the orientation link (OLINK). MOVELINK connects motion-events with corresponding mover-participants as a triplet of three roles (mover, goal, trigger), while QSLINK and OLINK refer to the topological relation and non-topological relation between two spatial elements, respectively, and are formalized as a triplet of three roles (trajector, landmark, trigger).

Following previous work, we also simplify the whole task as shown in Figure 1 and focus only on extracting QSLINK/OLINK/MOVELINK from texts. Thus, a spatial relation is defined as a triplet of three roles with one of the three spatial types. Spatial relations can be divided into two classes: null-role and non-null-role relations. The former refers to a relation containing null-value roles, such as the two MOVELINKs and the OLINK in Figure 1, while the latter (e.g., the QSLINK in Figure 1) refers to a relation whose three roles are all filled with values extracted from the sentence.

Almost all previous studies regarded spatial relation extraction as a classification task using traditional machine learning Nichols and Botros (2015); D'Souza and Ng (2015) or neural network methods Ramrakhiyani et al. (2019); Shin et al. (2020). Those classification models work well on extracting non-null-role relations due to their rich information. However, they often perform poorly on null-role relations, because some role information is missing in these relations. Moreover, they cannot benefit from the knowledge of the spatial schema, such as the roles and their relations.

In the annotation stage, annotators usually not only annotate relations and relation types, but also implicitly provide a description or basis for their annotation. Therefore, we want the model to mimic a human annotator and produce a target sentence, rather than a simple label index, to understand the spatial relation more deeply. The target sentence in generation models can describe the relation between all spatial elements and allows null slots (i.e., roles) to exist. Thus, the generation model not only can learn the semantics of the spatial relations more explicitly through this form of learning goal, but also can generate those null-role relations.

Moreover, the classification model and the generation model have complementary advantages. The former usually performs better on non-null-role relations, while the latter can introduce prior knowledge to better capture the semantics of null-role relations, and its results are expressed in natural language with stronger interpretability Jiang et al. (2021). Therefore, we combine the advantages of the classification and generation models to further capture different knowledge.

In this paper, we propose a novel hybrid model HMCGR (Hybrid Model of Classification, Generation and Reflexivity) for spatial relation extraction, which contains a generation model and a classification model. Specifically, the former can generate those null-role relations and the latter can extract those non-null-role relations, so that they complement each other. Moreover, a reflexivity evaluation mechanism is applied to further improve the accuracy based on the reflexivity principle of spatial relations. Experimental results on the SpaceEval dataset show that our HMCGR outperforms the SOTA baselines significantly.

2 Related Work

Various kinds of schemes have been proposed to represent spatial relations. SpatialML Mani et al. (2010) characterized directional and topological relations among locations in terms of a region calculus. The SpRL task Kordjamshidi et al. (2011) developed a semantic role labeling scheme focusing on the main roles in spatial relations. Spatial relation extraction was introduced as a subtask at SemEval 2012 Kordjamshidi et al. (2012), SemEval 2013 Kolomiyets et al. (2013) and SemEval 2015 Pustejovsky et al. (2015). As Task 8 of SemEval 2015, SpaceEval proposed an annotation scheme adapted from ISO-Space and enriched SpRL's semantics by refining the granularity. Most previous studies were evaluated on this dataset.

Approaches to spatial relation extraction can be divided into traditional machine learning and neural network methods. The former rely heavily on manual features or explicit syntactic structures. Nichols and Botros (2015) used a CRF layer to extract spatial elements, and then introduced an SVM to classify spatial relations. D'Souza and Ng (2015) proposed a Sieve-based model in which various kinds of manual features are generated by a greedy feature selection technique.

Salaberri et al. (2015) introduced external knowledge as a supplement to spatial information, in which WordNet and PropBank provided information on many spatial elements. Kim and Lee (2016) proposed a Korean spatial relation extraction model using dependency relations to find the proper elements to fulfill roles.

With the wide application of neural networks, Ramrakhiyani et al. (2019) generated candidate relations by dependency parsing and classified the candidates with a BiLSTM model. Shin et al. (2020) first used BERT-CRF to extract the spatial roles and then introduced R-BERT Wu and He (2019) to extract the spatial relations. Besides, a few studies have focused on multi-modal spatial relation extraction. For example, Dan et al. (2020) proposed a spatial BERT that determines spatial relations given two spatial entities and a picture.

Figure 2: Overall structure of our HMCGR.

3 HMCGR

Figure 2 shows the overall architecture of our model HMCGR. As a whole, HMCGR can be divided into four modules, i.e., candidate triplet extraction (CTE), spatial relation classification (CLS), spatial relation generation (GEN), and reflexivity evaluation (RFX).

The module CTE is first used to extract spatial elements and spatial roles from a raw sentence and obtain candidate triplets with a BERT-CRF model. The candidate triplets and the raw sentence are then fed to the module CLS, which uses a BERT encoder and a T5 encoder to encode the sentence, respectively, and applies a GCN (Graph Convolutional Network) layer to capture the sentence structure. Simultaneously, the module GEN uses a T5 decoder to generate a target sentence following a specific template, and the module RFX uses the cosine function to calculate the similarity between the original sentence and its inverted sentence to further improve the accuracy.

3.1 CTE: Candidate Triplet Extraction

Since a spatial relation is represented as a triplet with its relation type MOVELINK, OLINK or QSLINK, the first step of HMCGR is to extract as many candidate triplets as possible from raw texts. Similar to Shin et al. (2020), we also use a BERT+CRF model for spatial role extraction, as shown in Figure 3. Spatial role extraction forms the candidate triplets: it extracts the spatial elements from texts and then assigns a role to each extracted element.

Formally, the input is a token sequence $X = \{x_1, \dots, x_n\}$, where $x_i$ is the $i$-th token in a sentence $S$. We feed $X$ with the label [CLS] to BERT to obtain a new embedding $H = \{h_1, \dots, h_n\}$, and each token will be assigned a role from the pre-defined spatial role set.

In Figure 3, there are two CRF layers with the input embedding $H$, i.e., the Spatial Element CRF (SE-CRF) and the Spatial Role CRF (SR-CRF). We use SE-CRF to obtain the spatial element set $E = \{e_1, \dots, e_m\}$ in $S$, where $e_i$ is a spatial element, and use SR-CRF to obtain the role set $R_E = \{r_1, \dots, r_m\}$ for all elements, where $r_i$ is the spatial role of the element $e_i$.

We simply apply a multi-task framework to train these two CRFs, and they share the same BERT encoder layer. Taking the sentence in Figure 1 as an example, we can extract six spatial elements "children", "school", "in", "who", "at" and "recess", whose roles are Spatial Entity, Place, Spatial Signal, Spatial Entity, Spatial Signal and Place, respectively.
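Below is a minimal sketch of this shared-encoder multi-task setup, assuming the HuggingFace transformers BertModel and the pytorch-crf package; the tag inventories, head names and hyper-parameters are illustrative rather than the authors' exact configuration.

```python
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf

class SpatialRoleExtractor(nn.Module):
    """Shared BERT encoder with two CRF heads: spatial elements (SE) and spatial roles (SR)."""
    def __init__(self, num_se_tags, num_sr_tags, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.se_proj = nn.Linear(hidden, num_se_tags)   # emission scores for SE-CRF
        self.sr_proj = nn.Linear(hidden, num_sr_tags)   # emission scores for SR-CRF
        self.se_crf = CRF(num_se_tags, batch_first=True)
        self.sr_crf = CRF(num_sr_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, se_tags=None, sr_tags=None):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        se_emissions, sr_emissions = self.se_proj(h), self.sr_proj(h)
        mask = attention_mask.bool()
        if se_tags is not None and sr_tags is not None:
            # Multi-task training: sum of the two negative log-likelihoods.
            return -(self.se_crf(se_emissions, se_tags, mask=mask, reduction="mean")
                     + self.sr_crf(sr_emissions, sr_tags, mask=mask, reduction="mean"))
        # Decoding: Viterbi paths for elements and roles.
        return (self.se_crf.decode(se_emissions, mask=mask),
                self.sr_crf.decode(sr_emissions, mask=mask))
```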

Since CTE is the first stage of HMCGR, we tend to generate all possible spatial role triplets for the subsequent CLS module to achieve high recall. Hence, we first split the element set $E$ into three subsets according to their roles: 1) $E_1$, the Spatial Entity elements, 2) $E_2$, the Place elements, and 3) $E_3$, the Spatial Signal elements. Taking the above elements as an example, "children" and "who" belong to $E_1$, while "school" and "recess" belong to $E_2$ and the others belong to $E_3$.

Figure 3: Overview of candidate triplet extraction.

Finally, we enumerate possible triplets as candidates following the spatial relation definition. Commonly, some triplets may have roles with null values, as shown in Figure 1, because the corresponding element is not mentioned in the sentence. If we enumerated all possible triplets including null roles as candidates, this would introduce a large number of negative triplets into the candidate set and badly harm the precision. For example, there are 27 ($3\times3\times3$, allowing a null value in each slot) candidate triplets for the sentence in Figure 1, while only 4 are annotated triplets. Hence, we do not generate triplets with null-value roles in the module CTE, and the extracted candidate triplet set of the example in Figure 1 is as follows: (who, at, recess), (who, in, school), (who, at, school), (who, in, recess), (children, in, school), (children, at, school), (children, in, recess) and (children, at, recess).
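For illustration, a short sketch of the non-null enumeration on the Figure 1 elements (the variable names are ours):

```python
from itertools import product

# Elements grouped by role, from the Figure 1 example.
entities = ["children", "who"]      # Spatial Entity elements
signals  = ["in", "at"]             # Spatial Signal elements (triggers)
places   = ["school", "recess"]     # Place elements

# Only non-null combinations are kept: 2 x 2 x 2 = 8 candidates,
# instead of 3 x 3 x 3 = 27 when a null value is allowed in every slot.
candidates = [(e, s, p) for e, s, p in product(entities, signals, places)]
print(len(candidates))  # 8
```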

3.2 CLS: Spatial Relation Classification

Following previous work, CLS classifies the candidate triplets into four types, i.e., MOVELINK, OLINK, QSLINK, and null. If a triplet belongs to the type null, it is a pseudo spatial relation. We introduce the null type to CLS because CTE extracts many pseudo triplets, which would otherwise harm the precision.

3.2.1 Encoding

First, we simply use BERT and T5 to encode the sequence of the sentence $S$ (we add [CLS] to the start of the sequence to obtain the sentence representation of BERT) and obtain the embeddings $H_B$ and $H_T$, respectively. To make better use of the advantages of the two pre-trained models, we use cross attention to fuse the hidden states as follows. In this way, we obtain the new embeddings $\tilde{H}_B$ and $\tilde{H}_T$, where the latter is used in the RFX module.

(1)
(2)
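The exact formulation of Eqs. (1)-(2) is not recoverable from this copy; below is a minimal sketch of one standard scaled dot-product cross-attention between the two encodings, offered as an assumption about the fusion step rather than the paper's exact equations.

```python
import torch
import torch.nn.functional as F

def cross_attend(queries, keys_values):
    """One plausible cross-attention: tokens from one encoder attend over the other."""
    d = queries.size(-1)
    scores = torch.matmul(queries, keys_values.transpose(-1, -2)) / d ** 0.5
    return torch.matmul(F.softmax(scores, dim=-1), keys_values)

# h_bert, h_t5: (batch, seq_len, hidden) outputs of the BERT and T5 encoders.
h_bert = torch.randn(2, 16, 768)
h_t5 = torch.randn(2, 16, 768)
h_bert_fused = cross_attend(h_bert, h_t5)   # BERT side enriched with T5 context
h_t5_fused = cross_attend(h_t5, h_bert)     # T5 side, later reused by the RFX module
```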

Second, we incorporate the candidate triplets extracted by CTE into the above embedding to enhance the representation of spatial elements. Specifically, we introduce the SelfAttentiveSpanExtractor in AllenNLP to obtain the latent representations of the three spatial roles as follows.

(3)

where $s$ and $e$ represent the start and end positions of a spatial element, respectively, and the span-attention parameters are learnable. Besides, since BERT may split a word into multiple word-pieces, we also use SelfAttentiveSpanExtractor to obtain word-level representations.
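For reference, a small usage example of AllenNLP's SelfAttentiveSpanExtractor; the span indices below are arbitrary placeholders rather than real role positions.

```python
import torch
from allennlp.modules.span_extractors import SelfAttentiveSpanExtractor

extractor = SelfAttentiveSpanExtractor(input_dim=768)

# sequence: (batch, seq_len, hidden) token embeddings from the encoder.
sequence = torch.randn(1, 16, 768)
# spans: (batch, num_spans, 2) inclusive (start, end) positions of the three roles.
spans = torch.tensor([[[1, 1], [4, 4], [6, 7]]])

role_reprs = extractor(sequence, spans)  # (1, 3, 768) span representations
```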

3.2.2 Spatial GCN

Most previous work ignored the function of demonstrative pronouns in spatial relation extraction. However, those pronouns can participate in various spatial relations. Inspired by Phu and Nguyen (2021) in causal relation extraction, to capture the relationship between sentences and spatial roles, and to make better use of sentence structure and anaphora, we introduce a spatial graph to CLS, whose node set consists of the tokens and the spatial elements defined in Subsection 3.1. We initialize four adjacency matrices ($A_1$, $A_2$, $A_3$, $A_4$) to represent four edge types in our graph as follows.

Sentence Boundary Edge: Intuitively, relevant contextual information between the spatial elements within a sentence is helpful for this task. Hence, we create an undirected edge between two nodes if they are in the same sentence. Formally, we set $A_1[i,j]=1$ if the nodes $v_i$ and $v_j$ ($i \neq j$) are in the same sentence; otherwise, 0.

Spatial Element Edge: The interactions between the spatial elements and their constituent tokens may carry useful information. Therefore, we create a spatial element edge between a spatial element and each of its tokens. Formally, we set $A_2[i,j]=1$ if the element $v_i$ contains the token $v_j$; otherwise, 0.

Coreference Edge: According to our statistics, about 20% of the spatial relations in SpaceEval involve demonstrative pronouns. Hence, we construct an edge between two nodes if one can refer to the other. Formally, we set $A_3[i,j]=1$ if $v_i$ and $v_j$ are coreferential; otherwise, 0.

Dependency Edge: Following previous work in NLP, we also create an edge if two nodes have the same parent node in the dependency tree. Formally, we set $A_4[i,j]=1$ if $v_i$ and $v_j$ have the same parent node in the dependency tree; otherwise, 0. We utilize SpaCy (https://spacy.io/) to extract the dependency trees and coreference chains.
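As a concrete illustration of this last edge type, the snippet below collects sibling-based dependency edges with spaCy; coreference edges would additionally require a coreference resolver, which is not shown here, and the sentence is only an example.

```python
import spacy
from collections import defaultdict

nlp = spacy.load("en_core_web_sm")
doc = nlp("The children who are at recess are playing in the school.")

# Dependency edge: connect two tokens if they share the same head in the parse tree.
children_of = defaultdict(list)
for token in doc:
    children_of[token.head.i].append(token.i)

dependency_edges = set()
for siblings in children_of.values():
    for i in siblings:
        for j in siblings:
            if i != j:
                dependency_edges.add((i, j))  # A4[i][j] = 1
```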

Since different edge types have different importance, we introduce four learnable weight matrices ($W_1, \dots, W_4$) to merge the four edge types by their weights into one adjacency matrix $A$ as follows.

(4)

Finally, we can easily construct the graph $G$ and apply a GCN for spatial information fusion to obtain its node representations as follows.

(5)
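A minimal sketch of Eqs. (4)-(5) under simplifying assumptions: one learnable scalar weight per edge type (the paper uses weight matrices) and a single row-normalized GCN layer; it illustrates the mechanism rather than the exact formulation.

```python
import torch
import torch.nn as nn

class SpatialGCNLayer(nn.Module):
    """Merge the four typed adjacency matrices with learnable weights, then apply one GCN step."""
    def __init__(self, hidden_dim, num_edge_types=4):
        super().__init__()
        self.edge_weights = nn.Parameter(torch.ones(num_edge_types))
        self.linear = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, node_feats, adjacencies):
        # adjacencies: (num_edge_types, num_nodes, num_nodes) binary matrices A1..A4.
        a = (self.edge_weights.view(-1, 1, 1) * adjacencies).sum(dim=0)
        a = a + torch.eye(a.size(-1), device=a.device)        # add self loops
        deg = a.sum(dim=-1).clamp(min=1e-6)
        a_norm = a / deg.unsqueeze(-1)                        # row-normalized adjacency
        return torch.relu(self.linear(a_norm @ node_feats))   # fused node representations
```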

3.2.3 Classification

By recording the node identifier of each spatial role in the currently processed triplet, we can get the latent representation of the spatial role in $G$. Inspired by the idea of ResNet He et al. (2016), we concatenate the BERT hidden state ($\tilde{H}_B$) and the representations of the GCN nodes as the final features of the spatial roles as follows.

(6)

where $g$ represents the latent representation of the spatial roles in $G$. Finally, a multi-layer perceptron (MLP) is used to classify the spatial relations, and we calculate the cross-entropy loss as follows.

(7)
(8)

where $T$ is the candidate triplet set mentioned in Subsection 3.1 and $y_t$ is the relation type of the triplet $t$.
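A minimal sketch of this classification step, assuming simple concatenation of the encoder and GCN role representations and a two-layer MLP; the feature layout, dimensions and dummy labels are illustrative.

```python
import torch
import torch.nn as nn

NUM_TYPES = 4  # MOVELINK, OLINK, QSLINK, null

classifier = nn.Sequential(
    nn.Linear(2 * 768 * 3, 512),  # three roles, each [encoder ; GCN] feature
    nn.ReLU(),
    nn.Linear(512, NUM_TYPES),
)
criterion = nn.CrossEntropyLoss()

# role_enc, role_gcn: (batch, 3, 768) role representations from the encoder and the GCN.
role_enc = torch.randn(8, 3, 768)
role_gcn = torch.randn(8, 3, 768)
features = torch.cat([role_enc, role_gcn], dim=-1).flatten(1)    # (batch, 3 * 1536)
logits = classifier(features)
loss_cls = criterion(logits, torch.randint(0, NUM_TYPES, (8,)))  # cross-entropy over triplets
```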

3.3 GEN: Spatial Relation Generation

To reduce negative triplets in the CTE module, we only enumerate candidate triplets without null roles. This strategy helps CLS improve its precision; however, it cannot extract null-role relations. Our statistics on the SpaceEval dataset show that 20% of the annotated spatial relations have a null role. Hence, extracting null-role relations remains a challenge. To address this issue, we introduce a spatial relation generation module GEN to extract those null-role relations. As a result, HMCGR contains a classification model and a generation model, and they complement each other to address their respective shortcomings.

We introduce the pre-trained generation model T5 to our GEN, due to its excellent performance on many NLP applications Raffel et al. (2020). There are two possible decoding targets for T5 in our task, i.e., a triplet or a natural sentence. In our experiments, we found that a structured natural sentence is suitable as the target sentence of T5, and it contains the following three parts.

Referential Phrase Prefix: To better use the coreference relation, we add a phrase with referential meaning to the target sentence and place it at the beginning of the target sentence so that GEN can exploit this information.

Relation Name: To obtain the type of the spatial relation, we design a relation-name slot in the target sentence.

Relation Explanation: To decode spatial relations more quickly and conveniently, we design a structured sentence with spatial role slots as our target sentence.

Specifically, the form of the target sentence is as follows: “The token “[pronoun]” stands for “[entity]”, and [relation type] can be describe as following : the first element is [role1] , the trigger is [role2] , and the second element is [role3] .” Taking the candidate triplet (who, at, recess) as an example, we generate the following target statement for T5: “The token “who” stands for “children”, and qslink can be describe as following : the first element is who , the trigger is at , and the second element is recess .”.

We feed the sentence representation into the T5 decoder and obtain a target sentence following the above format, which can be translated back into a triplet with its relation type. It is worth noting that any of the three roles may be null, so we can obtain those null-role relations. Finally, T5 generates a token or phrase for each output position using softmax, and the cross-entropy loss is defined as follows.

(9)
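To make the decoding side concrete, below is a hypothetical helper that fills the target-sentence template for a triplet and parses a generated sentence back into (relation type, element, trigger, element); the helper names are ours, and the surface form follows the example template above.

```python
import re

TEMPLATE = ('The token "{pronoun}" stands for "{entity}", and {rel} can be describe as '
            'following : the first element is {r1} , the trigger is {r2} , '
            'and the second element is {r3} .')

def build_target(pronoun, entity, rel, r1, r2, r3):
    """Fill the structured target sentence used as T5's training target."""
    return TEMPLATE.format(pronoun=pronoun, entity=entity, rel=rel, r1=r1, r2=r2, r3=r3)

PATTERN = re.compile(r'and (\w+) can be describe as following : the first element is (.+?) , '
                     r'the trigger is (.+?) , and the second element is (.+?) \.')

def parse_target(sentence):
    """Translate a generated sentence back into (relation type, role1, role2, role3)."""
    m = PATTERN.search(sentence)
    if m is None:
        return None
    # Any slot generated as "null" yields a null-role relation.
    return tuple(g.strip() for g in m.groups())

target = build_target("who", "children", "qslink", "who", "at", "recess")
print(parse_target(target))  # ('qslink', 'who', 'at', 'recess')
```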

3.4 RFX: Reflexivity Evaluation

Our CLS and GEN can extract spatial relations from different perspectives and complement each other effectively. However, the performance of GEN is still lower than that of CLS, because it suffers from the limited training data and the high ratio of negative to positive instances in this task.

Most spatial relations are reflexive by nature. For example, "A in B" is equivalent to "B out of A". Based on this reflexivity of spatial relations, we design a similarity-based reflexivity evaluation mechanism to help GEN improve its performance. RFX first creates an inverted sentence from the original sentence and a candidate triplet, and then uses the cosine function to calculate the similarity of their embeddings. If the two sentences are similar, the candidate triplet will be regarded as a spatial relation with high probability.

For a sentence $S$ and a candidate triplet $t$, we first exchange the positions of the two participant roles of $t$ in $S$, and then replace the trigger with its antonym from an antonym dictionary. If the trigger has more than one antonym, we randomly select one. The original sentence and the inverted sentence are fed to a T5 encoder to obtain their embeddings using cross attention. Average pooling is applied to the two embeddings to capture their global features as follows.

(10)
(11)

Finally, we define the spatial semantic loss using cosine similarity as follows.

(12)
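A minimal sketch of the reflexivity check, assuming mean pooling over the T5 encodings and a loss that rewards high cosine similarity between the original and inverted sentences; the antonym dictionary and the exact loss form in Eqs. (10)-(12) are assumptions.

```python
import torch
import torch.nn.functional as F

ANTONYMS = {"in": "out of", "above": "below", "left of": "right of"}  # illustrative dictionary

def invert_sentence(sentence, trajector, trigger, landmark):
    """Swap the two participants and replace the trigger with an antonym, if one exists."""
    if trigger not in ANTONYMS:
        return None  # e.g., motion triggers like "run" or "biking" cannot be inverted
    swapped = sentence.replace(trajector, "%TMP%").replace(landmark, trajector)
    return swapped.replace("%TMP%", landmark).replace(trigger, ANTONYMS[trigger])

def reflexivity_loss(h_original, h_inverted):
    # h_*: (batch, seq_len, hidden) T5 encodings; average-pool to global sentence features.
    v1 = h_original.mean(dim=1)
    v2 = h_inverted.mean(dim=1)
    return (1.0 - F.cosine_similarity(v1, v2, dim=-1)).mean()

print(invert_sentence("The children are in the school.", "children", "in", "school"))
# "The school are out of the children."  (a rough inversion used only for similarity scoring)
```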

3.5 Joint Training and Decoding

In the training step, we train the classification model CLS and the generation model GEN together. To sum up, the overall loss of our model HMCGR consists of three parts as follows.

(13)

Finally, the spatial relations are extracted by two models, i.e., CLS and GEN. The final spatial relation set is the union of their results. Besides, the module RFX is an effective auxiliary task to help GEN improve its performance.
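A toy sketch of the joint objective and the decoding union described above, assuming an unweighted sum of the three losses in Eq. (13); any loss weights, as well as the relation tuples below, are illustrative.

```python
import torch

# Joint training: the three module losses are combined and optimized together.
# (Dummy scalar losses stand in for loss_cls, loss_gen and loss_rfx computed earlier.)
loss_cls = torch.tensor(0.7, requires_grad=True)
loss_gen = torch.tensor(1.2, requires_grad=True)
loss_rfx = torch.tensor(0.1, requires_grad=True)
loss_total = loss_cls + loss_gen + loss_rfx   # assuming an unweighted sum
loss_total.backward()

# Decoding: the final relation set is the union of the CLS and GEN predictions.
relations_cls = {("QSLINK", "children", "in", "school")}   # from the classifier
relations_gen = {("MOVELINK", "who", None, "recess")}      # may contain null roles
final_relations = relations_cls | relations_gen
```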

Tool/Parameter Version/Value
Pytorch 1.7.0+cu110
Spacy 2.1.0
Allennlp 2.6.0
dgl-cu110 0.6.1
Learning rate 2e-5
Batch size 4
Random seed 1024
Hidden size of pre-training model 768
Optimizer AdamW
Table 1: Key parameters and tools used in our model.
Model P R F1
BERT+CRF 88.1 91.2 89.1
Table 2: The results of spatial role extraction.
Model QSLINK OLINK MOVELINK Overall
P R F1 P R F1 P R F1 P R F1
Sieve-Based 12.9 28.3 17.8 100 31.2 47.5 24.5 56.2 34.2 45.8 38.5 41.8
WordNet - - - - - - - - - 54.0 51.0 53.0
SpRL-CWW 66.1 53.8 59.4 69.1 51.7 59.1 57.1 45.1 50.4 63.6 50.1 56.1
BERT-base 45.1 58.3 50.5 71.0 69.6 70.2 62.7 61.5 62.1 62.7 59.8 61.2
HMCGR 53.5 73.1 61.1 73.1 85.2 78.6 66.8 83.0 73.9 64.3 79.2 70.9
Table 3: Performance comparison between the baselines and HMCGR on spatial relation extraction. Since BERT-base did not report the results on each category, we ran their model to obtain those results (underlined).

4 Experimentation

4.1 Experimental Settings

We evaluate our model on the latest dataset SpaceEval. According to the official statistics, there are 1110 QSLINKs, 974 MOVELINKs and 287 OLINKs. We use the standard training/development/test split following previous work Shin et al. (2020), where the ratio of the training set to the test set is 8:2. For evaluation, we report Precision (P), Recall (R), and Micro-F1 score. We use PyTorch and Huggingface as our base tools and use the base versions of BERT and T5. The specific tool versions and key hyper-parameters are shown in Table 1.

Currently, only a few works have focused on spatial relation extraction. To evaluate the effectiveness of our HMCGR, we compare it with the following strong baselines: 1) Sieve-Based D'Souza and Ng (2015), which used a sieve mechanism and syntactic parse trees to enhance the features of spatial relations; 2) WordNet Salaberri et al. (2015), which used WordNet as external knowledge to assist the task; 3) SpRL-CWW Nichols and Botros (2015), the SOTA traditional model, which uses SVM and CRF classifiers on GloVe features to extract spatial relations; 4) BERT-base Shin et al. (2020), the SOTA neural model, which uses a BERT-based network for spatial element extraction and spatial relation extraction.

4.2 Experimental Results

The results of spatial role extraction on SpaceEval are shown in Table 2, and the performance is similar to that of Shin et al. (2020). In the CTE stage, we obtain 3096 candidate triplets, of which 1355 are positive and 1741 are negative. These figures show that the negative instances outnumber the positive ones. If we also used null values to construct candidate triplets, the much larger number of negative instances would harm the performance critically.

Table 3 shows the overall performance of the baselines and our HMCGR on SpaceEval. Compared with the SOTA baseline BERT-base, our HMCGR significantly improves the overall F1-score by 9.7, especially the recall, with a gain of 19.4. This result verifies the effectiveness of HMCGR and indicates that our generation model GEN and our classification model CLS can promote each other. Moreover, the improvement comes from all three links (QSLINK, OLINK, and MOVELINK), with gains of 10.6, 8.4, and 11.8, respectively, which shows that HMCGR works well on all links. It is worth noting that the improvement mainly comes from the recall, indicating that the generation model is helpful for recovering those null-role relations.

Model P R F1
BERT-base 44.5 31.7 37.0
HMCGR 46.7 40.0 43.0
Table 4: The results of spatial relation extraction on null-role relations.

5 Analysis

5.1 Analysis on Null-role Relations

To further verify the effectiveness of our GEN, we evaluate the models on the null-role relations only, and Table 4 shows the performance of BERT-base and HMCGR. Compared with BERT-base, HMCGR improves the F1-score by 6.0, with a particularly significant gain on recall (+8.3). This result verifies our motivation that the generation model GEN is effective for extracting those null-role relations. However, only 40.0% of the null-role relations in the test set are extracted by GEN, which indicates that null-role relation extraction still has much room for improvement.

Model P R F1
HMCGR 64.3 79.2 70.9
GEN 60.4 53.1 56.5
CLS 62.0 65.5 63.7
GEN+CLS 64.1 75.2 69.2
GEN+RFX 62.2 55.1 58.8
CLS+RFX 62.0 62.5 62.2
Table 5: Ablation study on different modules.

5.2 Ablation Study on Different Modules

We conduct the ablation experiments to verify the effectiveness of the modules used in HMCGR, and Table 5 shows the results of the simplified models.

Compared with the hybrid HMCGR, the performance drops of single GEN and single CLS are very large. This result indicates that a single classification or generation model may not be able to extract null-role and non-null-role relations simultaneously. Moreover, the performance of GEN is lower than that of CLS; the reason is that the number of non-null-role relations is twice that of null-role relations. Besides, CLS works better than BERT-base, which verifies the success of our classification model. However, the performance of GEN is lower than that of BERT-base, which indicates that applying a generation model to this traditional classification task remains a challenge.

The combination model GEN+CLS outperforms GEN and CLS, with large gains of 12.7 and 5.5, respectively. This indicates that GEN and CLS can boost each other to improve the F1-score, especially the recall. In the SpaceEval dataset, 32.3% of the spatial relations are null-role ones, in which 65.3% of the null-role relations are missing one particular role and the others are missing another. Our GEN can recover almost 40% of the null-role relations, which indicates that the generation model prefers to extract those null-role relations. Moreover, the decisions from two different models can further improve the performance from different perspectives.

Compared with GEN, GEN+RFX improves the F1-score by 2.3, with gains in both precision and recall. This indicates that our reflexivity evaluation mechanism RFX can not only help the generation model extract more spatial relations, but also filter out pseudo relations. However, the F1-score of CLS+RFX is lower than that of CLS, especially on recall. Among the three relation types, only the F1-score of MOVELINK decreases, from 62.5 to 61.0. The reason is that some triggers in MOVELINKs do not have an antonym (e.g., "run" and "biking") and some sentences cannot be inverted. Besides, compared with GEN+CLS, HMCGR improves the F1-score by 1.7, with a gain of 4.0 on the recall. This verifies that RFX is helpful for discovering more relations in a hybrid model.

Model P R F1
HMCGR 64.3 79.2 70.9
w/o GCN 63.3 74.7 68.1
w/o CrossAtt 62.1 74.2 67.6
Table 6: Results of HMCGR and its simplified version on SpaceEval.

5.3 Analysis on CLS

To verify the contributions of the components in CLS, we construct the following two simplified versions of HMCGR: 1) w/o GCN: the GCN layer is removed from HMCGR; 2) w/o CrossAtt: the cross attention is removed, i.e., we only use BERT to encode sentences.

Table 6 shows the results of HMCGR and its simplified versions. If we remove the GCN layer or the cross attention, the F1-score decreases by 2.9 or 3.3, respectively. This result indicates that T5 helps BERT represent the sentence from a different perspective. As for the GCN layer, we find that the coreference edge is the main contributor: more than 90% of the improvement comes from this edge type.

5.4 Error Analysis

The errors of our HMCGR mainly come from CTE, GEN, and entity coreference. From Table 2, we can see that 8.8% of the spatial relations are missed and 11.9% pseudo relations are introduced to the following modules by CTE.

Our statistics on the results show that GEN often mispredicts null-role relations when a non-null-role relation and a null-role relation appear in the same sentence. Since T5 is a sequential generation model, the generation of the next spatial relation is affected by the previously predicted relation. That is, if the previous relation is a non-null-role one, the current relation also tends to be non-null-role. Take Table 7 as an example: there are two MOVELINKs in the sentence. After HMCGR has extracted the first relation MOVELINK(cattle, to, fields), it tends to predict the next one as MOVELINK(men, to, fields) instead of MOVELINK(men, null, fields).

Although the coreference edge is the most effective one in the graph, many errors derive from it due to the low accuracy of coreference resolution.

Sentence: There were already old men taking cattle out to the fields to graze.
Gold MOVELINKs: (cattle, to, fields), (men, null, fields)
Predicted MOVELINKs: (cattle, to, fields), (men, to, fields)
Table 7: Examples of the errors in GEN.

6 Conclusion

In this paper, we propose a novel hybrid model HMCGR for spatial relation extraction. The generation model GEN can generate those null-role relations, while the classification model CLS can extract those non-null-role relations, so that they complement each other. Moreover, a reflexivity evaluation mechanism is applied to further improve the accuracy based on the reflexivity of spatial relations. Experimental results on the SpaceEval dataset show that our HMCGR outperforms the SOTA baselines significantly. Our future work will focus on how to extract those null-role relations more effectively.

References

  • C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR 21 (140), pp. 1–67. External Links: Link Cited by: §3.3.
  • J. D’Souza and V. Ng (2015) Sieve-Based Spatial Relation Extraction with Expanding Parse Trees. In EMNLP, L. Màrquez, C. Callison-Burch, J. Su, D. Pighin, and Y. Marton (Eds.), pp. 758–768. External Links: Link, Document Cited by: §1, §2, §4.1.
  • S. Dan, H. He, and D. Roth (2020) Understanding Spatial Relations through Multiple Modalities. CoRR abs/2007.09551. External Links: Link, 2007.09551 Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE CVPR, pp. 770–778. External Links: Link Cited by: §3.2.3.
  • F. Jiang, Y. Fan, X. Chu, P. Li, and Q. Zhu (2021) Not Just Classification: Recognizing Implicit Discourse Relation on Joint Modeling of Classification and Generation. In EMNLP, pp. 2418–2431. External Links: Link Cited by: §1.
  • B. Kim and J. S. Lee (2016) Extracting Spatial Entities and Relations in Korean Text. In COLING, pp. 2389–2396. External Links: Link Cited by: §2.
  • O. Kolomiyets, P. Kordjamshidi, M. F. Moens, and S. Bethard (2013) Semeval-2013 task 3: Spatial role labeling. In SemEval, pp. 255–262. External Links: Link Cited by: §2.
  • P. Kordjamshidi, S. Bethard, and M. Moens (2012) SemEval-2012 task 3: Spatial role labeling. In SemEval, Vol. , pp. 365–373. External Links: Link Cited by: §2.
  • P. Kordjamshidi, M. Van Otterlo, and M. F. Moens (2011) Spatial role labeling: Towards extraction of spatial relations from natural language. ACM Transactions on Speech and Language Processing 8 (3), pp. Article No. 4:1–36. External Links: Link Cited by: §2.
  • I. Mani, C. Doran, D. Harris, J. Hitzeman, R. Quimby, J. Richer, B. Wellner, S. Mardis, and S. Clancy (2010) SpatialML: annotation scheme, resources, and evaluation. Language Resources and Evaluation 44 (3), pp. 263–280. External Links: Link Cited by: §2.
  • E. Nichols and F. Botros (2015) SpRL-CWW: Spatial Relation Classification with Independent Multi-class Models. In SemEval, D. M. Cer, D. Jurgens, P. Nakov, and T. Zesch (Eds.), pp. 895–901. External Links: Link, Document Cited by: §1, §2, §4.1.
  • M. T. Phu and T. H. Nguyen (2021) Graph Convolutional Networks for Event Causality Identification with Rich Document-level Structures. In NAACL-HLT, K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tür, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, and Y. Zhou (Eds.), pp. 3480–3490. External Links: Link, Document Cited by: §3.2.2.
  • J. Pustejovsky, P. Kordjamshidi, M. Moens, A. Levine, S. Dworman, and Z. Yocum (2015) SemEval-2015 Task 8: SpaceEval. In SemEval, D. M. Cer, D. Jurgens, P. Nakov, and T. Zesch (Eds.), pp. 884–894. External Links: Link, Document Cited by: §1, §2.
  • J. Pustejovsky, J. L. Moszkowicz, and M. Verhagen (2011) Using ISO-Space for annotating spatial information. In Proceedings of the International Conference on Spatial Information Theory, External Links: Link Cited by: §1.
  • N. Ramrakhiyani, G. K. Palshikar, and V. Varma (2019) A Simple Neural Approach to Spatial Role Labelling. In ECIR, L. Azzopardi, B. Stein, N. Fuhr, P. Mayr, C. Hauff, and D. Hiemstra (Eds.), , Vol. , pp. 102–108. External Links: Link, Document Cited by: §1, §2.
  • H. Salaberri, O. Arregi, and B. Zapirain (2015) IXAGroupEHUSpaceEval: (X-Space) A WordNet-based approach towards the Automatic Recognition of Spatial Information following the ISO-Space Annotation Scheme. In SemEval, D. M. Cer, D. Jurgens, P. Nakov, and T. Zesch (Eds.), pp. 856–861. External Links: Link, Document Cited by: §2, §4.1.
  • H. J. Shin, J. Y. Park, D. B. Yuk, and J. S. Lee (2020) BERT-based Spatial Information Extraction. In SPLU, pp. 10–17. External Links: Link, Document Cited by: §1, §2, §3.1, §4.1, §4.1, §4.2.
  • C. Song, Y. Lin, S. Guo, and H. Wan (2020) Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In AAAI, Vol. , pp. 914–921. External Links: Link Cited by: §1.
  • S. Wu and Y. He (2019) Enriching pre-trained language model with entity information for relation classification. In CIKM, pp. 2361–2364. External Links: Link Cited by: §2.
  • T. Yang, A. S. Lan, and K. Narasimhan (2020) Robust and Interpretable Grounding of Spatial References with Relation Networks. arXiv preprint arXiv:2005.00696. External Links: Link Cited by: §1.
  • L. Zhang, R. Wang, J. Zhou, J. Yu, Z. Ling, and H. Xiong (2020) Joint Intent Detection and Entity Linking on Spatial Domain Queries. In Findings of EMNLP 2020, pp. 4937–4947. External Links: Link Cited by: §1.