Modeling Graph Structure in Transformer for Better AMR-to-Text Generation

08/31/2019 · Junhui Li, et al.

Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence. Graph structures are further modeled within the seq2seq framework in order to utilize the structural information in the AMR graphs. However, previous approaches only consider the relations between directly connected concepts while ignoring the rich structure in AMR graphs. In this paper we remove this strong limitation and propose a novel structure-aware self-attention approach to better model the relations between indirectly connected concepts in the state-of-the-art seq2seq model, i.e., the Transformer. In particular, a few different methods are explored to learn structural representations between two concepts. Experimental results on English AMR benchmark datasets show that our approach significantly outperforms the state of the art with 29.66 and 31.82 BLEU on LDC2015E86 and LDC2017T10, respectively. To the best of our knowledge, these are the best results achieved so far by supervised models on these benchmarks.


1 Introduction

AMR-to-text generation is the task of automatically generating a natural language sentence from an Abstract Meaning Representation (AMR) graph. As a widely adopted semantic formalism for representing the meaning of a sentence banarescu_etal:13, AMR has become popular in semantic representation, and AMR-to-text generation has been drawing more and more attention in the last decade. As the example in Figure 1(a) shows, nodes, such as he and convict-01, represent semantic concepts, and edges, such as “:ARG1” and “:quant”, refer to semantic relations between the concepts. Since two concepts close to each other in an AMR graph may map to two segments that are distant in the corresponding sentence, AMR-to-text generation is challenging. For example, in Figure 1, the neighboring concepts he and convict-01 correspond to the words he and convicted, which are located at opposite ends of the sentence.

To address the above-mentioned challenge, recent studies on AMR-to-text generation regard the task as a sequence-to-sequence (seq2seq) learning problem by properly linearizing an AMR graph into a sequence konstas_etal_acl:17. Such an input representation, however, is apt to lose useful structural information because reentrant structures are removed during linearization. To better model graph structures, previous studies propose various graph-based seq2seq models that incorporate graphs as an additional input representation song_etal_acl:18; beck_etal_acl:18; damonte_etal_naacl:19. Although such graph-to-sequence models achieve state-of-the-art results, they focus on modeling one-hop relations only. That is, they only model concept pairs connected directly by an edge song_etal_acl:18; beck_etal_acl:18, and as a result ignore explicit structural information of indirectly connected concepts in AMR graphs, e.g., the relation between the concepts he and possible in Figure 1.

To make better use of structural information in an AMR graph, we attempt to model arbitrary concept pairs, no matter whether they are directly connected or not. To this end, we extend the encoder in the state-of-the-art seq2seq model, i.e., the Transformer vaswani_etal_nips:17, and propose a structure-aware self-attention encoding approach. In particular, several distinct methods are proposed to learn structure representations for the new self-attention mechanism. Empirical studies on two English benchmarks show that our approach significantly advances the state of the art in AMR-to-text generation, with improvements of 4.16 BLEU on LDC2015E86 and 4.39 BLEU on LDC2017T10 over the strong baseline. Overall, this paper makes the following contributions.

  • To the best of our knowledge, this is the first work that applies the Transformer to the task of AMR-to-text generation. On the basis of the Transformer, we build a strong baseline that reaches the state of the art.

  • We propose a new self-attention mechanism to incorporate richer structural information in AMR graphs. Experimental results on two benchmarks demonstrate the effectiveness of the proposed approach.

  • Benefiting from the strong baseline and the structure-aware self-attention mechanism, we greatly advance the state of the art in the task.

Figure 1: (a) An example AMR graph for the sentence He could be sentenced to 7 years in prison if convicted. (b) The input to our baseline system, the seq2seq Transformer. (c) The input to our proposed system based on structure-aware self-attention. (d) An example of the graph structure extension to sub-word units.

2 AMR-to-Text Generation with Graph Structure Modeling

We start by describing the implementation of our baseline system, a state-of-the-art seq2seq model originally used for neural machine translation and syntactic parsing vaswani_etal_nips:17. Then we detail the proposed approach to incorporating structural information from AMR graphs.

2.1 Transformer-based Baseline

Transformer: Our baseline system builds on the Transformer, which employs an encoder-decoder framework consisting of stacked encoder and decoder layers. Each encoder layer has two sublayers: a self-attention layer followed by a position-wise feed-forward layer. The self-attention layer employs multiple attention heads, and the results from the attention heads are concatenated and transformed to form the output of the self-attention layer. Each attention head uses scaled dot-product attention, which takes a sequence $x = (x_1, \ldots, x_n)$ of $n$ elements as input and computes a new sequence $z = (z_1, \ldots, z_n)$ of the same length:

$$z = \mathrm{Attention}(x) \quad (1)$$

where $x_i \in \mathbb{R}^{d_x}$ and $z_i \in \mathbb{R}^{d_z}$. Each output element $z_i$ is a weighted sum of a linear transformation of the input elements:

$$z_i = \sum_{j=1}^{n} \alpha_{ij} \left(x_j W^V\right) \quad (2)$$

where $W^V \in \mathbb{R}^{d_x \times d_z}$ is a matrix of parameters. The weight vectors $\alpha_i = (\alpha_{i1}, \ldots, \alpha_{in})$ in Equation 2 are obtained by the self-attention model, which captures the correspondences between $x_i$ and the other elements. Specifically, the attention weight of each element is computed using a softmax function:

$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{n} \exp(e_{ik})} \quad (3)$$

where

$$e_{ij} = \frac{(x_i W^Q)(x_j W^K)^{\mathsf{T}}}{\sqrt{d_z}} \quad (4)$$

is an alignment function which measures how well the input elements $x_i$ and $x_j$ match. $W^Q, W^K \in \mathbb{R}^{d_x \times d_z}$ are parameters to be learned.
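As a concrete reference, the following minimal NumPy sketch implements Equations 1-4 for a single attention head; the function and variable names are ours, not taken from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention (Equations 1-4).
    x: (n, d_x) input sequence; W_q, W_k, W_v: (d_x, d_z) projection matrices.
    Returns z: (n, d_z)."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v        # linear projections of the inputs
    e = q @ k.T / np.sqrt(k.shape[-1])          # alignment scores e_ij (Eq. 4)
    alpha = softmax(e, axis=-1)                 # attention weights alpha_ij (Eq. 3)
    return alpha @ v                            # weighted sum of values (Eq. 2)
```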

Input Representation: We use the depth-first traversal strategy as in Konstas et al. konstas_etal_acl:17 to linearize AMR graphs and to obtain simplified AMRs. We remove variables, wiki links and sense tags before linearization. Figure 1(b) shows an example linearization result for the AMR graph in Figure 1(a). Note that the reentrant concept he in Figure 1 (a) maps to two different tokens in the linearized sequence.
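For illustration, here is a minimal sketch of such a depth-first linearization, assuming a hypothetical nested-tuple representation of the simplified AMR; the exact token format of Figure 1(b) (e.g., whether brackets are kept) may differ from this sketch.

```python
def linearize(node, keep_edges=True):
    """Depth-first linearization of a simplified AMR, assuming the hypothetical
    nested format node = (concept, [(edge_label, child), ...]).
    With keep_edges=True the output resembles the baseline input (Figure 1(b));
    with keep_edges=False only concepts remain, as in Figure 1(c)."""
    concept, children = node
    tokens = [concept]
    for label, child in children:
        if keep_edges:
            tokens.append(label)               # e.g. ":ARG1"
        tokens.extend(linearize(child, keep_edges))
    return tokens
```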

Vocabulary: Training AMR-to-text generation systems solely on labeled data may suffer from data sparseness. To address this problem, previous works adopt techniques like anonymization to remove named entities and rare words konstas_etal_acl:17, or apply a copy mechanism gulcehre_etal_acl:16 such that the models can learn to copy rare words from the input sequence. In this paper we instead use two simple yet effective techniques. One is to apply Byte Pair Encoding (BPE) sennrich_etal_acl:16 to split words into smaller, more frequent sub-word units. The other is to use a shared vocabulary for both the source and target sides. Experiments in Section 3.2 demonstrate the necessity of these techniques in building a strong baseline.

2.2 Modeling Graph Structures in Transformer

Input Representation: We also use the depth-first traversal strategy to linearize AMR graphs and to obtain simplified AMRs, which consist of concepts only. As shown in Figure 1(c), the input sequence is much shorter than the input sequence in the baseline. In addition, we obtain a matrix which records the graph structure between every concept pair and thus implies their semantic relationship (Section 2.3).

Vocabulary: To be compatible with sub-words, we extend the original AMR graph, if necessary, to include the structure of sub-words. For example, as sentence-01 in Figure 1(a) is segmented into sent@@ ence-01, we split the original node into two connected nodes, where the connecting edge takes the label of the incoming edge of the first unit. Figure 1(d) shows the graph structure for the sub-words sent@@ ence-01.
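The sketch below illustrates one way to perform this sub-word extension on an AMR stored as an edge list; attaching incoming edges to the first unit and outgoing edges to the last unit is our assumption for illustration, not a detail given in the paper.

```python
def split_subword_node(edges, node, units, incoming_label):
    """Extend an AMR edge list of (parent, label, child) triples so that `node`
    is replaced by its sub-word units, e.g. "sentence-01" -> ["sent@@", "ence-01"].
    Consecutive units are chained with an edge that reuses `incoming_label`,
    the label of the node's incoming edge. Illustrative sketch only."""
    new_edges = []
    for parent, label, child in edges:
        parent = units[-1] if parent == node else parent   # outgoing edges -> last unit (assumption)
        child = units[0] if child == node else child       # incoming edges -> first unit (assumption)
        new_edges.append((parent, label, child))
    for left, right in zip(units, units[1:]):
        new_edges.append((left, incoming_label, right))    # chain the sub-word units
    return new_edges
```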

Structure-Aware Self-Attention: Motivated by shaw_etal_naacl:18, we extend the conventional self-attention architecture to explicitly encode the relation between an element pair $(x_i, x_j)$ in the alignment model by replacing Equation 4 with Equation 5. Note that the relation $r_{ij}$ is the vector representation of the element pair $(x_i, x_j)$ and will be learned in Section 2.3.

$$e_{ij} = \frac{(x_i W^Q)\left(x_j W^K + r_{ij} W^R\right)^{\mathsf{T}}}{\sqrt{d_z}} \quad (5)$$

where $W^R$ is a parameter matrix. Then, we update Equation 2 accordingly to propagate the structure information to the sublayer output:

$$z_i = \sum_{j=1}^{n} \alpha_{ij} \left(x_j W^V + r_{ij} W^F\right) \quad (6)$$

where $W^F$ is a parameter matrix.
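A minimal NumPy sketch of Equations 5 and 6, following the conventions of the earlier attention sketch; the relation tensor r and the weight names are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.special import softmax

def structure_aware_attention(x, r, W_q, W_k, W_v, W_r, W_f):
    """Structure-aware self-attention (Equations 5-6).
    x: (n, d_x) inputs; r: (n, n, d_r) relation vectors r_ij;
    W_q, W_k, W_v: (d_x, d_z); W_r, W_f: (d_r, d_z). Sketch only."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    d_z = k.shape[-1]
    # e_ij = q_i . (k_j + r_ij W^R) / sqrt(d_z)            (Eq. 5)
    e = (q @ k.T + np.einsum('id,ijd->ij', q, r @ W_r)) / np.sqrt(d_z)
    alpha = softmax(e, axis=-1)
    # z_i = sum_j alpha_ij (v_j + r_ij W^F)                 (Eq. 6)
    return alpha @ v + np.einsum('ij,ijd->id', alpha, r @ W_f)
```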

2.3 Learning Graph Structure Representation for Concept Pairs

The above structure-aware self-attention is capable of incorporating graph structure between concept pairs. In this section, we explore a few methods to learn the representation for concept pairs. We use the sequence of edge labels along the path from $x_i$ to $x_j$ to indicate the AMR graph structure between concepts $x_i$ and $x_j$ (while there may exist two or more paths connecting $x_i$ and $x_j$, we simply choose the shortest one). In order to distinguish the edge direction, we add a direction symbol to each label: ↑ for climbing up along the path and ↓ for going down. For the special case of $i = j$, we use None as the path. Table 1 shows structural label sequences between a few concept pairs in Figure 1.

Now, given a structural path with a label sequence and its corresponding label embedding sequence $s = (s_1, \ldots, s_n)$, we use the following methods to obtain its representation vector $r$, which maps to $r_{ij}$ in Equation 5 and Equation 6.

Concept pair        Structural label sequence
he, convict-01      :ARG1↑
he, 7               :ARG1↑ :ARG2↓ :quant↓
he, he              None
Table 1: Examples of structural paths between a few concept pairs in Figure 1.
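For concreteness, the sketch below derives such a structural path by breadth-first search over an AMR edge list; it illustrates the procedure described above under our own data-format assumptions and is not the authors' code.

```python
from collections import deque

def structural_path(edges, src, dst):
    """Shortest direction-tagged label path between two concepts (cf. Table 1).
    `edges` is a list of (parent, label, child) triples. Returns ['None'] when src == dst."""
    if src == dst:
        return ['None']
    adj = {}
    for parent, label, child in edges:
        adj.setdefault(parent, []).append((child, label + '\u2193'))   # going down
        adj.setdefault(child, []).append((parent, label + '\u2191'))   # climbing up
    queue, seen = deque([(src, [])]), {src}
    while queue:
        node, path = queue.popleft()
        for nxt, tagged in adj.get(node, []):
            if nxt in seen:
                continue
            if nxt == dst:
                return path + [tagged]
            seen.add(nxt)
            queue.append((nxt, path + [tagged]))
    return ['None']   # unreachable; should not happen in a connected AMR
```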

Feature-based

A natural way to represent the structural path is to view it as a string feature. To this end, we combine the labels in the structural path into a string. Unsurprisingly, this will end up with a large number of features. We keep the most frequent ones (i.e., 20K in our experiments) in the feature vocabulary and map all others into a special feature UNK. Each feature in the vocabulary will be mapped into a randomly initialized vector.

Avg-based

To overcome the data sparsity in the above feature-based method, we view the structural path as a label sequence. Then we simply use the averaged label embedding as the representation vector of the sequence, i.e.,

$$r = \frac{1}{n}\sum_{k=1}^{n} s_k \quad (7)$$

Sum-based

Sum-based method simply returns the sum of all label embeddings in the sequence, i.e.,

$$r = \sum_{k=1}^{n} s_k \quad (8)$$
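Both pooling variants are one-liners; a NumPy sketch with our own function names:

```python
import numpy as np

def avg_representation(s):
    """Avg-based (Eq. 7): mean of the label embeddings s = (s_1, ..., s_n), shape (n, d)."""
    return np.mean(s, axis=0)

def sum_representation(s):
    """Sum-based (Eq. 8): element-wise sum of the label embeddings."""
    return np.sum(s, axis=0)
```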

Self-Attention-based (SA-based for short)

Figure 2: Self-Attention-based method.

As shown in Figure 2, given the label sequence, we first obtain its embedding sequence $s = (s_1, \ldots, s_n)$, whose element $s_k$ is the addition of a word embedding and the corresponding position embedding. Then we use the self-attention presented in Equation 1 to obtain its hidden states $H = (h_1, \ldots, h_n)$, i.e., $H = \mathrm{Attention}(s)$, where $h_k \in \mathbb{R}^{d_z}$. Our aim is to encode this variable-length sequence into a $d_z$-sized vector. Motivated by lin_etal_iclr:17, we achieve this by choosing a linear combination of the $n$ vectors in $H$. Computing the linear combination requires an attention mechanism which takes the whole of $H$ as input and outputs a vector of weights $a$:

$$a = \mathrm{softmax}\left(w_2 \tanh\left(W_1 H^{\mathsf{T}}\right)\right) \quad (9)$$

where $W_1 \in \mathbb{R}^{d_a \times d_z}$ and $w_2 \in \mathbb{R}^{d_a}$. Then the label sequence representation vector $r$ is the weighted sum of the hidden states:

$$r = a H \quad (10)$$
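A NumPy sketch of this attention pooling (Equations 9 and 10), assuming H already holds the self-attention hidden states of the label sequence; names are ours.

```python
import numpy as np
from scipy.special import softmax

def sa_representation(H, W_1, w_2):
    """SA-based pooling (Equations 9-10). H: (n, d_z) hidden states;
    W_1: (d_a, d_z); w_2: (d_a,). Returns a (d_z,) representation vector."""
    a = softmax(w_2 @ np.tanh(W_1 @ H.T))   # weights over the n labels (Eq. 9)
    return a @ H                            # weighted sum of hidden states (Eq. 10)
```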

CNN-based

Motivated by kalchbrenner_etal_acl:14, we use a convolutional neural network (CNN) to convolve the label sequence $s$ into a vector $r$, as follows:

$$q_k = \mathrm{CNN}\left(s_{k:k+w-1}\right) \quad (11)$$

$$r = \max\left(q_1, \ldots, q_{n-w+1}\right) \quad (12)$$

where the kernel size $w$ is set to 4 in our experiments and $\max$ denotes element-wise max pooling over the convolution outputs.
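A NumPy sketch of the CNN-based representation, under the assumption of a tanh convolution over windows of w labels followed by element-wise max pooling; since the paper does not spell out every detail, this is illustrative only.

```python
import numpy as np

def cnn_representation(s, kernels, w=4):
    """CNN-based representation (Equations 11-12).
    s: (n, d) label embeddings; kernels: (d_out, w, d) convolution filters;
    w: kernel size (4 in our experiments)."""
    n, d = s.shape
    if n < w:                                   # zero-pad short label sequences
        s = np.vstack([s, np.zeros((w - n, d))])
        n = w
    windows = np.stack([s[k:k + w] for k in range(n - w + 1)])    # (n-w+1, w, d)
    q = np.tanh(np.einsum('kwd,owd->ko', windows, kernels))       # Eq. 11
    return q.max(axis=0)                                          # Eq. 12 (max over positions)
```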

3 Experimentation

3.1 Experimental Settings

For evaluation of our approach, we use the sentences annotated with AMRs from the LDC release LDC2015E86 and LDC2017T10. The two datasets contain 16,833 and 36,521 training AMRs, respectively, and share 1,368 development AMRs and 1,371 testing AMRs. We segment words into sub-word units by BPE sennrich_etal_acl:16 with 10K operations on LDC2015E86 and 20K operations on LDC2017T10.

For efficiently learning graph structure representations of concept pairs (in all methods except the feature-based one), we limit the maximum label sequence length to 4 and ignore labels exceeding the maximum. In the SA-based method, we set the filter size to 128.

We use OpenNMT klein_etal:17 (https://github.com/OpenNMT/OpenNMT-py) as the implementation of the Transformer seq2seq model. In the parameter setting, we set the number of layers in both the encoder and the decoder to 6. For optimization we use Adam with = 0.1 kingma_ba_iclr:15. The number of attention heads is set to 8. In addition, we set the embedding and hidden sizes to 512 and the batch size to 4096 tokens. Accordingly, the per-head dimension $d_z$ in Section 2 is 64. In all experiments, we train the models for 300K steps on a single K40 GPU.

For performance evaluation, we use BLEU papineni_etal_acl:02, Meteor banerjee_and_lavie:05; denkowski_and_lavie:14, and CHRF++ popovic_wmt:17 as metrics. We report results of single models that are tuned on the development set.

3.2 Experimental Results

Model BLEU Meteor CHRF++
Baseline 24.93 33.20 60.30
-BPE 23.02 31.60 58.09
-Share Vocab. 23.24 31.78 58.43
-Both 18.77 28.04 51.88
Table 2: Ablation results of our baseline system on the LDC2015E86 development set.

We first show the performance of our baseline system. As mentioned before, BPE and vocabulary sharing are the two techniques we apply to relieve data sparsity. Table 2 presents the results of the ablation test on the development set of LDC2015E86, removing BPE, vocabulary sharing, or both from the baseline system. From the results we can see that BPE and vocabulary sharing are critical to building our baseline system (an improvement from 18.77 to 24.93 BLEU), revealing that they are two effective ways to address the issue of data sparseness in AMR-to-text generation.

System | LDC2015E86: BLEU / Meteor / CHRF++ | #P (M) | LDC2017T10: BLEU / Meteor / CHRF++
Baseline 25.50 33.16 59.88 49.1 27.43 34.62 61.85
Our Approach feature-based 27.23 34.53 61.55 49.4 30.18 35.83 63.20
avg-based 28.37 35.10 62.29 49.1 29.56 35.24 62.86
sum-based 28.69 34.97 62.05 49.1 29.92 35.68 63.04
SA-based 29.66 35.45 63.00 49.3 31.54 36.02 63.84
CNN-based 29.10 35.00 62.10 49.2 31.82 36.38 64.05
Previous works with single models
 konstas_etal_acl:17 22.00 - - - - - -
 cao_etal_naacl:19 23.5 - - - 26.8 - -
 song_etal_acl:18 23.30 - - - - - -
 beck_etal_acl:18 - - - - 23.3 - 50.4
 damonte_etal_naacl:19 24.40 23.60 - - 24.54 24.07 -
 guo_etal_tacl:19 25.7 - - - 27.6 - 57.3
 song_etal_emnlp:16 22.44 - - - - - -
Previous works with either ensemble models or unlabelled data, or both
 konstas_etal_acl:17 33.8 - - - - - -
 song_etal_acl:18 33.0 - - - - - -
 beck_etal_acl:18 - - - - 27.5 - 53.5
 guo_etal_tacl:19 35.3 - - - - - -
Table 3: Comparison results of our approaches and related studies on the test sets of LDC2015E86 and LDC2017T10. #P indicates the number of parameters in millions. The previous works include seq2seq-based, graph-based, and other models. All our proposed systems are significantly better than the baseline at p < 0.01, tested by bootstrap resampling koehn_emnlp:04.

Table 3 presents the comparison of our approach and related works on the test sets of LDC2015E86 and LDC2017T10. From the results we can see that the Transformer-based baseline outperforms most graph-to-sequence models and is comparable with the latest work of guo_etal_tacl:19. The strong performance of the baseline is attributed to the capability of the Transformer to encode global and implicit structural information in AMR graphs. Comparing the five methods of learning graph structure representations, we make the following observations.

  • All of them achieve significant improvements over the baseline: the biggest improvements are 4.16 and 4.39 BLEU scores on LDC2015E86 and LDC2017T10, respectively.

  • Methods using continuous representations (such as SA-based and CNN-based) outperform the methods using discrete representations (such as feature-based).

  • Compared to the baseline, the methods have very little effect on the number of model parameters (see the #P (M) column in Table 3).

Finally, our best-performing models achieve the best results among all single, supervised models.

4 Analysis

In this section, we use LDC2017T10 as our benchmark dataset to demonstrate how our proposed approach achieves higher performance than the baseline. As a representative, we use the CNN-based method to obtain structural representations.

4.1 Effect of Modeling Structural Information of Indirectly Connected Concept Pairs

Our approach is capable of modeling arbitrary concept pairs no matter whether directly connected or not. To investigate the effect of modeling structural information of indirectly connected concept pairs, we ignore their structural information by mapping all structural label sequences between two indirectly connected concept pairs into None. In this way, the structural label sequence for he and 7 in Table 1, for example, will be None.

System BLEU
Baseline 27.43
Our approach 31.82
No indirectly connected concept pairs 29.92
Table 4: Performance on the test set of our approach with or without modeling structural information of indirectly connected concept pairs.

Table 4 compares the performance of our approach with or without modeling structural information of indirectly connected concept pairs. It shows that by modeling structural information of indirectly connected concept pairs, our approach improves the performance on the test set from 29.92 to 31.82 in BLEU scores. It also shows that even without modeling structural information of indirectly connected concept pairs, our approach achieves better performance than the baseline.

4.2 Effect on AMR Graphs with Different Sizes of Reentrancies

Linearizing an AMR graph into a sequence unavoidably loses information about reentrancies (nodes with multiple parents). This poses a challenge for the baseline since there is no obvious sign that the first he and the second he, as shown in Figure 1(b), refer to the same person. By contrast, our approach models reentrancies explicitly. Therefore, it is expected that the benefit of our approach is more evident for AMR graphs containing more reentrancies. To test this hypothesis, we partition the source AMR graphs into different groups by their numbers of reentrancies and evaluate their performance respectively. As shown in Figure 3, the performance gap between our approach and the baseline is widest for AMR graphs with more than 5 reentrancies, on which our approach outperforms the baseline by 6.61 BLEU.

Figure 3: Performance (in BLEU) on the test set with respect to the reentrancy numbers of the input AMR graphs.

4.3 Effect on AMR Graphs with Different Sizes

When we encode an AMR graph with many concepts, linearizing it into a sequence tends to lose a great amount of structural information. In order to test the hypothesis that graphs with more concepts contribute more to the improvement, we partition the source AMR graphs into different groups by their sizes (i.e., numbers of concepts) and evaluate their performance respectively. Figure 4 shows the results, which indicate that modeling graph structures significantly outperforms the baseline over all AMR sizes. We also observe that the performance gap between the baseline and our approach increases as AMR graphs become larger, revealing that the baseline seq2seq model is far from capturing deep structural details of large AMR graphs. Figure 4 also indicates that text generation becomes more difficult for large AMR graphs. We think that the low performance on large AMR graphs is mainly attributed to two reasons:

  • Large AMR graphs are usually mapped into long sentences, while the seq2seq model tends to stop early for long inputs. As a result, the length ratio (the length of the generated output divided by the length of the reference) for AMRs with more than 40 concepts is 0.906, much lower than that for AMRs with fewer concepts.

  • Large AMR graphs are more likely to have reentrancies, which makes generation more challenging.

Figure 4: Performance (in BLEU) on the test set with respect to the size of the input AMR graphs.

4.4 Case Study

Figure 5: Examples of generation from AMR graphs. (1) is from song_etal_acl:18, (2) - (5) are from damonte_etal_naacl:19. REF is the reference sentence. SEQ1 and G2S are the outputs of the seq2seq and the graph2seq models in song_etal_acl:18, respectively. SEQ2 and GRAPH are the outputs of the seq2seq and the graph models in damonte_etal_naacl:19, respectively.

In order to better understand the model performance, Figure 5 presents a few examples studied in song_etal_acl:18 (Example (1)) and damonte_etal_naacl:19 (Examples (2) - (5)).

In Example (1), though our baseline recovers a prepositional phrase for the noun staff and another one for the noun funding, it fails to recognize the anaphora and antecedent relation between the two prepositional phrases. In contrast, our approach successfully recognizes :prep-for c as a reentrancy node and generates one prepositional phrase shared by both nouns staff and funding. In Example (2), we note that although AMR graphs lack tense information, the baseline generates output with inconsistent tense (i.e., do and found), while our approach consistently prefers the past tense for the two clauses. In Example (3), only our approach correctly uses people as the subject of the predicate can. In Example (4), the baseline fails to predict the direct object you for the predicate recommend. Finally, in Example (5), the baseline fails to recognize the subject-predicate relation between the noun communicate and the verb need. Overall, we note that compared to the baseline, our approach produces more accurate output and deals with reentrancies more properly.

Comparing the output of our approach with the graph-based models of song_etal_acl:18 and damonte_etal_naacl:19, we observe that our generation is closer to the reference in sentence structure. Due to the absence of tense information in AMR graphs, our model tends to use the past tense, as with provided and did in Examples (1) and (2). Similarly, without information on singular and plural forms, our model is more likely to use plural nouns, as with centers and lawyers in Examples (1) and (5).

5 Related Work

Most studies in AMR-to-text generation regard it as a translation problem and are motivated by the recent advances in both statistical machine translation (SMT) and neural machine translation (NMT). flanigan_etal_naacl:16 first transform an AMR graph into a tree, then specify a number of tree-to-string transduction rules based on alignments that are used to drive a tree-based SMT model graehl_knight_naacl:04. pourdamghani_etal:16 develop a method that learns to linearize AMR graphs into AMR strings, and then feed them into a phrase-based SMT model koehn_etal_naacl:03. song_etal_acl:17 use synchronous node replacement grammar (SNRG) to generate text. Different from synchronous context-free grammar in hierarchical phrase-based SMT chiang_cl:07, SNRG is a grammar over graphs.

Moving to neural seq2seq approaches, konstas_etal_acl:17 successfully apply a seq2seq model together with large-scale unlabeled data to both text-to-AMR parsing and AMR-to-text generation. With special interest in the target-side syntax, cao_etal_naacl:19 use seq2seq models to generate the target syntactic structure first, and then the surface form. To prevent the information loss caused by linearizing AMR graphs into sequences, song_etal_acl:18; beck_etal_acl:18 propose graph-to-sequence models to encode graph structure directly. Focusing on reentrancies, damonte_etal_naacl:19 propose stacking encoders which consist of BiLSTMs Graves_etal:13, TreeLSTMs tai_etal_acl:15, and Graph Convolutional Networks (GCN) duvenaud_etal:15; kipf_and_welling:16. guo_etal_tacl:19 propose densely connected GCNs to better capture both local and non-local features. However, all the aforementioned graph-based models only consider the relations between nodes that are directly connected, and thus lose the structural information between nodes that are indirectly connected via an edge path.

Recent studies also extend the Transformer to encode structural information for other NLP applications. shaw_etal_naacl:18 propose relation-aware self-attention to capture relative positions of word pairs for neural machine translation. ge_etal_ijcai:19 extend the relation-aware self-attention to capture syntactic and semantic structures. Our model is inspired by theirs but aims to encode structural label sequences of concept pairs. kedziorski_etal_naacl:19 propose graph Transformer to encode graph structure. Similar to the GCN, it focuses on the relations between directly connected nodes.

6 Conclusion and Future Work

In this paper we proposed a structure-aware self-attention for the task of AMR-to-text generation. The major idea of our approach is to encode long-distance relations between concepts in AMR graphs into the self-attention encoder in the Transformer. In the setting of supervised learning, our models achieved the best experimental results ever reported on two English benchmarks.

Previous studies have shown the effectiveness of using large-scale unlabelled data. In future work, we would like to explore semi-supervised learning with silver data to test how much further improvement can be achieved.

Acknowledgments

We thank the anonymous reviewers for their insightful comments and suggestions. We are grateful to Linfeng Song for fruitful discussions. This work is supported by the National Natural Science Foundation of China (Grant No. 61876120, 61673290, 61525205), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.

References