Related research:
- Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning: We focus on graph-to-sequence learning, which can be framed as transduci...
- Deep Graph Convolutional Encoders for Structured Data to Text Generation: Most previous work on neural text generation from graph-structured data ...
- A Graph-to-Sequence Model for AMR-to-Text Generation: The problem of AMR-to-text generation is to recover a text representing ...
- Structural Information Preserving for Graph-to-Text Generation: The task of graph-to-text generation aims at producing sentences that pr...
- Higher-order Graph Convolutional Networks: Following the success of deep convolutional networks in various vision a...
- Enhancing AMR-to-Text Generation with Dual Graph Representations: Generating text from graph-based data, such as Abstract Meaning Represen...
- Promoting Graph Awareness in Linearized Graph-to-Text Generation: Generating text from structured inputs, such as meaning representations ...
Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation
AMR-to-text generation transduces Abstract Meaning Representation (AMR) structures into text. A key challenge in this task is to efficiently learn effective graph representations. Previously, Graph Convolutional Networks (GCNs) were used to encode input AMRs; however, vanilla GCNs cannot capture non-local information because they follow a local (first-order) information aggregation scheme. To account for this, larger and deeper GCN models are required to capture more complex interactions. In this paper, we introduce a dynamic fusion mechanism, proposing Lightweight Dynamic Graph Convolutional Networks (LDGCNs), which capture richer non-local interactions by synthesizing higher-order information from the input graphs. We further develop two novel parameter-saving strategies, based on group graph convolutions and weight-tied convolutions, to reduce memory usage and model complexity. With the help of these strategies, we are able to train a model with fewer parameters while maintaining model capacity. Experiments demonstrate that LDGCNs outperform state-of-the-art models on two benchmark datasets for AMR-to-text generation with significantly fewer parameters.
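To make the "dynamic fusion" idea concrete, here is a minimal PyTorch sketch of a graph convolutional layer that aggregates information from several powers of the adjacency matrix (1-hop, 2-hop, ..., K-hop) and mixes them with a learned, node-wise gate. This is an illustration of the general higher-order-fusion technique under our own assumptions, not the authors' exact LDGCN formulation; the module name, the max_order parameter, and the gating scheme are hypothetical.

```python
import torch
import torch.nn as nn


class DynamicFusionGCNLayer(nn.Module):
    """Sketch: fuse 1..K-hop neighbourhood aggregates with a learned gate.

    Illustrative approximation of "synthesizing higher-order information",
    not the LDGCN layer from the paper.
    """

    def __init__(self, dim: int, max_order: int = 3):
        super().__init__()
        self.max_order = max_order
        self.proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, max_order)  # one mixing weight per hop order

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (batch, nodes, dim)   node states
        # adj: (batch, nodes, nodes) row-normalized adjacency of the input graph
        hops, a = [], adj
        for _ in range(self.max_order):
            hops.append(torch.matmul(a, h))  # aggregate k-hop neighbours
            a = torch.matmul(a, adj)         # next power of the adjacency
        weights = torch.softmax(self.gate(h), dim=-1)  # (batch, nodes, K) fusion weights
        fused = sum(w.unsqueeze(-1) * hop
                    for w, hop in zip(weights.unbind(-1), hops))
        return torch.relu(self.proj(fused)) + h  # residual connection


if __name__ == "__main__":
    layer = DynamicFusionGCNLayer(dim=16)
    h = torch.randn(2, 5, 16)                           # 2 toy graphs, 5 nodes each
    adj = torch.softmax(torch.randn(2, 5, 5), dim=-1)   # stand-in normalized adjacency
    print(layer(h, adj).shape)                          # torch.Size([2, 5, 16])
```

Parameter savings along the lines described in the abstract could then come from tying the projection weights across layers or splitting the feature dimension into groups; this sketch does not implement either strategy.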