Compositionality-Aware Graph2Seq Learning

01/28/2022
by Takeshi D. Itoh, et al.

Graphs are a highly expressive data structure, but it is often difficult for humans to find patterns in a complex graph. Hence, generating human-interpretable sequences from graphs, a problem called graph2seq learning, has gained interest. In many graph2seq tasks, the compositionality in a graph can be expected to correspond to the compositionality in the output sequence; a compositionality-aware GNN architecture should therefore improve model performance. In this study, we adopt the multi-level attention pooling (MLAP) architecture, which can aggregate graph representations from multiple levels of information locality. As a real-world example, we take up the extreme source code summarization task, in which a model estimates the name of a program function from its source code. We demonstrate that a model with the MLAP architecture outperforms the previous state-of-the-art model while using more than seven times fewer parameters.
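To illustrate the idea behind multi-level attention pooling, below is a minimal PyTorch sketch, not the authors' implementation: a gated attention readout pools node embeddings into a graph vector after every message-passing layer, and the per-layer vectors are summed, so the final representation mixes local (early-layer) and global (late-layer) information. The names MLAPEncoder, SimpleGNNLayer, and AttentionPooling, the dense single-graph adjacency, and the sum aggregation are illustrative assumptions; in the paper's graph2seq setting, the resulting vector would condition a sequence decoder that emits the function name.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One message-passing step over a dense adjacency matrix (illustrative stand-in
    for whatever GNN layer the actual model uses)."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # adj: (num_nodes, num_nodes), assumed to include self-loops
        return torch.relu(self.lin(adj @ h))

class AttentionPooling(nn.Module):
    """Gated attention readout: pools node embeddings into one graph vector."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, h):
        weights = torch.softmax(self.gate(h), dim=0)  # attention over nodes
        return (weights * h).sum(dim=0)               # (dim,)

class MLAPEncoder(nn.Module):
    """After every GNN layer, an attention readout produces a graph vector; the
    per-layer vectors are then aggregated (summed here, as one simple choice) so
    the final representation combines several levels of information locality."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(SimpleGNNLayer(dim) for _ in range(num_layers))
        self.pools = nn.ModuleList(AttentionPooling(dim) for _ in range(num_layers))

    def forward(self, h, adj):
        level_vecs = []
        for layer, pool in zip(self.layers, self.pools):
            h = layer(h, adj)           # receptive field grows by one hop
            level_vecs.append(pool(h))  # readout at this locality level
        return torch.stack(level_vecs).sum(dim=0)

# Toy usage: a 5-node graph with 16-dimensional node features.
adj = ((torch.rand(5, 5) > 0.5).float() + torch.eye(5)).clamp(max=1.0)
encoder = MLAPEncoder(dim=16, num_layers=4)
graph_vec = encoder(torch.rand(5, 16), adj)  # (16,) graph representation
print(graph_vec.shape)
```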


Related research

02/27/2022 · Distribution Preserving Graph Representation Learning
Graph neural network (GNN) is effective to model graphs for distributed ...

06/17/2019 · Learning Execution through Neural Code Fusion
As the performance of computer systems stagnates due to the end of Moore...

11/17/2021 · GN-Transformer: Fusing Sequence and Graph Representation for Improved Code Summarization
As opposed to natural languages, source code understanding is influenced...

06/09/2020 · Automatic Code Summarization via Multi-dimensional Semantic Fusing in GNN
Source code summarization aims to generate natural language summaries fr...

12/01/2021 · Graph Conditioned Sparse-Attention for Improved Source Code Understanding
Transformer architectures have been successfully used in learning source...

09/07/2021 · Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation
Identifying vulnerable code is a precautionary measure to counter softwa...

01/31/2023 · Transformers Meet Directed Graphs
Transformers were originally proposed as a sequence-to-sequence model fo...
