To Understand Representation of Layer-aware Sequence Encoders as Multi-order-graph

01/16/2021
by Sufeng Duan, et al.

In this paper, we propose a unified explanation of representation for layer-aware neural sequence encoders, which regards the representation as a revisited multigraph called a multi-order-graph (MoG), so that model encoding can be viewed as a process of capturing all subgraphs in the MoG. The relationship reflected by the multi-order-graph, called n-order dependency, can express what the existing simple directed-graph explanation cannot. The proposed MoG explanation allows us to observe precisely every step in the generation of a representation and places diverse relationships, such as syntax, into one unified framework. Based on the MoG explanation, we further propose a graph-based self-attention network, Graph-Transformer, which strengthens the ability of current models to capture subgraph information. Graph-Transformer accommodates different subgraphs into different groups, allowing the model to focus on salient subgraphs. Results of experiments on neural machine translation tasks show that the MoG-inspired model yields effective performance improvements.
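The abstract does not include code, so the following is only a minimal sketch of the grouping idea described above, assuming PyTorch: attention heads are split into groups, and each group is restricted to a different maximum relative token distance, used here as a rough proxy for subgraphs of different order. The class name GroupMaskedSelfAttention, the orders parameter, and the distance-band masks are illustrative assumptions, not the authors' exact Graph-Transformer construction.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GroupMaskedSelfAttention(nn.Module):
        """Self-attention whose heads are split into groups, each limited to a
        different maximum relative distance (a stand-in for attending to
        subgraphs of different order)."""

        def __init__(self, d_model=512, n_heads=8, orders=(1, 2, 4, None)):
            super().__init__()
            assert d_model % n_heads == 0 and n_heads % len(orders) == 0
            self.n_heads = n_heads
            self.d_head = d_model // n_heads
            self.orders = orders  # max relative distance per head group; None = unrestricted
            self.qkv = nn.Linear(d_model, 3 * d_model)
            self.out = nn.Linear(d_model, d_model)

        def forward(self, x):
            # x: (batch, seq_len, d_model)
            b, n, _ = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            split = lambda t: t.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
            q, k, v = split(q), split(k), split(v)
            scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (b, h, n, n)

            # Build one additive mask per order; each mask is shared by a block of heads.
            idx = torch.arange(n, device=x.device)
            dist = (idx[:, None] - idx[None, :]).abs()  # pairwise token distances
            heads_per_group = self.n_heads // len(self.orders)
            masks = []
            for order in self.orders:
                if order is None:
                    m = torch.zeros(n, n, device=x.device)
                else:
                    m = torch.full((n, n), float('-inf'), device=x.device)
                    m = m.masked_fill(dist <= order, 0.0)
                masks.append(m.expand(heads_per_group, n, n))
            mask = torch.cat(masks, dim=0)  # (h, n, n), broadcast over the batch

            attn = F.softmax(scores + mask, dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
            return self.out(out)

    # Usage: x = torch.randn(2, 10, 512); y = GroupMaskedSelfAttention()(x)  # y: (2, 10, 512)

Grouping heads by mask, rather than learning a single unrestricted attention pattern, is one simple way to let different parts of the model specialize in subgraphs of different scope, in the spirit of the paper's grouping of subgraphs.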
