
Disentangled Sequence to Sequence Learning for Compositional Generalization

10/09/2021
by Hao Zheng, et al.

There is mounting evidence that existing neural network models, in particular the very popular sequence-to-sequence architecture, struggle with compositional generalization, i.e., the ability to systematically generalize to unseen compositions of seen components. In this paper we demonstrate that one of the reasons hindering compositional generalization is that the learned representations are entangled. We propose an extension to sequence-to-sequence models that learns disentangled representations by adaptively re-encoding the source input at each decoding time step. Specifically, we condition the source representations on the newly decoded target context, which makes it easier for the encoder to exploit specialized information for each prediction rather than capturing all source information in a single forward pass. Experimental results on semantic parsing and machine translation show that our proposal yields more disentangled representations and better generalization.
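To make the core idea concrete, below is a minimal PyTorch sketch of adaptive re-encoding during decoding. It is not the authors' implementation: the module name ReEncodingSeq2Seq, the joint-encoding layout, and all hyperparameters are illustrative assumptions. What it demonstrates is the mechanism the abstract describes: rather than encoding the source once, the model re-encodes the source at every decoding step jointly with the target prefix decoded so far, so the source representations can specialize to each prediction.

```python
import torch
import torch.nn as nn

class ReEncodingSeq2Seq(nn.Module):
    """Sketch of per-step source re-encoding (illustrative, not the paper's code)."""

    def __init__(self, src_vocab, tgt_vocab, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # One shared encoder, re-run at every decoding step over the
        # concatenation [target prefix ; source], so the source states
        # are conditioned on the newly decoded target context.
        # (Positional encodings omitted for brevity.)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.out = nn.Linear(d_model, tgt_vocab)

    def step(self, src_ids, prefix_ids):
        # Re-encode source and prefix jointly for this step only.
        x = torch.cat([self.tgt_embed(prefix_ids),
                       self.src_embed(src_ids)], dim=1)
        h = self.encoder(x)                      # (batch, prefix+src, d_model)
        # Predict the next target token from the last prefix position.
        return self.out(h[:, prefix_ids.size(1) - 1])

    @torch.no_grad()
    def greedy_decode(self, src_ids, bos_id, eos_id, max_len=50):
        prefix = torch.full((src_ids.size(0), 1), bos_id,
                            dtype=torch.long, device=src_ids.device)
        for _ in range(max_len):
            next_tok = self.step(src_ids, prefix).argmax(-1, keepdim=True)
            prefix = torch.cat([prefix, next_tok], dim=1)
            if (next_tok == eos_id).all():
                break
        return prefix
```

Note the trade-off this sketch makes explicit: re-encoding at every step multiplies encoder cost by the target length, which is the price paid for source representations that are conditioned on the decoded context instead of being fixed after a single forward pass.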

Related research

12/12/2022
Real-World Compositional Generalization with Disentangled Sequence-to-Sequence Learning
Compositional generalization is a basic mechanism in human language lear...

10/22/2020
Compositional Generalization via Semantic Tagging
Although neural sequence-to-sequence models have been successfully appli...

05/20/2023
Learn to Compose Syntactic and Semantic Representations Appropriately for Compositional Generalization
Recent studies have shown that sequence-to-sequence (Seq2Seq) models are...

06/04/2019
Transcoding compositionally: using attention to find more generalizable solutions
While sequence-to-sequence models have shown remarkable generalization p...

04/05/2023
Correcting Flaws in Common Disentanglement Metrics
Recent years have seen growing interest in learning disentangled represe...

11/28/2022
Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality
Recent datasets expose the lack of the systematic generalization ability...

10/24/2022
Structural generalization is hard for sequence-to-sequence models
Sequence-to-sequence (seq2seq) models have been successful across many N...