Synonymous Generalization in Sequence-to-Sequence Recurrent Networks

03/14/2020
by Ning Shi, et al.

When learning a language, people can quickly expand their understanding of unknown content through compositional skills, for example combining the two known words "go" and "fast" into the new phrase "go fast." In recent work, Lake and Baroni (2017) showed that modern sequence-to-sequence (seq2seq) recurrent neural networks (RNNs) can make powerful zero-shot generalizations in specifically controlled experiments. However, a precise characterization of this strong generalization and of the conditions it requires is still missing. This paper explores that positive result in detail and defines the pattern as synonymous generalization: the ability to recognize an unknown sequence by decomposing its difference from a known sequence into corresponding existing synonyms. To investigate it, I introduce a new environment called the Colorful Extended Cleanup World (CECW), which consists of complex commands paired with logical expressions. After demonstrating that sequential RNNs can perform synonymous generalization on unseen commands, I identify the prerequisites for their success. I also propose a data augmentation method, verified on the Geoquery (GEO) dataset, as a practical application of synonymous generalization to real cases.
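
The abstract does not spell out the augmentation procedure, but the underlying idea, generating new command-meaning pairs by swapping known synonyms into a command while keeping its logical form fixed, can be sketched as follows. This is a minimal illustrative sketch: the SYNONYMS table and the GEO-style example pair are assumptions for demonstration, not the paper's actual lexicon or data.

    # Minimal sketch of synonym-substitution data augmentation for a
    # semantic-parsing corpus such as GEO. The synonym table and the
    # example pair below are hypothetical illustrations.

    # Hypothetical synonym table: interchangeable surface forms that
    # correspond to the same logical-form token.
    SYNONYMS = {
        "largest": ["biggest"],
        "rivers": ["streams"],
    }

    def augment(command, logical_form, synonyms=SYNONYMS):
        """Yield new (command, logical_form) pairs by swapping in synonyms.

        Because a synonym leaves the meaning unchanged, the logical form
        is copied as-is; only the surface command varies.
        """
        tokens = command.split()
        for i, tok in enumerate(tokens):
            for alt in synonyms.get(tok, ()):
                swapped = tokens[:i] + [alt] + tokens[i + 1:]
                yield " ".join(swapped), logical_form

    # Example: one GEO-style pair yields one extra training pair.
    pairs = list(augment("what is the largest state",
                         "answer(largest(state(all)))"))
    # pairs == [("what is the biggest state", "answer(largest(state(all)))")]

Under this assumption, every augmented command shares its logical form with the original, which is exactly the property synonymous generalization is meant to exploit.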

Related research

10/31/2017
Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks
Humans can understand and produce new utterances effortlessly, thanks to...

06/12/2019
Compositional generalization through meta sequence-to-sequence learning
People can learn a new concept and use it compositionally, understanding...

06/04/2019
Transcoding compositionally: using attention to find more generalizable solutions
While sequence-to-sequence models have shown remarkable generalization p...

11/02/2020
Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora
Reflexive anaphora present a challenge for semantic interpretation: thei...

06/01/2019
Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization
Can neural nets learn logic? We approach this classic question with curr...

09/12/2018
Jump to better conclusions: SCAN both left and right
Lake and Baroni (2018) recently introduced the SCAN data set, which cons...

05/21/2019
CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks
Lake and Baroni (2018) introduced the SCAN dataset probing the ability o...
