
Synonymous Generalization in Sequence-to-Sequence Recurrent Networks

by Ning Shi, et al.
Georgia Institute of Technology

When learning a language, people can quickly expand their understanding of unknown content through compositional skills, for example combining the two known words "go" and "fast" into the new phrase "go fast." In recent work by Lake and Baroni (2017), modern sequence-to-sequence (seq2seq) Recurrent Neural Networks (RNNs) were shown to make powerful zero-shot generalizations in specifically controlled experiments. However, an explanation is still missing for the nature of such strong generalization and its precise requirements. This paper explores this positive result in detail and defines the pattern as synonymous generalization: the ability to recognize an unknown sequence by decomposing the difference between it and a known sequence into corresponding existing synonyms. To investigate it more thoroughly, I introduce a new environment called the Colorful Extended Cleanup World (CECW), which consists of complex commands paired with logical expressions. While demonstrating that sequential RNNs can perform synonymous generalizations on foreign commands, I also identify the prerequisites for their success. Finally, I propose a data augmentation method, successfully verified on the Geoquery (GEO) dataset, as a novel application of synonymous generalization to real cases.
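The augmentation idea can be illustrated with a minimal sketch: given command/logical-form pairs and a table of known synonyms, new training pairs are generated by swapping synonyms into the command side while keeping the logical form fixed. The synonym table, the example command, and the `GO(speed=FAST)` logical form below are hypothetical placeholders, not the paper's actual data or method.

```python
# Hypothetical synonym table; in practice this would come from the dataset.
SYNONYMS = {"go": ["move", "walk"], "fast": ["quickly"]}

def augment(pairs):
    """Generate extra (command, logical_form) pairs by substituting
    one known synonym at a time into each command."""
    augmented = []
    for command, logic in pairs:
        tokens = command.split()
        for i, tok in enumerate(tokens):
            for syn in SYNONYMS.get(tok, []):
                new_cmd = " ".join(tokens[:i] + [syn] + tokens[i + 1:])
                # The logical form is unchanged: synonyms share meaning.
                augmented.append((new_cmd, logic))
    return augmented

# Example: one seed pair yields three synonym-substituted variants.
pairs = [("go fast", "GO(speed=FAST)")]
print(augment(pairs))
```

A seq2seq model trained on the augmented set sees multiple surface forms mapped to the same logical expression, which is the behavior synonymous generalization requires.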
