SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics

06/02/2021
by Hitomi Yanaka, et al.

Recently, deep neural networks (DNNs) have achieved great success in semantically challenging NLP tasks, yet it remains unclear whether DNN models can capture compositional meanings, those aspects of meaning that have long been studied in formal semantics. To investigate this issue, we propose a Systematic Generalization testbed based on Natural language Semantics (SyGNS), whose challenge is to map natural language sentences to multiple forms of scoped meaning representations designed to account for various semantic phenomena. Using SyGNS, we test whether neural networks can systematically parse sentences involving novel combinations of logical expressions such as quantifiers and negation. Experiments show that Transformer and GRU models can generalize to unseen combinations of quantifiers, negations, and modifiers that are similar in form to the training instances, but not to others. We also find that generalization to unseen combinations improves when the form of the meaning representation is simpler. The data and code for SyGNS are publicly available at https://github.com/verypluming/SyGNS.
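
To make the task concrete, here is a minimal Python sketch of the kind of sentence-to-logical-form pairs and "unseen combination" split that the abstract describes. All function names and the exact logical notation are our own illustrative assumptions, not the official SyGNS generation code (see the linked repository for that); the point is only to show how a quantifier/negation combination can be held out at training time.

```python
# Hypothetical illustration of the SyGNS-style task: map sentences to
# first-order-logic forms, holding out one quantifier+negation combination.
from itertools import product

def to_fol(quant, noun, verb, negated):
    """Map (quantifier, noun, verb, polarity) to an FOL string (NLTK-style notation)."""
    body = f"-{verb}(x)" if negated else f"{verb}(x)"
    if quant == "every":
        return f"all x.({noun}(x) -> {body})"
    return f"exists x.({noun}(x) & {body})"  # treat "one" as existential

def to_sentence(quant, noun, verb, negated):
    """Render the corresponding English sentence."""
    if negated:
        return f"{quant} {noun} does not {verb}"
    return f"{quant} {noun} {verb}s"

examples = [
    ((q, neg), to_sentence(q, n, v, neg), to_fol(q, n, v, neg))
    for q, n, v, neg in product(["every", "one"], ["dog"], ["run"], [False, True])
]

# Systematic split: negation appears in training only with "every";
# the model must generalize to the unseen combination "one" + negation.
train = [e for e in examples if e[0] != ("one", True)]
test  = [e for e in examples if e[0] == ("one", True)]

for _, sent, fol in train:
    print(f"train: {sent!r:30} -> {fol}")
for _, sent, fol in test:
    print(f"test:  {sent!r:30} -> {fol}")
```

Under this split, a model sees "every dog does not run" and "one dog runs" in training, and is evaluated on "one dog does not run", mirroring the abstract's finding that models handle such combinations only when they are similar in form to training instances.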


