Limitations in learning an interpreted language with recurrent models

09/11/2018
by Denis Paperno, et al.

In this submission I report work in progress on learning simplified interpreted languages by means of recurrent models. The data is constructed to reflect core properties of natural language as modeled in formal syntax and semantics: recursive syntactic structure and compositionality. Preliminary results suggest that LSTM networks do generalise to compositional interpretation, albeit only in the most favorable learning setting, with a well-paced curriculum, extensive training data, and left-to-right (but not right-to-left) composition.
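To make the learning setting concrete, below is a minimal, hypothetical sketch of the kind of experiment the abstract describes: a toy interpreted language whose expressions are chains of unary function symbols applied to a constant, denoting values computed compositionally over a small finite domain, an LSTM that reads an expression left to right and predicts its denotation, and a depth-based curriculum. The vocabulary, domain size, model, and training schedule here are illustrative assumptions, not the paper's actual data or hyperparameters.

```python
# Illustrative sketch only: a toy compositional interpretation task.
# An expression like [f2, f0, f3, c1] denotes f2(f0(f3(c1))) over a small
# finite domain; an LSTM reads the tokens left to right and predicts the value.
import random
import torch
import torch.nn as nn

DOMAIN = 10    # size of the domain of denotations (assumed)
N_FUNCS = 5    # number of unary function symbols (assumed)
random.seed(0)
torch.manual_seed(0)

# Each function symbol is interpreted as a random total function on the domain.
FUNCS = [[random.randrange(DOMAIN) for _ in range(DOMAIN)] for _ in range(N_FUNCS)]

def sample_expression(depth):
    """Sample a chain of `depth` function symbols applied to a constant and
    compute its denotation compositionally (innermost function first)."""
    const = random.randrange(DOMAIN)
    funcs = [random.randrange(N_FUNCS) for _ in range(depth)]
    value = const
    for f in reversed(funcs):                 # rightmost symbol applies first
        value = FUNCS[f][value]
    # Token ids: 0..N_FUNCS-1 are functions, N_FUNCS..N_FUNCS+DOMAIN-1 constants.
    return funcs + [N_FUNCS + const], value

class Interpreter(nn.Module):
    """LSTM reading the expression left to right; predicts its denotation."""
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, DOMAIN)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.embed(tokens))
        return self.out(h[-1])                # classify from the final hidden state

model = Interpreter(vocab=N_FUNCS + DOMAIN)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Curriculum: train on shallow expressions first, then progressively deeper ones.
for max_depth in range(1, 6):
    for step in range(1000):
        depth = random.randint(1, max_depth)  # one depth per batch, so no padding
        batch = [sample_expression(depth) for _ in range(32)]
        x = torch.tensor([toks for toks, _ in batch])
        y = torch.tensor([val for _, val in batch])
        loss = loss_fn(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"curriculum stage (max depth {max_depth}): last loss {loss.item():.3f}")
```

Under a setup like this, the right-to-left variant of the experiment would simply reverse the token order before feeding it to the network, which is one way to probe the left-to-right versus right-to-left asymmetry the abstract mentions.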
