Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

08/31/2018
by   Jaap Jumelet, et al.

In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items, and evaluate to what extent a neural language model can correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item, and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
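To make the scope-extraction step concrete, the sketch below shows one way to approximate the scope of a licensing context from a constituency parse tree. This is a minimal illustration, not the paper's implementation: the licensor word list and the heuristic (scope = the material following the licensor inside its smallest containing clause) are assumptions made for the example, using NLTK's `Tree`.

```python
# Minimal sketch (not the authors' code): approximate the scope of an NPI
# licensor as everything to its right inside its smallest containing clause.
# The licensor set and the clause-based heuristic are illustrative assumptions.
from nltk import Tree

LICENSORS = {"nobody", "no", "not", "never"}  # assumed example set

def licensor_scope(tree: Tree) -> list:
    """Return the leaves in the (approximate) scope of the first licensor."""
    leaves = tree.leaves()
    for i, word in enumerate(leaves):
        if word.lower() in LICENSORS:
            leaf_pos = tree.leaf_treeposition(i)
            # Walk upward from the licensor to the nearest clause (S) node.
            for depth in range(len(leaf_pos) - 1, -1, -1):
                node = tree[leaf_pos[:depth]]
                if isinstance(node, Tree) and node.label() == "S":
                    clause = node.leaves()
                    # Scope: the clause material following the licensor.
                    return clause[clause.index(word) + 1:]
    return []  # no licensor found

sentence = Tree.fromstring(
    "(S (NP (NN Nobody)) (VP (VBZ has) (ADVP (RB ever)) "
    "(VP (VBN seen) (NP (PRP it)))))"
)
print(licensor_scope(sentence))  # ['has', 'ever', 'seen', 'it']
```

Under this heuristic, the NPI "ever" falls inside the scope of the licensor "Nobody", which is the kind of licensor-NPI relation the paper probes the language model for.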
