Tabula nearly rasa: Probing the Linguistic Knowledge of Character-Level Neural Language Models Trained on Unsegmented Text

06/17/2019
by   Michael Hahn, et al.

Recurrent neural networks (RNNs) have achieved striking performance in many natural language processing tasks. This has renewed interest in whether these generic sequence processing devices are inducing genuine linguistic knowledge. Nearly all current analytical studies, however, initialize the RNNs with a vocabulary of known words and feed them tokenized input during training. We present a multi-lingual study of the linguistic knowledge encoded in RNNs trained as character-level language models, on input data with word boundaries removed. These networks face a tougher and more cognitively realistic task, having to discover any useful linguistic units from scratch based on input statistics. The results show that our "near tabula rasa" RNNs are mostly able to solve morphological, syntactic, and semantic tasks that intuitively presuppose word-level knowledge, and indeed they learn, to some extent, to track word boundaries. Our study opens the door to speculation about the necessity of an explicit, rigid word lexicon in language learning and usage.
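
The setup described in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the authors' code: a character-level LSTM language model trained with next-character prediction on text whose word boundaries (spaces) have been stripped. The hyperparameters, the toy corpus, and helper names such as `strip_word_boundaries` are assumptions for illustration only.

```python
# Minimal sketch of the training setup described in the abstract:
# a character-level LSTM language model trained on text with word
# boundaries (whitespace) removed. Hyperparameters, corpus handling,
# and helper names are illustrative assumptions, not the authors'
# actual implementation.
import torch
import torch.nn as nn

def strip_word_boundaries(text: str) -> str:
    """Remove spaces so the model must discover word-like units itself."""
    return text.replace(" ", "")

class CharLM(nn.Module):
    def __init__(self, n_chars: int, emb_dim: int = 64, hid_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hid_dim, n_chars)

    def forward(self, x, state=None):
        emb = self.embed(x)
        hidden, state = self.rnn(emb, state)
        return self.out(hidden), state

# Toy corpus; a real run would use a large unsegmented corpus per language.
corpus = strip_word_boundaries("the cat sat on the mat " * 200)
vocab = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(vocab)}
ids = torch.tensor([stoi[c] for c in corpus])

model = CharLM(len(vocab))
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len = 64
for step in range(100):
    i = torch.randint(0, len(ids) - seq_len - 1, (1,)).item()
    x = ids[i : i + seq_len].unsqueeze(0)          # input characters
    y = ids[i + 1 : i + seq_len + 1].unsqueeze(0)  # next-character targets
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(vocab)), y.view(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```

The point of the sketch is only that the model never sees whitespace, so any word-like structure it exploits must be induced from character statistics alone.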

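The abstract's claim that the networks learn "to track word boundaries" is the kind of claim typically tested with a diagnostic probe. The sketch below, which reuses `model` and `stoi` from the previous snippet, shows one hedged way to set this up: a logistic-regression classifier reads the LSTM hidden state at each character and predicts whether a word boundary follows. The probe design is an assumption for illustration; consult the paper for the actual probing methodology.

```python
# Minimal sketch of a word-boundary diagnostic probe: a logistic
# regression over hidden states predicts whether a boundary follows
# each character. Boundary labels come only from the probe's training
# text; the language model itself never saw whitespace.
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def collect_states(model, text_with_spaces, stoi):
    """Run the trained CharLM over unsegmented text and pair each
    hidden state with a binary label: does a boundary follow here?"""
    chars, labels = [], []
    for i, c in enumerate(text_with_spaces):
        if c == " ":
            continue
        chars.append(c)
        # Label 1 if the next character in the original text is a space.
        labels.append(1 if text_with_spaces[i + 1 : i + 2] == " " else 0)
    x = torch.tensor([[stoi[c] for c in chars]])
    hidden, _ = model.rnn(model.embed(x))  # (1, T, hid_dim)
    return hidden.squeeze(0).numpy(), labels

states, labels = collect_states(model, "the cat sat on the mat " * 50, stoi)
probe = LogisticRegression(max_iter=1000).fit(states, labels)
print("boundary-probe accuracy:", probe.score(states, labels))
```

A real evaluation would score the probe on held-out text rather than its own training data; the sketch scores in-sample only to keep the example short.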

Related research

- Indicatements that character language models learn English morpho-syntactic units and regularities (08/31/2018)
  Character language models have access to surface morphological patterns,...

- Subword ELMo (09/18/2019)
  Embedding from Language Models (ELMo) has shown to be effective for impr...

- Deep learning and sub-word-unit approach in written art generation (01/22/2019)
  Automatic poetry generation is novel and interesting application of natu...

- Are word boundaries useful for unsupervised language learning? (10/06/2022)
  Word or word-fragment based Language Models (LM) are typically preferred...

- Evaluating language models of tonal harmony (06/22/2018)
  This study borrows and extends probabilistic language models from natura...

- Representation of linguistic form and function in recurrent neural networks (02/29/2016)
  We present novel methods for analyzing the activation patterns of RNNs f...

- Learning to Write with Cooperative Discriminators (05/16/2018)
  Recurrent Neural Networks (RNNs) are powerful autoregressive sequence mo...
