Can neural networks acquire a structural bias from raw linguistic data?

07/14/2020
by Alex Warstadt, et al.

We evaluate whether BERT, a widely used neural network for sentence processing, acquires an inductive bias towards forming structural generalizations through pretraining on raw data. We conduct four experiments testing its preference for structural vs. linear generalizations in different structure-dependent phenomena. We find that BERT makes a structural generalization in 3 out of 4 empirical domains—subject-auxiliary inversion, reflexive binding, and verb tense detection in embedded clauses—but makes a linear generalization when tested on NPI licensing. We argue that these results are the strongest evidence so far from artificial learners supporting the proposition that a structural bias can be acquired from raw data. If this conclusion is correct, it is tentative evidence that some linguistic universals can be acquired by learners without innate biases. However, the precise implications for human language acquisition are unclear, as humans learn language from significantly less data than BERT.
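To make the structural-vs-linear contrast concrete, the sketch below illustrates one simple way to probe a pretrained BERT on subject-auxiliary inversion. Note that this is not the paper's actual method (the paper uses a poverty-of-the-stimulus design with fine-tuned classifiers on ambiguous data); it is a minimal, assumption-laden illustration that scores two candidate questions with masked-LM pseudo-log-likelihood. The model name, example sentences, and the helper function `pseudo_log_likelihood` are hypothetical choices for the sketch, not taken from the paper.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

# Assumption: bert-base-uncased stands in for the pretrained model under study.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip [CLS] (first position) and [SEP] (last position).
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# Declarative source: "the dog that is barking has eaten the bone."
# Structural rule: front the auxiliary of the MAIN clause ("has").
structural = "has the dog that is barking eaten the bone ?"
# Linear rule: front the linearly FIRST auxiliary ("is").
linear = "is the dog that barking has eaten the bone ?"

print("structural question score:", pseudo_log_likelihood(structural))
print("linear question score:    ", pseudo_log_likelihood(linear))
```

If pretraining has induced a structural bias, the structurally formed question should receive a higher pseudo-log-likelihood than the linearly formed one; a preference for the linear variant would point the other way.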
