Syntactic Structure from Deep Learning

04/22/2020
by Tal Linzen et al.

Modern deep neural networks achieve impressive performance in engineering applications that require extensive linguistic skills, such as machine translation. This success has sparked interest in probing whether these models are inducing human-like grammatical knowledge from the raw data they are exposed to, and, consequently, whether they can shed new light on long-standing debates concerning the innate structure necessary for language acquisition. In this article, we survey representative studies of the syntactic abilities of deep networks, and discuss the broader implications that this work has for theoretical linguistics.
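To make the probing methodology concrete, here is a minimal sketch of a targeted syntactic evaluation of the kind this line of work relies on: comparing the total log probability a pretrained language model assigns to a grammatical sentence versus its agreement-violating counterpart. The choice of GPT-2 via the Hugging Face transformers library and the specific minimal pair are illustrative assumptions, not the setup of any particular study in the survey.

```python
# Minimal-pair agreement probe (illustrative sketch, not the survey's own code).
# We score a subject-verb agreement pair with GPT-2 and check whether the
# model prefers the grammatical variant.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of token log-probabilities the model assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Score each token given its left context (shift targets by one position).
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs[torch.arange(targets.size(0)), targets].sum().item()

# Agreement across an intervening singular noun ("cabinet"): a model with
# human-like syntactic knowledge should prefer the plural verb, since the
# head noun "keys" is plural.
grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))
```

In studies of this type, a consistent preference for the grammatical member of each pair across many constructions is taken as evidence that the model has induced the relevant syntactic generalization rather than a surface heuristic.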

Related research

06/16/2021
On the proper role of linguistically-oriented deep net analysis in linguistic theorizing
A lively research field has recently emerged that uses experimental meth...

06/06/2023
Language acquisition: do children and language models follow similar learning stages?
During language acquisition, children follow a typical sequence of learn...

06/09/2022
Meet You Halfway: Explaining Deep Learning Mysteries
Deep neural networks perform exceptionally well on various learning task...

10/01/2020
Examining the rhetorical capacities of neural language models
Recently, neural language models (LMs) have demonstrated impressive abil...

04/27/2020
PuzzLing Machines: A Challenge on Learning From Small Data
Deep neural models have repeatedly proved excellent at memorizing surfac...

06/05/2017
Deep learning evaluation using deep linguistic processing
We discuss problems with the standard approaches to evaluation for tasks...

11/01/2020
Deep Learning for Text Attribute Transfer: A Survey
Driven by the increasingly larger deep learning models, neural language ...
