Universal linguistic inductive biases via meta-learning

06/29/2020
by R. Thomas McCoy, et al.

How do learners acquire languages from the limited data available to them? This process must involve some inductive biases - factors that affect how a learner generalizes - but it is unclear which inductive biases can explain observed patterns in language acquisition. To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases. This framework disentangles universal inductive biases, which are encoded in the initial values of a neural network's parameters, from non-universal factors, which the neural network must learn from data in a given language. The initial state that encodes the inductive biases is found with meta-learning, a technique through which a model discovers how to acquire new languages more easily via exposure to many possible languages. By controlling the properties of the languages that are used during meta-learning, we can control the inductive biases that meta-learning imparts. We demonstrate this framework with a case study based on syllable structure. First, we specify the inductive biases that we intend to give our model, and then we translate those inductive biases into a space of languages from which a model can meta-learn. Finally, using existing analysis techniques, we verify that our approach has imparted the linguistic inductive biases that it was intended to impart.
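
The core of the approach, finding an initial parameter state by meta-learning across many sampled languages, can be sketched in a few lines of code. The sketch below uses a simplified first-order (Reptile-style) outer update rather than the paper's full meta-learning procedure, and the toy model, sample_language, and sample_batch helpers are hypothetical placeholders standing in for the paper's neural network learner and its constrained space of possible languages.

```python
# Minimal sketch: meta-learn an initialization that encodes inductive biases.
# Assumptions: a toy next-symbol prediction task stands in for the paper's
# language-learning task, and the outer update is a simplified first-order
# (Reptile-style) step, not the authors' exact meta-learning procedure.
import copy
import random
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN, CONTEXT = 32, 64, 4  # hypothetical sizes

def make_model():
    # Toy learner: predicts a symbol from a fixed-length context.
    return nn.Sequential(
        nn.Embedding(VOCAB_SIZE, HIDDEN),
        nn.Flatten(start_dim=1),
        nn.Linear(HIDDEN * CONTEXT, VOCAB_SIZE),
    )

def sample_language():
    # Hypothetical: draw one language from the constrained space of possible
    # languages; the constraints on that space encode the intended biases.
    return random.randrange(VOCAB_SIZE)

def sample_batch(lang):
    # Draw training examples exhibiting this language's (toy) regularity.
    x = torch.randint(0, VOCAB_SIZE, (128, CONTEXT))
    y = (x[:, 0] + lang) % VOCAB_SIZE
    return x, y

meta_init = make_model()          # these parameters become the shared initial state
loss_fn = nn.CrossEntropyLoss()
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for episode in range(1000):
    lang = sample_language()
    learner = copy.deepcopy(meta_init)                  # start from the shared init
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                        # inner loop: acquire this language
        x, y = sample_batch(lang)
        loss = loss_fn(learner(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Outer (meta) update: nudge the shared initialization toward parameters
    # that made this particular language easy to acquire.
    with torch.no_grad():
        for p0, p in zip(meta_init.parameters(), learner.parameters()):
            p0 += meta_lr * (p - p0)
```

After meta-training, copying meta_init and fine-tuning it on data from a single new language plays the role of a learner acquiring that language with the imparted biases; properties shared across the meta-training languages end up in the initialization, while language-specific facts must still be learned from data.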

Related research

05/23/2022
Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines
Strong inductive biases are a key component of human intelligence, allow...

05/24/2023
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
Humans can learn languages from remarkably little experience. Developing...

04/07/2022
Equivariance Discovery by Learned Parameter-Sharing
Designing equivariance as an inductive bias into deep-nets has been a pr...

12/06/2021
Noether Networks: Meta-Learning Useful Conserved Quantities
Progress in machine learning (ML) stems from a combination of data avail...

09/15/2021
Target Languages (vs. Inductive Biases) for Learning to Act and Plan
Recent breakthroughs in AI have shown the remarkable power of deep learn...

05/31/2020
Transferring Inductive Biases through Knowledge Distillation
Having the right inductive biases can be crucial in many tasks or scenar...

12/06/2021
Input-level Inductive Biases for 3D Reconstruction
Much of the recent progress in 3D vision has been driven by the developm...
