Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models

03/19/2022
by Ryokan Ri, et al.

We investigate what kinds of structural knowledge learned by neural network encoders are transferable to processing natural language. We design artificial languages with structural properties that mimic natural language, pretrain encoders on the generated data, and evaluate how well the encoders perform on downstream natural language tasks. Our experimental results show that pretraining on an artificial language with a nesting dependency structure provides knowledge that transfers to natural language. A follow-up probing analysis indicates that success in transfer is related to the amount of contextual information the encoder captures, and that what is transferred is knowledge of the position-aware context dependence of language. Our results provide insights into how neural network encoders process human languages and into the source of the cross-lingual transferability of recent multilingual language models.
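To make the setup concrete, the abstract's artificial language with a nesting dependency structure can be pictured as a Dyck-style language, where each opening token is later closed by its paired token and pairs nest without crossing. The sketch below generates such sentences; the X_i/Y_i token naming, vocabulary size, and branching probability are illustrative assumptions, not the paper's actual configuration.

```python
import random

# Minimal sketch: sample sentences from an artificial language with a
# nesting (Dyck-like) dependency structure. Each head token "Xi" must
# later be closed by its paired dependent token "Yi", and pairs nest
# without crossing.

VOCAB_SIZE = 50  # number of head/dependent token pairs (assumed value)

def generate_sentence(n_pairs, rng=random):
    """Generate one sentence containing `n_pairs` nested dependencies."""
    stack, tokens = [], []
    opens_left = n_pairs
    while opens_left > 0 or stack:
        # Either open a new pair or close the most recently opened one;
        # closing only the top of the stack guarantees nesting.
        if opens_left > 0 and (not stack or rng.random() < 0.5):
            i = rng.randrange(VOCAB_SIZE)
            tokens.append(f"X{i}")
            stack.append(i)
            opens_left -= 1
        else:
            tokens.append(f"Y{stack.pop()}")
    return " ".join(tokens)

if __name__ == "__main__":
    for _ in range(3):
        print(generate_sentence(n_pairs=5))
```

A corpus of such sentences can then stand in for natural-language pretraining data, so that any downstream gains can be attributed to the structural property alone rather than to lexical or semantic content.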


Related research

- Cross-Lingual Language Model Meta-Pretraining (09/23/2021): The success of pretrained cross-lingual language models relies on two es...
- Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems (06/15/2022): We present results from a large-scale experiment on pretraining encoders...
- Pretraining on Non-linguistic Structure as a Tool for Analyzing Learning Bias in Language Models (04/30/2020): We propose a novel methodology for analyzing the encoding of grammatical...
- Mining Knowledge for Natural Language Inference from Wikipedia Categories (10/03/2020): Accurate lexical entailment (LE) and natural language inference (NLI) of...
- Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models (05/24/2022): The emergent cross-lingual transfer seen in multilingual pretrained mode...
- Emergent Communication Pretraining for Few-Shot Machine Translation (11/02/2020): While state-of-the-art models that rely upon massively multilingual pret...
- Meta-Learning with MAML on Trees (03/08/2021): In meta-learning, the knowledge learned from previous tasks is transferr...
