Towards Zero-shot Language Modeling

08/06/2021
by Edoardo Maria Ponti, et al.

Can we construct a neural model that is inductively biased towards learning human languages? Motivated by this question, we aim to construct an informative prior over neural weights in order to adapt quickly to held-out languages in the task of character-level language modeling. We infer this distribution from a sample of typologically diverse training languages via a Laplace approximation. A model equipped with such a prior outperforms baselines with an uninformative prior (so-called "fine-tuning") in both zero-shot and few-shot settings, which shows that the prior is imbued with universal phonological knowledge. Moreover, we harness additional language-specific side information as distant supervision for held-out languages. Specifically, we condition language models on features from typological databases, either by concatenating them to the hidden states or by generating the model weights with hyper-networks. These features appear beneficial in the few-shot setting, but not in the zero-shot setting. Since the paucity of digital texts affects the majority of the world's languages, we hope that these findings will help broaden the scope of applications for language technology.
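To make the adaptation recipe concrete, the snippet below is a minimal, hypothetical sketch (not the authors' released code) of a diagonal Laplace approximation used as an informative prior: after training a character-level LM on the source languages, a diagonal Fisher estimate provides per-parameter precisions, and adaptation to a held-out language minimizes the negative log-likelihood plus a quadratic penalty towards the prior mean. All class names, sizes, and hyper-parameters here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CharLM(nn.Module):
    """Minimal character-level language model (architecture and sizes are assumptions)."""

    def __init__(self, vocab_size=128, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, chars):                      # chars: (batch, time) integer ids
        h, _ = self.rnn(self.embed(chars))
        return self.out(h)                         # logits: (batch, time, vocab)


def diagonal_fisher(model, batches):
    """Diagonal Fisher estimate: the per-parameter precision of the Laplace posterior."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for chars, targets in batches:
        model.zero_grad()
        logits = model(chars)
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(batches), 1) for n, f in fisher.items()}


def adapt_to_language(model, prior_mean, prior_precision, batches,
                      lam=1.0, lr=1e-3, steps=200):
    """MAP adaptation on a held-out language: NLL plus a quadratic penalty pulling
    each weight towards the prior mean, scaled by its estimated precision."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(steps):
        chars, targets = batches[step % len(batches)]
        logits = model(chars)
        nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        penalty = sum(
            (prior_precision[n] * (p - prior_mean[n]) ** 2).sum()
            for n, p in model.named_parameters()
        )
        loss = nll + 0.5 * lam * penalty
        opt.zero_grad()
        loss.backward()
        opt.step()


# Usage sketch: train `model` on the source languages first, then
#   prior_mean = {n: p.detach().clone() for n, p in model.named_parameters()}
#   prior_precision = diagonal_fisher(model, source_batches)
#   adapt_to_language(model, prior_mean, prior_precision, heldout_batches)
```

In a zero-shot evaluation one would skip the adaptation steps and score the held-out language with the prior mean directly; in the few-shot setting, a handful of target-language batches drive the penalized adaptation above.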
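The typological side information can be sketched just as briefly. The variant below, which extends the CharLM sketch above, concatenates a language's feature vector to every hidden state before the output projection; the hyper-network variant, which generates weights from the same features, is omitted. Again, this is an illustrative sketch rather than the authors' implementation, and `n_features` and the class name are assumptions.

```python
class TypologyConditionedLM(CharLM):
    """Condition the character LM on a vector of typological features by
    concatenating it to every hidden state before the output layer."""

    def __init__(self, vocab_size=128, emb=64, hidden=256, n_features=100):
        super().__init__(vocab_size, emb, hidden)
        self.out = nn.Linear(hidden + n_features, vocab_size)   # widened output layer

    def forward(self, chars, lang_features):       # lang_features: (batch, n_features)
        h, _ = self.rnn(self.embed(chars))         # (batch, time, hidden)
        feats = lang_features.unsqueeze(1).expand(-1, h.size(1), -1)
        return self.out(torch.cat([h, feats], dim=-1))
```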
