
Meta-learning for fast cross-lingual adaptation in dependency parsing

04/10/2021
by   Anna Langedijk, et al.

Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks. We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing. We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages. We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.
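To make the MAML recipe above concrete, here is a minimal sketch of the two-level optimization it describes: an inner loop that adapts a copy of the shared parameters on each task's (here, each language's) support set, and an outer loop that updates the shared initialization using query-set gradients of the adapted copies. This is a first-order toy on a one-parameter regression model, not the authors' parser; all names (`maml_outer_step`, `make_task`) and the synthetic "languages" are illustrative assumptions.

```python
import random

random.seed(0)

# Toy stand-in for a parser: a one-parameter model y_hat = w * x.
# Each "language" is a task whose data follow its own true slope.

def grad(w, xs, ys):
    """Gradient of the mean squared error (1/n) * sum (w*x - y)^2 w.r.t. w."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def maml_outer_step(w, tasks, inner_lr=0.05, outer_lr=0.1, inner_steps=3):
    """One first-order MAML meta-update: adapt a copy of w on each task's
    support set, then move the shared w along the query-set gradients
    of the adapted copies (the inner-loop Jacobian is ignored)."""
    meta_grad = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:
        w_task = w
        for _ in range(inner_steps):          # inner loop: fast adaptation
            w_task -= inner_lr * grad(w_task, x_s, y_s)
        meta_grad += grad(w_task, x_q, y_q)   # evaluate adapted params on query set
    return w - outer_lr * meta_grad / len(tasks)

def make_task(slope, n=10):
    """Synthetic task: noiseless data from y = slope * x, split support/query."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [slope * x for x in xs]
    return (xs[:5], ys[:5]), (xs[5:], ys[5:])

w = 0.0
tasks = [make_task(2.0), make_task(3.0)]      # two synthetic "languages"
for _ in range(200):
    w = maml_outer_step(w, tasks)
# The learned initialization settles between the per-task optima (2.0 and 3.0),
# so a few inner steps suffice to specialize to either task.
```

The key design point the abstract relies on is that the outer loop optimizes for *post-adaptation* performance, so the resulting initialization is chosen for how quickly it adapts to a new language with few examples, not for how well it fits any single training language.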

Related Research

04/28/2020

Learning to Learn Morphological Inflection for Resource-Poor Languages

We propose to cast the task of morphological inflection - mapping a lemm...
03/04/2023

Self-tuning hyper-parameters for unsupervised cross-lingual tokenization

We explore the possibility of meta-learning for the language-independent...
10/17/2022

Improving Low-Resource Cross-lingual Parsing with Expected Statistic Regularization

We present Expected Statistic Regularization (ESR), a novel regularizati...
06/02/2021

Minimax and Neyman-Pearson Meta-Learning for Outlier Languages

Model-agnostic meta-learning (MAML) has been recently put forth as a str...
01/26/2021

Few-Shot Semantic Parsing for New Predicates

In this work, we investigate the problems of semantic parsing in a few-s...
12/24/2020

Cross-lingual Dependency Parsing as Domain Adaptation

In natural language processing (NLP), cross-lingual transfer learning is...

Code Repositories

multilingual-interference

This repository contains code for a project on tackling negative interference in a multilingual meta-learning setup.
