
Meta-learning for fast cross-lingual adaptation in dependency parsing

by Anna Langedijk et al.

Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks. We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing. We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages. We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.
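The core idea — an inner loop that fine-tunes on a task's few-shot support set, and an outer loop that moves a shared initialization toward parameters that adapt well — can be sketched on a toy problem. The snippet below is a minimal first-order MAML sketch, not the paper's parser: the "languages" are stand-in scalar regression tasks, and all names (`adapt`, `sample_task`, `w_meta`) and hyperparameters are illustrative assumptions.

```python
import random

random.seed(0)

def grad(w, data):
    # Mean squared-error gradient for a scalar model y_hat = w * x.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def adapt(w, data, lr=0.1, steps=5):
    # Inner loop: few-shot fine-tuning on one task's support set.
    for _ in range(steps):
        w -= lr * grad(w, data)
    return w

def sample_task():
    # Each toy "language" is a task y = w_true * x with its own w_true.
    w_true = random.gauss(2.0, 0.5)
    support = [(x, w_true * x) for x in (random.uniform(-1, 1) for _ in range(5))]
    query = [(x, w_true * x) for x in (random.uniform(-1, 1) for _ in range(5))]
    return support, query

# Outer loop (first-order MAML): evaluate the adapted parameters on the
# task's held-out query set and update the shared initialization with
# that gradient, so the initialization itself learns to adapt quickly.
w_meta = 0.0
for _ in range(500):
    support, query = sample_task()
    w_adapted = adapt(w_meta, support)
    w_meta -= 0.05 * grad(w_adapted, query)
```

After meta-training, `w_meta` sits near the center of the task distribution, so a handful of inner-loop steps suffice on an unseen task — the same effect the paper pursues for unseen low-resource languages.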


Learning to Learn Morphological Inflection for Resource-Poor Languages

We propose to cast the task of morphological inflection - mapping a lemm...

Self-tuning hyper-parameters for unsupervised cross-lingual tokenization

We explore the possibility of meta-learning for the language-independent...

Improving Low-Resource Cross-lingual Parsing with Expected Statistic Regularization

We present Expected Statistic Regularization (ESR), a novel regularizati...

Minimax and Neyman-Pearson Meta-Learning for Outlier Languages

Model-agnostic meta-learning (MAML) has been recently put forth as a str...

Few-Shot Semantic Parsing for New Predicates

In this work, we investigate the problems of semantic parsing in a few-s...

Cross-lingual Dependency Parsing as Domain Adaptation

In natural language processing (NLP), cross-lingual transfer learning is...

Code Repositories


This repository contains code for a project on tackling negative interference in a multilingual meta-learning setup.
