On Robust Incremental Learning over Many Multilingual Steps

10/25/2022
by Karan Praharaj, et al.

Recent work in incremental learning has introduced diverse approaches to tackling catastrophic forgetting, ranging from data augmentation to optimized training regimes. However, most of these approaches consider only a handful of training steps. We propose a method for robust incremental learning over dozens of fine-tuning steps, using data from a variety of languages. We show that a combination of data augmentation and an optimized training regime allows us to keep improving the model even after as many as fifty training steps. Crucially, our augmentation strategy does not require retaining access to previous training data, making it suitable for scenarios with privacy constraints.
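The abstract gives no implementation details, but the core loop it describes, sequential fine-tuning in which each step augments only its own incoming data rather than replaying earlier data, can be sketched as follows. This is a minimal illustration under assumptions: the names `augment` and `fine_tune_one_step`, and the random token-deletion perturbation, are placeholders, not the authors' actual augmentation strategy or training regime.

```python
import random

def augment(examples, n_copies=2):
    """Create perturbed copies of the CURRENT step's data only,
    so no earlier training data needs to be retained (replay-free).
    `examples` is a list of (text, label) pairs."""
    augmented = list(examples)
    for text, label in examples:
        for _ in range(n_copies):
            tokens = text.split()
            if len(tokens) > 1:
                # Illustrative perturbation: drop one random token.
                tokens.pop(random.randrange(len(tokens)))
            augmented.append((" ".join(tokens), label))
    return augmented

def incremental_training(model, language_batches, fine_tune_one_step):
    """Run many sequential fine-tuning steps, one per incoming batch
    (each batch may come from a different language). Only the current
    batch is ever visible; previous batches are never revisited."""
    for batch in language_batches:
        data = augment(batch)                    # uses current data only
        model = fine_tune_one_step(model, data)  # optimized regime goes here
    return model
```

The key design point the abstract emphasizes is visible in the loop: `augment` touches only the batch currently being processed, so the procedure remains valid when earlier data must be discarded for privacy reasons.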


Related research

09/14/2021 · A Three Step Training Approach with Data Augmentation for Morphological Inflection
We present the BME submission for the SIGMORPHON 2021 Task 0 Part 1, Gen...

10/04/2018 · Transfer Incremental Learning using Data Augmentation
Deep learning-based methods have reached state of the art performances, ...

03/05/2023 · Effectiveness of Data Augmentation for Prefix Tuning with Limited Data
Recent work has demonstrated that tuning continuous prompts on large, fr...

10/17/2021 · Reminding the Incremental Language Model via Data-Free Self-Distillation
Incremental language learning with pseudo-data can alleviate catastrophi...

11/18/2021 · Self-Supervised Class Incremental Learning
Existing Class Incremental Learning (CIL) methods are based on a supervi...

03/09/2021 · Uncertainty-aware Incremental Learning for Multi-organ Segmentation
Most existing approaches to train a unified multi-organ segmentation mod...

11/12/2019 · Learning from Data-Rich Problems: A Case Study on Genetic Variant Calling
Next Generation Sequencing can sample the whole genome (WGS) or the 1-2 ...
