Adaptive Meta-learner via Gradient Similarity for Few-shot Text Classification

09/10/2022
by   Tianyi Lei, et al.

Few-shot text classification aims to classify text when only a few labeled examples per class are available. Most previous methods adopt optimization-based meta-learning to capture the task distribution. However, because they neglect the mismatch between the small number of samples and complicated models, as well as the distinction between useful and useless task features, these methods suffer from overfitting. To address this issue, we propose a novel Adaptive Meta-learner via Gradient Similarity (AMGS) method to improve the model's generalization ability on new tasks. Specifically, AMGS alleviates overfitting in two ways: (i) it acquires the potential semantic representation of samples and improves model generalization through a self-supervised auxiliary task in the inner loop, and (ii) it leverages the adaptive meta-learner via gradient similarity to constrain the gradients obtained by the base-learner in the outer loop. Moreover, we systematically analyze the influence of regularization on the entire framework. Experimental results on several benchmarks demonstrate that AMGS consistently improves few-shot text classification performance compared with state-of-the-art optimization-based meta-learning approaches.
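The abstract's second mechanism, constraining outer-loop gradients by their similarity to the base-learner's gradients, can be illustrated with a minimal sketch. The paper does not specify its exact formulation here, so the gating rule below (keep only per-task gradients whose cosine similarity with the mean meta-gradient is positive) is a hypothetical stand-in; the function names and threshold are assumptions, not the authors' method.

```python
import numpy as np

def cosine_similarity(g1, g2, eps=1e-8):
    """Cosine similarity between two flattened gradient vectors."""
    return float(np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2) + eps))

def gate_task_gradients(task_grads, eps=1e-8):
    """Aggregate per-task gradients, discarding those that conflict
    with the meta-batch consensus (hypothetical gating rule).

    task_grads: list of 1-D numpy arrays, one gradient per task.
    Returns the average over tasks whose cosine similarity with the
    mean gradient is positive; falls back to the plain mean if all
    tasks are filtered out.
    """
    mean_grad = np.mean(task_grads, axis=0)
    kept = [g for g in task_grads if cosine_similarity(g, mean_grad, eps) > 0]
    if not kept:
        return mean_grad
    return np.mean(kept, axis=0)

# Two aligned tasks and one conflicting task: the conflicting
# gradient is dropped before the outer-loop update.
grads = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
update = gate_task_gradients(grads)
```

In a full MAML-style loop, `update` would then be applied to the meta-parameters in place of the unfiltered mean, which is one simple way to separate "useful" from "useless" task gradients.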


Related research:

- Few-Shot Classification By Few-Iteration Meta-Learning (10/01/2020)
- Self-Supervised Tuning for Few-Shot Segmentation (04/12/2020)
- Incorporating Effective Global Information via Adaptive Gate Attention for Text Classification (02/22/2020)
- Dynamic Memory Induction Networks for Few-Shot Text Classification (05/12/2020)
- Distribution Matching for Rationalization (06/01/2021)
- Improving Generalization in Meta-Learning via Meta-Gradient Augmentation (06/14/2023)
- Dataset Distillation using Neural Feature Regression (06/01/2022)
