Learning New Tasks from a Few Examples with Soft-Label Prototypes

10/31/2022
by   Avyav Kumar Singh, et al.

It has been experimentally demonstrated that humans are able to learn in a manner that allows them to make predictions on categories for which they have not seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020) recently presented a machine learning approach that aims to do the same: using synthetically generated data, they demonstrate that it is possible to achieve sub-linear scaling and develop models that learn to recognise N classes from M training samples, where M < N ('less than one'-shot learning). Their method was, however, defined for univariate or simple multivariate data (Sucholutsky et al., 2021). We extend it to work on large, high-dimensional, real-world datasets and empirically validate it in this new and challenging setting. We apply the method to learn previously unseen NLP tasks from very few examples (4, 8 or 16). We first generate compact, less-than-one-shot representations called soft-label prototypes, which are fitted on training data and capture the distribution of different classes across the input domain. We then use a modified k-Nearest Neighbours classifier to demonstrate that soft-label prototypes can classify data competitively, even outperforming much more computationally complex few-shot learning methods.
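The core idea can be sketched in a few lines. The following is an illustrative toy example, not the authors' implementation: each soft-label prototype pairs a location in input space with a distribution over classes, and a modified kNN classifier sums the distance-weighted soft labels of the k nearest prototypes. With suitable label distributions, two prototypes can induce three decision regions, i.e. M < N. The prototype locations and label values here are invented for illustration.

```python
import numpy as np

# Two hypothetical prototypes (M = 2) carrying distributions over
# three classes (N = 3). The shared probability mass on the middle
# class lets it win in the region between the two prototypes.
prototypes = np.array([[0.0, 0.0],
                       [1.0, 1.0]])
soft_labels = np.array([[0.6, 0.4, 0.0],
                        [0.0, 0.4, 0.6]])

def classify(x, k=2):
    """Soft-label kNN sketch: sum the soft labels of the k nearest
    prototypes, weighted by inverse distance, and return the argmax."""
    d = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + 1e-9)       # inverse-distance weighting
    scores = (weights[:, None] * soft_labels[nearest]).sum(axis=0)
    return int(np.argmax(scores))

print(classify(np.array([0.1, 0.0])))  # near the first prototype -> class 0
print(classify(np.array([0.5, 0.5])))  # between prototypes -> class 1
print(classify(np.array([0.9, 1.0])))  # near the second prototype -> class 2
```

Note that class 1 has no prototype of its own: it is recovered purely from the overlap of the two soft-label distributions, which is the sense in which fewer than one example per class suffices.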


