FewJoint: A Few-shot Learning Benchmark for Joint Language Understanding

09/17/2020
by   Yutai Hou, et al.

Few-shot learning (FSL) is one of the key future steps in machine learning and has attracted considerable attention. However, in contrast to the rapid development in other domains, such as Computer Vision, progress in FSL for Natural Language Processing (NLP) has been much slower. One of the key reasons is the lack of public benchmarks: NLP FSL research typically reports results on its own constructed few-shot datasets, which makes results hard to compare and thus impedes cumulative progress. In this paper, we present FewJoint, a novel few-shot learning benchmark for NLP. Unlike most NLP FSL research, which focuses only on simple N-classification problems, our benchmark introduces few-shot joint dialogue language understanding, which additionally covers structure prediction and multi-task reliance problems. This allows our benchmark to reflect real-world NLP complexity beyond simple N-classification. Our benchmark was used in the few-shot learning contest of SMP2020-ECDT Task 1. We also provide a compatible FSL platform to ease experiment set-up.
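To make the joint-understanding setting concrete, the sketch below shows what a joint dialogue language understanding sample typically looks like: each utterance carries both a sentence-level intent label and token-level slot labels, and a prediction only counts as correct when both match. The field names, labels, and BIO scheme here are illustrative assumptions for a generic joint NLU setup, not FewJoint's actual data schema.

```python
# Illustrative joint NLU sample: one utterance annotated with both a
# sentence-level intent and token-level slot labels (BIO scheme).
# Field names and label values are hypothetical, not FewJoint's schema.
sample = {
    "tokens": ["book", "a", "flight", "to", "Paris"],
    "intent": "BookFlight",
    "slots":  ["O", "O", "O", "O", "B-destination"],
}

def joint_correct(pred: dict, gold: dict) -> bool:
    """Sentence-level joint accuracy: a prediction is correct only if
    BOTH the intent and every slot label match. This coupling between
    the two sub-tasks is what makes joint understanding harder than
    plain N-way classification."""
    return pred["intent"] == gold["intent"] and pred["slots"] == gold["slots"]

# A prediction that gets the intent right but one slot wrong fails jointly.
pred = {"intent": "BookFlight",
        "slots": ["O", "O", "O", "O", "O"]}
print(joint_correct(pred, sample))  # False: destination slot was missed
```

The joint criterion is what exposes the multi-task reliance mentioned above: intent and slot predictions cannot be evaluated (or usually learned) in isolation.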


