Unified BERT for Few-shot Natural Language Understanding

06/24/2022
by Junyu Lu, et al.

Even as pre-trained language models share a semantic encoder, natural language understanding suffers from a diversity of output schemas. In this paper, we propose UBERT, a unified bidirectional language understanding model based on the BERT framework, which can universally model the training objectives of different NLU tasks through a biaffine network. Specifically, UBERT encodes prior knowledge from various aspects and uniformly constructs learning representations across multiple NLU tasks, which is conducive to capturing common semantic understanding. By using the biaffine network to score pairs of start and end positions in the original text, various classification and extraction structures can be converted into a universal span-decoding approach. Experiments show that UBERT achieves state-of-the-art performance on 7 NLU tasks and 14 datasets in few-shot and zero-shot settings, and realizes the unification of a wide range of information extraction and linguistic reasoning tasks.
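The abstract does not include code, but the core idea of scoring every (start, end) token pair with a biaffine network can be illustrated compactly. The following is a minimal sketch in PyTorch, not the authors' released implementation; all names (BiaffineSpanScorer, ffnn_size, the projection layers) and the exact parameterization are illustrative assumptions about a standard biaffine span scorer over BERT-style encoder outputs.

```python
# A minimal sketch of biaffine span scoring (illustrative, not UBERT's code).
# Assumes an encoder output of shape (batch, seq_len, hidden_size).
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Scores every (start, end) token pair so that classification and
    extraction tasks can share a single span-decoding head."""
    def __init__(self, hidden_size: int, ffnn_size: int, num_labels: int):
        super().__init__()
        # Separate projections for candidate start and end positions.
        self.start_proj = nn.Sequential(nn.Linear(hidden_size, ffnn_size), nn.GELU())
        self.end_proj = nn.Sequential(nn.Linear(hidden_size, ffnn_size), nn.GELU())
        # Biaffine tensor; the extra +1 dimensions fold the linear (bias)
        # terms into a single bilinear product.
        self.biaffine = nn.Parameter(
            torch.randn(ffnn_size + 1, num_labels, ffnn_size + 1) * 0.02
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        s = self.start_proj(hidden_states)   # (B, L, F)
        e = self.end_proj(hidden_states)     # (B, L, F)
        ones = s.new_ones(*s.shape[:2], 1)
        s = torch.cat([s, ones], dim=-1)     # append bias term -> (B, L, F+1)
        e = torch.cat([e, ones], dim=-1)
        # scores[b, i, c, j] = s[b, i] @ biaffine[:, c, :] @ e[b, j]
        scores = torch.einsum("bif,fcg,bjg->bicj", s, self.biaffine, e)
        return scores.permute(0, 1, 3, 2)    # (B, L_start, L_end, num_labels)

# Usage: every task reduces to picking (start, end, label) triples, e.g. by
# thresholding scores or taking the argmax label per span.
scorer = BiaffineSpanScorer(hidden_size=768, ffnn_size=150, num_labels=3)
h = torch.randn(2, 32, 768)   # stand-in for BERT encoder output
span_scores = scorer(h)       # (2, 32, 32, 3)
```

Under this framing, a classification task can be treated as a degenerate span (e.g., over a special token), while extraction tasks decode spans directly, which is what makes a single decoding head reusable across heterogeneous output schemas.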


Related research

02/09/2022 · Generating Training Data with Language Models: Towards Zero-Shot Language Understanding
Pretrained language models (PLMs) have demonstrated remarkable performan...

03/21/2023 · Is BERT Blind? Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding
Most humans use visual imagination to understand and reason about langua...

01/09/2023 · Universal Information Extraction as Unified Semantic Matching
The challenge of information extraction (IE) lies in the diversity of la...

09/27/2021 · FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
The few-shot natural language understanding (NLU) task has attracted muc...

09/16/2019 · Probing Natural Language Inference Models through Semantic Fragments
Do state-of-the-art models for language understanding already have, or c...

06/12/2019 · CogCompTime: A Tool for Understanding Time in Natural Language Text
Automatic extraction of temporal information in text is an important com...

06/02/2021 · Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?
Is it possible to use natural language to intervene in a model's behavio...
