BURT: BERT-inspired Universal Representation from Learning Meaningful Segment

12/28/2020
by Yian Li, et al.

Although pre-trained contextualized language models such as BERT achieve significant performance on various downstream tasks, current language representation still focuses on a linguistic objective at a single granularity, which may not be applicable when multiple levels of linguistic units are involved at the same time. This work therefore introduces and explores universal representation learning, i.e., embeddings of different levels of linguistic units in a uniform vector space. We present a universal representation model, BURT (BERT-inspired Universal Representation from learning meaningful segmenT), to encode different levels of linguistic units into the same vector space. Specifically, we extract and mask meaningful segments based on point-wise mutual information (PMI) to incorporate objectives of different granularities into the pre-training stage. We conduct experiments on English and Chinese datasets, including the GLUE and CLUE benchmarks, where our model surpasses its baselines and alternatives on a wide range of downstream tasks. We describe our approach to constructing analogy datasets of words, phrases and sentences, and experiment with multiple representation models to examine geometric properties of the learned vector space through a task-independent evaluation. Finally, we verify the effectiveness of our unified pre-training strategy in two real-world text matching scenarios. As a result, our model significantly outperforms existing information retrieval (IR) methods and yields universal representations that can be directly applied to retrieval-based question-answering and natural language generation tasks.
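To make the segment-extraction idea concrete, below is a minimal sketch (not the authors' released code) of scoring contiguous n-grams with point-wise mutual information and keeping high-scoring ones as candidates for whole-segment masking during pre-training. The scoring formula used here (log probability of the n-gram minus the sum of its tokens' log unigram probabilities), the threshold, and the whitespace tokenization are illustrative assumptions; BURT's actual extraction and masking procedure may differ.

```python
# Illustrative sketch: PMI-based extraction of candidate segments for masking.
# Assumed scoring: PMI(w1..wn) = log p(w1..wn) - sum_i log p(wi).
import math
from collections import Counter
from typing import List, Tuple


def ngram_pmi(corpus: List[List[str]],
              max_n: int = 3,
              threshold: float = 0.0) -> List[Tuple[Tuple[str, ...], float]]:
    """Return n-grams (lengths 2..max_n) whose PMI exceeds `threshold`,
    i.e. segments whose tokens co-occur contiguously more often than
    independence would predict."""
    unigrams = Counter(tok for sent in corpus for tok in sent)
    total_tokens = sum(unigrams.values())

    scored = []
    for n in range(2, max_n + 1):
        ngrams = Counter(
            tuple(sent[i:i + n])
            for sent in corpus
            for i in range(len(sent) - n + 1)
        )
        total_ngrams = sum(ngrams.values())
        for gram, count in ngrams.items():
            p_gram = count / total_ngrams
            log_indep = sum(math.log(unigrams[w] / total_tokens) for w in gram)
            pmi = math.log(p_gram) - log_indep
            if pmi > threshold:
                scored.append((gram, pmi))
    return sorted(scored, key=lambda x: -x[1])


if __name__ == "__main__":
    # Toy corpus; in practice the statistics come from the pre-training data.
    corpus = [
        "natural language processing is fun".split(),
        "natural language understanding needs context".split(),
        "pre-trained language models learn context".split(),
    ]
    for gram, score in ngram_pmi(corpus, max_n=2, threshold=1.0)[:5]:
        print(" ".join(gram), round(score, 2))
```

Segments selected this way (e.g. frequent collocations such as "natural language" in the toy corpus above) can then be masked as whole units, which is one way to expose the model to objectives at multiple granularities rather than single-token masking alone.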

