GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight Gated Injection Method

10/23/2020
by Nicole Peinelt, et al.

Large pre-trained language models such as BERT have been the driving force behind recent improvements across many NLP tasks. However, BERT is only trained to predict missing words - either through masking or next sentence prediction - and has no explicit knowledge of lexical, syntactic or semantic information beyond what it picks up through unsupervised pre-training. We propose a novel method to explicitly inject linguistic knowledge in the form of word embeddings into any layer of a pre-trained BERT. Our performance improvements on multiple semantic similarity datasets when injecting dependency-based and counter-fitted embeddings indicate that such information is beneficial and currently missing from the original model. Our qualitative analysis shows that counter-fitted embedding injection particularly helps with cases involving synonym pairs.
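As a rough illustration of the idea, the sketch below shows one way a gated injection of external word embeddings into a BERT layer could look in PyTorch: externally derived vectors (e.g. dependency-based or counter-fitted embeddings) are projected into BERT's hidden dimension and added to a layer's hidden states through a learned sigmoid gate. The module name, parameter names, and the exact form of the gate are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of gated embedding injection into transformer hidden states.
# Names and the precise gating scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class GatedInjection(nn.Module):
    """Adds externally derived word embeddings to BERT hidden states
    through a learned, sigmoid-squashed gate."""

    def __init__(self, ext_dim: int, hidden_dim: int):
        super().__init__()
        # Project the external embedding space into BERT's hidden space.
        self.proj = nn.Linear(ext_dim, hidden_dim)
        # One gate value per hidden dimension, squashed to (0, 1) by a sigmoid.
        self.gate = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, hidden_states: torch.Tensor, ext_embeddings: torch.Tensor) -> torch.Tensor:
        # hidden_states:  (batch, seq_len, hidden_dim) output of some BERT layer
        # ext_embeddings: (batch, seq_len, ext_dim) aligned to the same tokens
        injected = torch.sigmoid(self.gate) * self.proj(ext_embeddings)
        return hidden_states + injected


if __name__ == "__main__":
    batch, seq_len, ext_dim, hidden_dim = 2, 8, 300, 768
    layer = GatedInjection(ext_dim, hidden_dim)
    h = torch.randn(batch, seq_len, hidden_dim)  # stand-in for a BERT layer output
    e = torch.randn(batch, seq_len, ext_dim)     # stand-in for counter-fitted vectors
    print(layer(h, e).shape)                     # torch.Size([2, 8, 768])

Initialising the gate at zero means the injection starts at half strength (sigmoid(0) = 0.5) and is tuned during fine-tuning; a per-dimension gate lets the model decide which components of the external signal to admit.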
