Generating Derivational Morphology with BERT

05/02/2020
by Valentin Hofmann, et al.

Can BERT generate derivationally complex words, i.e., words formed by affixation, such as "readable" from "read"? We present the first study investigating this question. We find that BERT with a derivational classification layer outperforms an LSTM-based model. Furthermore, our experiments show that the input segmentation crucially impacts BERT's derivational knowledge, both during training and at inference.
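The role of input segmentation can be made concrete with a small sketch. The snippet below is our illustration, not the paper's code; it assumes the HuggingFace transformers library and the standard bert-base-uncased vocabulary, and contrasts BERT's default WordPiece segmentation of a derivationally complex word with a manually supplied, morpheme-aligned segmentation.

```python
# Illustrative sketch (not the authors' code): how the choice of input
# segmentation changes what BERT sees for a derivationally complex word.
# Assumes the HuggingFace `transformers` library and `bert-base-uncased`.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Default WordPiece segmentation is frequency-driven, so the resulting
# pieces need not coincide with the word's morphemes.
print(tokenizer.tokenize("superbizarre"))

# Pre-splitting at the derivational boundary (prefix "super-" + stem
# "bizarre") yields a morpheme-aligned segmentation instead. Comparing a
# model's behavior under the two segmentations is one way to probe how
# segmentation shapes its derivational knowledge.
print(tokenizer.tokenize("super bizarre"))
```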

Related research

01/02/2021 · Superbizarre Is Not Superb: Improving BERT's Interpretations of Complex Words with Derivational Morphology
How does the input segmentation of pretrained language models (PLMs) aff...

07/08/2022 · ABB-BERT: A BERT model for disambiguating abbreviations and contractions
Abbreviations and contractions are commonly found in text across differe...

10/16/2019 · Content Enhanced BERT-based Text-to-SQL Generation
We present a simple method to leverage the table content for the BERT-b...

11/03/2021 · BERT-DRE: BERT with Deep Recursive Encoder for Natural Language Sentence Matching
This paper presents a deep neural architecture for Natural Language Sen...

05/12/2021 · Better than BERT but Worse than Baseline
This paper compares BERT-SQuAD and Ab3P on the Abbreviation Definition I...

04/09/2022 · FoundationLayerNorm: Scaling BERT and GPT to 1,000 Layers
The mainstream BERT/GPT model contains only 10 to 20 layers, and there i...

02/22/2021 · Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks
We study the problem of incorporating prior knowledge into a deep Transf...
