Towards Linguistically Informed Multi-Objective Pre-Training for Natural Language Inference

12/14/2022
by   Maren Pielka, et al.

We introduce a linguistically enhanced combination of pre-training methods for transformers. The pre-training objectives include POS tagging, synset prediction based on semantic knowledge graphs, and parent prediction based on dependency parse trees. Our approach achieves results competitive with the state of the art on the Natural Language Inference task. For smaller models in particular, the method yields a significant performance boost, showing that intelligent pre-training can compensate for fewer parameters and help build more efficient models. Combining POS tagging and synset prediction yields the overall best results.
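To make the multi-objective setup concrete, the following is a minimal sketch (not the authors' code) of how the three linguistic objectives could be trained jointly on top of a shared transformer encoder. The head names, label-space sizes, and the equal loss weighting are assumptions for illustration only; the paper's actual architecture and weighting may differ.

```python
# Sketch of multi-objective linguistic pre-training: a shared encoder with
# token-level heads for POS tagging, synset prediction, and dependency-parent
# prediction. All sizes and the equal loss weights are illustrative assumptions.
import torch.nn as nn
from transformers import AutoModel

class MultiObjectivePretrainer(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 num_pos_tags=17, num_synsets=10000, max_len=128):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One token-level classification head per auxiliary objective.
        self.pos_head = nn.Linear(hidden, num_pos_tags)     # POS tag per token
        self.synset_head = nn.Linear(hidden, num_synsets)   # WordNet-style synset id
        self.parent_head = nn.Linear(hidden, max_len)       # index of dependency head
        self.loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, input_ids, attention_mask,
                pos_labels, synset_labels, parent_labels):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Each objective contributes a token-level cross-entropy loss;
        # here the losses are summed with equal weights (an assumption).
        loss = (
            self.loss_fn(self.pos_head(states).transpose(1, 2), pos_labels)
            + self.loss_fn(self.synset_head(states).transpose(1, 2), synset_labels)
            + self.loss_fn(self.parent_head(states).transpose(1, 2), parent_labels)
        )
        return loss
```

After this pre-training phase, the encoder would typically be fine-tuned on an NLI dataset with a standard sentence-pair classification head, which is where the reported performance gains for smaller models would be measured.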


Related research

02/08/2023
Automating Code-Related Tasks Through Transformers: The Impact of Pre-training
Transformers have gained popularity in the software engineering (SE) lit...

10/04/2020
On Losses for Modern Language Models
BERT set many state-of-the-art results over varied NLU benchmarks by pre...

09/06/2019
Uncertain Natural Language Inference
We propose a refinement of Natural Language Inference (NLI), called Unce...

01/25/2022
Do Transformers Encode a Foundational Ontology? Probing Abstract Classes in Natural Language
With the methodological support of probing (or diagnostic classification...

07/31/2023
Structural Transfer Learning in NL-to-Bash Semantic Parsers
Large-scale pre-training has made progress in many fields of natural lan...

05/23/2022
Informed Pre-Training on Prior Knowledge
When training data is scarce, the incorporation of additional prior know...

11/05/2020
Training Transformers for Information Security Tasks: A Case Study on Malicious URL Prediction
Machine Learning (ML) for information security (InfoSec) utilizes distin...
