Continuous Entailment Patterns for Lexical Inference in Context

09/08/2021

by Martin Schmitt et al.

Combining a pretrained language model (PLM) with textual patterns has been shown to help in both zero- and few-shot settings. For zero-shot performance, it makes sense to design patterns that closely resemble the text seen during self-supervised pretraining, because the model has never seen anything else. Supervised training allows for more flexibility: if we allow tokens outside the PLM's vocabulary, patterns can be adapted more flexibly to a PLM's idiosyncrasies. Contrasting patterns in which a "token" can be any continuous vector with those in which a discrete choice between vocabulary elements must be made, we call our method CONtinuous pAtterNs (CONAN). We evaluate CONAN on two established benchmarks for lexical inference in context (LIiC), a.k.a. predicate entailment, a challenging natural language understanding task with relatively small training sets. In a direct comparison with discrete patterns, CONAN consistently leads to improved performance, setting a new state of the art. Our experiments give valuable insights into the kind of pattern that enhances a PLM's performance on LIiC and raise important questions regarding our understanding of PLMs using text patterns.
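The core contrast the abstract describes — pattern "tokens" that are free continuous vectors versus tokens constrained to rows of the PLM's embedding table — can be sketched independently of any particular model. The snippet below is a minimal illustration using NumPy with a toy embedding matrix; all names, token ids, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 100, 16
embedding = rng.normal(size=(vocab_size, dim))  # toy PLM embedding table

# Discrete pattern: every slot must be the embedding of some vocabulary token.
discrete_ids = [5, 17, 42]                  # e.g. the tokens of a textual pattern
discrete_pattern = embedding[discrete_ids]  # shape (3, dim)

# Continuous pattern (CONAN-style): every slot is a free vector that can be
# optimized directly by gradient descent, unconstrained by the vocabulary.
continuous_pattern = rng.normal(size=(3, dim))

def build_input(premise_vecs, hypothesis_vecs, pattern):
    # Insert the pattern between the two phrases, the way a textual
    # entailment pattern sits between premise and hypothesis.
    return np.concatenate([premise_vecs, pattern, hypothesis_vecs], axis=0)

premise = embedding[[1, 2]]     # embedded premise tokens
hypothesis = embedding[[3]]     # embedded hypothesis token
seq = build_input(premise, hypothesis, continuous_pattern)
print(seq.shape)  # (6, 16): 2 premise + 3 pattern + 1 hypothesis vectors
```

In a real setup the continuous pattern vectors would be trainable parameters updated alongside (or instead of) the PLM's weights, while a discrete pattern would be fixed or searched over the vocabulary.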
