We propose a new paradigm for universal information extraction (IE) that...
Bidirectional Encoder Representations from Transformers or BERT <cit.> h...
We propose a new paradigm for zero-shot learners that is format agnostic...
This report describes a pre-trained language model Erlangshen with prope...
Even as pre-trained language models share a semantic encoder, natural language...
Pretrained language models have served as important backbones for natura...