Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolution...
In natural language processing, pre-trained language models have become ...
Pre-trained language models achieve superior performance, but they are c...
The same multi-word expressions may have different meanings in different...
This paper describes our system designed for SemEval-2022 Task 8: Multil...
Pre-trained language models have prevailed in natural language proc...
Pre-trained Language Models (PLMs) have been widely used in various natu...
Multilingual pre-trained language models (MPLMs) can not only handle tas...
Multilingual pre-trained language models have shown impressive performan...
Adversarial training (AT), as a regularization method, has proved its effe...
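For context, adversarial training as a regularizer typically perturbs the input embeddings along the loss gradient and trains on both the clean and perturbed batches. Below is a minimal PyTorch sketch in the spirit of FGM (Miyato et al., 2017); the function and argument names are illustrative assumptions, not the system described in this entry.

```python
import torch

def fgm_step(model, embedding, loss_fn, inputs, labels, epsilon=1.0):
    """One adversarial-regularization step in the spirit of FGM.
    `embedding` is the word-embedding Parameter to perturb;
    all names here are illustrative, not any paper's API."""
    # Clean forward/backward pass: populates embedding.grad.
    clean_loss = loss_fn(model(inputs), labels)
    clean_loss.backward()

    # Perturb the embedding matrix along the L2-normalized gradient.
    backup = embedding.data.clone()
    grad_norm = torch.norm(embedding.grad)
    if grad_norm > 0 and not torch.isnan(grad_norm):
        embedding.data.add_(epsilon * embedding.grad / grad_norm)

    # Adversarial pass: its gradients accumulate on top of the clean
    # ones, so the optimizer step regularizes against the perturbation.
    adv_loss = loss_fn(model(inputs), labels)
    adv_loss.backward()

    # Restore the original weights before optimizer.step().
    embedding.data.copy_(backup)
    return clean_loss.item(), adv_loss.item()
```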
Multilingual pre-trained models have achieved remarkable transfer perfor...
Machine Reading Comprehension (MRC) is an important testbed for evaluati...
Owing to the continuous contributions by the Chinese NLP community, more...
In this paper, we introduce TextBrewer, an open-source knowledge distill...
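As background on the kind of knowledge distillation such a toolkit supports, a common objective mixes a temperature-scaled KL term against the teacher's logits with ordinary cross-entropy on the gold labels. The PyTorch sketch below illustrates that generic loss (Hinton et al., 2015); it is not TextBrewer's API, and all names are assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic soft-target + hard-target distillation objective."""
    # Soft targets: KL between temperature-scaled distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```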
The graph convolutional network (GCN) is now an effective tool to deal with ...
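For reference, a single GCN layer (Kipf & Welling, 2017) propagates node features through a symmetrically normalized adjacency with self-loops, H' = sigma(D^{-1/2}(A+I)D^{-1/2} H W). A minimal dense PyTorch sketch, with illustrative names and no claim about this entry's model:

```python
import torch

def gcn_layer(adj, features, weight):
    """One GCN propagation step on a dense adjacency matrix."""
    a_hat = adj + torch.eye(adj.size(0))        # add self-loops: A + I
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)     # diagonal of D^{-1/2}
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return torch.relu(a_norm @ features @ weight)
```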
This paper studies recommender systems with knowledge graphs, which can ...
Bidirectional Encoder Representations from Transformers (BERT) has shown...