Parameter-efficient tuning (PET) has been widely explored in recent year...
This work examines the presence of modularity in pre-trained Transformer...
Injecting external knowledge can improve the performance of pre-trained ...
Large-scale pre-trained models (PTMs) have been widely used in document-...
Recent research demonstrates that external knowledge injection can advan...
Question Answering (QA) is the task of automatically answering questions...
For many real-world applications, user-generated inputs usually cont...
Transformer-based pre-trained language models have demonstrated superior...
Prompting, which casts downstream applications as language modeling task...
Entity linking aims to link ambiguous mentions to their corresponding en...
The rapid development of deep natural language processing (NLP) models f...
Recent works have shown promising results of prompt tuning in stimulatin...
Transformer-based pre-trained language models can achieve superior perfo...
Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabi...
Conventional tokenization methods for Chinese pretrained language models...
Recent explorations of large-scale pre-trained language models (PLMs) su...
Backdoor attacks are a kind of insidious security threat against machine...
Fine-tuning pre-trained language models (PLMs) has demonstrated its effe...
Due to the success of pre-trained models (PTMs), people usually fine-tun...
Pre-trained Language Models (PLMs) have proven to be beneficial for vari...
Several recent efforts have been devoted to enhancing pre-trained langua...
Recently, pre-trained language models mostly follow the pre-training-the...
While adversarial games have been well studied in various board games an...
Neural language representation models such as BERT pre-trained on large-...
There has recently been a surge of approaches that learn low-dimensional embed...
Many learning tasks require dealing with graph data, which contains ri...