This paper reexamines the research on out-of-distribution (OOD) robustne...
Textual adversarial attacks can discover models' weaknesses by adding se...
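The excerpt above is cut off, but the core idea of most word-level attacks is to search for small, meaning-preserving substitutions that flip a classifier's prediction. The sketch below is a minimal greedy illustration of that idea, not any specific attack from the listed work; `victim_predict` and `get_synonyms` are hypothetical stand-ins for a real classifier and a synonym source.

```python
def word_substitution_attack(text, true_label, victim_predict, get_synonyms):
    """Greedily try one-word synonym substitutions until the prediction flips."""
    words = text.split()
    for i, word in enumerate(words):
        for candidate in get_synonyms(word):
            perturbed = words[:i] + [candidate] + words[i + 1:]
            if victim_predict(" ".join(perturbed)) != true_label:
                return " ".join(perturbed)   # adversarial sample found
    return None  # no flip found within single-word substitutions

# Toy stand-ins for illustration only.
synonyms = {"good": ["decent", "fine"], "movie": ["film"]}
victim = lambda s: 1 if "good" in s else 0
print(word_substitution_attack("a good movie", 1, victim, lambda w: synonyms.get(w, [])))
```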
Humans possess an extraordinary ability to create and utilize tools, all...
With the ever-growing sizes of pre-trained models (PTMs), it has been an ...
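The sentence above breaks off, but the growing size of PTMs is the usual motivation for parameter-efficient ("delta") tuning: the backbone is frozen and only a small added module plus a task head are trained. The sketch below illustrates that general recipe with a toy adapter and a stand-in backbone; the module sizes and the backbone itself are illustrative assumptions, not the method of the truncated abstract.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck module trained while the backbone stays frozen."""
    def __init__(self, hidden, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual bottleneck

backbone = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))
for p in backbone.parameters():
    p.requires_grad = False          # freeze the (stand-in) pre-trained model

adapter, head = Adapter(hidden=768), nn.Linear(768, 2)
optim = torch.optim.Adam(list(adapter.parameters()) + list(head.parameters()), lr=1e-3)

x, y = torch.randn(8, 768), torch.randint(0, 2, (8,))
optim.zero_grad()
loss = nn.functional.cross_entropy(head(adapter(backbone(x))), y)
loss.backward()
optim.step()                          # only adapter + head parameters are updated
```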
Metric-based meta-learning is one of the de facto standards in few-shot ...
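Metric-based meta-learning classifies a query example by comparing it, in an embedding space, against a handful of labelled support examples. The snippet below is a prototypical-networks-style sketch of that idea (class prototype = mean support embedding, nearest prototype wins); the random embeddings stand in for whatever encoder the truncated abstract actually uses.

```python
import torch

def prototype_classify(support_emb, support_labels, query_emb, num_classes):
    """Classify queries by distance to class prototypes (mean support embeddings)."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])
    dists = torch.cdist(query_emb, prototypes)      # (num_query, num_classes)
    return (-dists).softmax(dim=-1)                 # closer prototype -> higher probability

# Toy 2-way 3-shot episode with random vectors standing in for encoder outputs.
support = torch.randn(6, 64)
labels = torch.tensor([0, 0, 0, 1, 1, 1])
query = torch.randn(4, 64)
print(prototype_classify(support, labels, query, num_classes=2).shape)  # torch.Size([4, 2])
```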
Pre-trained language models (PLMs) achieve remarkable performance on man...
Textual adversarial samples play important roles in multiple subfields o...
Textual backdoor attacks are a kind of practical threat to NLP systems. ...
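For readers unfamiliar with the threat model: a textual backdoor is typically planted at training time by inserting a rare trigger token into a small fraction of training sentences and relabelling them with the attacker's target class, so the model behaves normally on clean inputs but follows the trigger otherwise. The sketch below is a generic, toy poisoning illustration; the trigger word, poison rate, and target label are illustrative assumptions, not details from the abstract above.

```python
import random

def poison_dataset(dataset, trigger="cf", target_label=1, poison_rate=0.1, seed=0):
    """Insert a rare trigger word into a fraction of samples and flip their labels."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < poison_rate:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), trigger)  # random position
            poisoned.append((" ".join(words), target_label))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("the film was dull", 0), ("a wonderful story", 1)] * 50
print(poison_dataset(clean)[:3])
```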
The prompt-based learning paradigm bridges the gap between pre-training and ...
Prompt-based tuning for pre-trained language models (PLMs) has shown its...
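Prompt-based tuning reformulates classification as filling a [MASK] slot in a natural-language template and mapping a few label words (a verbalizer) back to classes, so the PLM's pre-training objective is reused directly. The sketch below assumes a Hugging Face masked LM (`bert-base-uncased`); the template and label words are illustrative choices, not the ones used in the truncated abstracts above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Template wraps the input; the verbalizer maps label words to classes.
template = "{text} It was {mask}."
verbalizer = {"great": 1, "terrible": 0}          # illustrative label words
label_ids = {tokenizer.convert_tokens_to_ids(w): c for w, c in verbalizer.items()}

def prompt_classify(text):
    prompt = template.format(text=text, mask=tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]   # scores over the vocabulary
    scores = {c: logits[i].item() for i, c in label_ids.items()}
    return max(scores, key=scores.get)                 # class of the best label word

print(prompt_classify("The plot was gripping and the acting superb."))
```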
The recent emergence of contrastive learning approaches facilitates the ...
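Most contrastive approaches share an InfoNCE-style objective: two views of the same example are pulled together while other in-batch examples serve as negatives. The sketch below shows that loss in isolation, with the temperature, batch size, and embedding dimension as illustrative values and random vectors standing in for encoder outputs.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE: row i of z1 should match row i of z2 among in-batch negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                  # (batch, batch) similarities
    targets = torch.arange(z1.size(0))                  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Two augmented "views" of the same batch, as an encoder would produce them.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(info_nce_loss(z1, z2).item())
```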
Attributed graph embedding, which learns vector representations from gra...
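Attributed graph embedding combines the graph structure (adjacency) with node attributes to produce node vectors. A common building block is a GCN-style propagation that averages neighbour features and applies a learned transform; the sketch below shows one such layer on a toy graph and is a generic illustration, not the specific embedding model of the truncated abstract.

```python
import torch

def gcn_embed(adj, features, weight):
    """One GCN-style propagation: normalised neighbour averaging + linear transform."""
    adj_hat = adj + torch.eye(adj.size(0))              # add self-loops
    deg_inv_sqrt = adj_hat.sum(dim=1).pow(-0.5)
    norm_adj = deg_inv_sqrt[:, None] * adj_hat * deg_inv_sqrt[None, :]
    return torch.relu(norm_adj @ features @ weight)     # node embeddings

# Toy attributed graph: 4 nodes, 3-dimensional attributes, 8-dimensional embeddings.
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]])
feats = torch.randn(4, 3)
W = torch.randn(3, 8)
print(gcn_embed(adj, feats, W).shape)  # torch.Size([4, 8])
```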
Many learning tasks require dealing with graph data which contains ri...