While large language models (LLMs) exhibit impressive language understan...
Autonomous agents empowered by Large Language Models (LLMs) have undergo...
Despite the advancements of open-source large language models (LLMs) and...
Instruction tuning has emerged as a promising approach to enhancing larg...
Parameter-efficient tuning (PET) methods can effectively drive extremely...
Large Language Models (LLMs) have demonstrated significant progress in u...
Fine-tuning on instruction data has been widely validated as an effectiv...
Continual pre-training is the paradigm where pre-trained language models...
Long-form question answering (LFQA) aims at answering complex, open-ende...
Humans possess an extraordinary ability to create and utilize tools, all...
How humans infer discrete emotions is a fundamental research question in...
Recent years have witnessed the prevalent application of pre-trained lan...
Delta tuning (DET, also known as parameter-efficient tuning) is deemed a...
Current pre-trained language models (PLM) are typically trained with sta...
Prompt tuning (PT) is a promising parameter-efficient method to utilize ...
How can pre-trained language models (PLMs) learn universal representatio...
Recent explorations of large-scale pre-trained language models (PLMs) su...
Pre-trained Language Models (PLMs) have shown strong performance in vari...
Pre-trained Language Models (PLMs) have proven to be beneficial for vari...
Deep neural networks usually require massive labeled data, which restric...