Nuclear Magnetic Resonance (NMR) spectroscopy has served as a powerful a...
Due to the unbalanced training data distribution, the language ability o...
Neural machine translation has achieved promising results on many transl...
Instruction tuning has significantly advanced large language models (LLM...
Recently, Pretrained Language Models (PLMs) have been serving as general...
Language models (LMs) gradually become general-purpose interfaces in the...
Multilingual understanding models (or encoder-based), pre-trained via ma...
Previous studies have shown that large language models (LLMs) like GPTs ...
Generative Language Models (GLMs) have demonstrated capabilities to stor...
Large language models (LLMs) have demonstrated remarkable potential in h...
With promising yet saturated results in high-resource settings, low-reso...
In recent years, In-context Learning (ICL) has gained increasing attenti...
Neural speaker embeddings encode the speaker's speech characteristics th...
With the increasing ability of large language models (LLMs), in-context ...
Traditional multilingual neural machine translation (MNMT) uses a single...
With increasing scale, large language models demonstrate both quantitati...
Answering complex questions over textual resources remains a challenging...
Previous literature has proved that Pretrained Language Models (PLMs) ca...
Physics-informed neural networks (PINNs) have been widely utilized to...
Speaker adaptation is important to build robust automatic speech recogni...
How do masked language models (MLMs) such as BERT learn contextual repre...
Many existing neural architecture search (NAS) solutions rely on downstr...
In recent years, larger and deeper models have been springing up and continuou...
The recently proposed Conformer architecture has been successfully used ...
We introduce MTG, a new benchmark suite for training and evaluating mult...
Sequence-to-sequence (seq2seq) problems such as machine translation are ...
It is well accepted that the choice of token vocabulary largely affects ...
Long text generation is an important but challenging task. The main probl...
In sequence to sequence learning, the self-attention mechanism proves to...
Layer normalization (LayerNorm) is a technique to normalize the distribu...
Commonsense question answering aims to answer questions that require ba...
We study fact-checking in this paper, which aims to verify a textual cla...
Chinese word segmentation (CWS) is a fundamental step of Chinese natural...
Automatic article commenting is helpful in encouraging user engagement a...
Cross-lingual word embeddings aim to capture common linguistic regularit...
Conversational semantic parsing over tables requires knowledge acquiring...
Automatic evaluation of semantic rationality is an important yet challen...
Generating semantically coherent responses is still a major challenge in...
This paper explores a new natural language processing task, review-drive...
The task of sentiment modification requires reversing the sentiment of t...
Narrative story generation is a challenging problem because it demands t...
The goal of sentiment-to-sentiment "translation" is to change the underl...
Existing text generation methods tend to produce repeated and "boring" e...
Named Entity Recognition and Relation Extraction for Chinese literature ...
In recent years, neural networks have proven to be effective in Chinese ...
In recent years, more research has been devoted to studying the subtask ...
Recently, encoder-decoder models have been widely used in social media text su...
As traditional neural networks consume a significant amount of computing...
Current Chinese social media text summarization models are based on an e...
Recent studies have shown effectiveness in using neural networks for Chi...