Recent studies have demonstrated remarkable capabilities of pre-trained mul...
A big convergence of language, multimodal perception, action, and world...
Well-designed prompts can guide text-to-image models to generate amazing...
Large Transformers have achieved state-of-the-art performance across man...
In this paper, we elaborate upon recipes for building multilingual repre...
Unsupervised question answering is an attractive task due to its indepen...
Foundation models have received much attention due to their effectivenes...
Sparse mixture of experts provides larger model capacity while requiring...
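Sparse MoE replaces a dense feed-forward block with several expert networks and routes each token to only one (or a few) of them, so parameter count grows with the number of experts while per-token compute stays roughly constant. Below is a minimal top-1-routing sketch; the class name, dimensions, and routing details are illustrative assumptions, not the specific method of the paper excerpted above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Illustrative top-1 sparse mixture-of-experts layer."""

    def __init__(self, d_model=64, d_hidden=256, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)  # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                    # x: (num_tokens, d_model)
        probs = F.softmax(self.gate(x), dim=-1)
        top_p, top_i = probs.max(dim=-1)     # top-1 routing decision
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            routed = top_i == e              # tokens assigned to expert e
            if routed.any():
                # Scale by the gate probability so routing stays differentiable.
                out[routed] = top_p[routed].unsqueeze(-1) * expert(x[routed])
        return out

moe = SparseMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Capacity scales with num_experts, yet each token runs through exactly one expert, which is the capacity-versus-compute trade-off the abstract refers to.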
The success of pretrained cross-lingual language models relies on two es...
In this paper, we introduce ELECTRA-style tasks to cross-lingual languag...
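ELECTRA-style pretraining swaps masked-token prediction for replaced token detection: some input tokens are replaced with plausible alternatives, and the model classifies each token as original or replaced. The sketch below shows only the data construction, substituting random tokens for ELECTRA's small generator network; names and sizes are illustrative assumptions.

```python
import torch

def rtd_example(input_ids, replace_prob=0.15, vocab_size=32000):
    """Build a corrupted sequence and per-token replaced/original labels."""
    mask = torch.rand(input_ids.shape) < replace_prob
    random_tokens = torch.randint(vocab_size, input_ids.shape)
    corrupted = torch.where(mask, random_tokens, input_ids)
    # Compare against the original: a sampled token may coincide with it.
    labels = (corrupted != input_ids).float()  # 1 = replaced, 0 = original
    return corrupted, labels

input_ids = torch.randint(32000, (2, 16))
corrupted, labels = rtd_example(input_ids)
# A discriminator would then be trained with per-token binary cross-entropy:
# F.binary_cross_entropy_with_logits(discriminator(corrupted), labels)
print(corrupted.shape, labels.sum().item())
```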
Fine-tuning pre-trained cross-lingual language models can transfer task-...
Cross-lingual language models are typically pretrained with masked l...
Multilingual T5 (mT5) pretrains a sequence-to-sequence model on massive...
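mT5 inherits T5's span-corruption objective: contiguous input spans are dropped and replaced with sentinel tokens, and the decoder reconstructs the dropped spans behind the matching sentinels. A minimal sketch with a single fixed span follows; the helper name and indices are illustrative.

```python
def span_corruption(tokens, span_start=3, span_len=2):
    """T5-style span corruption on a whitespace-tokenized sentence."""
    inputs = tokens[:span_start] + ["<extra_id_0>"] + tokens[span_start + span_len:]
    targets = ["<extra_id_0>"] + tokens[span_start:span_start + span_len] + ["<extra_id_1>"]
    return inputs, targets

tokens = "Thank you for inviting me to your party last week".split()
inputs, targets = span_corruption(tokens)
print(" ".join(inputs))   # Thank you for <extra_id_0> to your party last week
print(" ".join(targets))  # <extra_id_0> inviting me <extra_id_1>
```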
Recently, much attention has been devoted to building reliable named entity...
Multilingual machine translation enables a single model to translate bet...
In this work, we formulate cross-lingual language model pre-training as...
Recently, open-domain dialogue systems have attracted growing attention...
Multilingual pretrained language models (such as multilingual BERT) have...
In this work we focus on transferring supervision signals of natural lan...
The task of table structure recognition aims to recognize the internal s...