State-of-the-art encoder-decoder models (e.g. for machine translation (M...
We introduce CM3, a family of causally masked generative models trained ...
In order to address the increasing demands of real-world applications, t...
With the rise of large-scale pre-trained language models, open-domain qu...
We present VideoCLIP, a contrastive approach to pre-train a unified mode...
We introduce HTLM, a hyper-text language model trained on a large-scale ...
We review the EfficientQA competition from NeurIPS 2020. The competition...
We study open-domain question answering (ODQA) with structured, unstruct...
We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) m...
Supervised ASR models have reached unprecedented levels of accuracy, tha...
The recent success of transformer networks for neural machine translatio...