We present the results of the NLP Community Metasurvey. Run from May to ...
Summarization datasets are often assembled either by scraping naturally ...
Transformer-based models generally allocate the same amount of computati...
Noisy channel models have been especially effective in neural machine tr...
To enable building and testing models on long-document comprehension, we...
We aim to renew interest in a particular multi-document summarization (M...
Recent years have seen numerous NLP datasets introduced to evaluate the ...
Current approaches to text generation largely rely on autoregressive mod...
We propose to train a non-autoregressive machine translation model to mi...
While pretrained models such as BERT have shown large gains across natur...
Despite strong performance on a variety of tasks, neural sequence models...
Deep energy-based models are powerful, but pose challenges for learning ...
The difficulty of textual style transfer lies in the lack of parallel co...