Given BM25's enduring competitiveness as an information retrieval baseline...
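Since this entry leans on BM25, a brief sketch of the scoring function may help. The snippet below implements the standard Okapi BM25 formula over a toy tokenized corpus; the corpus, query, and parameter values k1 and b are illustrative defaults, not taken from the paper.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Okapi BM25: sum over query terms of IDF times saturated TF."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)   # smoothed IDF
        num = tf[term] * (k1 + 1)
        den = tf[term] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * num / den
    return score

# Toy corpus of pre-tokenized documents (made up for illustration).
corpus = [["neural", "retrieval"], ["bm25", "baseline", "retrieval"], ["coreference"]]
print(bm25_score(["retrieval", "baseline"], corpus[1], corpus))
```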
We investigate the ability of transformer models to approximate the CKY...
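For context, CKY is the classic chart-parsing algorithm for context-free grammars in Chomsky normal form. Below is a minimal CKY recognizer with a toy grammar invented for illustration; the paper itself concerns approximating this procedure with transformers, not this implementation.

```python
def cky_recognize(words, lexicon, rules):
    """CKY recognition for a grammar in Chomsky normal form.
    lexicon: maps terminal -> set of nonterminals (rules A -> word)
    rules:   maps (B, C)  -> set of nonterminals (rules A -> B C)
    """
    n = len(words)
    # chart[i][j] holds nonterminals spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for B in chart[i][k]:
                    for C in chart[k][j]:
                        chart[i][j] |= rules.get((B, C), set())
    return "S" in chart[0][n]

# Toy CNF grammar: S -> NP VP, NP -> "they", VP -> "fish"
lexicon = {"they": {"NP"}, "fish": {"VP"}}
rules = {("NP", "VP"): {"S"}}
print(cky_recognize(["they", "fish"], lexicon, rules))  # True
```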
This paper introduces the shared task of summarizing documents in several...
While coreference resolution is defined independently of dataset domain,...
We introduce SummScreen, a summarization dataset composed of pairs of TV...
Transformer language models have made tremendous strides in natural language...
We propose to tackle conditional text generation tasks, especially those...
Datasets for data-to-text generation typically focus either on multi-domain...
Most prior work on exemplar-based syntactically controlled paraphrase generation...
Long document coreference resolution remains a challenging task due to the...
While much work on deep latent variable models of text uses continuous latent...
We propose to train a non-autoregressive machine translation model to minimize...
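As background on this setting, a non-autoregressive translation model factorizes p(y|x) as a product of per-position distributions, so all target tokens can be decoded in one parallel pass rather than left to right. The sketch below illustrates only that factorization, using random, hypothetical per-position logits; it is not the paper's training objective.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<pad>", "the", "cat", "sat"]
T = 4  # target length, predicted up front in non-autoregressive models

# Hypothetical per-position distributions from one parallel forward pass.
# Because p(y|x) = prod_t p(y_t|x), each position is decoded independently,
# with no dependence on previously generated tokens.
logits = rng.normal(size=(T, len(vocab)))
tokens = [vocab[i] for i in logits.argmax(axis=-1)]
print(tokens)
```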
We propose learning discrete structured representations from unlabeled data...
We propose to learn deep undirected graphical models (i.e., MRFs), with ...
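To make the MRF terminology concrete, the sketch below defines a tiny pairwise undirected model: a configuration's probability is proportional to its exponentiated score, and the partition function Z is computed by brute-force enumeration. All potentials are made up, and enumeration is feasible only because the model has four binary variables.

```python
import itertools
import numpy as np

# A minimal pairwise MRF sketch: binary variables on a chain, with
# illustrative unary scores and a single pairwise weight rewarding
# agreeing neighbors. p(x) = exp(score(x)) / Z.
n = 4
unary = np.array([0.5, -0.2, 0.1, 0.3])   # per-variable scores (made up)
pairwise = 0.8                            # reward when neighbors agree

def score(x):
    s = float(np.dot(unary, x))
    s += pairwise * sum(x[i] == x[i + 1] for i in range(n - 1))
    return s

configs = list(itertools.product([0, 1], repeat=n))
Z = sum(np.exp(score(x)) for x in configs)  # partition function
best = max(configs, key=score)              # MAP configuration
print("MAP:", best, "p =", np.exp(score(best)) / Z)
```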
Retrieve-and-edit based approaches to structured prediction, where structured...
Prior work on controllable text generation usually assumes that the cont...
We propose a generative model for a sentence that uses two latent variab...
There has been much recent, exciting work on combining the complementary...
Reading comprehension tasks test the ability of models to process long-t...
While neural encoder-decoder models have had significant empirical success...
Amortized variational inference (AVI) replaces instance-specific local inference...
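For readers new to the term, AVI replaces a per-instance optimization over variational parameters with a single shared inference network that maps each observation directly to those parameters. The sketch below shows the idea with a linear "encoder"; the weights are hypothetical stand-ins, since in practice they are trained jointly with the generative model by maximizing the ELBO.

```python
import numpy as np

# Toy setup: latent z ~ N(mu, sigma^2), likelihood x ~ N(z, 1).
# Classic SVI fits (mu_i, log_sigma_i) separately for every instance;
# amortized VI instead shares one inference network f(x) -> (mu, log_sigma).

def encoder(x, W, b):
    """Amortized inference network: one forward pass maps an observation
    to its variational parameters (a linear map, for brevity)."""
    mu, log_sigma = W @ np.array([x, 1.0]) + b
    return mu, log_sigma

# Hypothetical "trained" weights; real values would come from ELBO training.
W = np.array([[0.5, 0.0], [0.0, 0.0]])
b = np.array([0.0, -0.5])

for x in [1.0, -2.0, 3.5]:
    mu, log_sigma = encoder(x, W, b)
    print(f"x={x:+.1f} -> q(z|x) = N({mu:.2f}, {np.exp(log_sigma)**2:.2f})")
```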
Recent neural models have shown significant progress on the problem of generating...
While Truncated Back-Propagation through Time (BPTT) is the most popular...
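As a refresher, truncated BPTT unrolls a recurrent model over fixed-length windows and detaches the hidden state between them, so gradients never flow past a window boundary. A minimal PyTorch sketch, assuming a toy next-step prediction task with made-up sizes:

```python
import torch
import torch.nn as nn

seq_len, chunk, d = 64, 8, 16
x = torch.randn(seq_len, 1, d)          # (time, batch, features)
rnn = nn.RNN(d, d)
readout = nn.Linear(d, d)
opt = torch.optim.SGD(list(rnn.parameters()) + list(readout.parameters()), lr=0.01)

h = torch.zeros(1, 1, d)
for t in range(0, seq_len - chunk, chunk):
    h = h.detach()                      # cut the graph: gradients stop here
    out, h = rnn(x[t:t + chunk], h)     # forward over one truncation window
    target = x[t + 1:t + chunk + 1]     # predict the next observation
    loss = ((readout(out) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                     # backprop only within the window
    opt.step()
```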
Sequence-to-Sequence (seq2seq) modeling has rapidly become an important...
There is compelling evidence that coreference prediction would benefit from...