Large language models (LLMs) have shown impressive performance in following...
Text simplification research has mostly focused on sentence-level simplification...
State-of-the-art summarization models still struggle to be factually consistent...
Deep learning models for natural language processing (NLP) are increasingly...
Error analysis in NLP models is essential to successful model development...
Query-focused summarization (QFS) aims to produce summaries that answer...
Neural abstractive summarization models are susceptible to generating factually...
Novel neural architectures, training strategies, and the availability of...
Despite impressive performance on standard benchmarks, deep neural networks...
For protein sequence datasets, unlabeled data has greatly outpaced labeled...
Transformer architectures have proven to learn useful representations for...
Common methods for interpreting neural models in natural language processing...
The Transformer is a sequence model that forgoes traditional recurrent architectures...
The Transformer is a fully attention-based alternative to recurrent networks...
We present an open-source tool for visualizing multi-head self-attention...
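The last entry above describes an open-source tool for visualizing multi-head self-attention, but the snippet cuts off before naming it. As a rough illustration of the underlying task only, and not that tool's actual API, here is a minimal sketch that extracts per-head self-attention weights from a Hugging Face encoder and plots one layer's heads as heatmaps; the model name, the choice of layer, and the plotting details are all assumptions:

```python
# Rough sketch of multi-head self-attention visualization, assuming the
# Hugging Face `transformers` library is available. This illustrates the
# general idea, not the actual API of the tool the abstract presents.
import matplotlib.pyplot as plt
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any encoder that returns attentions works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

text = "The quick brown fox jumps over the lazy dog"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# outputs.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
layer = 0  # assumption: inspect the first layer
attn = outputs.attentions[layer][0].detach()  # (heads, seq, seq)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

num_heads = attn.shape[0]
fig, axes = plt.subplots(1, num_heads, figsize=(3 * num_heads, 3))
for head, ax in enumerate(axes):
    ax.imshow(attn[head], cmap="viridis")  # rows: query tokens, cols: key tokens
    ax.set_title(f"head {head}", fontsize=8)
    ax.set_xticks(range(len(tokens)))
    ax.set_xticklabels(tokens, rotation=90, fontsize=5)
    ax.set_yticks(range(len(tokens)))
    ax.set_yticklabels(tokens, fontsize=5)
plt.tight_layout()
plt.show()
```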