Previous research in multi-document news summarization has typically con...
Through iterative, cross-disciplinary discussions, we define and propose...
Text simplification research has mostly focused on sentence-level simpli...
With the recent appearance of LLMs in practical settings, having methods...
Interpretability and efficiency are two important considerations for the...
Modern news aggregators do the hard work of organizing a large news stre...
In long document controllable summarization, where labeled data is scarc...
Human evaluation is the foundation upon which the evaluation of both sum...
State-of-the-art summarization models still struggle to be factually con...
There are many potential benefits to news readers accessing diverse sour...
Pre-trained language models (PLMs) have been shown effective for zero-sh...
Prompt tuning approaches, which learn task-specific soft prompts for a d...
We present Marvista – a human-AI collaborative tool that employs a suite...
Precisely assessing the progress in natural language generation (NLG) ta...
Question generation (QGen) models are often evaluated with standardized ...
Extracting structure information from dialogue data can help us better u...
Factual consistency is an essential quality of text summarization models...
Query-focused summarization (QFS) aims to produce summaries that answer...
Fact-checking is an essential tool to mitigate the spread of misinformat...
Asking good questions is an essential ability for both human and machine...
In this paper, we aim to improve abstractive dialogue summarization qual...
This paper introduces QAConv, a new question answering (QA) dataset that...
Existing dialogue state tracking (DST) models require plenty of labeled ...
This paper investigates pre-trained language models to find out which mo...
Intent detection is one of the core components of goal-oriented dialog s...
Document interpretation and dialog understanding are the two major chall...
We present GraPPa, an effective pre-training approach for table semantic...
The goal of conversational machine reading is to answer user questions g...
Task-oriented dialogue is often decomposed into three tasks: understandi...
The use of pre-trained language models has emerged as a promising direct...
Dialogue systems require a great deal of different but complementary exp...
Dialog State Tracking (DST) is a core component in task-oriented dialog...
Training code-switched language models is difficult due to a lack of data...
Sensational headlines are headlines that capture people's attention and...
User attributes provide rich and useful information for user understandi...
Existing personalized dialogue models use human designed persona descrip...
Over-dependence on domain ontology and lack of knowledge sharing across...
In this thesis, we leverage the neural copy mechanism and memory-augment...
End-to-end task-oriented dialogue is challenging since knowledge bases a...
Speech recognition in mixed language has difficulty adapting end-to-en...
Building large-scale datasets for training code-switching language model...
In this paper, we propose Emo2Vec which encodes emotional semantics into...
Lack of text data has been the major issue in code-switching language mo...
We propose an LSTM-based model with a hierarchical architecture for named e...
End-to-end task-oriented dialog systems usually suffer from the challeng...
Since the late 1990s when speech companies began providing their custome...