Improving Text Auto-Completion with Next Phrase Prediction

by Dong-Ho Lee, et al.
University of Southern California
Singapore University of Technology and Design

Language models such as GPT-2 perform well at constructing syntactically sound sentences for the text auto-completion task. However, such models often require considerable training effort to adapt to specific writing domains (e.g., medical). In this paper, we propose an intermediate training strategy that enhances pre-trained language models' performance on text auto-completion and quickly adapts them to specific domains. Our strategy includes a novel self-supervised training objective called Next Phrase Prediction (NPP), which encourages a language model to complete a partial query with enriched phrases, ultimately improving its auto-completion performance. Preliminary experiments show that our approach outperforms the baselines in auto-completion for the email and academic writing domains.
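To make the objective concrete, here is a minimal sketch of how NPP-style training pairs might be constructed from raw text. This is an illustrative assumption, not the authors' actual pipeline: the function name, the fixed phrase length, and the random cut points are all hypothetical stand-ins (the paper would presumably use a proper phrase extractor).

```python
import random

def make_npp_examples(sentence, phrase_len=3, num_pairs=2, seed=0):
    """Build (prefix, next-phrase) pairs from one sentence.

    Hypothetical data-construction step for a Next Phrase Prediction
    objective: the model is given the prefix and trained to generate
    the next few tokens. A contiguous n-gram stands in for a real
    phrase chunk here.
    """
    rng = random.Random(seed)
    tokens = sentence.split()
    pairs = []
    for _ in range(num_pairs):
        if len(tokens) <= phrase_len + 1:
            break  # sentence too short to split into prefix + phrase
        # cut somewhere that leaves room for a full phrase after it
        cut = rng.randrange(1, len(tokens) - phrase_len)
        prefix = " ".join(tokens[:cut])
        phrase = " ".join(tokens[cut:cut + phrase_len])
        pairs.append((prefix, phrase))
    return pairs
```

Each resulting `(prefix, phrase)` pair could then be fed to a sequence-to-sequence or causal language model as an intermediate training example before fine-tuning on the target domain.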



