Exploring Fluent Query Reformulations with Text-to-Text Transformers and Reinforcement Learning

12/18/2020
by   Jerry Zikun Chen, et al.

Query reformulation aims to rewrite potentially noisy or ambiguous text sequences into coherent ones closer to natural-language questions. In this process, it is also crucial to maintain, and ideally enhance, performance in downstream environments such as question answering when the rephrased queries are given as input. We explore methods to generate these reformulations by training reformulators with text-to-text transformers, then applying policy-based reinforcement learning algorithms to further encourage reward learning. Query fluency is evaluated numerically by a model of the same class fine-tuned on a human-evaluated well-formedness dataset. The reformulator leverages linguistic knowledge obtained through transfer learning and, in both qualitative and quantitative analysis, generates more well-formed reformulations than a translation-based model. During reinforcement learning, it better retains fluency while optimizing the RL objective to acquire question-answering rewards, and it generalizes to out-of-sample textual data in qualitative evaluations. Our RL framework is demonstrated to be flexible, allowing reward signals to be sourced from different downstream environments such as intent classification.
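The policy-based RL setup described above can be illustrated with a minimal REINFORCE sketch. This is not the paper's implementation: in the actual work the policy is a text-to-text transformer and the reward comes from a question-answering or fluency model, whereas here a hypothetical tabular per-position policy over a toy vocabulary and a hand-coded reward function stand in, purely to show the policy-gradient update that pushes generated sequences toward higher downstream reward.

```python
import math
import random

random.seed(0)

# Toy stand-in for the reformulator policy. In the paper this would be a
# text-to-text transformer (e.g. a T5-style model); here we keep one row of
# logits per output position over a tiny vocabulary.
VOCAB = ["what", "is", "capital", "france", "umm", "the"]
SEQ_LEN = 4
logits = [[0.0] * len(VOCAB) for _ in range(SEQ_LEN)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_token(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def reward(tokens):
    # Hypothetical stand-in for the downstream QA / fluency reward:
    # +1 per position matching a well-formed target question,
    # -1 per disfluent filler token.
    target = ["what", "is", "capital", "france"]
    return sum(1.0 for t, g in zip(tokens, target) if t == g) - tokens.count("umm")

def reinforce_step(lr=0.1):
    # Sample a reformulation from the current policy.
    probs = [softmax(row) for row in logits]
    idxs = [sample_token(p) for p in probs]
    tokens = [VOCAB[i] for i in idxs]
    R = reward(tokens)
    # REINFORCE update: for a softmax policy,
    # d log pi(chosen) / d logit_j = 1[j == chosen] - p_j,
    # scaled by the episode reward R.
    for pos, chosen in enumerate(idxs):
        for j, p in enumerate(probs[pos]):
            grad = (1.0 if j == chosen else 0.0) - p
            logits[pos][j] += lr * R * grad
    return R

for _ in range(2000):
    reinforce_step()

# Greedy decode from the trained policy.
best = [VOCAB[max(range(len(VOCAB)), key=lambda j: row[j])] for row in logits]
print(" ".join(best))
```

After training, the greedy decode favors the well-formed question and avoids the filler token, mirroring how the paper's reformulator is steered toward fluent, reward-earning reformulations. A real implementation would add a baseline to reduce gradient variance and, as the abstract notes, could swap the reward source for a different downstream environment such as an intent classifier.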

Related research

05/04/2017
Machine Comprehension by Text-to-Text Neural Question Generation
We propose a recurrent neural model that generates natural-language ques...

02/07/2020
Translating Web Search Queries into Natural Language Questions
Users often query a search engine with a specific question in mind and o...

12/25/2013
Description Logics based Formalization of Wh-Queries
The problem of Natural Language Query Formalization (NLQF) is to transla...

08/26/2022
Building the Intent Landscape of Real-World Conversational Corpora with Extractive Question-Answering Transformers
For companies with customer service, mapping intents inside their conver...

05/12/2022
Asking for Knowledge: Training RL Agents to Query External Knowledge Using Language
To solve difficult tasks, humans ask questions to acquire knowledge from...

04/12/2020
Explaining Question Answering Models through Text Generation
Large pre-trained language models (LMs) have been shown to perform surpr...

11/11/2021
CubeTR: Learning to Solve The Rubiks Cube Using Transformers
Since its first appearance, transformers have been successfully used in ...
