
Natural Language to Structured Query Generation via Meta-Learning

by   Po-Sen Huang, et al.
Microsoft, Inc.
University of Washington
Allen Institute for Artificial Intelligence

In conventional supervised training, a model is trained to fit all the training examples. However, a monolithic model may not always be the best strategy, as examples can vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function. When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.
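The pseudo-task construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the token-overlap `relevance` function and all example data below are hypothetical stand-ins for the domain-dependent relevance function and the WikiSQL examples. Each training example becomes the query of its own few-shot task, with its K most relevant neighbors serving as the support set on which a meta-learner would adapt.

```python
# Sketch of pseudo-task construction for example-level meta-learning.
# The relevance function (Jaccard overlap of question tokens) is an
# illustrative stand-in for the paper's domain-dependent relevance function.

def relevance(a, b):
    """Score two examples by question-token overlap (illustrative only)."""
    ta, tb = set(a["question"].split()), set(b["question"].split())
    return len(ta & tb) / max(1, len(ta | tb))

def build_pseudo_tasks(examples, k=2):
    """Turn each example into a few-shot pseudo-task: the k most relevant
    *other* examples form the support set, and the example itself is the
    single query on which the adapted model is evaluated."""
    tasks = []
    for i, ex in enumerate(examples):
        others = [e for j, e in enumerate(examples) if j != i]
        support = sorted(others, key=lambda e: relevance(ex, e), reverse=True)[:k]
        tasks.append({"support": support, "query": ex})
    return tasks

# Toy stand-ins for (question, SQL) pairs.
examples = [
    {"question": "how many teams are in the league",
     "sql": "SELECT COUNT(team) FROM t"},
    {"question": "how many players are in the team",
     "sql": "SELECT COUNT(player) FROM t"},
    {"question": "what is the capital of france",
     "sql": "SELECT capital FROM t WHERE country = 'france'"},
    {"question": "what is the capital of spain",
     "sql": "SELECT capital FROM t WHERE country = 'spain'"},
]

tasks = build_pseudo_tasks(examples, k=2)
```

In a full MAML-style setup, the inner loop would take a few gradient steps on each task's support set before computing the outer (meta) loss on the query; the construction above supplies exactly the support/query split that loop consumes.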

