Natural Language to Structured Query Generation via Meta-Learning

03/02/2018
by Po-Sen Huang et al.

In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples can vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function. When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%-5.4% absolute accuracy gains over the non-meta-learning counterparts.
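
The protocol the abstract describes can be made concrete: a relevance function retrieves a small support set for each training example, the model takes a gradient step on that support set, and the adapted weights are then evaluated on the example itself, in the style of MAML-like meta-learning. Below is a minimal, hypothetical PyTorch sketch of this idea. The nn.Linear classifier, the nearest-neighbour relevance function, and all hyperparameters are illustrative stand-ins (the paper's relevance function is domain-dependent and its base model is a semantic parser for WikiSQL); this is not the authors' implementation.

```python
# Hypothetical sketch of the per-example pseudo-task protocol; assumed,
# not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def relevance(query_x, pool, k=2):
    """Stand-in relevance function: nearest neighbours in input space.
    The paper's relevance function is domain-dependent; self-retrieval
    is not filtered here, for brevity."""
    dists = torch.stack([(query_x - x).pow(2).sum() for x, _ in pool])
    idx = dists.topk(k, largest=False).indices
    return [pool[i] for i in idx]

model = nn.Linear(4, 3)          # toy stand-in for a seq2seq SQL generator
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.1                   # assumed inner-loop step size

# Toy data; in the paper each pair would be a (question, SQL) example.
pool = [(torch.randn(4), torch.randint(0, 3, (1,)).item()) for _ in range(32)]

for x_q, y_q in pool:            # each example is treated as a pseudo-task
    support = relevance(x_q, pool)

    # Inner step: adapt a differentiable copy of the weights on the support set.
    fast = {n: p.clone() for n, p in model.named_parameters()}
    xs = torch.stack([x for x, _ in support])
    ys = torch.tensor([y for _, y in support])
    loss_s = F.cross_entropy(xs @ fast["weight"].t() + fast["bias"], ys)
    grads = torch.autograd.grad(loss_s, list(fast.values()), create_graph=True)
    fast = {n: p - inner_lr * g for (n, p), g in zip(fast.items(), grads)}

    # Outer step: the adapted weights must solve the original example.
    logits = (x_q @ fast["weight"].t() + fast["bias"]).unsqueeze(0)
    loss_q = F.cross_entropy(logits, torch.tensor([y_q]))
    meta_opt.zero_grad()
    loss_q.backward()            # gradients flow back through the inner step
    meta_opt.step()
```

The key design point is that the support set is built per example at training time, so each example defines its own pseudo-task rather than contributing to a single monolithic objective.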


