Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations

12/17/2022
by Jifan Chen, et al.

There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a prefix to the encoder. This not only limits the effectiveness of multi-task learning, but also hinders the model's ability to generalize to new domains or tasks that were not seen during training, which is crucial for real-world applications. In this paper, we propose compositional task configurations, a set of prompts prepended to the encoder to improve the cross-task generalization of unified models. We design the task configurations to explicitly specify the task type, as well as its input and output types. We show that this not only allows the model to better learn shared knowledge across different tasks during training, but also lets us control the model by composing new configurations that apply novel input-output combinations in a zero-shot manner. Experiments over ten table-to-text tasks show that our method outperforms the UnifiedSKG baseline by noticeable margins in both the in-domain and zero-shot settings, with average improvements of +0.5 and +12.6 points, respectively, when using a T5-large backbone.
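To make the configuration idea concrete, below is a minimal sketch in Python of how such a prefix might be composed and prepended to the encoder input. The bracketed field format, the function names, and the example strings are illustrative assumptions, not the paper's actual configuration tokens.

def make_task_config(task_type: str, input_type: str, output_type: str) -> str:
    """Compose a task configuration from its three components.

    The bracketed rendering here is hypothetical; the paper may use
    different configuration tokens.
    """
    return f"[task: {task_type}] [input: {input_type}] [output: {output_type}]"

def build_encoder_input(config: str, query: str, linearized_table: str) -> str:
    """Prepend the task configuration to the usual encoder input sequence."""
    return f"{config} {query} {linearized_table}"

# A configuration seen during training: table question answering.
qa_config = make_task_config("question answering", "table", "text")

# A new configuration composed zero-shot from familiar pieces, e.g.
# producing SQL rather than free-form text for a table-grounded question.
novel_config = make_task_config("semantic parsing", "table", "sql")

encoder_input = build_encoder_input(
    qa_config,
    query="Which country hosted the 2014 World Cup?",
    linearized_table="col: country | year row 1: Brazil | 2014",
)
print(encoder_input)

Because each field varies independently, a novel input-output combination such as novel_config above requires no retraining, only a recomposed prefix.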
