
Making Transformers Solve Compositional Tasks

08/09/2021
by Santiago Ontañón, et al.

Several studies have reported the inability of Transformer models to generalize compositionally, a key type of generalization required by many NLP tasks such as semantic parsing. In this paper we explore the design space of Transformer models, showing that the inductive biases introduced by several design decisions significantly affect compositional generalization. Through this exploration, we identify Transformer configurations that generalize compositionally far better than previously reported in the literature across a diverse set of compositional tasks, and that achieve state-of-the-art results on a semantic parsing compositional generalization benchmark (COGS) and a string edit operation composition benchmark (PCFG).
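
To make the idea of "exploring the design space" concrete, the sketch below shows one way such a sweep over Transformer configurations could be organized. It is a minimal illustration, not the authors' code: the specific knobs shown (position-encoding scheme, decoder variant, layer weight sharing, depth) are assumptions standing in for whichever design decisions the full paper actually varies.

# Minimal sketch (assumed knobs, not the paper's actual configuration set)
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class TransformerConfig:
    position_encoding: str    # e.g. "absolute" or "relative" (assumed knob)
    decoder: str              # e.g. "standard" or "copy" (assumed knob)
    share_layer_weights: bool # whether encoder layers share parameters (assumed knob)
    num_layers: int

def config_grid():
    """Enumerate candidate configurations for a compositional-generalization sweep."""
    for pos, dec, share, depth in product(
        ["absolute", "relative"],
        ["standard", "copy"],
        [False, True],
        [2, 4, 6],
    ):
        yield TransformerConfig(pos, dec, share, depth)

if __name__ == "__main__":
    configs = list(config_grid())
    print(f"{len(configs)} configurations to evaluate on COGS / PCFG-style tasks")
    for cfg in configs[:3]:
        print(cfg)

Each configuration would then be trained and evaluated on the compositional benchmarks, and the comparison across runs is what reveals which inductive biases help.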
