It Ain't That Bad: Understanding the Mysterious Performance Drop in OOD Generalization for Generative Transformer Models

08/16/2023
by Xingcheng Xu, et al.

Generative Transformer-based models have achieved remarkable proficiency in solving diverse problems. However, their generalization ability is not fully understood and is not always satisfying. Researchers use basic mathematical tasks, such as n-digit addition or multiplication, as lenses for investigating generalization behavior. Curiously, when these models are trained on n-digit operations (e.g., additions) in which both operands are n digits long, they generalize successfully to unseen n-digit inputs (in-distribution (ID) generalization), yet fail mysteriously on longer, unseen cases (out-of-distribution (OOD) generalization). Prior studies attempt to bridge this gap with workarounds such as modifying position embeddings, fine-tuning, and priming with more extensive or instructive data. However, without addressing the underlying mechanism, there is hardly any guarantee regarding the robustness of these solutions. We bring this unexplained performance drop to attention and ask whether it arises purely from random errors. Turning to the mechanistic line of research, which has had notable successes in model interpretability, we discover that the strong ID generalization stems from structured representations, while behind the unsatisfying OOD performance, the models still exhibit clear learned algebraic structures. Specifically, these models map unseen OOD inputs to outputs that respect equivalence relations inherited from the ID domain. These findings highlight the potential of the models to carry useful information for improved generalization.
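To make the setup concrete, here is a minimal Python sketch of the kind of probe the abstract hints at: sample operands longer than the n-digit training regime and test whether the model's answers agree with the true sums under some candidate equivalence relation. The "a+b=" prompt format, the `simulated_model` stand-in, and the specific modulo-10^n relation are all illustrative assumptions for this sketch, not details confirmed by the abstract.

```python
import random
from typing import Callable

def sample_operands(num_digits: int) -> tuple[int, int]:
    """Sample two operands that are exactly `num_digits` digits long."""
    lo, hi = 10 ** (num_digits - 1), 10 ** num_digits - 1
    return random.randint(lo, hi), random.randint(lo, hi)

def simulated_model(prompt: str, n_train: int = 3) -> int:
    """Toy stand-in for a trained model. It simulates one *hypothesized*
    OOD pattern (answering correctly modulo 10**n_train) purely so the
    probe below runs end-to-end; a real study decodes from the model."""
    a, b = map(int, prompt.rstrip("=").split("+"))
    return (a + b) % (10 ** n_train)

def probe_ood_structure(predict: Callable[[str], int],
                        n_train: int = 3, n_ood: int = 5,
                        trials: int = 200) -> float:
    """Fraction of OOD queries whose prediction matches the true sum
    modulo 10**n_train -- one illustrative candidate equivalence relation."""
    hits = 0
    for _ in range(trials):
        a, b = sample_operands(n_ood)  # OOD: operands longer than n_train digits
        if predict(f"{a}+{b}=") % (10 ** n_train) == (a + b) % (10 ** n_train):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    rate = probe_ood_structure(simulated_model)
    print(f"agreement with the mod-10^3 relation: {rate:.2%}")
```

Agreement far above chance on a probe like this would mean the OOD errors are systematic rather than random, which is exactly the distinction the abstract draws.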
