Feature Collapse

05/25/2023
by Thomas Laurent, et al.

We formalize and study a phenomenon called feature collapse that makes precise the intuitive idea that entities playing a similar role in a learning task receive similar representations. As feature collapse requires a notion of task, we leverage a simple but prototypical NLP task to study it. We start by showing experimentally that feature collapse goes hand in hand with generalization. We then prove that, in the large sample limit, distinct words that play identical roles in this NLP task receive identical local feature representations in a neural network. This analysis reveals the crucial role that normalization mechanisms, such as LayerNorm, play in feature collapse and in generalization.
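To make the role of normalization concrete, here is a minimal PyTorch sketch (hypothetical, not the authors' code; the toy vocabulary size, embedding dimension, and word indices are invented for illustration). It shows the mechanism the abstract alludes to: LayerNorm is invariant to positive rescaling of its input, so two word embeddings that point in the same direction but differ in magnitude are mapped to nearly identical vectors.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    d = 8                          # toy embedding dimension (an assumption)
    emb = nn.Embedding(10, d)      # toy vocabulary of 10 words (an assumption)
    ln = nn.LayerNorm(d)

    # Pretend words 3 and 7 play the same role in the task: give them
    # embeddings that point in the same direction but differ in scale.
    with torch.no_grad():
        emb.weight[7] = 3.0 * emb.weight[3]

    v3 = emb(torch.tensor(3))
    v7 = emb(torch.tensor(7))

    def dist(a, b):
        return (a - b).norm().item()

    print("raw distance:        ", dist(v3, v7))          # large: scales differ
    print("post-LayerNorm dist: ", dist(ln(v3), ln(v7)))  # ~0: collapsed

The collapse occurs because LayerNorm subtracts the mean of the feature vector and divides by its standard deviation (up to a small eps), so any positive rescaling of the input cancels out; the shared affine parameters then act identically on both words.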


Related research

From Community to Role-based Graph Embeddings (08/22/2019)
Roles are sets of structurally similar nodes that are more similar to no...

Feature Generation for Robust Semantic Role Labeling (02/22/2017)
Hand-engineered feature sets are a well understood method for creating r...

Spectral Graph Wavelets for Structural Role Similarity in Networks (10/27/2017)
Nodes residing in different parts of a graph can have similar structural...

On Feature Learning in the Presence of Spurious Correlations (10/20/2022)
Deep classifiers are known to rely on spurious features – patterns w...

Role-Aware Modeling for N-ary Relational Knowledge Bases (04/20/2021)
N-ary relational knowledge bases (KBs) represent knowledge with binary a...

Complexity for deep neural networks and other characteristics of deep feature representations (06/08/2020)
We define a notion of complexity, motivated by considerations of circuit...

Bayesian Poker (01/23/2013)
Poker is ideal for testing automated reasoning under uncertainty. It int...
