Linear Connectivity Reveals Generalization Strategies

05/24/2022
by Jeevesh Juneja, et al.

It is widely accepted in the mode connectivity literature that when two neural networks are trained similarly on the same data, they are connected by a path through parameter space over which test set accuracy is maintained. Under some circumstances, including transfer learning from pretrained models, these paths are presumed to be linear. In contrast to existing results, we find that among text classifiers (trained on MNLI, QQP, and CoLA), some pairs of finetuned models have large barriers of increasing loss on the linear paths between them. On each task, we find distinct clusters of models which are linearly connected on the test loss surface, but are disconnected from models outside the cluster, i.e., models that occupy separate basins on the surface. By measuring performance on specially crafted diagnostic datasets, we find that these clusters correspond to different generalization strategies: one cluster behaves like a bag-of-words model under domain shift, while another cluster uses syntactic heuristics. Our work demonstrates how the geometry of the loss surface can guide models towards different heuristic functions.
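The central measurement described above — evaluating loss along the straight line between two models' parameters and checking for a barrier — can be sketched in a few lines. The following is a minimal illustration, not the paper's actual setup: it uses NumPy and a toy logistic regression (a convex loss, so the barrier comes out near zero), whereas the paper performs this measurement on finetuned transformer classifiers. All function names here (`interpolation_losses`, `barrier_height`, `train`) are hypothetical helpers for illustration.

```python
import numpy as np

def interpolation_losses(theta_a, theta_b, loss_fn, num_points=11):
    """Evaluate loss_fn at evenly spaced points on the linear path
    (1 - alpha) * theta_a + alpha * theta_b, for alpha in [0, 1]."""
    alphas = np.linspace(0.0, 1.0, num_points)
    return [loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas]

def barrier_height(losses):
    """Barrier = highest loss on the path minus the higher endpoint loss.
    A value near zero means the two models are linearly connected."""
    return max(losses) - max(losses[0], losses[-1])

# Toy data: linearly separable binary classification.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def logistic_loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-9
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def train(seed, steps=500, lr=0.5):
    """Gradient descent on the logistic loss from a seeded random init."""
    w = np.random.default_rng(seed).normal(size=5)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

w1, w2 = train(seed=1), train(seed=2)
losses = interpolation_losses(w1, w2, logistic_loss)
print(f"barrier height: {barrier_height(losses):.6f}")
```

Because the toy loss is convex, the loss along the segment never exceeds the endpoints and the barrier is essentially zero; the paper's finding is that for some pairs of finetuned text classifiers this quantity is instead large, which is what separates the basins.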


