A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations

02/06/2023
by Bilal Chughtai et al.

Universality is a key hypothesis in mechanistic interpretability – that different models learn similar features and circuits when trained on similar tasks. In this work, we study the universality hypothesis by examining how small neural networks learn to implement group composition. We present a novel algorithm by which neural networks may implement composition for any finite group via mathematical representation theory. We then show that networks consistently learn this algorithm by reverse engineering model logits and weights, and confirm our understanding using ablations. By studying networks of differing architectures trained on various groups, we find mixed evidence for universality: using our algorithm, we can completely characterize the family of circuits and features that networks learn on this task, but for a given network the precise circuits learned – as well as the order they develop – are arbitrary.

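To make the representation-theoretic idea concrete, below is a minimal NumPy sketch of one way composition in a finite group can be read off from representation matrices, by scoring each candidate product c with tr(rho(a) rho(b) rho(c)^-1). The choice of group (S3), the permutation representation, the trace-based readout, and all names in the code are illustrative assumptions for this sketch; the paper's construction works via representation theory but its exact logit formula may differ.

import itertools
import numpy as np

def perm_matrix(p):
    # Matrix of the permutation p (a tuple), acting by rho(p) e_i = e_{p[i]},
    # so that rho(a) @ rho(b) == rho(compose(a, b)).
    n = len(p)
    M = np.zeros((n, n))
    M[list(p), range(n)] = 1.0
    return M

def compose(a, b):
    # Group operation: (a * b)(i) = a[b[i]].
    return tuple(a[b[i]] for i in range(len(a)))

# All six elements of S3 as permutations of {0, 1, 2}, and their matrices.
elements = list(itertools.permutations(range(3)))
rho = {g: perm_matrix(g) for g in elements}

def logits(a, b):
    # Score each candidate c by tr(rho(a) rho(b) rho(c)^(-1)).
    # For permutation matrices the inverse is the transpose, and the trace
    # counts fixed points, so the score peaks exactly when c = a * b.
    return [np.trace(rho[a] @ rho[b] @ rho[c].T) for c in elements]

for a in elements:
    for b in elements:
        predicted = elements[int(np.argmax(logits(a, b)))]
        assert predicted == compose(a, b)

print("argmax_c tr(rho(a) rho(b) rho(c)^-1) recovers a * b for every pair in S3")

The same argmax readout works for any faithful orthogonal or unitary representation of a finite group, since the (real part of the) trace is maximised only at the identity element; the permutation representation of S3 is simply the easiest to write down.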