Discovering the Compositional Structure of Vector Representations with Role Learning Networks

10/21/2019
by Paul Soulos, et al.

Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure. To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure. ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E's representational vector space. The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols. We show that when E is constructed to explicitly embed a particular type of structure (string or tree), ROLE successfully extracts the ground-truth roles defining that structure. We then analyze a GRU seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground-truth role scheme available. For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the representations that drive the behavior of a frequently used but hard-to-interpret type of model. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model's resulting output is changed in the way predicted by our analysis. Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases to improve the compositional abilities of such models.
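The core mechanism described above can be pictured as a tensor-product-style approximation: each input symbol (filler) is bound to a learned structural role, the bindings are summed, and a linear map projects the result into the target encoder's vector space. The sketch below, in PyTorch, is an illustrative reconstruction under stated assumptions, not the authors' implementation; names such as RoleApproximator, n_roles, d_filler, and d_role are hypothetical, and the published method includes ingredients (for example, pressure toward discrete, one-hot role assignments) that are omitted here for brevity.

# Minimal sketch (assumptions noted above) of a ROLE-style approximator:
# a role assigner predicts a (soft) role for each input token, each filler
# embedding is bound to its role via an outer product, the bindings are summed
# over positions, and a linear readout maps the result into the target
# encoder's vector space.
import torch
import torch.nn as nn

class RoleApproximator(nn.Module):
    def __init__(self, vocab_size, n_roles, d_filler, d_role, d_target):
        super().__init__()
        self.filler_emb = nn.Embedding(vocab_size, d_filler)    # symbol fillers
        self.role_emb = nn.Embedding(n_roles, d_role)           # structural roles
        # role assigner: a small recurrent network over the input sequence
        self.role_assigner = nn.LSTM(d_filler, 64, batch_first=True,
                                     bidirectional=True)
        self.role_logits = nn.Linear(128, n_roles)
        # linear map from the flattened filler-role binding space to E's space
        self.readout = nn.Linear(d_filler * d_role, d_target, bias=False)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids of the encoder's input sequence
        fillers = self.filler_emb(tokens)                        # (B, T, d_filler)
        h, _ = self.role_assigner(fillers)
        role_probs = torch.softmax(self.role_logits(h), dim=-1)  # (B, T, n_roles)
        roles = role_probs @ self.role_emb.weight                # (B, T, d_role)
        # tensor-product binding: outer product of each filler with its role,
        # summed over positions into one structured representation
        binding = torch.einsum('btf,btr->bfr', fillers, roles)   # (B, d_filler, d_role)
        return self.readout(binding.flatten(1))                  # (B, d_target)

def train_step(model, optimizer, tokens, target_encodings):
    # Fit the approximator to a frozen target encoder E by minimizing the MSE
    # between ROLE's reconstruction and E's hidden encodings.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(tokens), target_encodings)
    loss.backward()
    optimizer.step()
    return loss.item()

Training such an approximator against a frozen target encoder yields role assignments that can then be inspected, which is the sense in which ROLE "discovers" a symbolic role scheme; whether those roles matter causally can then be tested by editing the bindings and checking how the target model's output changes.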


research · 12/20/2018
RNNs Implicitly Implement Tensor Product Representations
Recurrent neural networks (RNNs) can learn continuous vector representat...

research · 06/06/2021
Causal Abstractions of Neural Networks
Structural analysis methods (e.g., probing and feature attribution) are ...

research · 07/11/2017
SCAN: Learning Abstract Hierarchical Compositional Visual Concepts
The natural world is infinitely diverse, yet this diversity arises from ...

research · 06/25/2020
Learning Task-General Representations with Generative Neuro-Symbolic Modeling
A hallmark of human intelligence is the ability to interact directly wit...

research · 10/29/2018
Learning Distributed Representations of Symbolic Structure Using Binding and Unbinding Operations
Widely used recurrent units, including Long-short Term Memory (LSTM) and...

research · 10/12/2022
CTL++: Evaluating Generalization on Never-Seen Compositional Patterns of Known Functions, and Compatibility of Neural Representations
Well-designed diagnostic tasks have played a key role in studying the fa...

research · 06/01/2023
Differentiable Tree Operations Promote Compositional Generalization
In the context of structure-to-structure transformation tasks, learning ...
