Analogs of Linguistic Structure in Deep Representations

07/25/2017
by Jacob Andreas, et al.

We investigate the compositional structure of message vectors computed by a deep network trained on a communication game. By comparing truth-conditional representations of encoder-produced message vectors to human-produced referring expressions, we are able to identify aligned (vector, utterance) pairs with the same meaning. We then search for structured relationships among these aligned pairs to discover simple vector space transformations corresponding to negation, conjunction, and disjunction. Our results suggest that neural representations are capable of spontaneously developing a "syntax" with functional analogues to qualitative properties of natural language.
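To make the search for structured relationships concrete, the sketch below shows one plausible way to look for a "negation" transformation in message-vector space: collect aligned pairs whose utterances differ only by negation, fit a linear map between the two sets of vectors, and check how well the map generalizes to held-out pairs. This is a minimal illustration under assumed data formats; the function names, the linear-map parameterization, and the cosine-similarity evaluation are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def fit_negation_map(pairs):
    """Fit a linear map N with N @ v(u) ≈ v("not u").

    `pairs` is an assumed format: a list of (v_u, v_not_u) tuples of
    message vectors whose aligned utterances differ only by negation.
    """
    X = np.stack([v_u for v_u, _ in pairs])        # vectors for u
    Y = np.stack([v_nu for _, v_nu in pairs])      # vectors for "not u"
    # Least squares: solve X @ A = Y, so A = N^T and N = A.T.
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A.T

def evaluate_map(N, held_out_pairs):
    """Mean cosine similarity between N @ v(u) and v("not u") on held-out pairs."""
    sims = []
    for v_u, v_not_u in held_out_pairs:
        pred = N @ v_u
        sims.append(pred @ v_not_u /
                    (np.linalg.norm(pred) * np.linalg.norm(v_not_u)))
    return float(np.mean(sims))
```

A high held-out similarity would suggest that negation corresponds to a simple, reusable transformation of the vector space; analogous maps (or binary operations) could be fit for conjunction and disjunction in the same way.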


Related research

12/31/2010  Concrete Sentence Spaces for Compositional Distributional Models of Meaning
Coecke, Sadrzadeh, and Clark (arXiv:1003.4394v1 [cs.CL]) developed a com...

12/30/2014  From Logical to Distributional Models
The paper relates two variants of semantic models for natural language, ...

10/08/2022  Semantic Representations of Mathematical Expressions in a Continuous Vector Space
Mathematical notation makes up a large portion of STEM literature, yet, ...

06/06/2021  Causal Abstractions of Neural Networks
Structural analysis methods (e.g., probing and feature attribution) are ...

08/09/2016  Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders
Current approaches to learning vector representations of text that are c...

11/15/2017  Investigating Inner Properties of Multimodal Representation and Semantic Compositionality with Brain-based Componential Semantics
Multimodal models have been proven to outperform text-based approaches o...
