
Information Flow in Pregroup Models of Natural Language

by Peter M. Hines, et al.
University of York

This paper is about pregroup models of natural languages, and how they relate to the explicitly categorical use of pregroups in Compositional Distributional Semantics and Natural Language Processing. These categorical interpretations make certain assumptions about the nature of natural languages that, when stated formally, may be seen to impose strong restrictions on pregroup grammars for natural languages. We formalize this as a hypothesis about the form that pregroup models of natural languages must take, and demonstrate by an artificial language example that these restrictions are not imposed by the pregroup axioms themselves. We compare and contrast the artificial language examples with natural languages (using Welsh, a language where the 'noun' type cannot be taken as primitive, as an illustrative example). The hypothesis is simply that there must exist a causal connection, or information flow, between the words of a sentence in a language whose purpose is to communicate information. This is not necessarily the case with formal languages that are simply generated by a series of 'meaning-free' rules. This imposes restrictions on the types of pregroup grammars that we expect to find in natural languages; we formalize this in algebraic, categorical, and graphical terms. We take some preliminary steps in providing conditions that ensure pregroup models satisfy these conjectured properties, and discuss the more general forms this hypothesis may take.
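The abstract's central objects are pregroup grammars, in which each word is assigned a string of basic types and their left/right adjoints, and a string of words is grammatical when the concatenated types contract to the sentence type. As an illustrative sketch (not taken from the paper), the contractions x^l · x → 1 and x · x^r → 1 can be modelled by tagging each basic type with an integer adjoint degree and cancelling adjacent pairs; the type names `n`, `s` and the example sentence are standard pregroup conventions, and the function name is hypothetical:

```python
# Minimal sketch of pregroup type reduction (illustrative assumptions, not the
# paper's formalism). A simple type is a pair (base, z): z = 0 for the basic
# type x, z = -1 for its left adjoint x^l, z = +1 for its right adjoint x^r.
# The contractions x^l x -> 1 and x x^r -> 1 both cancel an adjacent pair
# (b, z)(b, z + 1).
def reduces_to_sentence(word_types, sentence=("s", 0)):
    """Concatenate the words' type strings and contract adjacent adjoint
    pairs until none remain; grammatical iff only the sentence type is left."""
    types = [t for w in word_types for t in w]
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            (b1, z1), (b2, z2) = types[i], types[i + 1]
            if b1 == b2 and z2 == z1 + 1:  # adjacent adjoint pair cancels
                del types[i:i + 2]
                changed = True
                break
    return types == [sentence]

# "Mary sees John": n . (n^r s n^l) . n contracts to s.
n, s = ("n", 0), ("s", 0)
transitive_verb = [("n", 1), s, ("n", -1)]
print(reduces_to_sentence([[n], transitive_verb, [n]]))  # True
print(reduces_to_sentence([[n], [n]]))                   # False: no reduction to s
```

The nested cancellations performed here are exactly the "cups" in the graphical calculus the abstract refers to; the paper's hypothesis concerns which patterns of such cancellations can actually occur in a language used to communicate.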


