The Combinatorics of Salva Veritate Principles

01/13/2022
by   Norman E. Trushaev, et al.

Various concepts of grammatical compositionality arise in many theories of both natural and artificial languages, and often play a key role in accounts of the syntax-semantics interface. We propose that many instances of compositionality should entail non-trivial combinatorial claims about the expressive power of languages which satisfy these compositional properties. As an example, we present a formal analysis demonstrating that a particular class of languages which admit salva veritate substitutions - a property which we claim to be a particularly strong example of a compositionality principle - must also satisfy a very natural combinatorial constraint identified in this paper.


Introduction

This essay will present a formal framework and some preliminary results concerning the combinatorial properties of meaning-preserving substitutions of sentences. Following Quine (and ultimately Leibniz), we refer to such operations as salva veritate substitutions. (It should be mentioned that Quine was primarily concerned with truth-value-preserving substitutions - hence the veritate - whereas we will be concerned more broadly with meaning-preserving substitutions in general, of which the truth-value-preserving substitutions are a proper subset.)

The relevant properties of salva veritate substitutions are formalized in an abstract property that we introduce in this essay, and which we later call (SST). This property is itself a particular instance (and a very strong one) of a more general class of linguistic constraints that might be referred to as compositionality principles, where by "compositionality principle" we simply mean any property that a language might possess in virtue of exhibiting some form of grammatical compositionality.

Our discussion will be divided into three parts. In Part 1, we briefly present some historical and conceptual background information which will be used to motivate our subsequent investigations. Part 2 is devoted to constructing our formal theory and presenting the main result. Finally, in Part 3, we conclude with a discussion of the preceding findings, and some remarks on directions for future work.

1 Historical and Conceptual Background

By a "substitution" we mean any operation on strings of a language which replaces some substring with another. The simplest example would be an operation which replaces some word occurring in a sentence with some other word, but one can also consider substitutions of arbitrary syntactic constituents. Such substitution operations have played an important role in many debates in linguistics and philosophy, and have been particularly important in the analysis of intensional phenomena, synonymy, and analyticity.

The literature on these matters is extensive, and a full review is beyond the scope of this article. A good starting point, however, is Quine's classic "Two Dogmas of Empiricism" ([11]). Quine asks us to consider a typical analytic statement:

"No bachelor is married."

Quine suggests that a core feature of such statements is that they can be converted into logical truths by "putting synonyms for synonyms" (Quine, p. 23). In this particular case, if we substitute the phrase "unmarried man" for "bachelor", we obtain the logically tautological sentence:

"No unmarried man is married."

In effect, the analyticity of "No bachelor is married" then rests on the synonymy of the linguistic forms "unmarried man" and "bachelor", and the fact that "No unmarried man is married" is a logical truth. The explanatory burden now rests on finding a satisfactory account of linguistic synonymy. Regarding this matter, Quine points out (p. 27) that "A natural suggestion, deserving close examination, is that the synonymy of two linguistic forms consists simply in their interchangeability in all contexts without change of truth value - interchangeability, in Leibniz's phrase, salva veritate."

This suggestion forms the starting point of our investigation. We, however, will not be concerned with examining the philosophical, semantic, or logical properties of salva veritate substitutions, but rather their combinatorial properties, a line of investigation which remains largely unexplored.

Although it is perhaps not the conventional view, salva veritate substitutions can be viewed as a specific instance of compositionality. Compositionality is a property exhibited by certain linguistic structures, whereby the meaning of any complex expression is determined by the meanings of its component parts, and the manner in which these component parts are combined ([10]). Compositionality has been an important subject of investigation in linguistic and logical scholarship since at least the work of Frege in the 19th century ([1], [3], [10]), and continues to be an active area of inquiry today (see, e.g., [2], [4]).

2 Formal Machinery

For our purposes, we consider an interpreted language, by which we mean a set of strings over some alphabet, together with a rule for assigning "meanings" to these strings. Formally, we define an interpreted language to be an ordered tuple

$$\mathcal{L} = (\Sigma, W, \mu, M),$$

where $\Sigma$ denotes the alphabet, $W$ denotes the set of well-formed strings of the language (i.e. the things we can assign interpretations to), $\mu : W \to M$ denotes the interpretation function, which assigns meanings to strings, and $M$ is our set of meanings. In order to provide a fully general account, we make no assumptions about $M$. Our "meanings" may therefore consist of anything, including truth-values, propositions, concepts, sentences of a meta-language, etc. For our purposes, we further assume that the alphabet $\Sigma$ is finite.
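
By way of illustration, the following sketch encodes one small interpreted language of this form in Python. The particular alphabet, well-formed strings, and interpretation are arbitrary choices of ours made purely for concreteness; nothing in the formal development depends on them.

```python
from collections import Counter
from itertools import product

# One arbitrary toy instance of an interpreted language L = (Sigma, W, mu, M):
# the well-formed strings are all strings of length at most 2 over a three-letter
# alphabet, and the meaning of a string is the multiset of its letters' meanings.
SIGMA = "abc"
W = {"".join(p) for n in (1, 2) for p in product(SIGMA, repeat=n)}

LETTER_MEANING = {"a": 1, "b": 1, "c": 2}      # "a" and "b" are synonymous letters

def mu(w: str):
    """Interpretation function mu : W -> M (meanings are left unconstrained)."""
    return frozenset(Counter(LETTER_MEANING[ch] for ch in w).items())

M = {mu(w) for w in W}                         # the meanings actually expressed
```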

We also introduce an additional constraint on $\mathcal{L}$. We call this constraint "substitutability of synonymous terms" (SST). Formally, we have

Definition (Substitutability of Synonymous Terms).

Let $s, t \in W$ be well-formed strings satisfying the relation $\mu(s) = \mu(t)$. Then for any well-formed string $u \in W$ in which $s$ occurs as a substring, the string $u[t/s]$ obtained by replacing an occurrence of $s$ in $u$ with $t$ is also well-formed, and $\mu(u[t/s]) = \mu(u)$.

Informally, this condition states that if the strings $s$ and $t$ have the same meaning, then the operation of substituting $t$ for $s$ is well-defined for all strings of our language $\mathcal{L}$, and the result of performing such an operation has no effect on meaning. It is important to note that such a condition places both a syntactic and a semantic constraint on $\mathcal{L}$. The syntactic component guarantees that replacing a substring $s$ that occurs in any well-formed string $u$ with a synonymous substring $t$ always produces another well-formed string $u[t/s]$. The semantic component guarantees that such substitutions have no impact on meaning, i.e. that synonymous constituents make the same semantic contribution to any linguistic context in which they occur.
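
Both the syntactic and the semantic clause of (SST) can be checked mechanically on a finite fragment. The sketch below does this by brute force for a small toy language (all strings of length at most 3 over a three-letter alphabet, interpreted as the multiset of their letters' meanings); the language, the encoding, and the helper names are illustrative assumptions of ours rather than part of the formal development.

```python
from collections import Counter
from itertools import product

SIGMA = "abc"
LETTER_MEANING = {"a": 1, "b": 1, "c": 2}
W = {"".join(p) for n in (1, 2, 3) for p in product(SIGMA, repeat=n)}

def mu(w: str):
    """Interpretation: the multiset of letter meanings occurring in w."""
    return frozenset(Counter(LETTER_MEANING[ch] for ch in w).items())

def replacements(u: str, s: str, t: str):
    """All strings obtained from u by replacing one occurrence of s with t."""
    return [u[:i] + t + u[i + len(s):]
            for i in range(len(u) - len(s) + 1) if u[i:i + len(s)] == s]

def satisfies_sst(W, mu) -> bool:
    """Brute-force check of (SST) over every synonym pair and every context."""
    synonyms = [(s, t) for s in W for t in W if s != t and mu(s) == mu(t)]
    for s, t in synonyms:
        for u in W:
            for v in replacements(u, s, t):
                if v not in W or mu(v) != mu(u):   # syntactic + semantic clause
                    return False
    return True

print(satisfies_sst(W, mu))   # True for this toy language
```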

In addition to (SST), we will require another constraint, which we call ”inductive constructibility” (IC). Formally we have

Definition (Inductive Constructibility).

Let $\mathcal{L} = (\Sigma, W, \mu, M)$ be an interpreted language. Then we say that $\mathcal{L}$ satisfies inductive constructibility iff every well-formed string $s$ of length $\geq 2$ is equal to a concatenation $s = s_{1}s_{2}$ of two strings which satisfy the relation $s_{1}, s_{2} \in W$.

Informally, this condition states that any non-trivial string (i.e. a string of length $\geq 2$) must be composed of smaller well-formed substrings. This prevents the existence of well-formed strings that cannot be built up from smaller well-formed constituents. Hence, any non-trivial well-formed string can be constructed by concatenating smaller well-formed strings. In particular, an inductive argument immediately shows that, for $n \geq 1$, any well-formed string of length $n$ is equal to some concatenation of the form $c_{1}c_{2}\cdots c_{n}$, where each $c_{i} \in W$ and $|c_{i}| = 1$.
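
The same brute-force style applies to (IC). The sketch below, again for an illustrative toy language of ours in which every string of length at most 3 is well-formed, checks the condition and also computes the decomposition into well-formed strings of length 1 noted above.

```python
from itertools import product

SIGMA = "abc"
W = {"".join(p) for n in (1, 2, 3) for p in product(SIGMA, repeat=n)}

def satisfies_ic(W) -> bool:
    """Every well-formed string of length >= 2 splits into two well-formed parts."""
    return all(
        any(w[:i] in W and w[i:] in W for i in range(1, len(w)))
        for w in W if len(w) >= 2
    )

def unit_decomposition(w: str, W):
    """Recursively decompose w into well-formed strings of length 1 (given (IC))."""
    if len(w) == 1:
        return [w]
    i = next(i for i in range(1, len(w)) if w[:i] in W and w[i:] in W)
    return unit_decomposition(w[:i], W) + unit_decomposition(w[i:], W)

print(satisfies_ic(W))                 # True: W contains every string of length <= 3
print(unit_decomposition("abc", W))    # ['a', 'b', 'c']
```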

Before proceeding, we introduce some additional terminology that will be needed. Given subsets of well-formed strings $A, B \subseteq W$, we say that $A$ is (strictly) more expressive than $B$ iff $\mu(B)$ is a (proper) subset of $\mu(A)$. Furthermore, for any subset $A \subseteq W$ we define the expressive power of $A$ to be equal to $|\mu(A)|$. We define the expressive power of an interpreted language $\mathcal{L}$ to be equal to the expressive power of $W$. Given a natural number $n$, we define the $n$-th generation of $\mathcal{L}$, written $\mathrm{gen}_{n}$, to be the set of all well-formed strings of length at most $n$.
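
For a finite fragment, the generations and their expressive powers can be tabulated directly. The following sketch does so for the toy language of the previous sketches, extended to strings of length at most 4; the example remains an illustrative choice of ours.

```python
from collections import Counter
from itertools import product

SIGMA = "abc"
LETTER_MEANING = {"a": 1, "b": 1, "c": 2}
MAX_LEN = 4
W = {"".join(p) for n in range(1, MAX_LEN + 1) for p in product(SIGMA, repeat=n)}

def mu(w: str):
    return frozenset(Counter(LETTER_MEANING[ch] for ch in w).items())

def gen(n: int):
    """The n-th generation: all well-formed strings of length at most n."""
    return {w for w in W if len(w) <= n}

def expressive_power(strings) -> int:
    """|mu(A)|: the number of distinct meanings expressed by a set of strings."""
    return len({mu(w) for w in strings})

for n in range(1, MAX_LEN + 1):
    print(n, expressive_power(gen(n)))   # 2, 5, 9, 14: each generation adds meanings
```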

We are now ready for the main result:

Theorem.

An interpreted language $\mathcal{L}$ which satisfies (SST) and (IC) has infinite expressive power iff $\mathrm{gen}_{n+1}$ is strictly more expressive than $\mathrm{gen}_{n}$ for all $n \geq 1$.

Proof.

Let $\mathcal{L} = (\Sigma, W, \mu, M)$ be an interpreted language which satisfies (SST) and (IC). We then have two directions to prove.
($\Rightarrow$) Let $\mathcal{L}$ have infinite expressive power, and suppose for contradiction that $\mathrm{gen}_{n+1}$ is not strictly more expressive than $\mathrm{gen}_{n}$ for some $n$. Then $\mu(\mathrm{gen}_{n+1}) = \mu(\mathrm{gen}_{n})$, i.e. $\mathrm{gen}_{n+1}$ and $\mathrm{gen}_{n}$ have equal expressive power. In particular, since the alphabet $\Sigma$ is finite, the expressive power of $\mathrm{gen}_{n+1}$ is finite and equal to

$$|\mu(\mathrm{gen}_{n+1})| = |\mu(\mathrm{gen}_{n})| \leq |\mathrm{gen}_{n}| \leq \sum_{k=1}^{n} |\Sigma|^{k}.$$
Since $\mathcal{L}$ has infinite expressive power, there exist meanings $m \in \mu(W)$ such that $m \notin \mu(\mathrm{gen}_{n})$. Now fix any such meaning $m$, and let $s$ be a string of minimal length $k$ that is assigned meaning $m$; note that $k \geq n+2$, since $m \notin \mu(\mathrm{gen}_{n+1}) = \mu(\mathrm{gen}_{n})$. By (IC), $s$ can be expressed in the form $s = s_{1}s_{2}$ for $s_{1}, s_{2} \in W$ of lengths $k_{1}$ and $k_{2}$. Given that $s_{1}$ has length $k_{1} < k$, there exists a string $t$ of length at most $n$ such that $\mu(t) = \mu(s_{1})$. By (SST), it follows that

$$\mu(t s_{2}) = \mu(s_{1} s_{2}) = \mu(s) = m.$$

Now, since $|t s_{2}| < k$, this contradicts our assumption that $k$ is the shortest possible length of any string that expresses meaning $m$. Hence, the proper inclusion

$$\mu(\mathrm{gen}_{n}) \subsetneq \mu(\mathrm{gen}_{n+1})$$

holds for all $n$.
($\Leftarrow$) Suppose that $\mathrm{gen}_{n+1}$ is strictly more expressive than $\mathrm{gen}_{n}$ for all $n$. Then for each $n$, there exists a meaning $m_{n}$ that is expressible by a string of length at most $n+1$, but not by any string of length at most $n$. These meanings are pairwise distinct: for $j < n$ we have $m_{j} \in \mu(\mathrm{gen}_{j+1}) \subseteq \mu(\mathrm{gen}_{n})$, whereas $m_{n} \notin \mu(\mathrm{gen}_{n})$. Taking the union

$$\bigcup_{n \geq 1} \{ m_{n} \} \subseteq \mu(W),$$

we then obtain an infinite collection of meanings that is contained in $\mu(W)$. Hence $\mathcal{L}$ has infinite expressive power. ∎
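
As an informal illustration of the theorem (not a substitute for the proof), one can compare two toy semantics over the same syntax, where every string over the alphabet is well-formed, so that (IC) holds trivially, and where each interpretation can be checked to satisfy (SST): a multiset semantics with infinitely many meanings, and a set semantics with only finitely many. The sketch below, an illustrative construction of ours, tabulates the expressive power of the first few generations in each case.

```python
from collections import Counter
from itertools import product

# W is conceptually all of Sigma^+ (so (IC) holds trivially); we only enumerate
# strings up to a finite length. Both interpretations are illustrative assumptions.
SIGMA = "abc"
LETTER_MEANING = {"a": 1, "b": 1, "c": 2}

def mu_multiset(w):   # infinitely many meanings over Sigma^+
    return frozenset(Counter(LETTER_MEANING[ch] for ch in w).items())

def mu_set(w):        # only finitely many meanings over Sigma^+
    return frozenset(LETTER_MEANING[ch] for ch in w)

def expressive_power_of_gen(mu, n):
    """|mu(gen_n)|: meanings expressible by strings of length at most n."""
    strings = ("".join(p) for k in range(1, n + 1) for p in product(SIGMA, repeat=k))
    return len({mu(w) for w in strings})

for n in range(1, 6):
    print(n, expressive_power_of_gen(mu_multiset, n), expressive_power_of_gen(mu_set, n))
# The multiset semantics yields new meanings at every generation (unbounded
# expressive power); the set semantics stops growing after gen_2 (finite
# expressive power), in line with what the theorem predicts.
```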

3 Conclusion

The theorem of section 2 demonstrates a close connection between the admissible substitutions of a language, and its expressive power. In effect, this provides a practical demonstration of the fact that at least some non-trivial semantic information about a given language can be obtained purely by examining the meaning-preserving syntactic operations which are permitted by the language under consideration. Furthermore, this suggests that questions concerning the syntax-semantics interface, and the relationship between form and meaning, may have a non-trivial combinatorial component that is entirely independent of any particular semantic interpretation.

The constraints placed on $\mathcal{L}$ in this paper, however, require additional refinement in order to be suitable for the description of most interesting examples of interpreted languages. Substitutability of synonymous terms (SST) and inductive constructibility (IC) are rather strong assumptions to make about a language. Such assumptions may in fact be perfectly innocuous and realistic for the analysis of many formal languages, but for many languages of interest, especially natural languages, it seems plausible that (SST) will need to be replaced - substituted, if you will - by a weaker constraint on substitutability. Well-known intensional phenomena, such as those identified and investigated by Russell, Kripke, Montague, Partee ([9], [5], [6], [7]), and others, demonstrate conditions under which meaning is not preserved under substitutions of synonymous terms. In some cases, this is likely to require weakening our assumption regarding the preservation of well-formedness under substitution. In other cases, we will likely have to weaken our assumption that meaning remains identical under such substitutions. Many applications will likely require some combination of both.

(IC) also appears to be quite problematic for many natural languages, for a variety of related reasons. Sentences of natural language are not, in general, composed of smaller sub-sentences. In the language of section 2, this means that the non-trivial well-formed strings of a natural language may not be composed of smaller well-formed substrings - which was a crucial property in our proof of the theorem presented in section 2. This seems to be closely related to the fact that most grammars of natural languages construct sentences not on the basis of concatenation of simpler formulas, but rather on the basis of grammatical relations of either dependency (in the case of dependency grammars) or constituency (in the case of phrase structure grammars).

Although significantly more complicated, such grammars do nevertheless impose relations of grammatical hierarchy and interdependence on the various syntactic forms of their respective languages. Under such relations, certain syntactic forms may be seen as more or less primitive in relation to others. This opens the door to extending the strategy used in our proof of the main result of section 2 to languages with more complicated grammatical relations. For our purposes, the crucial feature of (IC) is that it allowed us to relate certain properties of proper substrings to the properties of the well-formed strings in which they occur. In a similar fashion, we may hope to identify methods of relating the expressive properties of more primitive syntactic forms to the expressive properties of the more complex syntactic forms in which they occur.

Complications aside, there is at least one area in which future investigations are likely to be considerably simpler. In an attempt to construct the most general possible theory, we have made no assumptions about the nature of $M$. However, the semantics of most languages, whether formal or natural, generally allows for some additional structure on $M$. In most cases, this will be some sort of logical, set-theoretic, or algebraic structure. Whatever the case may be, this structure will furnish us with additional relations (i.e. constraints) between the various meanings and syntactic forms countenanced by the language. In general, the stronger these constraints, the more we can say about the combinatorial relations between the syntax and semantics of the language.

Regarding (IC) and the structure of $M$, several suggestions of Pietroski ([8]) appear to be a promising starting point for extending our methods to the analysis of natural language. In particular, Pietroski's analysis of how meanings compose in natural language suggests that many instances of compositionality may be reducible to logical conjunction.

Future work on these topics therefore has several areas to explore. First, we can of course seek to prove more theorems about both (SST) and (IC), as we have done here. More generally, there are a variety of suggestions in the literature on compositionality which make explicit claims about how the meanings of complex expressions of a language are related to the meanings of their component parts. Many of these claims are likely to entail specific combinatorial properties of their languages, which may then be identified using formal methods similar to those we have employed in this paper. Finally, one might hope to obtain a deeper understanding of the relations between synonymy, intension, and compositionality by using these methods. These are rather distinct concepts, and yet they all seem to exhibit specific combinatorial properties. By identifying and relating the combinatorial properties associated with these linguistic phenomena, we may hope to uncover important relationships between synonymy, intension, compositionality, and other related linguistic concepts.

References

  • [1] Frege, Gottlob (1884/1980). The Foundations of Arithmetic.
  • [2] Jacobson, Pauline; Barker, Chris (2007). Direct Compositionality.
  • [3] Janssen, Theo M. V. (2001). ”Frege, Contextuality, and Compositionality”. Journal of Logic, Language, and Information 10 (1). Accessed via: https://www.jstor.org/stable/40180264
  • [4] Kracht, Marcus (2011). Interpreted Languages and Compositionality.
  • [5] Kripke, Saul (1979). "A Puzzle About Belief." Accessed via:
    http://www.uvm.edu/~lderosse/courses/lang/Kripke(1979).pdf
  • [6] Montague, Richard (1973). "The Proper Treatment of Quantification in Ordinary English." Accessed via:
    http://www.cs.rhul.ac.uk/~zhaohui/montague73.pdf
  • [7] Partee, Barbara Hall (1970). ”Opacity, coreference, and pronouns.” Synthese 21 (3-4). 359 - 385. Accessed via:
    https://www.jstor.org/stable/20114733
  • [8] Pietroski, Paul (2018). Conjoining Meanings: Semantics Without Truth Values.
  • [9] Russell, Bertrand (1905). "On Denoting." Reprinted in Olshewsky, Thomas M. (ed.) (1969), Problems in the Philosophy of Language.
  • [10] Szabo, Zoltan Gendler (2020). "Compositionality". The Stanford Encyclopedia of Philosophy. Accessed via: https://plato.stanford.edu/entries/compositionality/
  • [11] Quine, Willard van Orman (1953). From a Logical Point of View: Nine Logico-Philosophical Essays.