
Homogenousness and Specificity

by   Karl Schlechta, et al.

We interpret homogenousness as a second order property and base it on the same principle as nonmonotonic logic: there might be a small set of exceptions.





1 Introduction

1.1 Homogenousness as a default meta-rule

Homogenousness was discussed as an important - though rarely explicitly addressed - concept by the author in [Sch97-2], section 1.3.11, page 32, and treated in more detail in [GS16], chapter 11; see also [Sch18b], section 5.7.

It is a second order hypothesis about the world - more precisely, about the adequacy of the concepts with which we analyse the world - and is discussed informally in [GS16] and [Sch18b].

The aim of these notes is to make the discussion more formal, treating it as a second order application of the fundamental concept of nonmonotonicity - that the set of exceptions is small - and, in particular, to base the intuitively very appealing idea of specificity - a way of solving conflicts between contradictory homogenousness requirements - on that same fundamental concept.

The author recently discovered (reading [SEP13], section 4.3) that J. M. Keynes’s Principle of the Limitation of Independent Variety, see [Key21], expresses essentially the same idea as homogenousness. (It seems, however, that the epistemological aspect, the naturalness of our concepts, is missing in his work.) By the way, [SEP13] also mentions “inference pressure” (in section 3.5.1), discussed in [Sch97-2], section 1.3.4, page 10. Thus, the ideas are quite interwoven.

Our main formal contribution here is to analyse a size relation between sets, generated by a relation between elements - similarly to Definition 2.6 and Fact 2.7 in [Sch97-2].

We use these ideas to take a new look at defeasible inheritance systems in Section 7, and analyse two fundamental decisions:

  1. Upward vs. downward chaining

  2. Extensions vs. direct scepticism

Moreover, we outline principles for a formal semantics based on our ideas.

1.2 A general comment

The reader will see that we treat here again semantics based on the notions of distance and size. These notions seem very natural, perhaps also because they have a neurological correspondence: semantically close neurons or groups of neurons tend to fire together, and a large number of neurons has a potentially bigger effect than a small number, as their effect on other neurons might add up.

In an appendix, we discuss a different and, we think, important concept, the core of a set; we base it on distance and find it by repeated application of standard theory revision.

2 Filters and Ideals

Definition 2.1


  1. F ⊆ P(U) is called a filter on U iff

    (1) U ∈ F

    (2) A ⊆ B ⊆ U and A ∈ F imply B ∈ F

    (3) A, B ∈ F implies A ∩ B ∈ F (finite intersection suffices here)

  2. If there is A ⊆ U such that F = {X ⊆ U : A ⊆ X}, we say that F is the (principal) filter generated by A.

  3. I ⊆ P(U) is called an ideal on U iff

    (1) ∅ ∈ I

    (2) A ⊆ B ⊆ U and B ∈ I imply A ∈ I

    (3) A, B ∈ I implies A ∪ B ∈ I (finite union suffices here)

Definition 2.2

Let F be a filter over U; then

I := {U − A : A ∈ F}

is the corresponding ideal (and conversely). Given F and the corresponding I, we set M := P(U) − (F ∪ I).

The intuition is that elements of the filter are big subsets, those of the ideal small subsets, and subsets in M have medium size.

When speaking about F, I, M over the same set U, we will always assume that they correspond to each other as just defined.
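On a finite universe, the filter/ideal/medium-size picture is easy to make executable. The following sketch (function and variable names are ours, not the paper's) builds the principal filter generated by a set A, the corresponding ideal of complements, and classifies subsets as big, small, or medium:

```python
from itertools import chain, combinations

def powerset(u):
    """All subsets of the finite universe u, as frozensets."""
    s = list(u)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def principal_filter(a, u):
    """F = { X <= U : A <= X }, the principal filter generated by A."""
    return {x for x in powerset(u) if a <= x}

def corresponding_ideal(filt, u):
    """I = { U - X : X in F }: the small sets are complements of big sets."""
    return {u - x for x in filt}

def size_of(x, filt, ideal):
    """Classify a subset as 'big', 'small', or 'medium'."""
    if x in filt:
        return "big"
    if x in ideal:
        return "small"
    return "medium"

u = frozenset({1, 2, 3, 4})
a = frozenset({1, 2})              # generator of the principal filter
F = principal_filter(a, u)
I = corresponding_ideal(F, u)

print(size_of(frozenset({1, 2, 3}), F, I))  # big: contains the generator A
print(size_of(frozenset({3, 4}), F, I))     # small: its complement contains A
print(size_of(frozenset({1, 3}), F, I))     # medium: neither
```

Note that every subset falls into exactly one of the three classes, matching the intuition above.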

Remark 2.1

but not necessarily the converse.


and so

For the converse: Consider then but

Definition 2.3

Given (and corresponding and we define:

  1. (a) and


    (b) and

  2. If we write and instead of and

    Note that case of the definition is impossible if By Remark 2.1, if then

Obviously, and are irreflexive.

We define two coherence properties:

Definition 2.4



These properties will be discussed in more detail below in Section 3, as they are closely related to properties of a preferential relation between elements of see [Sch18a].

First, an initial remark: if (Coh1) and (Coh2) hold, and behave well:

Fact 2.2


(1) Let then

(2) Let then the following four conditions are equivalent:



”: so by (Coh1), so

”: so and so Moreover and so by (Coh2).


By Remark 2.1, it suffices to show etc.

We use the finite union and downward closure properties of without mentioning.

We also use the following without further mentioning:




(d) by (Coh1)

We now show the equivalences.

(2.1) :

so by (Coh2) by (Coh1).

(2.2) :

by (Coh1). so by (Coh2), but

Note that we did not use is just an arbitrary set.

(2.3) :

Let by (2.1) so by (2.2)

(2.4) :

Trivial by (Coh1).

(2.5) :


Note that we did not use is just an arbitrary set.

(2.6) :

Let so by (2.4), so by (2.5).

Definition 2.5

Let a binary relation on we define for

We assume in the sequel that for any such and

Fact 2.3

Let the filter over generated by then the corresponding and and

When we discuss on and for subsets of we implicitly mean the filters, ideals, etc. generated by on subsets of as discussed in Fact 2.3.
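Fact 2.3 generates the size notions from a relation between elements: the big subsets of X are those containing all its minimal (most normal) elements. A minimal sketch, assuming μ(X) is the set of minimal elements of X under the preference relation (the names and the bird example are ours):

```python
def mu(x, prec):
    """Minimal elements of x under prec, a set of pairs (a, b) read as
    'a is preferred to (more normal than) b'."""
    return frozenset(e for e in x
                     if not any((a, e) in prec for a in x))

def is_big(y, x, prec):
    """Y is in the filter over X generated by mu(X) iff mu(X) is contained in Y."""
    return mu(x, prec) <= frozenset(y)

# Toy example: among three birds, robin is more normal than the two penguins.
x = {"tweety", "opus", "robin"}
prec = {("robin", "tweety"), ("robin", "opus")}

print(mu(x, prec))                            # frozenset({'robin'})
print(is_big({"robin", "tweety"}, x, prec))   # True: contains every minimal element
print(is_big({"tweety", "opus"}, x, prec))    # False: misses the minimal element
```

This makes Example 2.1 easy to replay: enlarging or shrinking X changes μ(X), so a formerly big subset may stop being big.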

It is now easy to give examples:

Example 2.1

is neither upward nor downward absolute. Intuitively, in a bigger set, formerly big sets might become small; conversely, in a smaller set, formerly small sets might become big.

Let Then

(1) does not imply

(2) does not imply

(1): Let Then but both

(2): Let but NOT Then but both

We now discuss properties of and and their relation to properties of when are generated by as in Fact 2.3.

3 on U and on

Definition 3.1

We define the following standard properties for

(1) Transitivity (trivial)

(2) Smoothness

If there is

(3) Rankedness

If neither nor and then also

(Rankedness implies transitivity.)

See, e.g. Chapter 1 in [Sch18a].
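The standard properties can be checked mechanically on finite relations. The sketch below (our names; rankedness is read as in the definition above: if a and b are incomparable and a is below c, then b is below c) tests smoothness and rankedness for a relation given as a set of pairs:

```python
def mu(x, prec):
    """Minimal elements of x under prec ((a, b) means a is below b)."""
    return {e for e in x if not any((a, e) in prec for a in x)}

def is_smooth(u, prec):
    """Smooth: every element is minimal, or has a minimal element below it."""
    m = mu(u, prec)
    return all(e in m or any((a, e) in prec for a in m) for e in u)

def is_ranked(u, prec):
    """Ranked: if a and b are incomparable and a is below c, then b is below c."""
    def incomp(a, b):
        return a != b and (a, b) not in prec and (b, a) not in prec
    return all((b, c) in prec
               for a in u for b in u for c in u
               if incomp(a, b) and (a, c) in prec)

# A transitively closed chain 3 below 2 below 1: smooth, and trivially ranked
# (a total strict order has no incomparable pairs).
u = {1, 2, 3}
prec = {(3, 2), (2, 1), (3, 1)}
print(is_smooth(u, prec))   # True
print(is_ranked(u, prec))   # True
```

A two-element cycle {(1, 2), (2, 1)} has no minimal elements at all, so it fails smoothness; and {(1, 3)} over {1, 2, 3} fails rankedness, since 2 is incomparable to 1 but not below 3.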

3.1 Simple and smooth


Definition 3.2

Again, see, e.g. Chapter 1 in [Sch18a].

Fact 3.1

(1) (Coh1) is equivalent to the basic property of preferential structures,

(2) The basic property of smooth preferential structures, implies (Coh2), and imply


As there is a biggest we can argue with elements.

(1) (Coh1):

suppose contradiction.

(2) (Coh2): Let so

Let and so so follows from (Coh1)

Example 3.1

(1) Consider but not with Then but Non-transitivity of is crucial here.

(2) Consider with and close under transitivity. Then but Note that this structure is not smooth, but transitive.

Fact 3.2

is transitive, if is smooth.


By Fact 3.1, we may use (Coh1) and (Coh2).

Let so and We have to show i.e.

Consider then by By the same argument, thus As and by (Coh2), and by Remark 2.1

3.2 Ranked

Rankedness speaks about so it is not surprising that behaves well for ranked

Definition 3.3

We define

This is well-defined.

Fact 3.3



Then iff

(a) and or

(b) and



(a) or

(b) and

(Recall that case (2) (b) in Definition 2.3 is impossible, if




and or

and or



The case is immediate.

Example 3.2

Here, is transitive and smooth, but not ranked, and is not transitive.





So have same size in

Fact 3.4

Let the relation be ranked. Then is transitive.


Let If both and hold by case (2) (b) in Fact 3.3, then again by case (2) (b), otherwise by case (2) (a).

Fact 3.5

Let then (and for

(This does not hold for of course.)


By but so

4 Specificity and Differentiation of Size

We now base the specificity criterion on the same notion of size as nonmonotonicity.

In this section, and are the positive or negative arrows of defeasible inheritance diagrams.

Fact 4.1

Suppose (Coh1) holds. If or and for some then

(Likewise, if only the contradiction matters.)


It suffices to show as then by Remark 2.1.

If or then moreover so by the finite intersection property. Thus by (Coh1). by so by (Coh1), thus by the finite union property.

Corollary 4.2

If then and together are impossible by irreflexivity.

We may now base the specificity principle, like nonmonotonicity itself, on small exception sets, but this time, the exceptions are second order. We thus have a uniform background principle for reasoning.

4.1 Summary

It seems best to illustrate the situation with an example.

Consider the Tweety Diagram: If we treat like we violate a comparatively smaller subset: As and is a comparatively smaller subset of than of and smaller exception sets are more tolerable than bigger exception sets.

Thus, size is a very strong concept for the foundation of nonmonotonic reasoning.

(In addition, otherwise, the chain has two changes: but this way, we have only one change:
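The specificity argument above can be made numerically concrete: whichever rule we override, its exception set is measured relative to that rule's own domain, and the more specific rule loses less. A toy sketch (the sizes are illustrative, not from the paper):

```python
# Toy Tweety diagram: penguins form a subset of birds;
# birds normally fly, penguins normally do not.
birds = set(range(100))        # 100 birds
penguins = set(range(5))       # 5 of them are penguins

# Option A: penguins follow the specific rule (do not fly).
# The exception set for "birds fly" is the penguins, measured inside birds.
violation_a = len(penguins) / len(birds)        # 5/100 = 0.05

# Option B: penguins inherit "fly" from birds.
# The exception set for "penguins do not fly" is all penguins,
# measured inside penguins.
violation_b = len(penguins) / len(penguins)     # 5/5 = 1.0

# Specificity prefers the option with the relatively smaller exception set.
assert violation_a < violation_b
print("prefer the specific rule:", violation_a, "<", violation_b)
```

The comparison is between relative, not absolute, sizes: the same five exceptional birds are a small subset of the birds but all of the penguins.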

4.2 Differentiation of Size

Our basic approach has only three sizes: small, medium, big.

Using the above ideas, we may differentiate further: if and then is doubly small in etc.

5 Homogenousness

We now have the following levels of reasoning:

  1. Classical logic:

    monotony, no exceptions, clear semantics

  2. Preferential logic:

    small sets of exceptions possible, clear semantics, strict rules about exceptions, like no other restrictions

  3. Meta-Default rules (Homogenousness):

    They have the form: and even if in the nonmonotonic sense of (2), we prefer those models where but exceptions are possible by nonmonotonicity itself, as, e.g., in (2).

    We minimize those exceptions, and resolve conflicts whenever possible, as in Fact 4.1, by using the same principle as in level (2): we keep exception sets small. This is summarized in the specificity criterion.

    (We might add a modified length-of-path criterion as follows: Let We know by Fact 4.1 that then any shorter chain has a shorter possible size reduction (if there are no other chains, of course!), and we can work with this. This is the same concept as in [Sch18e], section 4.)

    This has again a clear (preferential) semantics, as our characterisations are abstract, see e.g. [Sch18a].

Remark: Inheritance diagrams and Reiter defaults are based on homogenousness, e.g. in the concatenation by default.

6 Extensions

All distance-based semantics, like theory revision and counterfactuals, have a natural notion of size: the set of closest elements is the smallest set in the filter. Thus, we can apply our techniques to them, too.

Analogical reasoning and induction also operate with distance (analogical reasoning) and size (comparison of the sample size with the target size), so we may apply our principles here, too.

7 Defeasible Inheritance Diagrams

We discuss two of the main questions about defeasible inheritance diagrams in the light of our above analysis.

  1. Upward versus downward chaining

  2. Extension based versus directly sceptical approaches

7.1 Upward versus downward chaining

Diagram 7.1

[Diagram over the nodes Z, U, V, X, Y]

The problem of downward chaining

Discussion of Diagram 7.1:

We assume (Coh1) and (Coh2).

By Fact 4.1, we know that and by Fact 2.2 (2), we know that for any etc.

By specificity, there is with

Of course, only (a big subset of) is affected by as we do only downward reasoning, no analogical (sideways) reasoning.

There is with thus, is not affected by

But there is no information that

So there is and inherits from that (or, better: there is

So, in this example, our (downward) approach coincides with upward chaining, see [Sch97-2], section 6.1.3. Basically, the reason is that we look inside at and not only globally at which would involve (hidden) analogical reasoning.

7.2 Extension based versus directly sceptical approaches

Consider Diagram 7.2.

Extension based approaches branch into different extensions when a conflict cannot be solved by specificity.

Each extension violates the homogenousness assumption rather drastically. Assuming that half of (a medium size subset) is in the other half in is a less drastic violation, thus it corresponds to the overall strategy to minimize violation of homogenousness.

But there is no principal difference between two medium size subsets in conflict and one big and one small subset in conflict, so they should be treated the same way. Thus, in one single picture, we have not only conflicting big and small subsets, but also conflicting medium size subsets; this is more in the directly sceptical colour - without saying it is strictly the same as the traditional directly sceptical approach.

Thus, we have with and

Of course, any is NOT affected by

Thus, considering Diagram 7.3, there is and this inherits only from i.e. that it is mostly in

Diagram 7.2
[Diagram over the nodes U, V, X, Y]

The Nixon Diamond

Diagram 7.3
[Diagram over the nodes Z, U, V, X, Y]

Extended Nixon Diamond

Example 7.1

Let be the bottom node of two Nixon diamonds, e.g. add to Diagram 7.2 three nodes with

Then we have to split into two sets with (basically) and two sets with and, as they are independent, we split into four sets, all in with e.g. but but and

Note that it is unimportant in which order we treat the conflicts, or if we treat them simultaneously - as should be the case.

7.3 Ideas for a Semantics

If we want to treat the Nixon Diamond, we have to consider So far, we have not considered abstract coherence properties for We do this now.

Definition 7.1

This is a condition for ranked preferential structures.

We re-write this:

(The case does not interest us here very much.)

Illustration of (the main part of)

Suppose we add to Diagram 7.2 an arrow then we know that and so by

If, in addition, we add and and a negative arrow (not as in Diagram 7.3), is mostly in with and we may still conclude by the above that inherits to be (mostly) in from

7.3.1 Principles

We work with very few background principles:

  1. For many purposes, reasoning with abstract size seems the adequate approach.

  2. As always in nonmonotonic reasoning, small sets of (first-level) exceptions are possible, so we work with and instead of the full or empty set (we used above as an abbreviation).

  3. The hard rules of the background logic and of the filter/ideal properties tell us how to treat big/small/medium subsets on the first level.

  4. This is complemented by the homogenousness principle and conflict resolution by specificity on the second level.

  5. We treat all subsets the same way, not medium size sets differently by branching into different possibilities.

  6. Specificity is based on the same idea as nonmonotonicity itself: we tolerate (better) small exception sets (than bigger ones).

Based on these principles, we proceed as follows:

  1. We decide on a background logic, i.e. on coherence conditions. (Coh1) and (Coh2) seem a good choice.

  2. We respect the coherence conditions, and inherit properties strictly downward, not by analogy. Contradictions are either solved by specificity, or we choose one half for property the other for

    Independence is respected as in Example 7.1.

    It is important to choose the (sub)sets from which we inherit carefully; there is no analogical reasoning here. This was illustrated in the above examples.

8 Appendix - the Core of a Set

The following remarks are only abstractly related to the main part of these notes. The concept of a core is derivative of the notion of a distance, and the formal approach is based on theory revision, see e.g. [LMS01] or [Sch18b], section 4.3.

We define the core of a set as the subset of those elements which are “sufficiently” far away from elements which are NOT in the set. Thus, even if we move a bit, we still remain in the set.

This has interesting applications. E.g., in legal reasoning, a witness may not be very sure about colour and make of a car, but if he errs in one aspect, this may not be so important, as long as the other aspect is correct. We may also use the idea for a differentiation of truth values, where a theory may be “more true” in the core of its models than in the periphery, etc.

In the following, we have a set and a distance between elements of All sets etc. will be subsets of will be finite, the intuition is that is the set of models of a propositional language.

Definition 8.1




Definition 8.2

Fix some the core will be relative to One might write but this is not important here, where the discussion is conceptual.


(We might add some constant like 1/2 for so singletons have a non-empty core - but this is not important for the conceptual discussion.)

It does not seem to be easy to describe the core operator with rules e.g. about set union, intersection, etc. It might be easier to work with pairs (X, but we did not pursue this.

We may, however, base the notion of core on repeated application of the theory revision operator (for formulas) or (for sets) as follows:

Given (defined by some formula and (defined by the outer elements of (those of depth 1) are The elements of depth 2 are M( ) respectively, etc.

We make this formal.

Fact 8.1

  1. The set version

    Consider we want to find its core.





    Continue etc. until it becomes constant, say

    Now we go back:

  2. The formula version

    Consider we want to find its core.





    Continue etc. until it becomes constant, say

    Now we go back:
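The set version of Fact 8.1 can be sketched for a finite universe with an explicit distance: repeatedly peel off the boundary layer (the elements of the set closest to its complement) until nothing changes; the elements surviving d peels form the core at depth d. The function names, the peeling formulation, and the Hamming distance on models coded as bit tuples are our choices, standing in for the revision operators of the text:

```python
def hamming(a, b):
    """Distance between two models coded as bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def boundary(x, universe, dist):
    """Elements of x realizing the minimal distance to the complement of x."""
    comp = universe - x
    if not comp or not x:
        return set()
    d = min(dist(a, b) for a in x for b in comp)
    return {a for a in x if any(dist(a, b) == d for b in comp)}

def core(x, universe, dist, depth=1):
    """Peel `depth` boundary layers off x (the iterated-revision picture)."""
    current = set(x)
    for _ in range(depth):
        current -= boundary(current, universe, dist)
    return current

# Universe: all 3-bit models; X: models with first bit 1, plus (0, 1, 1).
universe = {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
x = {m for m in universe if m[0] == 1} | {(0, 1, 1)}

print(core(x, universe, hamming, depth=1))  # {(1, 1, 1)}
```

Only (1, 1, 1) is at distance greater than 1 from every model outside X, so it alone survives one peel - it is the element that still satisfies X even if one bit is perturbed, matching the witness intuition above.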

8.1 Acknowledgement

The author would like to thank David Makinson for an important comment.


  • [GS16] D. Gabbay, K. Schlechta, “A New Perspective on Nonmonotonic Logics”, Springer, Heidelberg, Nov. 2016, ISBN 978-3-319-46815-0.
  • [Key21] J. M. Keynes, “A treatise on probability”, London, 1921

  • [LMS01] D. Lehmann, M. Magidor, K. Schlechta, “Distance semantics for belief revision”, Journal of Symbolic Logic, Vol. 66, No. 1, pp. 295–317, March 2001
  • [SEP13] “Analogy and analogical reasoning”, Stanford Encyclopedia of Philosophy, Fall 2013 edition
  • [Sch18a] K. Schlechta, “Formal Methods for Nonmonotonic and Related Logics”, Vol. 1: “Preference and Size” Springer, 2018
  • [Sch18b] K. Schlechta, “Formal Methods for Nonmonotonic and Related Logics”, Vol. 2: “Theory Revision, Inheritance, and Various Abstract Properties” Springer, 2018
  • [Sch18e] K. Schlechta, “Operations on partial orders”, Unpublished manuscript, arXiv 1809.10620
  • [Sch97-2] K. Schlechta, “Nonmonotonic logics - basic concepts, results, and techniques” Springer Lecture Notes series, LNAI 1187, Jan. 1997.