Fixed points and concepts related to them, such as induction and coinduction, have applications in numerous scientific fields. These mathematical concepts also have applications in the social sciences, and even in common culture, usually in disguise.
These precise formal mathematical concepts are in fact quite common in everyday life, to the extent that in many activities or situations (or “systems”) a mapping and a set of its fixed points can be discovered with a little mental effort. In particular, in any situation or system where there is some informal notion of repetition, iteration, or continual “come-and-go” or “give-and-take”, together with some informal notion of reaching a stable condition (e.g., equilibrium in chemical reactions or “common ground” between spouses), it is usually the case that an underlying mapping, i.e., a mathematical function/map, together with a set of fixed points of that mapping, can be easily revealed. This mapping is sometimes called a ‘state transition function’, and a fixed point of the mapping is what is usually called a ‘stable condition’ or a ‘steady state’ (of the “system”).
These concepts also show up, thinly disguised, in visual art and music. Applications of these mathematical concepts in scientific fields, however, are plentiful and more explicit, including ones in economics and econometrics, in mathematical physics, in computer science, and in many other areas of mathematics itself. Their applications in computer science, in particular, include ones in programming language semantics—which we touch upon in this article—as well as in relational database theory (e.g., recursive or iterated joins) and in concurrency theory. (For more details on the use of induction, coinduction, and fixed points in computer science, and for concrete examples of how they are used, the reader is invited to check the relevant literature on the topic, e.g., [8, 15, 3, 17, 12].)
Fixed points, induction, and coinduction have been formulated and studied in various subfields of mathematics, usually using different vocabulary in each field. The interested reader is invited to check our comparative survey in . In this article, for pedagogic purposes, we illustrate the concepts that we presented formally in , and we attempt to develop intuitions for these concepts using examples from number theory, set theory, and real analysis.
2 Induction and Coinduction Instances
2.1 Induction Instances
An instance of the set-theoretic induction principle (presented in 3 of ) is the standard mathematical induction principle. In this well-known instance, induction is set up as follows: the generator is the “successor” function $s$ (of Peano) (namely, $s(x) = x \cup \{x\}$, where, e.g., $1 = s(0) = \{0\}$ and $2 = s(1) = \{0, 1\}$); $\mu s$ (the smallest fixed point of the successor function $s$) is the set $\mathbb{N}$ of natural numbers (in fact $\mu s$ defines the set $\mathbb{N}$ as the set of all finite non-negative whole numbers; its existence, as the smallest infinite inductive/successor set, is an immediate consequence of the infinity axiom (and the subset and extensionality axioms) of set theory; see [6, Ch. 4] and [9, 11]); and $P$ is any inductive property/set of numbers.
An example of an inductive set is the set $E$ of natural numbers defined as
$$E = \{\, n \in \mathbb{N} \mid 2^n > n \,\}.$$
(Equivalently, viewing $E$ as a property, its definition expresses the property that each of its members, say $n$, has its exponential $2^n$ strictly greater than itself.) Set/property $E$ is an inductive one since $\mathbb{N} \subseteq E$—i.e., $\forall n \in \mathbb{N}.\; 2^n > n$—can be proven inductively by proving that $E$ is $s$-closed, which is sometimes, equivalently, stated as proving that $s$ preserves property $E$ or that property $E$ is preserved by $s$. The $s$-closedness (or ‘$s$-preservation’) of $E$ can be proven by proving that $E$ contains $0$ and the successor of each of its members. Using the definition of the successor function $s$, this means proving that $0 \in E$ (i.e., that the base case, $2^0 > 0$, holds) and that from $n \in E$ one can conclude that $n + 1 \in E$ (i.e., that the inductive case, $2^n > n \implies 2^{n+1} > n + 1$, holds). This last form is the form of proof-by-induction presented in most standard discrete mathematics textbooks.
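The construction of an inductive set “from below” can be sketched mechanically. The following is a minimal sketch of ours (not from the article): we truncate the naturals at an arbitrary cap so that Kleene iteration of a successor-based generator from the empty set terminates at the least fixed point.

```python
# Least fixed point of a monotone set generator, computed by Kleene
# iteration from the empty set.  CAP and the generator F are our own
# illustrative choices: CAP truncates the naturals so iteration halts.

CAP = 20  # hypothetical finite truncation of the naturals

def F(X):
    """Successor-based generator: F(X) = {0} | {n+1 : n in X} (truncated)."""
    return {0} | {n + 1 for n in X if n + 1 < CAP}

def lfp(F, bottom=frozenset()):
    """Iterate F from the bottom element until a fixed point is reached."""
    X = set(bottom)
    while F(X) != X:
        X = F(X)
    return X

naturals = lfp(F)
print(sorted(naturals))  # → [0, 1, 2, ..., 19]
```

Each iteration adds the elements constructible in one more application of the generator, which is exactly the “finite number of steps” reading of inductive membership.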
Another instance of the induction principle is lexicographic induction, defined on lexicographically linearly-ordered (i.e., “dictionary ordered”) pairs of elements [15, 16]. In 4.1 of  (and 2.1 of ) we present a type-theoretic formulation of the induction principle that is closely related to the set-theoretic one. The type-theoretic formulation is the basis for yet a third instance of the induction principle—called structural induction—that is extensively used in programming semantics and automated theorem proving (ATP), including reasoning about and proving properties of (functional) software.
2.2 Coinduction Instances
3 Induction and Coinduction Intuitions
To develop an intuition for coinduction, let’s consider the intuition behind induction first. While discussing the intuition behind induction and the goal behind defining sets inductively, Enderton [5, p. 22] states the following (emphasis added):
“We may want to construct a certain subset of a set by starting with some initial elements of , and applying certain [generating] operations to them over and over again. The [inductive] set we seek will be the smallest set containing the initial elements and closed under the operations. Its members will be those elements of which can be built up from the initial elements by applying the operations a finite number of times.
The idea is that we are given certain bricks to work with [i.e., the initial elements], and certain types of mortar [i.e., the generators], and we want [the inductive set] to contain just the things we are able to build [with a finite amount of bricks and mortar].”
Using this simple intuition we develop a similar intuition for coinduction. As for induction, we want to construct a certain subset of a set by starting with some initial elements of it and applying certain generating operations to them over and over again. The coinductive set we seek is now the largest set containing the initial elements and consistent under the operations. As such, its members will be those elements of the set which can be built up from the initial elements by applying the operations a finite or infinite number of times.
The idea, as for induction, is that we are given certain bricks to work with (i.e., the initial elements) and certain types of mortar (i.e., the generators), and we want the coinductive set to contain all things we are able to fathom building (with a finite or even infinite amount of bricks and mortar, but, for consistency, using nothing else, i.e., without using some other kind of building material such as glass or wood).
To illustrate this intuition, a couple of points are worthy of mention:
First, while being ‘closed under generating operations’ seems intuitive, being ‘consistent under generating operations’ may initially seem unintuitive, and may thus be a main hindrance to an intuitive understanding of coinduction and coinductive sets.
Second, inductiveness of a set necessitates that elements belonging to the set result from applying the generating operations (i.e., generators) only a finite number of times. This finiteness condition/restriction is a consequence of induction seeking the smallest subset that can be constructed using the generators. As such, to define a set inductively, induction works from below (starts small) and gets bigger using the generators (which sounds intuitive). Coinduction, on the other hand, allows, but does not necessitate, applying the generating operations an infinite number of times. This freedom is part of coinduction seeking the largest subset that can conceivably be constructed using the generators. As such, to define a set coinductively, coinduction works from above (starts big) and gets smaller (removing invalid constructions) using the “generating” operations—which initially may seem unintuitive (in fact, as explained precisely below, the complement of a coinductive set is a closely-related inductive set, and vice versa).
The best intuition regarding coinductive sets, however, seems to come from the fact Kozen and Silva state in [12, p. 6] that relates coinductive sets to inductive sets and to set complementation (presumably two well-understood concepts):
“It is a well-known fact that a [set] is coinductively defined as the greatest fixpoint of some monotone operator iff its complement is inductively defined as the least fixpoint of the dual operator; expressed in the language of the $\mu$-calculus,
$$\nu X.\, F(X) = \neg \mu X.\, \neg F(\neg X).\text{”}$$
(Note the triple use of complementation. The fact Kozen and Silva state indeed sounds true and intuitive. Unfortunately, however, Kozen and Silva state this fact only in a footnote, merely as a ‘well-known fact,’ without citing any references to corroborate it.)
Trying to understand the intuition behind this well-known fact, which gives a simple but indirect definition of a coinductive set in terms of an inductive one, gives an intuitive understanding of coinductive sets; an understanding that depends only on the intuitive understanding of inductive sets and of (set-theoretic) complementation.
In particular, to gain an intuitive understanding of coinductive sets using this fact, we focus first on understanding the concept of the ‘dual operator’ $F^{\neg}$ of a monotone operator $F$, defined as
$$F^{\neg} = \neg \circ F \circ \neg$$
(where $\circ$ is the function composition operator). According to the definition, the endofunction $F^{\neg}$ is an operator on sets (i.e., on subsets of the universe $U$) that, simply, complements its input set, passes that complement to generator $F$, then returns the complement of what $F$ returns as its own result.
Given that $F$ is assumed monotonic, $F^{\neg}$ is easily proved to be monotonic too. That is, for $X \subseteq Y$ we have $F^{\neg}(X) \subseteq F^{\neg}(Y)$, because (by the contravariance of complementation—also called antimonotonicity or antitonicity of $\neg$, i.e., $X \subseteq Y \implies \neg Y \subseteq \neg X$) we have $\neg Y \subseteq \neg X$, which (by the monotonicity of $F$) implies that $F(\neg Y) \subseteq F(\neg X)$, which implies (again, by complementation contravariance) that $\neg F(\neg X) \subseteq \neg F(\neg Y)$, i.e., that $F^{\neg}(X) \subseteq F^{\neg}(Y)$, as required. As such, like $F$, $F^{\neg}$ is also a sets-generator.
Now, as a sets-generator, $F^{\neg}$ has a least fixed point, called $\mu F^{\neg}$, which is an inductively defined set. The complement of set $\mu F^{\neg}$ (i.e., the set $\neg \mu F^{\neg}$) is the coinductively defined set we seek, i.e., it is the greatest fixpoint of $F$ (i.e., $\nu F$). As such, eliding the composition operator (as is customary), we have
$$\nu F = \neg \mu \neg F \neg,$$
which, we believe, expresses precisely and concisely the intuition behind coinduction and coinductive sets.
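This “well-known fact” can be checked mechanically on a finite universe. The following is a toy sketch of ours (the universe $U$ and generator below are illustrative choices, not taken from the article): on a finite powerset lattice, least and greatest fixed points of a monotone operator are reached by plain iteration from the bottom and top elements, respectively.

```python
# Finite check of nu F = complement of (mu of the dual operator F¬),
# where F¬(X) = ¬F(¬X).  U and F are our own toy choices; F is monotone
# and has two distinct fixed points, {1} and {1, 2}.

U = frozenset({1, 2, 3})

def F(X):
    return (X & {1, 2}) | {1}

def dual(F):
    """The dual operator F¬ = ¬ ∘ F ∘ ¬ (complement w.r.t. U)."""
    return lambda X: U - F(U - X)

def fix(F, start):
    """Iterate F from `start` until a fixed point is reached."""
    X = set(start)
    while F(X) != X:
        X = F(X)
    return X

mu_F  = fix(F, set())        # least fixed point: iterate up from ∅
nu_F  = fix(F, U)            # greatest fixed point: iterate down from U
mu_Fd = fix(dual(F), set())  # least fixed point of the dual operator

print(mu_F, nu_F, U - mu_Fd)
assert nu_F == U - mu_Fd     # nu F = ¬(mu F¬)
```

Here $\mu F = \{1\}$, $\nu F = \{1, 2\}$, and $\mu F^{\neg} = \{3\}$: the dual operator inductively collects exactly the elements rejected from the coinductive set.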
Noting that complementing a finite subset of an infinite set produces a cofinite set (‘the complement of a finite set’), we can intuitively see that, when starting with a finite set (the initial “building bricks,” using Enderton’s terminology), applying complementation thrice (as is done in the definition of $\nu F$) defines some special sort of infinite set (particularly, some special sort of cofinite set). Infinite sets of this special sort constitute coinductive sets.
For example, given the Peano generator $s$ that we discussed earlier, we have
$$s^{\neg}(X) = \neg s(\neg X).$$
As such, if $P$ is the set of positive integers, $E$ is the set of even natural numbers, and $O$ is the set of odd natural numbers, then (as readers are invited to ascertain for themselves) we have, e.g.,
$$s^{\neg}(\mathbb{N}) = P, \qquad s^{\neg}(E) = O, \qquad s^{\neg}(\emptyset) = \emptyset,$$
which intuitively justifies stating that $\mu s^{\neg} = \emptyset$. (Note that for all $X$, $s^{\neg}(X)$ never contains 0—i.e., the initial “bricks”—and also that these images are proper subsets of $\mathbb{N}$. Nevertheless, $s^{\neg}$ is monotonic—which initially appears to defy intuition, as is usual when reasoning about infinite sets.)
As is obvious from the above, the set $\emptyset$ is the least fixed point of $s^{\neg}$, and thus $\mu s^{\neg} = \emptyset$. Accordingly, we have
$$\nu s = \neg \mu s^{\neg} = \neg \emptyset = \mathbb{N}.$$
This means that, for this particular generator $s$ (with the chosen domain $\mathbb{N}$), we have $\mu s = \nu s = \mathbb{N}$ (i.e., the least fixed point and the greatest fixed point of $s$ agree, and $s$ has only one fixed point).
To further strengthen our intuitions, let’s consider yet a third intuition about coinductive sets—the one offered in [7, p.61], which states that
“[An element] will belong to a coinductive [set] as long as there is not a good finite reason for it not to.”
which is also expressed, a little less precisely, in [12, p. 2] as
“A property [e.g., an element belonging to a set] holds by induction if there is good reason [e.g., a finite construction of the element] for it to hold; whereas a property holds by coinduction if there is no good reason [e.g., a finite construction] for it not to hold.”
Connecting these two similar intuitions with the prior intuition (i.e., the one based on Kozen and Silva’s well-known fact), we can see that the role of the dual operator $F^{\neg}$ (in defining $\nu F$) is to define (inductively) those elements for which there is “a good finite reason” not to belong to $\nu F$ (we call these the rejected elements). $F^{\neg}$ defines these rejected elements by precomposing $F$ with negation (set complementation) and then postcomposing the resulting operator with negation again. The role of $\mu F^{\neg}$, the least fixed set of $F^{\neg}$, is then to collect all these rejected elements (all of which have a finite “bad” reason—i.e., inconsistency with $F$—to belong, or, equivalently, a finite good reason not to belong), and nothing but them (since $\mu F^{\neg}$ is a minimal inductive set), in one set. Then, finally, the role of the third (i.e., the outermost) negation in the definition of $\nu F$ is to exclude those rejected elements from belonging to $\nu F$ (the greatest fixed set of $F$).
Combining the intuitive understanding of inductive and coinductive sets, it is now easy to observe that a sets-generator $F$ defined over (subsets of) an infinite universe $U$ defines three disjoint (and possibly empty) subsets of $U$:
A set, $\mu F$, containing the “good and finite” elements of $U$ (relative to $F$) that “break no rules” (i.e., that can be constructed by $F$, and are thus consistent with it), and that can be constructed finitely (in a finite number of steps/applications of $F$),
A set, $\nu F \setminus \mu F$ (note that $\mu F \subseteq \nu F$ always holds), containing the “in-between, good but infinite” elements of $U$ (relative to $F$) that also break no rules (i.e., can be constructed by $F$ and are thus consistent with it), but that cannot be finitely constructed by it (and thus do not belong to $\mu F$ but only to $\nu F$), and
A set, $\neg \nu F$, containing the “bad” elements of $U$ (relative to $F$) that “break the rules” (of $F$) and thus cannot be finitely or even infinitely constructed by $F$ (i.e., are inconsistent with $F$) and thus are not included even in $\nu F$ (these are elements of $U$ that, whether finite or infinite elements, cannot be constructed by $F$, whether $F$ is applied a finite or an infinite number of times, and are thus “totally inconsistent” with $F$).
This observation can be summarized by saying that, relative to a generator $F$, elements of the universe $U$ are either: finitely-consistent with $F$, infinitely-consistent with $F$, or (finitely or infinitely) inconsistent with $F$.
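The three-way classification above can be illustrated concretely on a finite universe. The universe $U$ and generator $F$ below are our own toy choices (not from the article), arranged so that all three classes are non-empty.

```python
# Partitioning a finite universe U relative to a monotone generator F
# into: mu F (finitely consistent), nu F \ mu F (infinitely consistent),
# and U \ nu F (inconsistent).  U and F are illustrative choices of ours.

U = frozenset({0, 1, 2, 3})

def F(X):
    out = {0}            # 0 is an initial "brick": always constructible
    if 0 in X:
        out |= {1}       # 1 is built from 0 in finitely many steps
    out |= X & {2}       # 2 is only "self-justifying": no finite reason
    return out           # 3 is never produced: inconsistent with F

def fix(F, start):
    """Iterate F from `start` until a fixed point is reached."""
    X = set(start)
    while F(X) != X:
        X = F(X)
    return X

mu = fix(F, set())   # least fixed point (iterate up from the empty set)
nu = fix(F, U)       # greatest fixed point (iterate down from U)

finitely_consistent   = mu
infinitely_consistent = nu - mu
inconsistent          = set(U) - nu
print(finitely_consistent, infinitely_consistent, inconsistent)
# → {0, 1} {2} {3}
```

Element 2 belongs to the coinductive set only because it supports itself (there is no finite reason to reject it), while element 3 is rejected outright: exactly the distinction drawn in the list above.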
An example of dual operators are the logical operators $\forall$ (for all) and $\exists$ (there exists) in first-order logic. Note that $\forall = \neg \exists \neg$ and $\exists = \neg \forall \neg$. The same applies to $\cap$ and $\cup$ (in set theory), and to $\wedge$ and $\vee$ (in propositional logic). The statements that these pairs of operators are dual operators are usually called ‘De Morgan’s Laws.’
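The set-theoretic instance of these dualities can be checked exhaustively over a small universe. This is a toy check of ours (the universe size is arbitrary):

```python
# Exhaustive finite check of De Morgan's laws over all subsets of a
# small universe U: ¬(A ∪ B) = ¬A ∩ ¬B and ¬(A ∩ B) = ¬A ∪ ¬B.
from itertools import combinations

U = frozenset(range(6))
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(U, r)]

for A in subsets:
    for B in subsets:
        assert U - (A | B) == (U - A) & (U - B)   # dual of union
        assert U - (A & B) == (U - A) | (U - B)   # dual of intersection
print("De Morgan dualities hold on all", len(subsets), "subsets of U")
```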
Note that ‘complementation as negation’ makes sense in set theory (and, accordingly, also in structural type theory and in first-order logic; see 4.1 and 5 of ), thus making coinductive sets (and coinductive structural data types and coinductive first-order logical statements) relatively simple to understand, i.e., understandable based on the intuitive understanding of inductive sets. A counterpart of set-theoretic complementation may not, however, be easily definable as the meaning of negation ($\neg$) in other mathematical fields. This makes coinductive objects in those fields a bit harder to reason about. (We intend to deliberate on this point more in future versions of this article. But see also 3.2 of  for a discussion of type negation in the context of OO/nominal type theory.)
Another less-obvious instance of a coinductive set is the standard subtyping relation in (nominally-typed) object-oriented programming languages. See the notes of 2.2 of  for more details.
4 Illustrating Induction and Coinduction
In this section we strengthen our intuitive understanding of pre-fixed points, post-fixed points, and fixed points by illustrating these concepts in set theory, and also by presenting a pair of examples (and exercises) from analysis and number theory—examples that, as such, are most likely familiar to many readers—that exemplify these concepts.
4.1 In Set Theory
Figure 1 visually illustrates the main concepts we discuss in this article in the context of set theory, by presenting the pre-/post-/fixed points of some abstract generator $F$ over the powerset $\wp(U)$ (a complete lattice) of some abstract set $U$.
Figure 1 illustrates that some subsets of $U$ are among the $F$-closed/$F$-large/$F$-pre-fixed subsets of $U$ (i.e., the upper diamond, approximately), while others are among its $F$-consistent/$F$-small/$F$-post-fixed subsets (i.e., the lower diamond, approximately). Of the sets illustrated in the diagram, only two are $F$-fixed subsets of $U$, i.e., belong to the intersection of the $F$-pre-fixed and $F$-post-fixed subsets (i.e., the inner diamond, exactly). It should be noted that some other subsets of $U$ (i.e., points/elements of $\wp(U)$) may neither be among the pre-fixed subsets of $F$ nor among the post-fixed subsets of $F$ (and thus are not among the fixed subsets of $F$ either). Such subsets are not illustrated in Figure 1. (If drawn, such subsets would lie, mostly, outside all three diamonds illustrated in Figure 1.)
Figure 1 also illustrates that $\mu F \subseteq \nu F$ holds for any set $U$ and any generator $F$. However, depending on the particular $F$, it may (or may not) be the case that $\mu F = \nu F$ (e.g., if $F$ is a constant function or if $F$ happens to have only one fixed point). (In the context of category theory, the symbol $=$ is usually interpreted as denoting an equivalence or isomorphism relation between objects, rather than denoting the equality relation; see 6 of .)
With a few alterations (such as changing the labels of its objects, the direction of its arrows, and the symbols on its arrows) the diagram in Figure 1 can be used to illustrate these same concepts in the context of order theory, type theory, first-order logic, or category theory. (Exercise: Do that.)
4.2 In Analysis
Most readers will have met (and been intrigued by?) fixed points in their high-school days, particularly during their study of analysis (the subfield of mathematics concerned with the study of real numbers and of mathematical concepts related to them). To further motivate the imagination and intuitions of readers, we invite them to consider the function $f$ graphed in Figure 2. In particular, we invite the reader to decide: (1) which points of the totally-ordered real line (the x-axis) are pre-fixed points of $f$, (2) which points/real numbers are post-fixed points of $f$, and (3), most easily, which points are fixed points of $f$. (To decide these sets of points using Figure 2, readers should take particular care: (1) not to confuse the fixed points of $f$—i.e., crossings of the graph of $f$ with the graph of the identity function, which are solutions of the equation $f(x) = x$—with the zeroes of $f$—i.e., crossings with the $x$-axis, which are solutions of the equation $f(x) = 0$—and (2) not to confuse pre-fixed points of $f$ for its post-fixed points and vice versa.)
Since $f$ is not a monotonic function (i.e., is not a generator) we compensate by giving some visual hints in Figure 2 so as to make the job of readers easier. The observant reader will recall that diagrams such as that in Figure 2 are often used (e.g., in high-school/college math textbooks), with “spirals” zeroing in on fixed points overlaid on the diagrams, to explain iterative methods—such as the renowned Newton-Raphson method—that are used in numerical analysis to numerically approximate roots of functions (i.e., solutions of equations). These methods can easily be seen as seeking to compute fixed points of some related functions. As such, these numerical root-finding methods can also be explained in terms of pre-fixed points and post-fixed points.
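The connection between root-finding and fixed points can be made concrete. The following is a small sketch of ours (the target equation, starting point, and tolerance are arbitrary illustrative choices): Newton's method for $f(x) = x^2 - 2$ iterates the map $g(x) = x - f(x)/f'(x) = (x + 2/x)/2$, and the root $\sqrt{2}$ of $f$ is precisely a fixed point of $g$.

```python
# Newton-Raphson as fixed-point iteration: the root sqrt(2) of
# f(x) = x**2 - 2 is a fixed point of the Newton map g.
import math

def g(x):
    return (x + 2.0 / x) / 2.0   # Newton step for f(x) = x**2 - 2

x = 1.0                          # arbitrary initial guess
for _ in range(10):
    x = g(x)                     # "spiraling in" on the fixed point

print(x)                         # ≈ 1.4142135623730951
assert abs(g(x) - x) < 1e-12     # x is (numerically) a fixed point of g
```

Depending on which side of the fixed point the current iterate lies, it plays the role of (an approximate) pre-fixed or post-fixed point of the iterated map.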
(Exercise: Do the same for the monotonic function $g$ depicted in Figure 3, and then relate your findings to Figure 1. Now also relate your findings regarding $f$ in Figure 2—or regarding monotonic/monotonically-increasing and antitonic/monotonically-decreasing sections of $f$—to Figure 1. Based on your relating of Figure 2 and Figure 3 to Figure 1, what do you conclude?)
The main goal of presenting these two examples from analysis is to let readers strengthen their understanding of pre-/post-/fixed points, and dispel any misunderstandings they may have about them, by allowing them to connect examples of fixed points they are likely familiar with (i.e., those in analysis) to the more general but probably less familiar concepts of fixed points found in order theory, set theory, and other branches of mathematics. Comparing the illustration of fixed points and related concepts in set theory (e.g., as in Figure 1) with illustrations of them using examples from analysis (e.g., as in Figure 2 and Figure 3) makes evident the fact that a function over a partial order (such as $\wp(U)$) is not as simple to imagine or draw as a function over a total order (such as $\mathbb{R}$). However, illustrations and examples using functions over total orders can be too specific and thus misleading, since they may hide the greater generality, the wider applicability, and the greater mathematical beauty of the concepts depicted in the illustrations.
4.3 In Number Theory
For a non-visual example, consider perfect numbers in arithmetic. (This example is taken from [6, p. 5], where, being tangential and in a different context than ours, it is mentioned without reference to pre-/post-/fixed points. More on perfect numbers can be found in other standard texts on arithmetic and number theory.) A positive integer is perfect if it equals the sum of its smaller divisors, e.g., 6 = 1 + 2 + 3. It is deficient (or abundant) if the sum of its smaller divisors is less than (or greater than, respectively) the number itself. The first four perfect numbers are 6, 28, 496, and 8128.
A watchful reader will immediately note that deficient numbers are the proper pre-fixed points (ones that are not fixed points) of a function from numbers to numbers, abundant numbers are the proper post-fixed points of that function, and perfect numbers are its fixed points. That function is, namely, the endofunction that sums the smaller divisors of its input number.
Let’s call this function $sd$ (for ‘sum of divisors’). If $\mathbb{Z}^{+}$ is the (totally-ordered) set of positive whole numbers, then deficient numbers are exactly the members of the set of proper pre-fixed points of $sd$, abundant numbers are exactly the members of the set of proper post-fixed points of $sd$, and perfect numbers are exactly the members of the set of fixed points of $sd$. (Note that $sd$ is not a monotonic/covariant/increasing/monotonically-increasing function, and that $\mathbb{Z}^{+}$ is not a complete lattice. However, the notions of pre-fixed points, post-fixed points, and fixed points still have relevance in such a context. See 2.2 of  for a similar situation.)
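The classification above is easy to check computationally. This is a small sketch of ours (the function name `sd`, `classify`, and the search bound are our own choices); we search only up to 1000 for brevity, which finds the first three perfect numbers.

```python
# Classifying positive integers as deficient, perfect, or abundant,
# i.e., as proper pre-fixed, fixed, or proper post-fixed points of the
# sum-of-smaller-divisors endofunction sd.

def sd(n):
    """Sum of the divisors of n that are smaller than n."""
    return sum(d for d in range(1, n) if n % d == 0)

def classify(n):
    s = sd(n)
    if s < n:
        return "deficient"   # sd(n) < n: proper pre-fixed point of sd
    if s == n:
        return "perfect"     # sd(n) = n: fixed point of sd
    return "abundant"        # sd(n) > n: proper post-fixed point of sd

perfect = [n for n in range(1, 1000) if classify(n) == "perfect"]
print(perfect)  # → [6, 28, 496]
```

Raising the bound past 8128 would recover the fourth perfect number mentioned above, at the cost of a slower (quadratic) search.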
-  Moez A. AbdelGawad. Induction, coinduction, and fixed points in order theory, set theory, type theory, first-order logic, and category theory: A concise comparative survey. eprint available at http://arxiv.org/abs/1812.10026, 2018.
-  Moez A. AbdelGawad. Induction, coinduction, and fixed points in programming languages type theory. eprint available at http://arxiv.org/abs/1902.xxxxxx, 2019.
-  Yves Bertot and Pierre Casteran. Interactive Theorem Proving and Program Development Coq’Art: The Calculus of Inductive Constructions. Springer, 2004.
-  M. Brandt and F. Henglein. Coinductive axiomatization of recursive type equality and subtyping. Fundamenta Informaticae, 33(4):309–338, 1998.
-  Herbert B. Enderton. A Mathematical Introduction to Logic. Academic Press, New York, 1972.
-  Herbert B. Enderton. Elements of Set Theory. Academic Press, New York, 1977.
-  Thomas Forster. Logic, Induction, and Sets. London Mathematical Society Student Texts. Cambridge University Press, 2003.
-  John Greiner. Programming with inductive and co-inductive types. Technical Report CMU-CS-92-109, School of Computer Science, Carnegie Mellon University, Jan 1992.
-  Paul R. Halmos. Naive Set Theory. D. Van Nostrand Company, Inc., 1960.
-  Douglas R. Hofstadter. Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books, second edition, 1999.
-  Laurence R. Horn. A Natural History of Negation. CSLI Publications, 2001.
-  Dexter Kozen and Alexandra Silva. Practical coinduction. Mathematical Structures in Computer Science, 27(7):1132–1152, 2016.
-  Andrew McLennan. Advanced Fixed Point Theory for Economics. Springer, 2018.
-  Roger Penrose. The Road to Reality: A Complete Guide to the Laws of the Universe. Jonathan Cape, 2004.
-  Benjamin C. Pierce. Types and Programming Languages. MIT Press, 2002.
-  Kenneth A. Ross and Charles R. B. Wright. Discrete Mathematics. Prentice Hall, third edition, 1992.
-  Davide Sangiorgi. Introduction to Bisimulation and Coinduction. Cambridge University Press, 2012.