1 Introduction
Fixed points and concepts related to them, such as induction and coinduction, have applications in numerous scientific fields. These mathematical concepts also have applications in the social sciences, and even in popular culture, usually in disguise.
These precise formal mathematical concepts are in fact quite common in everyday life, to the extent that in many activities or situations (or “systems”) a mapping and a set of its fixed points can be discovered with a little mental effort. In particular, in any situation or system where there is some informal notion of repetition or iteration or continual “come-and-go” or “give-and-take”, together with some informal notion of reaching a stable condition (e.g., equilibrium in chemical reactions or “common ground” between spouses), it is usually the case that an underlying mapping, i.e., a mathematical function/map, together with a set of fixed points of that mapping, can be easily revealed. This mapping is sometimes called a ‘state transition function’, and a fixed point of the mapping is what is usually called a ‘stable condition’ or a ‘steady state’ (of the “system”).
These concepts also show up thinly disguised in visual art and music [29]. Applications of these mathematical concepts in scientific fields, however, are more plentiful and more explicit, including ones in economics and econometrics [38], in mathematical physics [40], in computer science, and in many other areas of mathematics itself. Their applications in computer science, in particular, include ones in programming language semantics—which we touch upon in this paper—as well as in relational database theory (e.g., recursive or iterated joins) and in concurrency theory. (Footnote 1: For more details on the use of induction, coinduction, and fixed points in computer science, and for concrete examples of how they are used, the reader is invited to check relevant literature on the topic, e.g., [27, 42, 13, 46, 34].)
Fixed points, induction and coinduction have been formulated and studied in various subfields of mathematics, usually using different vocabulary in each field. Fields that study these concepts formally include order theory, set theory, programming languages (PL) type theory (i.e., the study of data types in computer programming languages), (first-order) logic, and category theory. In this paper—which started as a brief self-note—we compare the various formulations of these concepts by presenting a summary of how these concepts, and many related ones—such as pre-fixed points, post-fixed points, inductive sets/types, coinductive sets/types, algebras, and coalgebras—are defined in each of these mathematical subfields.
Of particular interest to programming languages researchers, we also give attention to the fundamental conceptual difference between structural type theory, where type equality and type inclusion are defined based on the structures of types but not on their names, and nominal type theory, where type equality and type inclusion are defined based on the names of types in addition to their structures. Given the difference in how each defines the inclusion relation between types (i.e., the subtyping relation), the conceptual difference between structural typing and nominal typing expresses itself prominently when fixed points and related concepts are formulated in structural type theory and in nominal type theory.
This paper is thus structured as follows. In Section 2 (Order Theory) we start by presenting how these important mathematical concepts are formulated in a natural and simple manner in order theory; we then present their standard formulation in set theory in Section 3 (Set Theory). In Section 3 we also illustrate the concepts, for pedagogic purposes, using examples from number theory, set theory and real analysis. Since PL type theory builds on set theory, we next present the formulation of these concepts in the theory of types of functional programming languages (which is largely structurally-typed) in Section 4.1 (Inductive and Coinductive Functional Data Types), and we follow that by presenting their formulation in the type theory of object-oriented programming languages (which is largely nominally-typed) in Section 4.2 (Object-Oriented Type Theory). Building on intuitions gained from the formulations presented in Section 3, we then suggest in Section 5 (First-Order Logic) a formulation of these concepts in first-order logic. Then we present the most general formulation of these concepts, in the context of category theory, in Section 6 (Category Theory).
In Section 7 (Comparison Summary) we summarize the paper by presenting tables that collect the formulations given in the previous sections. Based on the tabular comparison in Section 7 and the discussion in Section 4 (Programming Languages Theory), we also briefly discuss in Section 8 (Structural Type Theory versus Nominal Type Theory) a consequence of the fundamental difference between structural typing and nominal typing that we discussed in Section 4. We conclude in Section 9 (A Fundamental and More Abstract Treatment) by hinting at the possibility of a more abstract and more unified formulation of the concepts discussed in this paper, using concepts from category theory, namely monads and comonads.
It should be noted that in each particular subfield we discuss in this paper we try to use what seems (to us) to be the most natural terminology in that field. Hence, in agreement with standard mathematical practice, the same mathematical concepts may have significantly different names when presented in different fields in this paper. It should be noted also that mathematically-inclined but non-computer-science readers—presumably interested mainly in comparing “pure” mathematical subdisciplines, i.e., in comparing formulations in order theory, set theory, first-order logic, and category theory, but not in PL type theory—may safely skip, at least on a first reading, the lengthier Section 4 and the short Section 8. (Those readers may also like to ignore the rightmost column of Table 1 and the leftmost column of Table 2 in Section 7.) This skipped material is mainly of interest to PL theorists.
2 Order Theory
Order theory, which includes lattice theory as a subfield, is the branch of mathematics where the concepts fixed point, pre-fixed point, post-fixed point, least fixed point, greatest fixed point, induction, and coinduction were first formulated in some generality and the relations between them were proven [51]. Order theory seems to be the simplest and most natural setting in which these concepts can be defined and studied. We thus start this paper by presenting the order-theoretic formulation of these concepts in this section.
Formulation
Let ≤ (‘is less than or equal to’) be an ordering relation—also called a partial order—over a set P, and let F be an endofunction over P (also called a self-map over P, i.e., a function F : P → P whose domain and codomain are the same set, thus mapping the set into itself). (Footnote 2: We focus on unary functions in this paper because we are interested in discussing fixed points and closely-related concepts, to which multi-arity makes little difference. Note that a binary function can be transformed into an equivalent unary one via “currying” (also known in logic as exportation). By iterating it, currying can be applied to multi-ary functions, i.e., ones with (finite) arity greater than two. Currying does preserve monotonicity/variance, and currying seems to be applicable to all fields of interest to us in this paper since, in each field, the objects of that field—i.e., posets, powersets, types, etc.—and the “morphisms/arrows” between these objects seem to form a closed monoidal category.) Given a point p ∈ P, we call the point F(p)—the image of point p under F—the ‘F-image’ of p. (Footnote 3: In this paper, nonstandard names (suggested by the author) are single-quoted, like ‘F-image’ here, when first introduced.)
A point p ∈ P is called a pre-fixed point of F if its F-image is less than or equal to it, i.e., if

F(p) ≤ p.

(A pre-fixed point of F is sometimes also called an F-closed point, an ‘F-lower bounded point’, or an ‘F-large point’.) The greatest element of P, if it exists (in P), is usually denoted by ⊤ (‘top’), and it is a pre-fixed point of F for all endofunctions F over P. In fact ⊤, when it exists, is the greatest pre-fixed point of F for all F.
A point p ∈ P is called a post-fixed point of F if it is less than or equal to its F-image, i.e., if

p ≤ F(p).

(A post-fixed point of F is sometimes also called an F-consistent point, an ‘F-(upper) bounded point’, or an ‘F-small point’.) The least element of P, if it exists (in P), is usually denoted by ⊥ (‘bottom’), and it is a post-fixed point of F for all endofunctions F over P. In fact ⊥, when it exists, is the least post-fixed point of F for all F.
A point p ∈ P is called a fixed point (or ‘fixed element’) of F if it is equal to its F-image, i.e., if

F(p) = p.

As such, a fixed point of F is simultaneously a pre-fixed point of F and a post-fixed point of F.
Now, if ≤ is a complete lattice over P (i.e., if ≤ is an ordering relation where meets and joins of all subsets of P are guaranteed to exist in P) and if, in addition, F is a monotonic endofunction over P, i.e., if

for all p, q ∈ P, p ≤ q implies F(p) ≤ F(q),

then F is called a generating function (or generator), and a point μF ∈ P, called the least pre-fixed point of F, exists in P, and—as was proven by Tarski [51]—μF is also the least fixed point (or lfp) of F; likewise, a point νF ∈ P, called the greatest post-fixed point of F, exists in P, and νF is also the greatest fixed point (or gfp) of F. (Footnote 4: See Table 1 for the definitions of μF and νF in order theory.) Further, for any element p ∈ P we have:

- (induction) if F(p) ≤ p, then μF ≤ p,

  which, in words, means that if p is a pre-fixed point of F (i.e., if the F-image of p is less than or equal to it), then the point μF is less than or equal to p, and

- (coinduction) if p ≤ F(p), then p ≤ νF,

  which, in words, means that if p is a post-fixed point of F (i.e., if p is less than or equal to its F-image), then the point p is less than or equal to the point νF.
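As a concrete illustration of these order-theoretic definitions, the following Python sketch (our own; all identifiers are illustrative choices, not from the paper) models a small finite complete lattice—the chain 0 ≤ 1 ≤ … ≤ 10—together with a monotonic endofunction F, and classifies its pre-fixed, post-fixed, and fixed points. For a monotonic F over a finite lattice, iterating F from the bottom (respectively, top) element reaches the least (respectively, greatest) fixed point.

```python
# A toy complete lattice: the finite chain 0 <= 1 <= ... <= 10.
P = range(11)

# A monotonic endofunction over P (our own example choice).
def F(x):
    return min(x + 2, 8)

pre_fixed  = [x for x in P if F(x) <= x]   # pre-fixed:  F(x) <= x
post_fixed = [x for x in P if x <= F(x)]   # post-fixed: x <= F(x)
fixed      = [x for x in P if F(x) == x]   # fixed:      F(x) == x

def iterate(f, x):
    """Iterate f from x until a fixed point is reached (finite lattice)."""
    while f(x) != x:
        x = f(x)
    return x

lfp = iterate(F, min(P))  # least fixed point, reached from bottom
gfp = iterate(F, max(P))  # greatest fixed point, reached from top

print(pre_fixed, post_fixed, fixed, lfp, gfp)
```

Note how the induction and coinduction principles hold in this toy lattice: every pre-fixed point is above the lfp, and every post-fixed point is below the gfp.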
3 Set Theory
In set theory, the set of subsets of any set—i.e., the powerset of the set—is always a complete lattice under the inclusion (⊆) ordering. As such, Tarski’s result in lattice theory (see Section 2) was first formulated and proved in the specific context of powersets [33], which present a simple and very basic example of a context in which the order-theoretic formulation of induction/coinduction and related concepts can be applied and demonstrated. In fact, given the unique foundational significance of set theory in mathematics—unparalleled except, arguably, by the foundational significance of category theory—the set-theoretic formulations of the induction and coinduction principles are the standard formulations of these principles. The set-theoretic formulation also forms the basis for very similar formulations of these concepts in (structural) type theory (see Section 4.1) and in first-order logic (see Section 5).
Formulation
Let ⊆ (‘is a subset of’) denote the inclusion ordering of set theory and let ∈ (‘is a member of’) denote the membership relation. Further, let ℘(U) be the partially-ordered set of all subsets of some fixed set U, under the inclusion ordering (where ℘ is the powerset operation, and (℘(U), ⊆) is always a complete lattice), and let F be an endofunction over ℘(U).
A set S ∈ ℘(U) (equivalently, S ⊆ U) is called an F-closed set if its F-image is a subset of it, i.e., if

F(S) ⊆ S.

(An F-closed subset is sometimes also called an F-lower bounded set or an F-large set.) The set U, the largest set in ℘(U), is an F-closed set for all endofunctions F over ℘(U)—in fact U is the largest F-closed set for all F.
A set S ∈ ℘(U) is called an F-consistent set if it is a subset of its F-image, i.e., if

S ⊆ F(S).

(An F-consistent subset is sometimes also called an F-(upper) bounded set (Footnote 5: From which comes the name F-bounded polymorphism in functional programming. See Section 4.1.), an F-correct set, or an F-small set.) The empty set ∅, the smallest set in ℘(U), is an F-consistent set for all endofunctions F—in fact ∅ is the smallest F-consistent set for all F.
A set S ∈ ℘(U) is called a fixed point (or ‘fixed set’) of F if it is equal to its F-image, i.e., if

F(S) = S.

As such, a fixed point of F is simultaneously an F-closed set and an F-consistent set.
Now, given that (℘(U), ⊆) is a complete lattice, if, in addition, F is a monotonic endofunction, i.e., if

for all S, T ∈ ℘(U), S ⊆ T implies F(S) ⊆ F(T),

then F is called a sets-generating function (or generator), and an inductively-defined subset μF, the smallest F-closed set, exists in ℘(U) and is also the smallest fixed point of F, and a coinductively-defined subset νF, the largest F-consistent set, exists in ℘(U) and is also the largest fixed point of F. (Footnote 6: See Table 1 for the definitions of μF and νF in set theory.) Further, for any set S ∈ ℘(U) we have:

- (induction) F(S) ⊆ S implies μF ⊆ S (i.e., S is F-closed implies μF ⊆ S),

  which, in words, means that if (we can prove that) set S is an F-closed set, then (by induction, we get that) set μF is a subset of S, and

- (coinduction) S ⊆ F(S) implies S ⊆ νF (i.e., S is F-consistent implies S ⊆ νF),

  which, in words, means that if (we can prove that) set S is an F-consistent set, then (by coinduction, we get that) S is a subset of set νF.
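To make the set-theoretic definitions tangible, here is a small Python sketch (our own illustration; the generator F is an example of our choosing) that brute-forces the Knaster–Tarski characterizations over the powerset of a four-element universe: μF as the intersection of all F-closed sets, and νF as the union of all F-consistent sets, for the monotonic generator F(S) = S ∩ {0, 1, 2}.

```python
from itertools import combinations

U = {0, 1, 2, 3}  # a small universe

def powerset(u):
    s = list(u)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# A monotonic generator over the powerset of U (our example choice).
def F(S):
    return S & {0, 1, 2}

closed     = [S for S in powerset(U) if F(S) <= S]  # F-closed:     F(S) ⊆ S
consistent = [S for S in powerset(U) if S <= F(S)]  # F-consistent: S ⊆ F(S)

# Knaster–Tarski: μF = ∩{S | F(S) ⊆ S},  νF = ∪{S | S ⊆ F(S)}.
mu = set.intersection(*closed)
nu = set.union(*consistent)

print(mu, nu)  # μF and νF for this particular F
```

For this F, every subset is F-closed (so μF = ∅), while the F-consistent sets are exactly the subsets of {0, 1, 2} (so νF = {0, 1, 2}), exhibiting a generator whose least and greatest fixed points differ.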
Induction Instances
An instance of the set-theoretic induction principle presented above is the standard mathematical induction principle. In this well-known instance induction is set up as follows: F is the “successor” function (of Peano) (Footnote 7: Namely, F(S) = {0} ∪ {succ(n) | n ∈ S}, where, e.g., succ(n) = n ∪ {n} under the von Neumann encoding of numbers as sets.), μF (the smallest fixed point of the successor function F) is the set ℕ of natural numbers (Footnote 8: In fact μF defines the set ℕ as the set of all finite non-negative whole numbers. Its existence (as the least infinite inductive set) is an axiom of set theory.), and S is any inductive property/set of numbers.
An example of an inductive set is the set of natural numbers defined as ℕ = {0, 1, 2, 3, …}. Set ℕ is an inductive set since μF ⊆ ℕ—i.e., ℕ includes the least fixed point of F—can be proven inductively by proving that ℕ is F-closed, which is sometimes, equivalently, stated as proving that F preserves (property) ℕ or that ℕ is preserved by F. The F-closedness (or ‘F-preservation’) of ℕ can be proven by proving that F(ℕ) ⊆ ℕ (i.e., that {0} ∪ {succ(n) | n ∈ ℕ} ⊆ ℕ). Using the definition of the successor function F, this means proving that 0 ∈ ℕ (i.e., that the base case holds) and that from n ∈ ℕ one can conclude that succ(n) ∈ ℕ (i.e., that the inductive case holds). This last form is the form of proof-by-induction presented in most standard discrete mathematics textbooks.
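The mathematical-induction instance above can be simulated in a few lines of Python (a sketch of ours, with the universe truncated at 9 so that the iteration terminates): iterating the truncated successor generator F(S) = {0} ∪ {n+1 | n ∈ S} from the empty set builds up its least fixed point, the (truncated) set of natural numbers.

```python
# Truncated successor generator: F(S) = {0} ∪ {n+1 | n ∈ S, n+1 <= 9}.
def F(S):
    return {0} | {n + 1 for n in S if n + 1 <= 9}

# Kleene iteration from the bottom element ∅ of the powerset lattice.
S = set()
while F(S) != S:
    S = F(S)

print(S)  # the least fixed point: {0, 1, ..., 9}
```

The iteration visits ∅, {0}, {0, 1}, … exactly mirroring how μF is generated by repeatedly applying the successor generator, starting from the base case.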
Another instance of the induction principle is lexicographic induction, defined on lexicographically linearly-ordered (i.e., “dictionary ordered”) pairs of elements [42, 45]. In Section 4.1 we will see a type-theoretic formulation of the induction principle that is closely related to the set-theoretic one above. The type-theoretic formulation is the basis for yet a third instance of the induction principle—called structural induction—that is extensively used in programming semantics and automated theorem proving (ATP), including reasoning about and proving properties of (functional) software.

Illustrations
In this section we strengthen our understanding of pre-fixed points, post-fixed points, and fixed points by illustrating these concepts in set theory, and also by presenting a pair of examples (and exercises) from analysis—which are thus likely to be familiar to many readers—that exemplify these concepts.
In Set Theory
Figure 1 visually illustrates the main concepts we discuss in this paper in the context of set theory, by presenting the pre-fixed, post-fixed, and fixed points of some abstract generator F over the powerset ℘(U) (a complete lattice) of some abstract set U.
Figure 1 illustrates that some of the shown subsets of U are among the F-closed/large/pre-fixed subsets of U (i.e., lie in the upper diamond, approximately), while others are among its F-consistent/small/post-fixed subsets (i.e., lie in the lower diamond, approximately). Of the sets illustrated in the diagram, only two are fixed subsets of U, i.e., belong to the intersection of the pre-fixed and post-fixed subsets (i.e., the inner diamond, exactly). It should be noted that some other subsets of U (i.e., points/elements of ℘(U)) may be neither among the pre-fixed subsets of F nor among the post-fixed subsets of F (and thus are not among the fixed subsets of F either). Such subsets are not illustrated in Figure 1. (If drawn, such subsets would lie, mostly, outside all three diamonds illustrated in Figure 1.)
Figure 1 also illustrates that μF ⊆ νF is true for any set U and any generator F. However, depending on the particular F, it may (or may not) be the case that μF = νF (e.g., if F is a constant function or if F happens to have only one fixed point). (Footnote 9: In the context of category theory, the symbol = is usually interpreted as denoting an equivalence or isomorphism relation between objects, rather than denoting the equality relation. See Section 6.)
With small alterations (such as changing the labels of its objects, the direction of its arrows, and the symbols on its arrows) the diagram in Figure 1 can be used to illustrate these same concepts in the context of order theory, type theory, first-order logic or category theory. (Exercise: Do that.)
In Analysis
Most readers will have met (and perhaps been intrigued by) fixed points in their high-school days, particularly during their study of analysis (the subfield of mathematics concerned with the study of real numbers and of mathematical concepts related to them). To further stimulate the imagination and intuitions of readers, we invite them to consider the function f : ℝ → ℝ graphed in Figure 2. In particular, we invite the reader to decide: (1) which points of the totally-ordered real line (the x-axis) are pre-fixed points of f, (2) which points/real numbers are post-fixed points of f, and (3) most easily, which points are fixed points of f. (Footnote 10: To decide these sets of points using Figure 2, readers should take particular care: (1) not to confuse the fixed points of f (i.e., crossings of the graph of f with the graph of the identity function, which are solutions of the equation f(x) = x) with the zeroes of f (i.e., crossings of the graph of f with the x-axis, which are solutions of the equation f(x) = 0), and (2) not to confuse pre-fixed points of f for its post-fixed points and vice versa.)
Since f is not a monotonic function (i.e., f is not a generator), we compensate by giving some visual hints in Figure 2 so as to make the reader’s job easier. The observant reader will recall that diagrams such as that in Figure 2 are often used (e.g., in high-school/college math textbooks), with “spirals” that zero in on fixed points overlaid on the diagrams, to explain iterative methods—such as the renowned Newton–Raphson method—that are used in numerical analysis to approximate roots (zeroes) of functions. These methods can easily be seen as seeking to compute fixed points of some related functions. As such, these numerical-analysis methods can also be explained in terms of pre-fixed points and post-fixed points.
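The connection between root-finding and fixed points can be sketched concretely (our own illustrative code, not from the paper): the Newton–Raphson step for a function f is the function g(x) = x − f(x)/f′(x), and a root of f is precisely a fixed point of g. Iterating g for f(x) = x² − 2 converges to √2.

```python
# Newton–Raphson as fixed-point iteration: a root of f is a fixed point
# of g(x) = x - f(x)/f'(x).
def f(x):
    return x * x - 2.0

def df(x):  # derivative of f
    return 2.0 * x

def g(x):   # one Newton-Raphson step
    return x - f(x) / df(x)

x = 1.0
for _ in range(20):  # iterating g "zeroes in" on the fixed point
    x = g(x)

print(x)  # converges to sqrt(2) = 1.41421356...
```

The “spirals” (or, here, staircases) in textbook diagrams trace exactly this iteration: each step moves from the graph of g to the identity line and back, converging to their crossing point.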
(Exercise: Do the same for the monotonic function g graphed in Figure 3, and then relate your findings to Figure 1. Now also relate your findings regarding f in Figure 2—or regarding monotonic/monotonically-increasing and antimonotonic/monotonically-decreasing sections of f—to Figure 1. Based on your relating of Figure 2 and Figure 3 to Figure 1, what do you conclude?)
The main goal of presenting these two examples from analysis is to let readers strengthen their understanding of pre-fixed, post-fixed, and fixed points and dispel any misunderstandings they may have about them, by allowing them to connect examples of fixed points they are likely familiar with (i.e., in analysis) to the more general but probably less familiar concepts of fixed points found in order theory, set theory, and other branches of mathematics. Comparing the illustration of fixed points and related concepts in set theory (e.g., as in Figure 1) with illustrations of them using examples from analysis (e.g., as in Figure 2 and Figure 3) makes evident the fact that a function over a partial order (such as ⊆) is not as simple to imagine or draw as a function over a total order (such as ≤ on ℝ). However, illustrations and examples using functions over total orders can be too specific and thus misleading, since they may hide the greater generality, the wider applicability, and the greater mathematical beauty of the concepts depicted in the illustrations.
4 Programming Languages Theory
Given the ‘types as sets’ view of types in programming languages, in this section we build on the set-theoretic presentation in Section 3 to present the induction and coinduction principles using the jargon of programming-language type theory. The presentation allows us to demonstrate and discuss the influence that structural and nominal typing have on the theory of type systems of functional programming languages (which mostly use structural typing) and of object-oriented programming languages (which mostly use nominal typing).
4.1 Inductive and Coinductive Functional Data Types
Formulation
Building on the concepts developed in Section 3, let T be the set of structural types in functional programming. (Footnote 11: By construction/definition, the poset of structural types under the inclusion/structural-subtyping ordering relation is always a complete lattice. This point is discussed in more detail below.)
Let ⊆ (‘is a subset/subtype of’) denote the structural subtyping/inclusion relation between structural data types, and let ∈ (‘has type / is a member of / has structural property’) denote the structural typing relation between structural data values and structural data types.
Now, if F is a polynomial (with powers) datatype constructor (Footnote 12: That is, F is one of the +, ×, or → datatype constructors (i.e., the summation/disjoint-union/variant constructor, the product/record/labeled-product constructor, or the continuous-function/exponential/power constructor, respectively) or is a composition of these constructors. By their definitions in domain theory [48, 43, 28, 31, 10, 24, 17], these structural datatype constructors, and their compositions, are monotonic (also called covariant) datatype constructors (except in the first type argument of →, in which → is an antimonotonic/contravariant constructor, but one that otherwise “behaves nicely” [37]).), i.e., if F is monotonic, then an inductively-defined type/set μF, the smallest F-closed set, exists in T and is also the smallest fixed point of F, and a coinductively-defined type/set νF, the largest F-consistent set, exists in T and is also the largest fixed point of F. (Footnote 13: See Table 1 for the definitions of μF and νF.)
Further, for any type P ∈ T (where P, as a structural type, expresses a structural property of data values) we have:
- (structural induction, and recursion) F(P) ⊆ P implies μF ⊆ P (i.e., P is F-closed implies μF ⊆ P),

  which, in words, means that if the (structural) property P is preserved by F (i.e., if P is F-closed), then all data values of the inductive type μF have property P (i.e., μF ⊆ P).

  Furthermore, borrowing terminology from category theory (see Section 6), a recursive function that maps data values of the inductive type μF to data values of type P (i.e., having structural property P) is the unique catamorphism (also called a fold) from μF to P (where μF is viewed as an initial F-algebra and P as an F-algebra), and

- (structural coinduction, and corecursion) P ⊆ F(P) implies P ⊆ νF (i.e., P is F-consistent implies P ⊆ νF),

  which, in words, means that if the (structural) property P is reflected by F (i.e., if P is F-consistent), then all data values that have property P are data values of the coinductive type νF (i.e., P ⊆ νF).

  Furthermore, borrowing terminology from category theory, a corecursive function that maps data values of type P (i.e., having structural property P) to data values of the coinductive type νF is the unique anamorphism (also called an unfold) from P to νF (where P is viewed as an F-coalgebra and νF as a final F-coalgebra).
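The catamorphism/anamorphism terminology can be illustrated concretely (a sketch of ours; the encoding and names are our own choices) for the list functor F(X) = Unit + Int × X: a list is either None (the Unit/empty-list case) or a (head, tail) pair; cata folds such a list into a value, and ana unfolds a seed into a list.

```python
# Lists over the functor F(X) = Unit + Int × X, encoded as
#   None          -- the Unit (empty list) summand
#   (head, tail)  -- the Int × X summand

def cata(alg_unit, alg_pair, xs):
    """Catamorphism (fold): consume an F-list into a value."""
    if xs is None:
        return alg_unit
    head, tail = xs
    return alg_pair(head, cata(alg_unit, alg_pair, tail))

def ana(coalg, seed):
    """Anamorphism (unfold): produce an F-list from a seed."""
    step = coalg(seed)
    if step is None:
        return None
    head, next_seed = step
    return (head, ana(coalg, next_seed))

# Unfold the seed 4 into the list [4, 3, 2, 1], then fold it into a sum.
countdown = lambda n: None if n == 0 else (n, n - 1)
xs = ana(countdown, 4)                 # (4, (3, (2, (1, None))))
total = cata(0, lambda h, t: h + t, xs)
print(xs, total)
```

Here `cata` is determined entirely by the two “algebra” arguments (one per summand of F), matching the uniqueness claim above; `ana` is symmetric, determined by the single “coalgebra” argument.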
Notes
- To guarantee the existence of μF and νF in T for all type constructors F, and hence to guarantee the ability to reason easily—i.e., inductively and coinductively—about functional programs, the domain of types in functional programming is deliberately constructed to be a complete lattice under the inclusion ordering. This is achieved by limiting the type constructors used in constructing μF and νF over T to structural type constructors only (i.e., to the constructors +, ×, →, and their compositions, in addition to basic types such as Unit, Bool, Top, Nat and Int).
- For example, the inductive type of lists of integers in functional programming is defined structurally (i.e., using +, ×, and structural induction) as

  IntList = μX. Unit + (Int × X),

  which defines the type IntList as (isomorphic/equivalent to) the summation of type Unit (which provides the value unit as an encoding for the empty list) and the product of type Int with type IntList itself.
- In fact the three basic types Bool, Nat and Int can also be defined structurally. For example, in a functional program we may structurally define type Bool using the definition Bool = Unit + Unit (for false and true), structurally define type Nat using the definition Nat = μX. Unit + X (for 0 and the successor of a natural number), and, out of other equally-valid choices, structurally define type Int using the definition Int = Nat + Unit + Nat (for negative integers, zero, and positive integers).
4.2 Object-Oriented Type Theory
The accurate and precise understanding of the generic subtyping relation in mainstream OOP languages such as Java, C#, C++, Kotlin and Scala, and the proper mathematical modeling of the OO subtyping relation in these languages, is one of our main research interests. Due to the existence of features such as wildcard types, type erasure, and bounded generic classes (where classes (Footnote 14: The notion of class in this paper includes that of an abstract class, of an interface, and of an enum in Java [25]. It also includes similar “type-constructing” constructs in other nominally-typed OO languages, such as traits in Scala [39]. A generic class is a class that takes a type parameter. (An example is the generic interface List in Java—which models lists/sequences of items—whose type parameter specifies the type of items in a list.)) play the role of type constructors), the mathematical modeling of the generic subtyping relation in mainstream OOP languages is a hard problem that, in spite of much effort, seems to still not have been resolved, at least not completely or satisfactorily, up to the present moment [52, 26, 5, 3, 8, 6].

The majority of mainstream OO programming languages are class-based, and subtyping (<:) is a fundamental relation in OO software development. In industrial-strength OOP, i.e., in statically-typed class-based OO programming languages such as Java, C#, C++, Kotlin and Scala, class names are used as type names, since class names—which objects carry at runtime—are assumed to be associated with behavioral class contracts by developers of OO software. Hence, deciding equality between types in these languages takes type names into consideration—hence, nominal typing. In agreement with the nominality of typing in these OO languages, the fundamental subtyping relation in these languages is also a nominal relation. Accordingly, subtyping decisions in the type systems of these OO languages make use of the inherently-nominal inheritance declarations (i.e., those explicitly declared between class names) in programs written in these languages. (Footnote 15: The type/contract inheritance that we discuss in this paper is the same thing as the inheritance of behavioral interfaces (APIs) from superclasses to their subclasses that mainstream OO software developers are familiar with. As is empirically familiar to OO developers, the subtyping relation in class-based OO programming languages is in one-to-one correspondence with API (and, thus, type/contract) inheritance from superclasses to their subclasses [53]. Formally, this correspondence is due to the nominality of the subtyping relation [1].)
Formulation
Let <: (‘is a subtype of’) denote the nominal subtyping relation between nominal data types (i.e., class types), and let : (‘has type’) denote the nominal typing relation between nominal data values (i.e., objects) and nominal data types.

Further, let T be the set of nominal types in object-oriented programming, ordered by the nominal subtyping relation, and let F be a type constructor over T (e.g., a generic class). (Footnote 16: Unlike the poset of Section 4.1 (of structural types under the structural subtyping relation), the poset (T, <:) of nominal types under the nominal subtyping relation is not guaranteed to be a complete lattice.)
A type S ∈ T is called an ‘F-supertype’ if its F-image is a subtype of it, i.e., if

F(S) <: S,

and S is then said to be preserved by F. (An F-supertype is sometimes also called an F-closed type, an F-lower bounded type, or an F-large type.) The root or top of the subtyping hierarchy, if it exists (in T), is usually called Object or All, and it is an F-supertype for all generic classes F. In fact the top type, when it exists, is the greatest F-supertype for all F.
A type S ∈ T is called an ‘F-subtype’ if it is a subtype of its F-image, i.e., if

S <: F(S),

and S is then said to be reflected by F. (An F-subtype is sometimes also called an F-consistent type, an F-(upper) bounded type (Footnote 17: From which comes the name F-bounded generics in object-oriented programming.), or an F-small type.) The bottom of the subtyping hierarchy, if it exists (in T), is usually called Null or Nothing, and it is an F-subtype for all generic classes F. In fact the bottom type, when it exists, is the least F-subtype for all F.
A type S ∈ T is called a fixed point (or ‘fixed type’) of F if it is equal to its F-image, i.e., if

F(S) = S.

As such, a fixed point of F is simultaneously an F-supertype and an F-subtype. (Such fixed types/points are rare in OOP practice.)
Now, if F is a covariant generic class (i.e., a types-generator) (Footnote 18: Generic classes in Java are in fact always monotonic/covariant—not over types, but over interval types ordered by containment! (See [9].)), and if μF, the ‘least F-supertype’, exists in T, then μF is also the least fixed point of F; and if νF, the ‘greatest F-subtype’, exists in T, then νF is also the greatest fixed point of F. (Footnote 19: See Table 2 for the definitions of μF and νF in the (rare) case when (T, <:) happens to be a complete lattice.) Then, for any type S ∈ T we have:
- (induction) F(S) <: S implies μF <: S (i.e., S is an F-supertype implies μF <: S),

  which, in words, means that if the contract (i.e., behavioral type) S is preserved by F (i.e., S is an F-supertype), then the inductive type μF is a subtype of S, and

- (coinduction) S <: F(S) implies S <: νF (i.e., S is an F-subtype implies S <: νF),

  which, in words, means that if the contract (i.e., behavioral type) S is reflected by F (i.e., S is an F-subtype), then S is a subtype of the coinductive type νF.
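A toy model (entirely our own, and much simpler than any real OO language's rules) can make F-supertypes and F-subtypes concrete: we declare a small nominal subtype hierarchy, take the reflexive-transitive closure of the declarations as the subtyping relation <:, and classify types against a (trivially covariant) type constructor F.

```python
# Declared nominal inheritance edges (subtype -> direct supertype).
declared = {"Bird": "Animal", "Animal": "Object", "Plane": "Object"}
types = {"Object", "Animal", "Bird", "Plane"}

def subtype(s, t):
    """Nominal subtyping: reflexive-transitive closure of declarations."""
    while s != t and s in declared:
        s = declared[s]
    return s == t

# A toy (trivially covariant) type constructor mapping every type to Animal.
def F(t):
    return "Animal"

supertypes = {t for t in types if subtype(F(t), t)}  # F(T) <: T
subtypes   = {t for t in types if subtype(t, F(t))}  # T <: F(T)
fixed      = supertypes & subtypes

print(supertypes, subtypes, fixed)
```

In this toy hierarchy the F-supertypes are Animal and Object (everything Animal is a subtype of), the F-subtypes are Bird and Animal (everything that is a subtype of Animal), and the only fixed type is Animal itself, illustrating how the two notions overlap exactly at fixed points.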
Notes
- As discussed earlier, in structural type theory type expressions express only structural properties of data values, i.e., how the data values of the type are structured and constructed. In nominal type theory, type names are associated with formal or informal contracts, called behavioral contracts, which express behavioral properties of the data values (e.g., objects) in addition to their structural properties.
- To demonstrate: in a pure structural type system, a record type that has, say, one member fly() (e.g., type plane = {fly()}, type bird = {fly()} and type insect = {fly()}) is semantically equivalent to any other type that has the same member (i.e., type plane is equivalent to type bird and to type insect)—in other words, in a pure structural type system these types are ‘interchangeable for all purposes’.
- On the other hand, in a pure nominal type system any types that have the same structure but have different names (e.g., types plane, bird and insect) are considered distinct types that are not semantically equivalent, since their different names (e.g., ‘plane’ versus ‘bird’ versus ‘insect’) imply the possibility, even likelihood, that data values of each type maintain different behavioral contracts, and thus the likelihood of different use considerations for the types and their data values. (Footnote 20: For another example, a float used for monetary values (e.g., in financial transactions) should normally not be confused with (i.e., equated to) a float used for measuring distances (e.g., in scientific applications). Declaring type money = float and type distance = float does not help in a purely structural type system, however, since the types float, money, and distance are structurally equivalent. In a purely nominal type system, on the other hand, the declarations of types money and distance do have the desired effect, since the non-equivalence of the types is implied by their different names.) (Footnote 21: Further, when (1) the functional components of data values are (mutually) recursive, which is typical for methods of objects in OOP [4], and when (2) data values (i.e., objects) are autognostic (i.e., have a notion of self/this, an essential feature of mainstream OOP [18])—two features of OOP that necessitate recursive types—the semantic differences between nominal typing and structural typing become even more prominent, since type names and their associated contracts gain more relevance as expressions of the richer recursive behavior of the more complex data values. (For more details, see [2] and [42, §19.3].))


In industrial-strength OO programming languages (such as Java, C#, C++, Kotlin and Scala) where types are nominal types rather than structural ones and, accordingly, where subtyping is a nominal relation, rarely is the poset of class types a lattice under the subtyping relation, let alone a complete lattice. Further, many type constructors (i.e., generic classes) in these languages are not covariant. As such, μ_F and ν_F rarely exist in the subtype poset.^{22}^{22}22Annihilating the possibility of reasoning inductively or coinductively about nominal OO types. Still, the notion of a prefixed point (or of an F-algebra) of a generic class F and the notion of a postfixed point (or of an F-coalgebra) of F, under the names supertype and subtype respectively, do have relevance in OO type theory, e.g., when discussing bounded generics [6].^{23}^{23}23In fact, owing to our research interests (hinted at in the beginning of Section 4.2), inquiries in [6] have been a main initial motivation for writing this note/paper. In particular, we have noted that if F is a generic class in a Java program then a role similar to the role played by the coinductive type ν_F is played by the wildcard type F<?>, since, by the subtyping rules of Java (discussed in [6], and illustrated vividly in earlier publications such as [8, 5]), every F-subtype (i.e., every parameterized type constructed using F—called an instantiation of F—and every subtype thereof) is a subtype of the type F<?>. On the other hand, in Java there is no non-Null type (not even type F<Null>; see [6]) that plays a role similar to the role played above by the inductive type μ_F (i.e., a type that is a subtype of all F-supertypes, which are all instantiations of F and all supertypes thereof). This means that in Java greatest postfixed points (i.e., greatest subtypes) that are not greatest fixed points do exist, while non-bottom least prefixed points (i.e., least supertypes) do not exist.
Also, since the subtype poset is rarely a complete lattice, greatest fixed points, generally speaking, do not exist in Java, and neither do least fixed points. These same observations apply more or less to other nominally-typed OOP languages similar to Java, such as C#, C++, Kotlin and Scala. (See further discussion in Footnote 29 of Section 6.)
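The claim that a nominal subtype poset is rarely a lattice can be checked on a toy model (the type names A, B, I, J are hypothetical, standing for two classes that both implement two unrelated interfaces, as is easy to arrange in Java):

```python
# A tiny model of a nominal subtype ordering as an explicit relation.
# Classes A and B both implement the incomparable interfaces I and J,
# so the pair {A, B} has two minimal upper bounds and hence no join:
# the ordering is a poset but not a lattice.
types = {"A", "B", "I", "J", "Object"}
subtype_pairs = {
    ("A", "I"), ("A", "J"), ("B", "I"), ("B", "J"),
    ("A", "Object"), ("B", "Object"), ("I", "Object"), ("J", "Object"),
}

def subtype(s, t):
    return s == t or (s, t) in subtype_pairs

def upper_bounds(xs):
    return {t for t in types if all(subtype(x, t) for x in xs)}

def minimal(ts):
    return {t for t in ts if not any(subtype(u, t) and u != t for u in ts)}

# Two incomparable minimal common supertypes, so no least upper bound:
assert minimal(upper_bounds({"A", "B"})) == {"I", "J"}
```

Since joins (and dually meets) can fail like this, least and greatest fixed points need not exist, matching the observation above.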
5 First-Order Logic
Via the axiom of comprehension in set theory, first-order logic—abbr. FOL, and also called predicate calculus—is strongly tied to set theory. As such, in correspondence with the set-theoretic concepts presented in Section 3, one should expect to find counterparts of those concepts in (first-order) logic. Even though this correspondence is seemingly unpopular in mathematical logic literature, we try to explore the corresponding concepts in this section. The discussion of these concepts in logic is also a step that prepares for discussing them in Section 6 in the more general setting of category theory.
Formulation
Let → (‘implies’) denote the implication relation between predicates/logical statements of first-order logic^{24}^{24}24Note that, as in earlier sections, we use the long implication symbol ‘⟹’ to denote the implication relation in the metalogic (i.e., the logic used to reason about objects of interest to us, e.g., points, sets, types, or—as is the case in this section—statements of first-order logic). The reader in this section should be careful not to confuse the metalogical implication relation (denoted by the long implication symbol ⟹) with the implication relation of first-order logic (used to reason in FOL, and denoted by the short implication symbol →)., and let juxtaposition or ⊨ (‘is satisfied by/applies to’) denote the satisfiability relation between predicates and objects/elements. Further, let the set of statements (i.e., the well-formed formulas) of first-order logic be ordered by implication (this poset is thus a complete lattice) and let F be a logical operator over it.^{25}^{25}25Such as ∧ [and] and ∨ [or] (both of which are covariant/monotonic logical operators), ¬ [not] (which is contravariant/antimonotonic), or compositions thereof.
A statement s is called an ‘F-weak statement’ if its F-image implies it, i.e., if F(s) → s.
Statement True is an F-weak statement for all operators F—in fact True is the weakest F-weak statement, for all F.
A statement s is called an ‘F-strong statement’ if it implies its F-image, i.e., if s → F(s).
Statement False is an F-strong statement for all operators F—in fact False is the strongest F-strong statement, for all F.
A statement s is called a fixed point (or ‘fixed statement’) of F if it is equivalent to its F-image, i.e., if s ↔ F(s).
As such, a fixed point of F is simultaneously an F-weak statement and an F-strong statement.
Now, if F is a covariant logical operator (i.e., a statements generator), i.e., if
s → t implies F(s) → F(t) (for all statements s and t),
then μ_F, the ‘strongest F-weak statement’, exists in the poset of statements, and is also the ‘strongest fixed point’ of F, and ν_F, the ‘weakest F-strong statement’, exists in the poset of statements, and is also the ‘weakest fixed point’ of F.^{26}^{26}26See Table 2 for the definitions of μ_F and ν_F. Further, for any statement s we have:

(induction) if F(s) → s, then μ_F → s,
which, in words, means that if s is an F-weak statement, then μ_F implies s, and 
(coinduction) if s → F(s), then s → ν_F,
which, in words, means that if s is an F-strong statement, then s implies ν_F.
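Since, via comprehension, predicates over a universe correspond to subsets of it, the two principles above can be exercised on a small finite model (a sketch under that identification; the particular operator F below is an arbitrary illustrative choice):

```python
# Predicates over a finite universe modeled as the sets they carve out;
# implication between predicates is then set inclusion.
U = set(range(10))

# A covariant (monotonic) operator: "0 holds, and n+1 holds whenever n
# holds and n+1 < 5". Its least fixed point is {0, 1, 2, 3, 4}.
def F(S):
    return {0} | {n + 1 for n in S if n + 1 < 5}

def lfp(F, bottom=frozenset()):
    S = set(bottom)           # iterate from False (the empty predicate)
    while F(S) != S:
        S = F(S)
    return S

def gfp(F, top=frozenset(U)):
    S = set(top)              # iterate from True (the full predicate)
    while F(S) != S:
        S = F(S)
    return S

mu, nu = lfp(F), gfp(F)
assert mu == {0, 1, 2, 3, 4} and nu == mu   # for this F, lfp = gfp

# Induction principle: any F-weak predicate S (i.e., F(S) ⊆ S)
# is implied by mu, i.e., mu ⊆ S.
S = {0, 1, 2, 3, 4, 7}
assert F(S) <= S and mu <= S
```

The iteration terminates here because the universe is finite and F is monotonic; in general this is the Kleene-style approximation of the Knaster–Tarski fixed points.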
6 Category Theory
Category theory seems to present the most general context in which the notions of induction, coinduction, fixed points, prefixed points (called algebras) and postfixed points (called coalgebras) can be studied.
Formulation
Let → (‘is related to’/‘arrow’) denote that two objects in a category are related (i.e., denote the ‘is-related-to’ relation, or, more concisely, denote relatedness).^{27}^{27}27For mostly historical reasons, an arrow relating two objects in a category is sometimes also called a morphism. We prefer using is-related-to (i.e., has some relationship with) or arrow instead, since these terms seem to be more easily and intuitively understood, and also because they seem to be more in agreement with the abstract and general nature of category theory. Further, let O be the collection of objects of a category O (i.e., the category and the collection of its objects are homonyms) and let F be^{28}^{28}28Note that, similar to the situation for symbols → and ⟹ that we met in Section 5, in this section the same exact symbol → is used to denote two (strongly related but slightly different) meanings: the first, that two objects in a category are related (which is the meaning specific to category theory), while the second is the functional type of a self-map/endofunction/endofunctor that acts on objects of interest to us (i.e., points, sets, types, etc.), which is the meaning for → that we have been using all along since the beginning of this paper. an endofunctor over O.
An object x is called an F-algebra if its F-image is related to it, i.e., if F(x) → x.
An object x is called an F-coalgebra if it is related to its F-image, i.e., if x → F(x).
Now, if F is a covariant endofunctor, i.e., if
x → y implies F(x) → F(y) (for all objects x and y),
and if an initial F-algebra μ_F exists in O and a final F-coalgebra ν_F exists in O,^{29}^{29}29While referring to Section 2, note that a category-theoretic initial algebra and final coalgebra are not the exact counterparts of an order-theoretic least fixed point and greatest fixed point but of a least prefixed point and greatest postfixed point. This slight difference is significant, since, for example, a least prefixed point is not necessarily a least fixed point, unless the underlying ordering is a complete lattice and F is covariant/monotonic (Exercise: Prove this.), and also a greatest postfixed point is not necessarily a greatest fixed point, unless the underlying ordering is a complete lattice and F is covariant/monotonic. The difference is demonstrated, for example, by the subtyping relation in generic nominally-typed OOP (see Footnote 23 in Section 4.2). Hence, strictly speaking, an initial algebra does not deserve the symbol ‘μ_F’ we used as a name for it, nor does a final coalgebra deserve the symbol ‘ν_F’ as its name, since the symbols ‘μ’ and ‘ν’ are standard names for well-known concepts (i.e., they are, strictly speaking, reserved for least and greatest fixed points, or exact counterparts of them, respectively, and hence should always imply that the concepts denoted by them are indeed exactly such concepts). then for any object x we have:

(induction) if F(x) → x, then μ_F → x,
which, in words, means that if x is an F-algebra, then μ_F is related to x (via a unique “complex-to-simple” arrow called a catamorphism), and 
(coinduction) if x → F(x), then x → ν_F,
which, in words, means that if x is an F-coalgebra, then x is related to ν_F (via a unique “simple-to-complex” arrow called an anamorphism).
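In functional-programming terms, for the list functor F(X) = 1 + A×X the initial algebra is the type of finite lists over A, the catamorphism is the familiar fold, and the anamorphism is an unfold. A small Python sketch of both (illustrative; the helper names cata and ana follow the common usage, not this paper's notation):

```python
# Catamorphism: the unique arrow from the initial algebra of
# F(X) = 1 + A×X (finite lists) to any other F-algebra, which is
# given by a nil-case value and a cons-case function, i.e., foldr.
def cata(nil_case, cons_case, xs):
    acc = nil_case
    for x in reversed(xs):
        acc = cons_case(x, acc)
    return acc

# Two different F-algebras on the carrier int:
length = lambda xs: cata(0, lambda _, n: n + 1, xs)
total  = lambda xs: cata(0, lambda x, n: x + n, xs)

assert length([3, 1, 4]) == 3
assert total([3, 1, 4]) == 8

# Anamorphism (dual): unfold a seed into the final coalgebra; step
# returns either (element, next_seed) or None to stop.
def ana(step, seed):
    out = []
    while (r := step(seed)) is not None:
        x, seed = r
        out.append(x)
    return out

countdown = lambda n: ana(lambda k: (k, k - 1) if k > 0 else None, n)
assert countdown(3) == [3, 2, 1]
```

Uniqueness of the catamorphism is what the induction principle above asserts; uniqueness of the anamorphism is the coinduction principle.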
Notes

Even though each of an initial algebra and a final coalgebra is simultaneously an algebra and a coalgebra, it should be noted that there is no explicit concept in category theory corresponding to the concept of a fixed point in order theory, due to the general lack of an equality relation in category theory and the use of the isomorphism relation instead.

If such a “fixedness” notion is defined in category theory, it would denote an object that is simultaneously an algebra and a coalgebra, i.e., for a functor F, a “fixed object” x of F will be related to the object F(x) and vice versa. This usually means that x and F(x), if not the same object, are isomorphic objects. (That is in fact the case for any initial algebra and for any final coalgebra, which—given the uniqueness of arrows from an initial algebra and to a final coalgebra—are indeed isomorphic to their images.)


Note also that, unlike the case in order theory, the induction and coinduction principles can be expressed in category theory using a “point-free style” (as we do in this paper) but they can also be expressed using a “pointwise style”^{30}^{30}30By giving a name to a specific arrow that relates two objects of a category, e.g., using notation such as f : x → y to mean not only that objects x and y are related but also that they are related by a particular arrow named f.. As such, regarding the possibility of expressing the two principles using either a pointwise style or a point-free style, category theory agrees more with set theory, type theory, and (first-order) logic than it does with order theory.

Incidentally, categories are more similar to preorders (sometimes, but not invariably, also called quasi-orders) than they are to partial orders (i.e., posets). This is because a category, when viewed as an ordered set, is not necessarily antisymmetric.

Categories are more general than preorders however, since a category can have multiple arrows between any pair of its objects, whereas any two elements/points of a preorder can only either have one “arrow” between the two points (denoted x ≤ y rather than by a named arrow f : x → y) or not have one. (This possible multiplicity is what enables, and sometimes even necessitates, the use of arrow names, so as to distinguish between different arrows when multiple arrows relate the same pair of objects.) As such, every preorder is a category, or, more precisely, every preorder has a category—appropriately called a thin category or a bool(ean) category—corresponding to it. However, generally speaking, there is not a unique preorder corresponding to each category.^{31}^{31}31For more on the very strong relation between order theory and category theory, see Section 9. Indeed the relation between the two fields is so precise that it can be described, formally, as an adjunction between the category of preorders and the category of small categories.

7 Comparison Summary
|  | Order Theory | Set Theory | FP Type Theory |
|---|---|---|---|
| Domain | Points of a Set | Subsets of a Set | Structural Data Types |
| Relation | Abstract Ordering ⊑ | Inclusion ⊆ | Inclusion <: |
| Operator F | Endofunction | Endofunction | Type Constructor |
| Generator | Monotonic Endofunction | Monotonic Endofunction | Polynomial Type Constructor (+, × and comp.) |
| Pre-F.P. | Prefixed Point | Large Set | Closed Type |
| Post-F.P. | Postfixed Point | Small Set | Bounded Type |
| L.F.P. μ_F (Least Prefixed Point) | If Complete Lattice: μ_F = ⋀{x : F(x) ⊑ x} (⋀ denotes ‘meet’) | Smallest Large Set | Smallest Closed Type (Inductive Type) |
| G.F.P. ν_F (Greatest Postfixed Point) | If Complete Lattice: ν_F = ⋁{x : x ⊑ F(x)} (⋁ denotes ‘join’) | Largest Small Set | Largest Bounded Type (Coinductive Type) |
| Induction Principle | F(x) ⊑ x ⟹ μ_F ⊑ x | F(S) ⊆ S ⟹ μ_F ⊆ S | F(T) <: T ⟹ μ_F <: T |
| Coinduction Principle | x ⊑ F(x) ⟹ x ⊑ ν_F | S ⊆ F(S) ⟹ S ⊆ ν_F | T <: F(T) ⟹ T <: ν_F |
| Domain is Comp. Lat. | Sometimes | Always | Always |
| Operator is Generator | Sometimes | Sometimes | Always |

|  | OOP Type Theory | First-Order Logic | Category Theory |
|---|---|---|---|
| Domain | Class Types | Statements | Objects |
| Relation | Subtyping <: | Implication → | Arrow → |
| Operator F | Generic Class | Logical Operator | Endofunctor |
| Generator | Covariant Generic Class | Covariant Logical Operator | Covariant Endofunctor |
| Pre-F.P. | Supertype | F-Weak Statement | F-Algebra |
| Post-F.P. | Subtype | F-Strong Statement | F-Coalgebra |
| L.F.P. μ_F | If Complete Lattice: Least Supertype (⋀ denotes ‘meet’) | Strongest F-Weak Statement (⋀ denotes conjunction) | Initial F-Algebra |
| G.F.P. ν_F | If Complete Lattice: Greatest Subtype (⋁ denotes ‘join’) | Weakest F-Strong Statement (⋁ denotes disjunction) | Final F-Coalgebra |
| Induction Principle | F(T) <: T ⟹ μ_F <: T | F(s) → s ⟹ μ_F → s | F(x) → x ⟹ μ_F → x (via unique catamorphism from μ_F to x) |
| Coinduction Principle | T <: F(T) ⟹ T <: ν_F | s → F(s) ⟹ s → ν_F | x → F(x) ⟹ x → ν_F (via unique anamorphism from x to ν_F) |
| Domain is Comp. Lat. | Rarely | Always | Sometimes |
| Operator is Generator | Sometimes | Sometimes | Sometimes |
8 Structural Type Theory versus Nominal Type Theory
The discussion in Section 4.1, together with that in Sections 3 and 5, demonstrates that FP type theory, with its structural types and structural subtyping rules being motivated by mathematical reasoning about programs (using induction or coinduction), is closer in its flavor to set theory (and first-order logic/predicate calculus), since structural type theory assumes and requires the existence of fixed points μ_F and ν_F for all type constructors F. (For a discussion of the importance of structural typing in FP see [30, 36] and [42, 19.3].)
On the other hand, the discussion in Section 4.2, together with that in Sections 6 and 2, demonstrates that OOP type theory, with its nominal types and nominal subtyping being motivated by the association of nominal types with behavioral contracts, is closer in its flavor to category theory and order theory, since nominal type theory does not assume or require the existence of fixed points μ_F and ν_F for all type constructors F. (For a discussion of why nominal typing and nominal subtyping matter in OOP see [2] and [42, 19.3].)
As such, we conclude that the theory of data types of functional programming languages is more similar in its views and its flavor to the views and flavor of set theory and first-order logic, while the theory of data types of object-oriented programming languages is more similar in its views and its flavor to those of category theory and order theory. This conclusion adds further supporting evidence to our speculation (e.g., in [5, 3]) that category theory is more suited than set theory for the accurate understanding of mainstream object-oriented type systems.
9 A Fundamental and More Abstract Treatment
As we hinted in Section 6, order theory and category theory are strongly related. In fact the connection between the two fields goes much, much further than we hinted at.
Closure and Kernel Operators
In order theory a closure operator over a poset is an idempotent extensive generator [19]. An extensive (or inflationary) endofunction F over a poset is an endofunction where
x ⊑ F(x) for all points x,
meaning that all points of the poset are small (i.e., are postfixed points of F). An endofunction F is idempotent iff
F(F(x)) = F(x) for all points x,
i.e., iff F ∘ F = F, meaning that applying F twice does not transform or change an element any more than applying F to the element once does.
Also, in order theory a kernel (or interior) operator over a poset is an idempotent intensive generator. An intensive (or deflationary) endofunction F over a poset is an endofunction where
F(x) ⊑ x for all points x,
meaning that all points of the poset are large (i.e., are prefixed points of F).^{32}^{32}32It may be helpful here to check Figure 1 again. If F in Figure 1 is a closure operator then the upper diamond minus the inner diamond “collapses” (becomes empty), and we have ν_F = ⊤. That is because, since F is extensive, we have ⊤ ⊑ F(⊤), and thus ⊤—if it exists in the poset—is always a fixed point of F, in fact the gfp (greatest fixed point) of F. Dually, if F in Figure 1 is a kernel operator then the lower diamond minus the inner diamond “collapses”, and we have μ_F = ⊥. That is because, since F is intensive, we have F(⊥) ⊑ ⊥, and thus ⊥—if it exists in the poset—is always a fixed point of F, in fact the lfp (least fixed point) of F.
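A concrete closure operator makes the three defining properties easy to check: on subsets of a finite chain, downward closure under ≤ is extensive, monotonic, and idempotent (a sketch; the particular operator cl is an illustrative choice, not one from the text).

```python
# A closure operator on the powerset of {0..9}: downward closure
# under the usual ordering of the naturals.
U = set(range(10))

def cl(S):
    return {m for m in U if any(m <= n for n in S)}

A, B = {3, 7}, {3, 7, 9}

assert A <= cl(A)              # extensive: every set is a postfixed point
assert cl(A) <= cl(B)          # monotonic (since A ⊆ B)
assert cl(cl(A)) == cl(A)      # idempotent

# Its fixed points are exactly the downward-closed ("closed") sets:
assert cl({0, 1, 2}) == {0, 1, 2}
assert cl(A) == set(range(8))  # cl({3, 7}) = {0, ..., 7}
```

Because every set is a postfixed point of cl, the prefixed points of cl coincide with its fixed points, which is the "collapse" described in the footnote above.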
Monads and Comonads
In category theory, on the other hand, when a partially-ordered set is viewed as a category, then a monad on a poset turns out to be exactly a closure operator over the poset [50]. By the definition of monads, all objects of a category are coalgebras of a monad, which translates to all elements of the poset being postfixed points of the corresponding closure operator (which we noted above). As such, the algebras for this monad correspond to the prefixed points of the closure operator, and thus correspond exactly to its fixed points.
Similarly, a comonad (the dual of a monad) on a poset turns out to be exactly a kernel operator over the poset. As such, by a dual argument, the coalgebras for this comonad are the postfixed points of the kernel operator, and thus also correspond exactly to its fixed points.
An Alternative Presentation
Based on these observations that further relate category theory and order theory, the whole technical discussion in this paper can be presented, more succinctly if also more abstractly, by rewriting it in terms of the very general language of monads/comonads, then specializing the abstract argument by applying it to each of the six categories we are interested in.^{33}^{33}33Namely, (1) category OrdE of partially-ordered sets together with self-maps, (2) category PSetE of power sets (ordered by inclusion) together with endofunctions, (3) category TypS of structural types (ordered by inclusion/structural subtyping) together with structural type constructors, (4) category TypN of nominal types (ordered by nominal subtyping) together with generic classes, (5) category FOL of first-order logical statements (ordered by implication) together with logical operators, and (6) category CatE of (small) categories (ordered/related by arrows) together with endofunctors. Given that our goal, however, is to compare the concepts in their most natural and most concrete mathematical contexts, we refrain from presenting such a fundamental treatment here, keeping the possibility of making such a presentation in some future work, e.g., as a separate paper, or as an appendix to future versions of this paper.
Acknowledgments
The author would like to thank John Greiner (Rice University) and David Spivak (MIT) for their suggestions and their valuable feedback on earlier versions of this paper.
References
 [1] Moez A. AbdelGawad. A domaintheoretic model of nominallytyped objectoriented programming. Electronic Notes in Theoretical Computer Science (Full version preprint available at http://arxiv.org/abs/1801.06793), 301:3–19, 2014.
 [2] Moez A. AbdelGawad. Why nominaltyping matters in OOP. eprint available at http://arxiv.org/abs/1606.03809, 2016.
 [3] Moez A. AbdelGawad. Novel uses of category theory in modeling OOP (extended abstract). Accepted at The Nordic Workshop on Programming Theory (NWPT’17), Turku, Finland (Full version preprint available at http://arxiv.org/abs/1709.08056), 2017.
 [4] Moez A. AbdelGawad. Objectoriented theorem proving (OOTP): First thoughts. eprint available at http://arxiv.org/abs/1712.09958, 2017.
 [5] Moez A. AbdelGawad. Towards a Java subtyping operad. Proceedings of FTfJP’17, Barcelona, Spain (Extended version preprint available at http://arxiv.org/abs/1706.00274), 2017.
 [6] Moez A. AbdelGawad. Doubly Fbounded generics. eprint available at http://arxiv.org/abs/1808.06052, 2018.
 [7] Moez A. AbdelGawad. Induction, coinduction, and fixed points: A concise survey (and tutorial). eprint available at http://arxiv.org/abs/1812.10026, 2018.
 [8] Moez A. AbdelGawad. Java subtyping as an infinite selfsimilar partial graph product. eprint available at http://arxiv.org/abs/1805.06893, 2018.
 [9] Moez A. AbdelGawad. Towards taming Java wildcards and extending Java with interval types. eprint available at http://arxiv.org/abs/1805.10931, 2018.
 [10] Samson Abramsky and Achim Jung. Domain theory. In Dov M. Gabbay S. Abramsky and T. S. E. Maibaum, editors, Handbook for Logic in Computer Science, volume 3. Clarendon Press, 1994.
 [11] Paolo Baldan, Giorgio Ghelli, and Alessandra Raffaeta. Basic theory of Fbounded polymorphism. Information and Computation, 153(1):173–237, 1999.
 [12] Jon Barwise and Lawrence Moss. Vicious Circles: On the Mathematics of Nonwellfounded Phenomena. Cambridge University Press, 1996.
 [13] Yves Bertot and Pierre Casteran. Interactive Theorem Proving and Program Development Coq’Art: The Calculus of Inductive Constructions. Springer, 2004.
 [14] M. Brandt and F. Henglein. Coinductive axiomatization of recursive type equality and subtyping. Fundamenta Informaticae, 33(4):309–338, 1998.
 [15] Kim Bruce, Luca Cardelli, Giuseppe Castagna, The Hopkins Objects Group, Gary Leavens, and Benjamin C. Pierce. On binary methods. Theory and Practice of Object Systems, 1994.
 [16] Peter S. Canning, William R. Cook, Walter L. Hill, J. Mitchell, and W. Olthoff. Fbounded polymorphism for objectoriented programming. In Proc. of Conf. on Functional Programming Languages and Computer Architecture, 1989.
 [17] Robert Cartwright, Rebecca Parsons, and Moez A. AbdelGawad. Domain Theory: An Introduction. eprint available at http://arxiv.org/abs/1605.05858, 2016.
 [18] William R. Cook. On understanding data abstraction, revisited. In OOPSLA ’09, ACM SIGPLAN Notices, volume 44, pages 557–572. ACM, 2009.
 [19] B. A. Davey and H. A. Priestley. Introduction to Lattices and Order. Cambridge University Press, 2nd edition, 2002.
 [20] Herbert B. Enderton. A Mathematical Introduction to Logic. Academic Press, New York, 1972.
 [21] Brendan Fong and David Spivak. Seven Sketches in Compositionality: An Invitation to Applied Category Theory. Draft, 2018.
 [22] Thomas Forster. Logic, Induction, and Sets. London Mathematical Society Student Texts. Cambridge University Press, 2003.
 [23] Vladimir Gapeyev, Michael Y. Levin, and Benjamin C. Pierce. Recursive subtyping revealed. Journal of Functional Programming, 2002.
 [24] G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. W. Mislove, and D. S. Scott. Continuous Lattices and Domains, volume 93 of Encyclopedia Of Mathematics And Its Applications. Cambridge University Press, 2003.
 [25] James Gosling, Bill Joy, Guy Steele, Gilad Bracha, Alex Buckley, and Daniel Smith. The Java Language Specification. AddisonWesley, 2018.
 [26] Ben Greenman, Fabian Muehlboeck, and Ross Tate. Getting Fbounded polymorphism into shape. In PLDI ’14: Proceedings of the 2014 ACM SIGPLAN conference on Programming Language Design and Implementation, 2014.
 [27] John Greiner. Programming with inductive and coinductive types. Technical Report CMUCS92109, School of Computer Science, Carnegie Mellon University, Jan 1992.
 [28] C. A. Gunter and Dana S. Scott. Handbook of Theoretical Computer Science, volume B, chapter 12 (Semantic Domains). 1990.
 [29] Douglas R. Hofstadter. Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books, second edition, 1999.
 [30] John Hughes. Why functional programming matters. Computer Journal, 32(2), 1989.
 [31] Gilles Kahn and Gordon D. Plotkin. Concrete domains, May 1993.
 [32] Andrew J. Kennedy and Benjamin C. Pierce. On decidability of nominal subtyping with variance. In International Workshop on Foundations and Developments of ObjectOriented Languages (FOOL/WOOD), 2007.
 [33] B. Knaster. Un théorème sur les fonctions d’ensembles. Ann. Soc. Polon. Math., 6:133–134, 1928.
 [34] Dexter Kozen and Alexandra Silva. Practical coinduction. Mathematical Structures in Computer Science, 27(7):1132–1152, 2016.
 [35] Steven G. Krantz. Handbook of Logic and Proof Techniques for Computer Science. Birkhäuser, 2002.
 [36] David B. MacQueen. Should ML be objectoriented? Formal Aspects of Computing, 13:214–232, 2002.
 [37] David B. MacQueen, Gordon D. Plotkin, and R. Sethi. An ideal model for recursive polymorphic types. Information and Control, 71:95–130, 1986.
 [38] Andrew McLennan. Advanced Fixed Point Theory for Economics. Springer, 2018.
 [39] Martin Odersky. The Scala language specification, v. 2.9. http://www.scala-lang.org, 2014.
 [40] Roger Penrose. The Road to Reality: A Complete Guide to the Laws of the Universe. Jonathan Cape, 2004.
 [41] Benjamin C. Pierce. Basic Category Theory for Computer Scientists. MIT Press, 1991.
 [42] Benjamin C. Pierce. Types and Programming Languages. MIT Press, 2002.
 [43] Gordon D. Plotkin. Domains. Lecture notes in advanced domain theory, 1983.
 [44] Steven Roman. Lattices and Ordered Sets. Springer, 2008.
 [45] Kenneth A. Ross and Charles R. B. Wright. Discrete Mathematics. Prentice Hall, third edition, 1992.
 [46] Davide Sangiorgi. Introduction to Bisimulation and Coinduction. 2012.
 [47] Luigi Santocanale. On the equational definition of the least prefixed point. Theoretical Computer Science, 2003.
 [48] Dana S. Scott. Domains for denotational semantics. Technical report, Computer Science Department, Carnegie Mellon University, 1983.
 [49] Joseph R. Shoenfield. Mathematical Logic. AddisonWesley, 1967.
 [50] David Spivak. Category theory for the sciences. MIT Press, 2014.
 [51] Alfred Tarski. A latticetheoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 5:285–309, 1955.
 [52] Ross Tate, Alan Leung, and Sorin Lerner. Taming wildcards in Java’s type system. PLDI’11, June 4–8, 2011, San Jose, California, USA., 2011.
 [53] Ewan Tempero, Hong Yul Yang, and James Noble. What programmers do with inheritance in Java. In Proceedings of the 27th European Conference on ObjectOriented Programming, ECOOP’13, pages 577–601, Berlin, Heidelberg, 2013. SpringerVerlag.
 [54] Rik van Geldrop and Jaap van der Woude. Inductive sets, the algebraic way, 2009.