Representation and Measure of Structural Information

We introduce a uniform representation of general objects that captures the regularities with respect to their structure. It allows a representation of a general class of objects including geometric patterns and images in a sparse, modular, hierarchical, and recursive manner. The representation can exploit any computable regularity in objects to compactly describe them, while also being capable of representing random objects as raw data. A set of rules uniformly dictates the interpretation of the representation into raw signal, which makes it possible to ask what pattern a given raw signal contains. Also, it allows simple separation of the information that we wish to ignore from that which we measure, by using a set of maps to delineate the a priori parts of the objects, leaving only the information in the structure. Using the representation, we introduce a measure of information in general objects relative to structures defined by the set of maps. We point out that the common prescription of encoding objects by strings to use Kolmogorov complexity is meaningless when, as is often the case, the encoding is not specified in any way other than that it exists. Noting this, we define the measure directly in terms of the structures of the spaces in which the objects reside. As a result, the measure is defined relative to a set of maps that characterize the structures. It turns out that the measure is equivalent to Kolmogorov complexity when it is defined relative to the maps characterizing the structure of natural numbers. Thus, the formulation gives the larger class of objects a meaningful measure of information that generalizes Kolmogorov complexity.


1 Introduction

What is a pattern? There does not seem to be a generally accepted mathematical definition. Intuitively, a pattern is something simpler than it appears. For instance, a repetition of a short substring in a longer string is a pattern: the longer string is simpler, or contains less information, than most other strings of the same length. Here, we see a comparison between the apparent size (the length in the literal representation) and the "real" amount of information. Formally, this can be stated in terms of the Kolmogorov complexity[1, 2, 9, 13, 14] of the string, which is roughly defined as the length of the shortest input to a universal Turing machine that produces the string. A string can be said to have a pattern if its Kolmogorov complexity is much smaller than its length: strings that can be effectively described by a significantly shorter description than their length have patterns. Our goal in this paper is to formalize this notion in a domain of objects more general than strings.
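As a computable illustration of this idea (a sketch only: Kolmogorov complexity itself is uncomputable, and zlib is used here merely as a stand-in upper bound on the length of the shortest description):

```python
import os
import zlib

def proxy_complexity(s: bytes) -> int:
    """Length of a compressed description; an upper-bound proxy for
    the (uncomputable) Kolmogorov complexity of s."""
    return len(zlib.compress(s, 9))

patterned = b"ab" * 5000          # 10,000 bytes with an obvious pattern
random_like = os.urandom(10000)   # 10,000 incompressible-looking bytes

# The patterned string admits a description far shorter than itself;
# the random one does not compress much below its own length.
assert proxy_complexity(patterned) < 100
assert proxy_complexity(random_like) > 9000
```

In this sense, a string "has a pattern" exactly when some effective description of it is much shorter than the string itself.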

For instance, consider bitmap images. Ordinary images are far more orderly than what their representation as an array of colors allows; if we take a random bitmap out of all that can be represented as a bitmap, it is almost always white noise rather than what we would consider an ordinary image. This is similar to the string case, where most strings of a given length are random ones that do not have a shorter description than the literal one. What is the corresponding "effective description" of images? Intuitively, it should be a way of describing images in which ordinary images can be represented more concisely than noise images.

A Turing machine that produces a bitmap does not suffice because, unlike the case of strings, where all strings can be represented precisely as they are, bitmaps are only approximations of what we consider to be real images: pixels are artifacts of an arbitrary approximation, and we naturally consider bitmaps of various resolutions to be the same if they show the same scene. There would be no problem if all important features of an image were independent of the choice of pixelation. However, this is clearly not so: even a notion as simple as that of a line is not so simple to define on bitmaps, especially in such a way that a line in one resolution can be converted into a line in another.

Infinite-resolution bitmaps, or functions on an image domain that take values in the color space, seem good enough for the literal representation. But then the objects appearing there are continuous, infinite entities and thus cannot easily be described effectively as, for instance, the output of a Turing machine. Yet intuition tells us that some of these infinite entities contain only finite information, as in the extreme cases of "geometric" visual patterns shown in Figure 1.

1.1 Kolmogorov Complexity Covers All?

But surely, one might say, Kolmogorov complexity already covers any domain, since Computer Science teaches us that information can be encoded by strings. That is, we can first fix some standard enumeration of the objects, establishing a one-to-one correspondence between the objects and strings; then we can define the complexity of an object to be the Kolmogorov complexity of the corresponding string. That seems to be where such an inquiry usually stops, content with the notion that essentially we only need to investigate strings.

However, we immediately encounter a few problems.

First, for a class of objects (such as the subsets of a Euclidean space) that has a larger cardinality than the set of all strings, we cannot encode all objects by strings; thus we must give up the one-to-one correspondence. We must either encode only some of the objects, encode (perhaps infinitely) many objects by each string, or employ some combination of the two. The choice amounts to knowing what to ignore, whether it is some (even most) of the objects that are not encoded, or the difference between objects that are encoded into the same string. How should we make this choice?

More fundamentally, the resulting measure has little meaning without actually specifying the encoding. Let O and S be the sets of objects and strings, respectively, and K(s) the Kolmogorov complexity of a string s ∈ S. With an encoding E : O → S, we might call K(E(x)) the complexity of an object x ∈ O. However, if we do not have some good reason to take a particular E, we can equally use the encoding σ ∘ E with an arbitrary permutation σ of S. This observation renders the definition meaningless without E explicitly specified. So the question is: what is the encoding that gives the complexity some meaning? How can we avoid falling into this trap of arbitrariness? It is certainly not enough just to say that the objects can be encoded.

With strings, we can choose the identity map as E, which gives K(E(x)) as much meaning as K itself. In other cases, however, we need to specify E, with at least some justification. If we insist on encoding objects into strings, we need to define a concrete encoding for each class of objects.
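The arbitrariness argument can be made concrete with a small sketch. A compressor again stands in for K, and the permutation σ is realized (as one hypothetical choice) by XOR with a fixed random mask, which is a bijection on strings of that length:

```python
import os
import zlib

def proxy_K(s: bytes) -> int:
    """Compressed length as a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

# A "natural" encoding E of our objects (here the objects already are strings,
# so E is the identity).
x = b"0123456789" * 1000  # a highly regular object; E(x) = x

# An equally valid encoding sigma∘E: XOR with a fixed random mask, which is a
# bijection (its own inverse) on strings of this length.
mask = os.urandom(len(x))
def sigma(s: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(s, mask))

# Under E the object looks simple; under sigma∘E the very same object looks
# nearly random.  Without specifying the encoding, "the complexity of x"
# means nothing.
assert proxy_K(x) < 200
assert proxy_K(sigma(x)) > 9000
```

The permutation here is involutive (`sigma(sigma(x)) == x`), so no information is lost either way; only the arbitrary choice of encoding changes the apparent complexity.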

Another problem with measuring information solely through Kolmogorov complexity is that we cannot easily ignore the part of the information we do not care about. For instance, we may try to represent a point in the Euclidean plane by identifying the plane with ℝ², i.e., by a pair of real coordinates, and then encoding them by strings. However, a single real number can contain an arbitrarily large amount of information. Thus, in this representation, a single point can have an arbitrarily large amount of information when encoded as a string. That is certainly not what we want here. Thus, an important part of the encoding is specifying the part of the information we wish to ignore. But we cannot simply delete such information in the encoding process, since it may be needed to identify and measure the regularities in the structure later. If we insist that the computation be carried out strictly in the domain of strings, E's output must contain all the information in the points. But after the information has been converted into strings, how do we specify which part of it should be ignored?

It would be much better if we could define the notion of computation, such as compression and pattern finding, directly in terms of the objects we deal with. What we offer in this paper is a meta-definition of the encoding for multiple classes of objects, specifying how to embed computations in larger spaces. Central to the formalism is a representation of objects that offers the means to specify the information in individual elements that should be ignored, while using that very information to find the structures in which we wish to measure the amount of information.

Thus, the central problem is that of the encoding, or representation, of objects. The paradigm of measuring information through computation is the same; the difference is where the computation takes place. This question of encoding seems to have suffered a neglect which, in our belief, has prevented the formulation of a notion of information in objects that have not already been encoded in a convenient way. We discuss this further in Section 7.

1.2 Motivation

Our motivation for asking this question stems from the desire to model perception. Perception is a process in which the configuration of the signal source is recovered from a signal, as in recovering a three-dimensional scene from an image. Because perception is an inherently ill-posed problem, a large amount of prior knowledge must be stored in the perceiver.

The problem is that, given the signal, there are usually infinitely many possible source configurations. Without a preference among possible source configurations on the side of the perceiver, there is no reason to choose one possibility over another. For instance, our visual system has a great capability to organize the visual signal into interpretable shapes, as in making sense of the famous Dalmatian photo by R. C. James in [5]. To model such a system, it is not enough to know what the possible configurations of the signal source are; we need to know in advance how likely we are to encounter each of them.

However, even putting aside the problem of estimating the probabilities, just storing and retrieving the data is impossible unless we have a very good way to compress it; for instance, if we store the possible shapes of surfaces as an array of 10 possible heights at each of n positions, the number of possible surfaces would be 10^n. The way this problem has been dealt with is by estimating the probabilities from specific characteristics of the possible surface. For instance, the smoothness of a surface can be computed from a given description of the surface; we can then decide, for instance, that the smoother the surface, the higher the probability. Indeed, the field of computer vision and pattern recognition is full of such heuristics. Even when machine learning techniques are used, the variables to be learned must be carefully chosen, because we cannot simply learn all possible surfaces.
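The combinatorics can be checked directly; the grid size below is a hypothetical choice of ours, picked only to illustrate the explosion:

```python
import math

# With h possible heights at each of n grid positions, the number of
# distinct surfaces is h**n.  Even a (hypothetical) 100x100 grid with
# 10 height levels is hopelessly beyond any table-based storage.
h, n = 10, 100 * 100
num_surfaces = h ** n

# Count decimal digits without converting the huge integer to a string
# (exact here because h is a power of ten):
digits = int(math.log10(h) * n) + 1
assert num_surfaces > 10 ** 80   # far more surfaces than atoms in the universe
print(digits)                    # 10001
```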

Our desire to have a measure of information originates from the wish for a principle for automatically deciding which quantities to look at and which combinations of variables to learn, both because we consider it reasonable for a perceiving entity to look first for simpler patterns in the signal and because of the demands of storage efficiency. That is, if we have a measure of the simplicity of general visual objects, we can say, for instance, that the probability is proportional to the simplicity, or use machine learning techniques to learn the probabilities with which such simpler patterns appear.

In more general terms, this is a problem of inductive inference and modeling: we inductively seek a model of the world that best explains the data. There are theories that treat such problems, and among them are ones with the spirit described above. For instance, the Minimum Description Length (MDL) principle[11] advocates Occam's Razor: among models that fit the data equally well, it chooses the one that is "simplest" in the sense that it allows the shortest description of the data. However, crucially missing from this theory is the problem of representation. The MDL theory deals only with strings as the data and does not say how objects should be described by strings. It is a good principle for people dealing with individual problems they understand; but when it comes to dealing with general objects, it lacks the mathematical concreteness needed to program the principle itself into machines.

Also, as perceiving entities, we seem to have more interest in the finite part of the data. One may even say that we can perceive only finite information out of any infinitely rich source of information, on the basis that our capacity for representation is presumably finite. For instance, if we see a white-noise image, we do not perceive the amount of information that can be encoded in such an image. Instead, we glean the information that we can: we might just note that it is white noise, or, if it is a video, we would recognize that the noise is constantly changing, and so on. If we see the three images in Figure 2, which are the same pattern with different noise added, we do not discriminate among them. Even though as raw bitmaps they are quite different, we perceive almost nothing about the noise except its presence; we just recognize the pattern of the lines as the same and notice that there is some noise. Thus, to model perception, we need a way to recognize the part of an infinite signal that represents finite but useful information. This is why we are especially interested in inherently finite structures whose literal manifestations are infinite.

The human visual system seems to have “the ability to impose organization on sensory data—to discover regularity, coherence, continuity, etc., on many levels,” which is “apart from both the perception of tri-dimensionality and from the recognition of familiar objects[15].” We agree that such structure and organization that appears at every level is the key to modeling vision and perception in general. One purpose of this work is to provide a language to express the perceptual organization that enables us to implement the ability to impose it on the data.

1.3 Desiderata

What we seek is a description, or representation, of general objects with the following properties:

1. General: It can represent a general class of objects and all objects in the class, including parts of the descriptions themselves, allowing hierarchical description.

2. Uniform: It represents the objects in a uniform way by simple rules.

3. Reflect complexity: The intuitive complexity, or the amount of information in the object, corresponds to the complexity of the description. In particular, an intuitively finite object has a finite description.

4. Grounded: There is a set of rules that applies to the whole class of objects, not depending on the instance of the object, dictating how the described objects are related to the raw signal.

In the case of strings, the literal representation satisfies I and II: representing a string as a string is obviously general enough to represent any string, and the representation is uniform for all strings. Fixing a universal Turing machine U, one can consider a program p for U as a description of a string if p causes the machine to halt after writing out the string on the tape. An intuitively simple string would have a short program. Also, describing strings by other strings automatically satisfies the describability of descriptions and the groundedness. Thus this description satisfies all of the desiderata.

In the case of images, we can think of a function on a rectangle in ℝ² as the literal representation, satisfying desiderata I and II. But we do not know of a representation that satisfies all of the desiderata. Perhaps the closest are page description languages such as PostScript, possibly modified to allow infinite precision. However, PostScript has too many primitives to be convenient for mathematical treatment. The uniformity and simplicity of the rules of description are important not only for mathematical convenience, but also because we eventually aim to develop a way of automatically extracting such descriptions from the literal description, or the signal. More crucially, PostScript is not general enough: its class of objects is limited to two-dimensional pages. Applicability to objects more general than images is important because we would like a description that reflects the structure within data more abstract than a two-dimensional page, especially the description itself. For instance, if we have a way to describe circles by center points and radii, we have a three-dimensional space of circles; we would like to use the same uniform description to describe a group of the 3D points corresponding to the circles. Thus, allowing "describing the description" is crucial for the efficient description of repetitive, hierarchical, and more general structures.

The groundedness requirement is needed to treat general structures in a uniform way. When we say that some data represents some object, we implicitly assume a set of rules for data interpretation and manipulation. It is this set of rules that gives the structure to the object. It is like a machine with knobs and buttons to control it: knowing their settings may be enough to determine the state of the machine; but to describe the effect and interaction of the machine with the environment and other machines, we need more than the internal parameters. If the rules are ad hoc, varying from one instance of representation to another, it would be impossible to formulate the notion of general structures and describe the manipulation of and interaction between such structures.

All the desiderata are related to each other. In particular, we emphasize the following: it is not enough that the simple objects correspond to less data (III); the correspondence must be obtainable from the representation (IV) in a uniform (II) way that applies to the whole general class (I) of objects. For instance, we can say that a pair of a point and a real number represents a circle by regarding them as the center and the radius. Or we can say that the pair represents a line by regarding it as a point the line goes through and its angular direction. But for the representation to cover both cases, it must also include data specifying which case each object is. Such data quickly adds up when one wants to represent various shapes; so when we say that general shapes are described in a representation, it has not only to cover all the shapes but also to include the necessary data in such a way that any represented shape can be converted into a common, literal representation.

1.4 Related Work

The General Pattern Theory[6, 7, 8] is an effort to provide an algebraic framework for describing patterns as structures. It defines a vocabulary that is manipulated to cast the concept of pattern in a precise algebraic language. While it has detailed algebraic and statistical theories with many examples, we discuss here only the part that deals with the representation of patterns. The representation is based on graphs. A graph is fixed; each of its nodes can be assigned a generator from a predetermined set; a restriction on which combinations of generators can be assigned to the nodes is defined as a set of pairwise restrictions corresponding to the edges of the graph. Numerous examples in the literature show that this representation can be used to represent many classes of objects.

Syntactic Pattern Recognition[4] also represents patterns in a way that explicitly handles the interrelationships between the parts that make up the whole of the object, and uses the explicit structure in patterns to recognize them. The representation is either by a formal language or by a graph.

Neither of the representations used in the two formalisms is satisfactory for us. The crucial problem is that they are not grounded in the sense above. They are not uniform from one class of objects to another; thus, although we can talk about the information in the objects of each class, there is no way to compare it across classes. They are general in the sense that one can adapt them to many different classes of objects, but they are not general enough to represent all of the classes in the same uniform way. They cannot be used, for instance, to define what patterns are, because there is no prescribed way for the representation to be connected to a general enough class of objects. There is no formal way to give raw data and ask what pattern it might form.

Besides the Kolmogorov complexity already mentioned, there are many notions of the complexity of objects. Most of them concern classes of objects that do not include those we deal with in this paper. Also, note that what we define in this paper is a measure of information, like Kolmogorov complexity and Shannon information[12], rather than a measure of complexity such as computational complexity. We refer the reader to the appendix of [3] for an overview of formulations of complexity with an extensive bibliography.

1.5 Overview of the Representation

In this paper, we introduce a representation that fulfills all the desiderata above. Here, we give an overview of its definition and some of its properties.

We assume that the objects are given a priori as subsets of some sets. The objects that can be thought of in this way form a very general class that seems to include most, if not all, objects we might deal with. For example, we can think of a binary string as a subset of ℕ × {0, 1}, the graph of the function mapping each position to a bit; an image can be thought of as a subset of the product of the image plane and the color space, i.e., the graph of the image function on the image plane. A physical object like a bicycle or an automobile, at one level of abstraction, can be thought of as a subset of E × M, where E is the 3D Euclidean space and M the set of materials, e.g., glass, iron, rubber, etc.; the subset consists of the pairs (x, m) such that the material m occupies the point x in E. We call this representation of objects as subsets the ground representation; it serves as a signal-level, literal representation. It is simple to represent something in this way; we can then ask the amount of information therein.

The ground representation is an abstraction of the kind of data representation that we call the dense representation, which includes strings, bitmaps, and other raw data. It corresponds to representing a string as itself. One property of the dense representation is that the presence of regularities does not affect it. For instance, any image can be represented as a bitmap in exactly the same manner, whether it is a regular image or white noise. Another type of representation, which we call the sparse representation, utilizes regularities in the object to describe it. In the image example, if it is an image of geometric objects, it should take very little data to describe it, at least in principle; a circle, for instance, can be represented just by specifying its center point and radius. The same kind of description cannot be used to describe white noise.
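A minimal sketch of the dense/sparse contrast (the resolution and the rasterization rule below are hypothetical choices of ours): three numbers suffice to generate a circle's pixel set at any resolution, while noise admits no description sparser than the pixel set itself.

```python
import math
import random

N = 64  # a hypothetical bitmap resolution

def rasterize_circle(cx: float, cy: float, r: float) -> set[tuple[int, int]]:
    """Interpret the sparse description (center, radius) as a dense bitmap:
    the set of pixels within half a pixel of the ideal circle."""
    return {(i, j) for i in range(N) for j in range(N)
            if abs(math.hypot(i - cx, j - cy) - r) < 0.5}

# Sparse -> dense is uniform and mechanical; the three parameters below
# expand into a pixel set at whatever resolution we choose.
circle_pixels = rasterize_circle(32.0, 32.0, 20.0)

# White noise, in contrast, has no sparser description than the dense
# pixel set itself:
noise_pixels = {(i, j) for i in range(N) for j in range(N)
                if random.random() < 0.5}

print(len(circle_pixels))  # roughly the circumference, ~2*pi*20 pixels
```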

An important feature of the representation proposed in this paper is that it can interpolate between the dense and sparse representation, so that it can take advantage of regularity in the data while also being capable of representing any, even random, data. This is similar to using an input to a Turing machine to represent strings.

As the vocabulary to describe the regularities in such objects, we use the maps that characterize the space that includes the objects as subsets. Maps characterize the structure of spaces in the following sense. Any two sets with the same cardinality are the same sets in the absence of other characteristics. For instance, ℝ is in this sense the same set as ℝ², i.e., there exists a one-to-one map between them, if we disregard structures such as the topology, the vector space structure, the metric structure, the order, and the algebraic structure. These structures can be characterized by maps. For instance, the metric structure is defined by the distance map that gives the distance between two elements of the set; the order is given by a predicate on a pair of elements that returns true if the first element is less than the second. The two sets are different when we consider the structures, because the one-to-one map does not commute with the maps that define the structures.
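The non-commuting point can be checked on a toy finite set; the particular bijection below is an arbitrary illustrative choice:

```python
# Structure on the set {0,...,5}: the (partial) successor map.
# A mere bijection need not commute with it, and then it does not
# preserve the structure the map defines.
n = 6
succ = {i: i + 1 for i in range(n - 1)}        # the structure map
f = {0: 3, 1: 0, 2: 4, 3: 1, 4: 5, 5: 2}       # some bijection on {0,...,5}

def commutes(f: dict, g: dict) -> bool:
    """Does f(g(i)) = g(f(i)) wherever both sides are defined?"""
    return all(f[g[i]] == g[f[i]] for i in g if f[i] in g and g[i] in f)

identity = {i: i for i in range(n)}
assert commutes(identity, succ)      # the identity preserves the structure
assert not commutes(f, succ)         # this bijection does not
```

As a bare set, `f` is a perfectly good one-to-one correspondence; it is only relative to the structure map `succ` that it fails to be structure-preserving.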

Using such structure maps to describe regularity, objects with regularities can be represented through sparse parameters in the proposed representation. For a given object, there can be many different ways of representing it, just as there can be any number of inputs to a Turing machine producing the same string. Importantly, there is a prescribed way to connect the description to the ground representation; thus, the representation is grounded. While taking structures into account so that regular objects can be represented as such, it automatically provides an interpretation of each represented object into the signal level. In other words, the relationship between the parameters and the data is part of the representation. Thus, we can give our data in the ground representation and then ask what sparser, more structured representation is possible.

Let us be slightly more concrete. In the proposed representation, we take a number of sets and maps between them (composed from the structure maps), which we call a diagram. We call an assignment of a subset to each set in the diagram a cross section of the diagram. A cross section must satisfy certain constraints, because of which we can uniquely determine all the subsets by specifying only a partial cross section, which assigns subsets to only some of the sets in the diagram. If one of the subsets coincides with the ground representation of the object in question, we say that the object is represented by the diagram and the partial cross section.

The maps define the structures we take into consideration, which determine the regularities, which in turn allow more concise description of the object than the literal one. Since the representation by diagrams and cross sections is explicitly in terms of the maps in the diagram, it is apparent from the diagram exactly what structure is taken into account.

Some of the properties of the representation are as follows. It is:

1. Sparse: Unlike dense representations such as bitmaps, it can represent objects by a combination of their essential structure and instance-specific parameters. The diagram expresses the essential structure, while the partial cross section represents the parameters. Implementation-dependent approximation of the representation affects only the parameters and thus can be separated from the structure. The sparseness also makes the representation flexible and easy to manipulate: by modifying the parameters, different instances of the same structure can easily be represented, and the comparison of two patterns having the same structure is naturally defined.

2. Modular: Parts of the representation can be understood as modules for constructing larger and more complex ones. Complex combinations can be obtained hierarchically and recursively, as well as by simple union and intersection.

3. Hierarchical: Because it can be applied to any data, it can also be applied to the parameter space parametrizing some other structure, leading to a hierarchical representation.

4. Recursive: It can represent a recursively defined structure, making it particularly powerful in, for instance, representing repeated patterns. The “repeat” can be in various spaces that can manifest in the final pattern in non-obvious ways.

Finally, diagrams and their cross sections can represent maps between power sets. In fact, the representation of subsets can be considered a special case in which the map sends a trivial set to the subset. In particular, any computation can also be represented.

1.6 Measure of Structural Information

Using the representation, we introduce a measure of information. Roughly speaking, it is defined as the size of the smallest diagram representing the object, where the diagram may contain only maps composed from a given set of structure maps, including constant maps.

Thus, the measure is relative to the structure and constants, expressed explicitly in the form of maps. The explicit incorporation of the structure of the object space is the key to avoiding the trap of arbitrariness. Patterns such as those shown in Figure 1 all have finite information according to the measure. The measure is relative to the constants because of the aforementioned need to separate the information in the structure from that in infinite objects such as real numbers.

Because of the reasons laid out in 1.1, we do not follow the recipe of interfacing with Kolmogorov complexity by encoding objects as strings; strings are given no special status in this theory. Instead, we define the measure directly in terms of the structure of the spaces in which the objects reside. As such, the new measure does not depend on Kolmogorov complexity. Since the class of applicable objects includes strings, however, the question of their relationship arises. It turns out that the new measure is equivalent to Kolmogorov complexity in the case where strings are characterized by the structure of natural numbers, given by the constant 0 and the successor function. Note that this is not obvious a priori: the definition of the representation and the measure does not even mention strings. Also, the new measure is defined relative to structure maps. It is equivalent to Kolmogorov complexity when defined relative to this particular set of structure maps; relative to other sets, it may not be. If the set includes the constant maps of all strings, for example, any string's information would be 1.

Thus, the new measure gives the larger class of objects a meaningful measure of information that generalizes Kolmogorov complexity.

The rest of the paper is organized as follows. In the next section, we define the notions of diagrams and their cross sections precisely, as well as what is meant by representing with them; we also list the notations used throughout this paper. In Section 3, we illustrate some properties of the representation with geometric examples. In Section 4, we give more examples, this time representing computations. In Section 5, we define the measure of information in the structure of general objects. In Section 6, we prove that the measure generalizes Kolmogorov complexity. In Section 7, we further discuss the difference between our approach and the string-centered one, before concluding.

2 Representation by Diagrams and Cross Sections

2.1 Definitions

We fix the notation for standard finite sets as $0 = \emptyset$, $1 = \{0\}$, $2 = \{0, 1\}$, etc. The set $2$ is also used as the set of Boolean values, $0$ and $1$ meaning false and true. We mean by $f : X \to Y$ that $f$ is a map from $X$ to $Y$. We denote the set of all subsets of $X$ (the power set of $X$) by $2^{X}$. For an element $x$ of $X$, the map from $2^{1}$ to $2^{X}$ that maps $1$ to $\{x\}$ is denoted by the same letter $x$. We call it a constant map.

Definition 1.

Let $\mathcal{S} = (S_i)_{i \in I}$ be a family of sets indexed by a set $I$. A cross section $s$ of $\mathcal{S}$ is an assignment to each set $S$ in $\mathcal{S}$ of its subset $s(S) \subset S$.

In other words, a cross section of $\mathcal{S} = (S_i)_{i \in I}$ is another family of sets $(s_i)_{i \in I}$ indexed by $I$ such that $s_i \subset S_i$ for all $i \in I$. We used the index set to make clear that there can be multiple members of the family that are identical as sets; however, we avoid the use of indices almost entirely in this paper. We use the set-theoretic notation with $\mathcal{S}$, such as $S \in \mathcal{S}$. The equality of two members of $\mathcal{S}$ means that their indices are the same; if the indices are different, we treat the members as different, even if they are identical as sets. When we discuss a set $S$ in $\mathcal{S}$ and a cross section $s$ of $\mathcal{S}$, $s(S)$ denotes the subset assigned to $S$ by $s$. Thus, $s$ assigns each $S \in \mathcal{S}$ its subset $s(S) \subset S$.

We denote the set of cross sections of $\mathcal{S}$ by $\mathit{CS}(\mathcal{S})$. Let $\mathcal{T}$ be a subfamily of $\mathcal{S}$. A cross section of $\mathcal{T}$ is called a partial cross section of $\mathcal{S}$. For a cross section $s$ of $\mathcal{S}$, the cross section of $\mathcal{T}$ that assigns $s(S)$ to each $S$ in $\mathcal{T}$ is called the restriction of $s$ to $\mathcal{T}$, denoted by $s|_{\mathcal{T}}$. For a cross section $t$ of $\mathcal{T}$, we denote the set of cross sections of $\mathcal{S}$ that restrict to $t$ by $\mathit{CS}(\mathcal{S}, t)$.

Definition 2.

A diagram $D = (\mathcal{S}, \mathcal{S}', \Phi)$ is a triple of a family $\mathcal{S} = (S_i)_{i \in I}$ of sets, its subfamily $\mathcal{S}'$, and a family $\Phi = (\varphi_j)_{j \in J}$ of maps of the form $\varphi : 2^{S} \to 2^{T}$, with $S, T \in \mathcal{S}$, where $I$ and $J$ are index sets.

A diagram such that both $\mathcal{S}$ and $\Phi$ are finite is called a finite diagram. Let $D = (\mathcal{S}, \mathcal{S}', \Phi)$ be a diagram. There are maps $\mathrm{dm} : \Phi \to \mathcal{S}$ and $\mathrm{cd} : \Phi \to \mathcal{S}$ such that $\varphi : 2^{\mathrm{dm}(\varphi)} \to 2^{\mathrm{cd}(\varphi)}$ for $\varphi \in \Phi$. Also, define the maps $\mathrm{in} : \mathcal{S} \to 2^{\Phi}$ and $\mathrm{out} : \mathcal{S} \to 2^{\Phi}$ so that $\mathrm{in}(S) = \{\varphi \in \Phi \mid \mathrm{cd}(\varphi) = S\}$ and $\mathrm{out}(S) = \{\varphi \in \Phi \mid \mathrm{dm}(\varphi) = S\}$ for $S \in \mathcal{S}$.

Definition 3.

A cross section of a diagram $D = (\mathcal{S}, \mathcal{S}', \Phi)$ is a cross section $s$ of $\mathcal{S}$ such that, for any $S \in \mathcal{S}$ with $\mathrm{in}(S) \neq \emptyset$, the following holds:

\begin{align}
s(S) &= \bigcap_{\varphi \in \mathrm{in}(S)} \varphi(s(\mathrm{dm}(\varphi))) && \text{if } S \in \mathcal{S} \setminus \mathcal{S}', \tag{1}\\
s(S) &= \bigcup_{\varphi \in \mathrm{in}(S)} \varphi(s(\mathrm{dm}(\varphi))) && \text{if } S \in \mathcal{S}'. \tag{2}
\end{align}

In a diagram $D$, the subfamily $\mathcal{S}'$ of $\mathcal{S}$ specifies the sets for which a cross section should satisfy (2) instead of (1); this means that the cross section on such a set should be the union, rather than the intersection, of the images by the incoming maps. We denote the set of cross sections of diagram $D$ by $\mathit{CS}(D)$. We also define $\mathit{CS}(D, \mathcal{T}, t)$ for a subfamily $\mathcal{T}$ of $\mathcal{S}$ and a cross section $t$ of $\mathcal{T}$; i.e., $\mathit{CS}(D, \mathcal{T}, t)$ is the set of cross sections of diagram $D$ that restrict to the cross section $t$ of subfamily $\mathcal{T}$ of $\mathcal{S}$.
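Conditions (1) and (2) can be checked mechanically when the diagram and its sets are finite. The following Python sketch is ours and not part of the formalism (the names `in_maps` and `union_sets` are our own): it tests whether an assignment of subsets is a cross section, taking intersections for ordinary sets and unions for sets in $\mathcal{S}'$.

```python
def is_cross_section(sec, union_sets, in_maps):
    """Check Definition 3: for every set S with incoming maps, sec[S]
    must equal the intersection (or union, if S lies in the subfamily
    S') of the images of sec under the incoming maps."""
    for S, maps in in_maps.items():
        if not maps:
            continue  # no incoming maps: no constraint on sec[S]
        images = [phi(sec[dom]) for dom, phi in maps]
        if S in union_sets:
            expected = frozenset().union(*images)        # rule (2)
        else:
            expected = frozenset.intersection(*images)   # rule (1)
        if sec[S] != expected:
            return False
    return True


# A two-set diagram: phi sends each subset of S1 = {0,1,2} to its
# image under x -> x+1 (mod 3), feeding into S2.
phi = lambda A: frozenset((x + 1) % 3 for x in A)
in_maps = {"S2": [("S1", phi)]}
good = {"S1": frozenset({0}), "S2": frozenset({1})}
bad = {"S1": frozenset({0}), "S2": frozenset({2})}
```

With a single incoming arrow the two rules coincide; the intersection/union distinction only becomes visible once a set has several incoming maps.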

To illustrate the definitions by example, suppose that $\mathcal{S}$ consists of the sets $W$, $X$, $Y$, and $Z$, and that $\Phi$ consists of the maps

\[ w : 2^{1} \to 2^{W}, \quad \varphi : 2^{Y} \to 2^{X}, \quad \psi : 2^{Z} \to 2^{Y}, \quad \eta : 2^{W} \to 2^{Y}, \quad \delta : 2^{W} \to 2^{X}, \quad \kappa : 2^{X} \to 2^{W}, \]

where $w$ is an element of $W$; the same letter denotes a constant map. We denote the diagram as follows:

 (3)

For instance, $\varphi$ maps each subset of $Y$ to a subset of $X$. We omit the set $1$ from the diagram: a constant map is shown as an incoming arrow without the domain $2^{1}$. Note also that the arrows have dotted shafts, which signifies that the maps are between power sets. The parenthesized subscript numbers are for reference: as more than one set in the family can be identical as sets, we use these to refer to them. We always use $S_i$ to mean the set with the subscript $(i)$ in the diagram under discussion. Thus, if we are discussing the one in equation (3), $S_1$ means the set with subscript $(1)$, etc. Also, there are two kinds of arrowheads: ordinary ones and round ones. An arrow has the round arrowhead if and only if it is coming into a set in $\mathcal{S}'$. For a cross section of this diagram, we have, e.g.,

A diagram and its partial cross section represent an object in the following sense.

Definition 4.

Let $D$ be a diagram, $S$ a set in $\mathcal{S}$, $\mathcal{T}$ a subfamily of $\mathcal{S}$, and $t$ a cross section of $\mathcal{T}$. Suppose an object is represented in the ground representation as a subset $A$ of $S$. The object is said to be represented by $(D, \mathcal{T}, t, S)$ if $s(S) = A$ for every cross section $s$ of $D$ that restricts to $t$.

The ground representation is a special case of this as a trivial representation: just take a diagram with no maps and let the partial cross section assign to $S$ the subset representing the object itself. Thus the representation is general enough to include all dense representations. The aim, however, is to enable more efficient representations that capture the structure.

Here, we also define the concepts of minimality and limit for later use.

Definition 5.

Let $D$ be a diagram, $\mathcal{T}$ a subfamily of $\mathcal{S}$, and $t$ a cross section of $\mathcal{T}$. A cross section $s$ of $D$ restricting to $t$ such that no other such cross section $s'$ gives $s'(S) \subsetneq s(S)$ is said to be minimal on $S$. We denote the set of cross sections minimal on $S$ by $\mathit{CS}_{\min}(D, \mathcal{T}, t, S)$.

Note that for : if , then for any since and ; thus . Since it is also the case that , by symmetry it follows that .

Definition 6.

Let be a diagram, , and . Furthermore, let be a finite number of sets in . A subset of is said to be represented by the data as a limit if for any cross section in .

2.2 Notations

Here we list some more notations used in this paper.

1. For any set $X$, $\mathrm{id}_X$ denotes the identity map on $X$ and $\omega$ the unique map from $X$ to $1$. The complement map is defined for $A \subset X$ by:

\[ \mathrm{cmpl}(A) = cA = X \setminus A. \tag{4} \]
2. The product map $f_1 \times f_2$ of maps $f_i : X \to Y_i$ is defined by $(f_1 \times f_2)(x) = (f_1(x), f_2(x))$. Given a map $f : X \to Y$ and a constant map $z : 2^{1} \to 2^{Z}$, one can construct a product map

\[ f \times (z \circ \omega) : X \to Y \times Z \]

of $f$ and $z \circ \omega$. By abuse of notation, we denote this map by $f \times z$. Similarly, we mix maps of the form $X \to Y$ and constant maps freely in making a product map.

3. For a Cartesian product $X_1 \times X_2 \times \cdots \times X_n$, the map

\[ \pi_i : X_1 \times X_2 \times \cdots \times X_n \to X_i \]

is the projection to the $i$'th component. We use a shorthand $\pi_{12}$ for the product map $\pi_1 \times \pi_2$, $\pi_{13}$ for $\pi_1 \times \pi_3$, and so on.

4. For a disjoint union $X_1 + X_2 + \cdots + X_n$, the map

\[ \iota_i : X_i \to X_1 + X_2 + \cdots + X_n \]

is the injection from the $i$'th component.

5. The map union $f \cup g$ of maps $f : X \to Z$ and $g : Y \to Z$ with disjoint domains is defined by $(f \cup g)(x) = f(x)$ if $x \in X$ and $(f \cup g)(x) = g(x)$ if $x \in Y$.

6. For a map $f : X \to Y$, we denote by the same letter $f$ the map between the power sets defined by $f(A) = \{f(x) \mid x \in A\}$ for $A \subset X$.

7. For a map $f : X \to Y$, the map $f^{-1} : 2^{Y} \to 2^{X}$ is defined by $f^{-1}(B) = \{x \in X \mid f(x) \in B\}$ for $B \subset Y$. By a slight abuse of notation, by $f^{-1}(y)$ for $y \in Y$ we mean $f^{-1}(\{y\})$.

8. For a map $f : X \to X$, $f^{0}$ denotes $\mathrm{id}_X$. For a positive integer $n$, $f^{n}$ denotes the map defined as applying $f$ for $n$ times as well as the map between power sets defined as in vi). When $n$ is a negative integer, $f^{n}$ denotes the map defined as applying $f^{-1}$ for $|n|$ times.
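Items 6 through 8 are straightforward to mirror in code. The following Python sketch is ours (the function names `lift`, `preimage`, and `power` are our own): `lift` is the power-set map of item 6, `preimage` the map $f^{-1}$ of item 7, and `power` the iterate $f^{n}$ of item 8 for non-negative $n$.

```python
def lift(f):
    """Item 6: a map f: X -> Y induces 2^X -> 2^Y, A |-> {f(x) | x in A}."""
    return lambda A: frozenset(f(x) for x in A)

def preimage(f, X):
    """Item 7: f^{-1}: 2^Y -> 2^X, B |-> {x in X | f(x) in B}."""
    return lambda B: frozenset(x for x in X if f(x) in B)

def power(g, n):
    """Item 8 (n >= 0): g^n on power sets, applying g n times."""
    def gn(A):
        for _ in range(n):
            A = g(A)
        return A
    return gn


succ = lambda x: (x + 1) % 5   # successor on Z/5Z as a toy ground set
X = range(5)
```

For example, `lift(succ)` sends $\{0, 1\}$ to $\{1, 2\}$, `preimage(succ, X)` sends $\{0\}$ to $\{4\}$, and `power(lift(succ), 3)` sends $\{0\}$ to $\{3\}$.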

3 Geometric Patterns

Using diagrams and cross sections, we can represent geometric objects in a uniform and compact way. In this section, we introduce the representation and discuss its properties using examples.

3.1 Examples

As the simplest example, we consider a circle in the Euclidean plane $X$. Let us denote the vector space of translations in $X$ by $V$. Also, denote the map that sends $(x, y) \in X \times X$ to $x - y \in V$ by $\mathrm{sub}$, and the map that gives the length of a vector by $\mathrm{len} : V \to \mathbb{R}$.

Consider the following diagram:

\[ \mathbb{R}_{(1)} \xrightarrow{\ \mathrm{len}^{-1}\ } V_{(2)} \xrightarrow{\ \mathrm{sub}^{-1}\ } (X \times X)_{(4)} \xleftarrow{\ \pi_2^{-1}\ } X_{(3)}, \qquad (X \times X)_{(4)} \xrightarrow{\ \pi_1\ } X_{(5)} \tag{5} \]

This denotes a diagram with

\[ \mathcal{S} = (S_1, \cdots, S_5), \quad \mathcal{S}' = \emptyset, \quad S_1 = \mathbb{R}, \quad S_2 = V, \quad S_3 = S_5 = X, \quad S_4 = X \times X \]

and

\[ \Phi = (\mathrm{len}^{-1}, \mathrm{sub}^{-1}, \pi_2^{-1}, \pi_1). \]

Note that, while the inverse maps are indicated by ${}^{-1}$, the power map in the forward direction is to be surmised from the convention that the maps are between power sets.

Suppose that $\mathcal{T} = (S_1, S_3)$ and that its cross section $t$ is defined by

\[ t(S_1) = \{r\}, \qquad t(S_3) = \{p\}, \]

where $r$ is a positive real number and $p$ is a point in the Euclidean plane $X$. Let $s$ be a cross section of the diagram that restricts to $t$. Then, by (1), we have

\begin{align*}
s(S_2) &= \mathrm{len}^{-1}(s(S_1)) = \mathrm{len}^{-1}(t(S_1)) = \mathrm{len}^{-1}(\{r\}) = \{v \in V \mid \mathrm{len}(v) = r\},\\
s(S_4) &= \mathrm{sub}^{-1}(s(S_2)) \cap \pi_2^{-1}(s(S_3)) = \{(x, y) \in X \times X \mid x - y \in s(S_2),\ y \in s(S_3)\},\\
s(S_5) &= \{\pi_1((x, y)) \in X \mid (x, y) \in s(S_4)\} = \{x \in X \mid x - y \in s(S_2),\ y \in s(S_3)\} = \{x \in X \mid \mathrm{len}(x - p) = r\}.
\end{align*}

Thus the cross section $s$ is completely determined, and $s(S_5)$ is the set of the points on the circle centered at $p$ with radius $r$. In this way, the diagram with the partial cross section $t$ represents the circle.

If $t(S_3) = \{p_1, p_2\}$, it represents two circles with the same radius centered at $p_1$ and $p_2$. Thus, we can think of $S_3$ as the space of centers of the circles. If $t(S_1) = \{r_1, r_2\}$ instead, it would represent two concentric circles with radii $r_1$ and $r_2$. If we modify the diagram to

and let $t(S_1)$ be a set of radius-center pairs in $\mathbb{R} \times X$, then we have

\begin{align*}
s(S_2) &= \{(v, x) \in V \times X \mid (\mathrm{len}(v), x) \in t(S_1)\},\\
s(S_3) &= \{(x, y) \in X \times X \mid (x - y, y) \in s(S_2)\},\\
s(S_4) &= \{x \in X \mid \exists y \in X,\ (\mathrm{len}(x - y), y) \in t(S_1)\},
\end{align*}

and we have $s(S_4)$ as the circles specified by the radius-center pairs in $t(S_1)$.
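The derivation above can be replayed on a finite stand-in for the plane. In the following Python sketch (ours, not part of the formalism; an integer grid replaces $X$, and squared lengths keep the arithmetic exact), composing the preimages and the image as in diagram (5) recovers exactly the grid points at distance $r$ from $p$.

```python
# Integer-grid stand-in for the circle diagram (5).
R = range(-10, 11)
X = [(a, b) for a in R for b in R]           # "plane" points; V = X as vectors

def sub(x, y):                                # sub: (x, y) |-> x - y
    return (x[0] - y[0], x[1] - y[1])

def len2(v):                                  # squared length, exact on ints
    return v[0] ** 2 + v[1] ** 2

r2, p = 25, (1, 2)                            # radius 5 (squared) and center p
s2 = {v for v in X if len2(v) == r2}          # len^{-1}(t(S1))
s4 = {(x, p) for x in X if sub(x, p) in s2}   # sub^{-1}(s2) meet pi2^{-1}({p})
s5 = {x for (x, _) in s4}                     # pi1(s4)

# Direct description of the circle, for comparison.
circle = {x for x in X if len2(sub(x, p)) == r2}
```

The equality `s5 == circle` is exactly the conclusion $s(S_5) = \{x \in X \mid \mathrm{len}(x - p) = r\}$, restricted to the grid.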

For another example, a line in $X$ can be represented using the following diagram:

\[ V_{(1)} \xrightarrow{\ \pi_1^{-1}\ } (V \times \mathbb{R})_{(2)} \xrightarrow{\ \mathrm{sub}^{-1} \circ\, \mathrm{mult}\ } (X \times X)_{(4)} \xleftarrow{\ \pi_2^{-1}\ } X_{(3)}, \qquad (X \times X)_{(4)} \xrightarrow{\ \pi_1\ } X_{(5)} \tag{6} \]

Here, $\mathrm{mult} : V \times \mathbb{R} \to V$ is the scalar multiplication. Other maps are as above. Suppose that $\mathcal{T} = (S_1, S_3)$ and that its cross section $t$ is defined by

\[ t(S_1) = \{v\}, \qquad t(S_3) = \{p\}, \]

where $p$ is a point in the Euclidean plane $X$ and $v$ is a vector in $V$. Let $s$ be a cross section of the diagram that restricts to $t$. Then, from (1) we have

\begin{align*}
s(S_2) &= \pi_1^{-1}(\{v\}) = \{(v, c) \in V \times \mathbb{R} \mid c \in \mathbb{R}\},\\
s(S_4) &= \mathrm{sub}^{-1}(\mathrm{mult}(s(S_2))) \cap \pi_2^{-1}(\{p\}) = \mathrm{sub}^{-1}(\{cv \in V \mid c \in \mathbb{R}\}) \cap \pi_2^{-1}(\{p\}) = \{(x, p) \in X \times X \mid \exists c \in \mathbb{R},\ x - p = cv\},\\
s(S_5) &= \{x \in X \mid \exists c \in \mathbb{R},\ x - p = cv\} = \{p + cv \mid c \in \mathbb{R}\}.
\end{align*}

Thus, the cross section is completely determined, and $s(S_5)$ consists of the points on the line that goes through $p$ in the direction of $v$.
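The line admits the same finite replay over a bounded range of scalars. The sketch below is ours: it composes the maps of diagram (6) directly on integer data.

```python
# Finite slice of the line diagram (6): s(S5) = {p + c*v}.
C = range(-3, 4)                               # a finite range of scalars c
p, v = (1, 1), (2, 1)                          # base point and direction

s2 = {(v, c) for c in C}                       # pi1^{-1}({v})
scaled = {(c * w[0], c * w[1]) for (w, c) in s2}   # mult(s(S2)) = {c*v}
s5 = {(p[0] + u[0], p[1] + u[1]) for u in scaled}  # points p + c*v
```

Here `s5` equals $\{(1 + 2c,\ 1 + c) \mid c \in C\}$, the finite slice of the line through $p$ with direction $v$.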

3.2 Union

As mentioned in 2.1, an ordinary arrowhead into $S_2$, as in

\[ S_1 \xrightarrow{\ \varphi\ } S_2 \xleftarrow{\ \psi\ } S_3, \]

denotes the case when (1) in Definition 3 is required, i.e., $s(S_2) = \varphi(s(S_1)) \cap \psi(s(S_3))$. Any cross section $s$ of the diagram satisfies this. To denote the other case, we use

\[ S_1 \xrightarrow{\ \varphi\ } S_2 \xleftarrow{\ \psi\ } S_3 \tag{7} \]

(drawn with round arrowheads in the figures) to indicate that $S_2 \in \mathcal{S}'$ and $s(S_2) = \varphi(s(S_1)) \cup \psi(s(S_3))$. Thus, for any set in $\mathcal{S}'$, incoming maps are depicted with the round arrowhead.

In the examples, we may use two kinds of incoming arrows as:

 S4\ar@.)[ru]θ S5\ar@.>[lu]η

It means that we take the unions first, and then the intersection. This is simply an abbreviation of

 S4\ar@.)[ru]θ S2 S5\ar@.>[l]η

If we allow the complement map in diagrams, we need only one of the conditions in Definition 3, because we can make unions from intersections or vice versa. Using $c = \mathrm{cmpl}$ through an auxiliary set $S_4$ into $S_2$ as in (8), we get

\begin{align*}
s(S_4) &= c\varphi(s(S_1)) \cap c\psi(s(S_3)),\\
s(S_2) &= c\,s(S_4) = \varphi(s(S_1)) \cup \psi(s(S_3)).
\end{align*}

Thus, (8) is equivalent to (7).
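This equivalence can be confirmed by brute force over a small ground set. The following Python check is ours: for every pair of subsets, complementing twice turns the intersection of rule (1) into the union of rule (2), as in (8).

```python
from itertools import combinations

U = frozenset(range(4))                        # a small ambient set
cmpl = lambda A: U - A                         # the complement map (4)

# All subsets of U.
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(U, r)]

# De Morgan: c(cA meet cB) = A join B, for every pair of subsets.
for A in subsets:
    for B in subsets:
        assert cmpl(cmpl(A) & cmpl(B)) == A | B
```

This is the usual De Morgan law; diagram (8) is just its expression inside the cross-section formalism.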

3.3 Representing maps

A diagram with partial cross section can represent a map in the following sense:

Definition 7.

A map $F : 2^{X} \to 2^{Y}$ is said to be represented by a diagram $D$ with designated sets $S_{\mathrm{in}}$ and $S_{\mathrm{out}}$ in its family if $S_{\mathrm{in}} = X$ and $S_{\mathrm{out}} = Y$ as sets, and every cross section $s$ of $D$ with $s(S_{\mathrm{in}}) = A$ satisfies $s(S_{\mathrm{out}}) = F(A)$.

As an example, let us represent the map that maps a subset $A$ of a Euclidean space $X$ to the topological closure $\overline{A}$ of $A$ in $X$. Consider the diagram:

 R(6)\ar@.>[ru]π1−1 X(7)

Here, the map $\inf$ maps $B \subset \mathbb{R} \times X$ to the set of pairs $(d, x)$; thus, for each $x$ that appears in $B$, there is an element $(d, x)$ in $\inf(B)$, where $d$ is the infimum of the set of real numbers $a$ that appear as $(a, x)$ in $B$. Now, if $t(S_1) = A$ and $s$ is a cross section of the diagram restricting to $t$, we have

\begin{align*}
s(S_2) &= \{(x, y) \mid x \in A,\ y \in X\},\\
s(S_4) &= \{(d, y) \mid y \in X,\ \exists x \in A,\ \mathrm{len}(x - y) = d\},\\
s(S_7) &= \{y \in X \mid \inf_{x \in A} \mathrm{len}(x - y) = 0\} = \overline{A}.
\end{align*}

Thus the diagram represents the closure map $A \mapsto \overline{A}$.

The infimum map in turn can be represented by

 2(5)\ar@.>[u]π2−1 R×X(6)

with the maps as shown. The map $\mathrm{lt} : \mathbb{R} \times \mathbb{R} \to 2$ maps $(a, b)$ to $1$ if $a < b$ and to $0$ otherwise. Then, if $s$ is the cross section determined by a subset $B$ of $\mathbb{R} \times X$, we have

\begin{align*}
s(S_2) &= \{(a, b, x) \mid (a, x) \in B,\ b \in \mathbb{R}\},\\
s(S_3) &= \{(b, 1, x) \mid \exists (a, x) \in B,\ a < b\},
\end{align*}

Thus the diagram represents the infimum map.

Finally, the maximum map can be represented by

\[ (\mathbb{R} \times \mathbb{R} \times X)_{(3)} \xrightarrow{\ \pi_1 \times (\mathrm{lt} \circ \pi_{12}) \times \pi_3\ } (\mathbb{R} \times 2 \times X)_{(4)} \xleftarrow{\ \pi_2^{-1}\ } 2_{(5)}, \]
with a further arrow labeled $\mathrm{cmpl} \circ \pi_{13}$ out of $(\mathbb{R} \times 2 \times X)_{(4)}$,

with the maps as above. Then, if $s$ is the cross section determined by a subset $B$, we have

\begin{align*}
s(S_3) &= \{(a, b, x) \mid (a, x), (b, x) \in B\},\\
s(S_4) &= \{(a, 1, x) \mid (a, x) \in B,\ \exists (b, x) \in B,\ a < b\},
\end{align*}

Thus the diagram represents the maximum map.

3.4 Recursive Definition

Consider the following diagram:

 (9) X(3)\ar@.)[r]id

Here, the map sends $(x, w) \in X \times V$ to $x + w$; it is the parallel translation in the Euclidean space $X$.

Suppose that $\mathcal{T} = (S_1, S_3)$ and that its cross section $t$ is defined by

\[ t(S_1) = \{v\}, \qquad t(S_3) = \{p\}, \tag{10} \]

where $p$ is a point in the Euclidean plane $X$ and $v$ is a vector in $V$. Let $s$ be a cross section of the diagram that restricts to $t$. Then, from (1) we have

\begin{align}
s(S_2) &= \{(x, w) \mid x \in s(S_4),\ w \in s(S_1)\},\nonumber\\
s(S_4) &= \{p\} \cup \{x + w \mid (x, w) \in s(S_2)\} = \{p\} \cup \{x + w \mid x \in s(S_4),\ w \in s(S_1)\}. \tag{11}
\end{align}

From (11), clearly $p \in s(S_4)$, $p + v \in s(S_4)$, $p + 2v \in s(S_4)$, and so on; i.e., $s(S_4)$ contains equally spaced points beginning at $p$ and separated by $v$. However, this does not uniquely determine the cross section: for instance, we can take $s(S_4) = X$; or indeed any set that is the union of these points and a set invariant under the translation by $v$.

To make it unique, we can take the cross sections minimal on $S_4$ (Definition 5). That set contains only the cross section with $s(S_4) = \{p, p + v, p + 2v, \dots\}$.

Or we can use the following proposition. Let $\mathbb{N}$ denote the set of natural numbers.

Proposition 1.

Suppose that a set $S$ has a "grading" function $g : S \to \mathbb{N}$ and let $S_n$ denote $g^{-1}(n)$ for $n \in \mathbb{N}$. Consider a map $\eta : 2^{S} \to 2^{S}$ that satisfies, for $i \in \mathbb{N}$,

\[ \eta(S_i) \subset S_{i+1}, \qquad \eta(S) = \bigcup_{n=0}^{\infty} \eta(S_n). \]

If $S$ can be written $S = S_0 \cup \eta(S)$, then

\[ S = \bigcup_{n=0}^{\infty} \eta^{n}(S_0). \]
Proof.

Since $\eta(S_n) \subset S_{n+1}$, we have $\eta(S_m) \cap S_{n+1} = \emptyset$ for $m$ with $m \neq n$, and $S_0 \cap S_{n+1} = \emptyset$. Thus $S_{n+1} = \eta(S_n)$ follows from

\[ S = S_0 \cup \eta(S) = S_0 \cup \bigcup_{n=0}^{\infty} \eta(S_n). \]

Since $S_0 = \eta^{0}(S_0)$, it follows by induction that $S_n = \eta^{n}(S_0)$. Therefore, $\bigcup_{n=0}^{\infty} \eta^{n}(S_0) = \bigcup_{n=0}^{\infty} S_n$. The proposition follows from

\[ S = \bigcup_{n=0}^{\infty} S_n. \]

To use Proposition 1, we modify (9) as:

 (12) X×N(3)\ar@.)[r]id X(5)

and define $\varphi$ by $\varphi(((x, k), w)) = (x + w,\, k + 1)$, as well as modifying (10) to $t(S_3) = \{(p, 0)\}$. Then (11) becomes

\begin{align}
s(S_4) &= s(S_3) \cup \varphi(s(S_2)) = \{(p, 0)\} \cup \{(x + w,\, k + 1) \mid (x, k) \in s(S_4),\ w \in s(S_1)\}. \tag{13}
\end{align}

We define $g$ by $g((x, k)) = k$ and $\eta$ by

\[ \eta(A) = \{(x + w,\, k + 1) \mid (x, k) \in A,\ w \in s(S_1)\}. \]

Then $g$ and $\eta$ clearly satisfy the condition of Proposition 1. Thus it follows from (13) and the proposition that

\begin{align*}
s(S_4) &= \bigcup_{n=0}^{\infty} \eta^{n}(\{(p, 0)\}) = \{(p, 0), (p + v, 1), (p + 2v, 2), (p + 3v, 3), \cdots\}.
\end{align*}

Thus the set of cross sections restricting to $t$ contains only this cross section with $s(S_4) = \{(p + nv,\, n) \mid n \in \mathbb{N}\}$, and the set of equally spaced points is represented by the data.
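The least solution of (13) can be generated by iterating $\eta$ from the seed, as Proposition 1 prescribes. A one-dimensional Python sketch of ours (integer base point and step in place of $p$ and $v$):

```python
# Iterate eta from the seed {(p, 0)}; the union over all iterates is
# the graded cross section s(S4) = {(p + n*v, n) | n in N}.
p, v = 2, 3                                    # base point and translation (1-D)

def eta(A):
    """The map of (13): each pair moves by v and its grading goes up by 1."""
    return {(x + v, k + 1) for (x, k) in A}

s4, layer = {(p, 0)}, {(p, 0)}
for _ in range(5):                             # accumulate the first gradings
    layer = eta(layer)
    s4 |= layer
```

Each application of `eta` raises the grading by one, so the $n$-th iterate contributes exactly the pair $(p + nv,\, n)$; the finite union above is the corresponding initial segment of $s(S_4)$.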

If we set , then

 s(S4)={(p,0)