Topological Grammars for Data Approximation

by Alexander N. Gorban, et al.
Musée Curie

A method of topological grammars is proposed for multidimensional data approximation. For data with complex topology we define a principal cubic complex of low dimension and given complexity that gives the best approximation for the dataset. This complex is a generalization of linear and non-linear principal manifolds and includes them as particular cases. The problem of optimal principal complex construction is transformed into a series of minimization problems for quadratic functionals. These quadratic functionals have a physically transparent interpretation in terms of elastic energy. For the energy computation, the whole complex is represented as a system of nodes and springs. Topologically, the principal complex is a product of one-dimensional continuums (represented by graphs), and the grammars describe how these continuums transform during the process of optimal complex construction. This factorization of the whole process into one-dimensional transformations using minimization of quadratic energy functionals allows us to construct efficient algorithms.





1 Introduction

In this paper, we discuss a classical problem: how to approximate a finite set of vectors in R^m (for relatively large m) by a finite subset of a regular low-dimensional object in R^m. In applications, this finite set is a dataset, and the problem arises in many areas: from data visualization to fluid dynamics.

The first hypothesis we have to check is whether the dataset is situated near a low-dimensional affine manifold (plane) in R^m. If we look for a point, straight line, plane, … that minimizes the average squared distance to the datapoints, we immediately arrive at Principal Component Analysis (PCA). PCA is one of the most seminal inventions in data analysis; it is now textbook material. Nonlinear generalization of PCA is a great challenge, and many attempts have been made to answer it. Two of them are especially important for our consideration: Kohonen’s Self-Organizing Maps (SOM) and principal manifolds.
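As a quick illustration of the equivalence between least-squares line fitting and PCA, the following sketch (the synthetic data and all names are ours, not from the paper) recovers the best-fit direction as the first singular vector of the centered data:

```python
import numpy as np

# Sketch: the straight line minimizing mean squared distance to the data
# passes through the centroid along the first principal component.
rng = np.random.default_rng(0)
# synthetic 2-D data stretched along the direction (1, 1)
t = rng.normal(size=200)
data = np.outer(t, [1.0, 1.0]) + 0.1 * rng.normal(size=(200, 2))

center = data.mean(axis=0)
# SVD of the centered data: rows of vt are the principal directions
_, _, vt = np.linalg.svd(data - center)
direction = vt[0]  # first principal component (unit vector)
```

For this data cloud, `direction` comes out (up to sign) close to (1, 1)/√2, the axis along which the points were generated.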

With the SOM algorithm [1] we take a finite metric space Y with metric ρ and try to map it into R^m with (a) the best preservation of the initial structure in the image of Y and (b) the best approximation of the dataset D. The SOM algorithm has several setup variables to regulate the compromise between these goals. We start from some initial approximation of the map, φ₁: Y → R^m. On each (i-th) step of the algorithm we have a datapoint x ∈ D and a current approximation φ_i. For these x and φ_i we define an “owner” of x in Y: y_x = argmin_{y∈Y} ‖x − φ_i(y)‖. The next approximation, φ_{i+1}, is

φ_{i+1}(y) = φ_i(y) + h_i w(ρ(y, y_x)) (x − φ_i(y)).   (1)

Here h_i is a step size, and w is a monotonically decreasing cutting function. There are many ways to combine steps (1) into the whole algorithm. The idea of SOM is very flexible and seminal, and has plenty of applications and generalizations, but, strictly speaking, we do not know what we are looking for: we have the algorithm, but no independent definition: the SOM is the result of the algorithm’s work. The attempts to define SOM as the solution of a minimization problem for some energy functional were not very successful [3].
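The update step above can be sketched as follows; the function name, the chain metric, and the exponential cutting function are illustrative choices, not part of the original specification:

```python
import numpy as np

def som_step(phi, x, grid_dist, h, w):
    """One SOM update (rule (1)): move every node image toward the
    datapoint x, weighted by a cutting function of the grid distance
    to the 'owner' (best-matching) node.  All names are illustrative."""
    owner = np.argmin(np.linalg.norm(phi - x, axis=1))  # y_x
    weights = w(grid_dist[owner])                        # w(rho(y, y_x))
    return phi + h * weights[:, None] * (x - phi)

# tiny example: 5 nodes of a 1-D chain, all mapped to the origin of R^2
phi = np.zeros((5, 2))
nodes = np.arange(5)
grid_dist = np.abs(nodes[:, None] - nodes[None, :])  # chain metric rho
w = lambda d: np.exp(-d)        # monotonically decreasing cutting function
phi_next = som_step(phi, np.array([1.0, 0.0]), grid_dist, h=0.5, w=w)
```

The owner node moves half-way toward the datapoint, and nodes farther along the chain move progressively less, which is what preserves the structure of Y in the image.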

For a known probability distribution, principal manifolds were introduced as lines or surfaces passing through “the middle” of the data distribution [2]. This intuitive vision was transformed into the mathematical notion of self-consistency: every point x of the principal manifold M is the conditional expectation of all points z that are projected into x. Neither the manifold nor the projection needs to be linear: π is just a differentiable projection of the data space (usually R^m or a domain in R^m) onto the manifold M, with the self-consistency requirement for conditional expectations: x = E(z | π(z) = x). For a finite dataset D, only one or zero datapoints are typically projected into any given point of the principal manifold. In order to avoid overfitting, we have to introduce smoothers, which become an essential part of the principal manifold construction algorithms.

SOMs give the most popular approximations for principal manifolds: we can take for Y a fragment of a regular k-dimensional grid and consider the resulting SOM as an approximation to the k-dimensional principal manifold (see, for example, [4, 5]). Several original algorithms for the construction of principal curves [6] and surfaces for finite datasets were developed during the last decade, as well as many applications of this idea. In 1996, in a discussion about SOM at the 5th Russian National Seminar in Neuroinformatics, a method of multidimensional data approximation based on elastic energy minimization was proposed (see [7, 8, 9] and the bibliography there). This method is based on the analogy between the principal manifold and an elastic membrane (and plate). Following the metaphor of elasticity, we introduce two quadratic smoothness penalty terms. This allows one to apply standard minimization of quadratic functionals (i.e., solving a system of linear algebraic equations with a sparse matrix).

2 Graph grammars and principal graphs

Let G be a simple undirected graph with set of vertices Y and set of edges E. For k ≥ 2, a k-star in G is a subgraph with k + 1 vertices y₀, y₁, …, y_k ∈ Y and k edges {(y₀, y_i) | i = 1, …, k}. Suppose that for each k ≥ 2, a family S_k of k-stars in G has been selected. We call a graph G with selected families of k-stars S_k an elastic graph if, for all E^(i) ∈ E and S_k^(j) ∈ S_k, the corresponding elasticity moduli λ_i > 0 and μ_kj > 0 are defined. Let E^(i)(0), E^(i)(1) denote the vertices of an edge E^(i), and S_k^(j)(0), …, S_k^(j)(k) the vertices of a k-star S_k^(j) (among them, S_k^(j)(0) is the central vertex). For any map φ: Y → R^m the energy of the graph is defined as

U^φ(G) := Σ_{E^(i)} λ_i ‖φ(E^(i)(0)) − φ(E^(i)(1))‖² + Σ_{S_k^(j)} μ_kj ‖Σ_{i=1}^{k} φ(S_k^(j)(i)) − k φ(S_k^(j)(0))‖².

Very recently, a simple but important fact was noticed [10]: every system of elastic finite elements can be represented by a system of springs, if we allow some springs to have negative elasticity coefficients. The energy of a k-star S_k in R^m with y₀ in the center and endpoints y₁, …, y_k is u = μ ‖Σ_{i=1}^{k} y_i − k y₀‖², or, in the spring representation, u = μ k Σ_{i=1}^{k} ‖y_i − y₀‖² − μ Σ_{i>j} ‖y_i − y_j‖². Here we have k positive springs with coefficients kμ and k(k − 1)/2 negative springs with coefficients −μ.
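A minimal numeric sketch of the graph energy and of the spring representation of the star energy, assuming an ad-hoc encoding of edges and stars (all function and variable names are ours):

```python
import numpy as np
from itertools import combinations

def graph_energy(phi, edges, lambdas, stars, mus):
    """Elastic energy: sum of lambda_i * |phi(a)-phi(b)|^2 over edges plus
    mu_j * |sum_i phi(leaf_i) - k*phi(center)|^2 over k-stars.
    phi: (n_nodes, dim) array of node images; structures are illustrative."""
    u = sum(l * np.sum((phi[a] - phi[b]) ** 2)
            for (a, b), l in zip(edges, lambdas))
    for (center, leaves), mu in zip(stars, mus):
        u += mu * np.sum((phi[list(leaves)].sum(axis=0)
                          - len(leaves) * phi[center]) ** 2)
    return u

def star_energy_springs(phi, center, leaves, mu):
    """The same star energy in the spring representation: k positive springs
    (coefficient k*mu) from center to leaves, and a negative spring (-mu)
    between every pair of leaves."""
    k = len(leaves)
    u = k * mu * sum(np.sum((phi[i] - phi[center]) ** 2) for i in leaves)
    u -= mu * sum(np.sum((phi[i] - phi[j]) ** 2)
                  for i, j in combinations(leaves, 2))
    return u

# numeric check of the identity on a random 3-star in R^2
rng = np.random.default_rng(1)
phi = rng.normal(size=(4, 2))   # node 0 is the center, 1..3 are leaves
direct = 0.7 * np.sum((phi[[1, 2, 3]].sum(axis=0) - 3 * phi[0]) ** 2)
springs = star_energy_springs(phi, 0, [1, 2, 3], 0.7)
total = graph_energy(phi, [(0, 1)], [1.0], [(0, [1, 2, 3])], [0.7])
```

On random positions the direct and spring forms of the star energy agree to machine precision, which is the point of the spring decomposition.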

For a given map φ: Y → R^m we divide the dataset D into subsets K^y, y ∈ Y. The set K^y contains the data points for which the node φ(y) is the closest one in φ(Y). The energy of approximation is

U^φ_A(G, D) := Σ_{y∈Y} Σ_{x∈K^y} w(x) ‖x − φ(y)‖²,

where w(x) ≥ 0 are the point weights.

A simple algorithm for minimization of the energy U^φ = U^φ(G) + U^φ_A(G, D) is the splitting algorithm, in the spirit of classical k-means clustering: for a given system of sets {K^y} we minimize U^φ (this is the minimization of a positive quadratic functional), then for a given φ we find a new partition {K^y}, and so on; we stop when nothing changes. This algorithm gives a local minimum, and the global minimization problem arises. There are many methods for improving the situation, but none guarantees global minimization.
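A sketch of the splitting algorithm, simplified to edge springs only with unit point weights (the star terms and weights are omitted for brevity; all names are illustrative):

```python
import numpy as np

def fit_elastic_graph(data, phi, edges, lam, n_iter=20):
    """Splitting algorithm (edge springs only, unit point weights):
    alternate (a) assigning each datapoint to its closest node image and
    (b) minimizing the quadratic energy exactly via a linear system."""
    n = len(phi)
    for _ in range(n_iter):
        # (a) partition D into K^y: nearest node image for each point
        owner = np.argmin(
            np.linalg.norm(data[:, None, :] - phi[None, :, :], axis=2),
            axis=1)
        # (b) minimize sum_e lam*|phi_a - phi_b|^2 + sum_x |x - phi(owner_x)|^2;
        # setting the gradient to zero gives (mass + graph Laplacian) phi = b
        A = np.zeros((n, n))
        b = np.zeros_like(phi)
        for y in range(n):
            mask = owner == y
            A[y, y] += mask.sum()
            b[y] += data[mask].sum(axis=0)
        for u, v in edges:
            A[u, u] += lam
            A[v, v] += lam
            A[u, v] -= lam
            A[v, u] -= lam
        phi = np.linalg.solve(A, b)
    return phi

# toy run: a 3-node chain fitted to noisy points along the diagonal y = x
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1, 100))
data = np.column_stack([t, t]) + 0.01 * rng.normal(size=(100, 2))
phi0 = np.array([[0.0, 0.5], [0.5, 0.5], [1.0, 0.5]])
phi = fit_elastic_graph(data, phi0, edges=[(0, 1), (1, 2)], lam=0.1)
```

Each half-step decreases the energy, so the alternation converges, but only to a local minimum, as the text notes.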

Figure 1: Applying a simple “add a node to a node or bisect an edge” grammar to construct principal elastic trees (one node is added per iteration). Upper row: an example of a two-dimensional branching distribution of points. Lower row: the classical benchmark, the “iris” four-dimensional dataset (point shapes distinguish the three classes of points); the dataset and the principal tree are presented in projection onto the plane of the first two principal components.

The next problem is the elastic graph construction. Here we should find a compromise between simplicity of the graph topology, simplicity of the geometrical form for a given topology, and accuracy of approximation. Geometrical complexity is measured by the graph energy U^φ(G), and the error of approximation is measured by the energy of approximation U^φ_A(G, D). Both are included in the energy U^φ. Topological complexity is represented by means of elementary transformations: it is the length of the energetically optimal chain of elementary transformations from a given set, applied to an initial simple graph.

Graph grammars [11, 12] provide a well-developed formalism for the description of elementary transformations. An elastic graph grammar is presented as a set of production (or substitution) rules. Each rule has the form A → B, where A and B are elastic graphs. When this rule is applied to an elastic graph, a copy of A is removed from the graph together with all its incident edges and is replaced with a copy of B, with edges that connect B to the rest of the graph. For a full description of this language we need the notion of a labeled graph: labels are necessary to provide the proper connection between B and the graph.

A link in the energetically optimal transformation chain is constructed by finding the transformation application that gives the largest energy descent (after an optimization step), then the next link, and so on, until we achieve the desired accuracy of approximation or the limit on the number of transformations (other termination criteria are also possible). The selection of an energetically optimal application of transformations by trial optimization steps is time-consuming. There exist alternative approaches: the preselection of applications for a production rule A → B can be done through comparison of the energies of the copies of A, together with their incident edges and stars, in the transformed graph.

As a simple (but already rather powerful) example we use a system of two transformations: “add a node to a node” and “bisect an edge.” These transformations act on a class of primitive elastic graphs: every non-terminal node with k incident edges is the center of an elastic k-star, and these stars form the complete family of k-stars of the graph. For a primitive elastic graph, the number of stars is equal to the number of non-terminal nodes: the graph topology prescribes the elastic structure.

The transformation “add a node” can be applied to any vertex y of G: add a new node z and a new edge (y, z). The transformation “bisect an edge” is applicable to any pair of graph vertices y, y′ connected by an edge (y, y′): delete the edge (y, y′), then add a vertex z and two edges, (y, z) and (z, y′). The transformation of the elastic structure (the change in the star list) is induced by the change of topology, because the elastic graph is primitive. This two-transformation grammar with energy minimization builds principal trees (and principal curves, as a particular case) for datasets. A couple of examples are presented in Fig. 1. For applications, it is useful to associate one-dimensional continuums with these principal trees. Such a continuum consists of the node images and of the pieces of straight lines that connect images of linked nodes.
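The two production rules can be sketched on a bare adjacency-dict representation (a hypothetical minimal encoding; the full algorithm also maintains elasticity moduli and node positions):

```python
def add_node(adj, y):
    """'Add a node to a node': attach a fresh node z to vertex y."""
    z = max(adj) + 1
    adj[z] = {y}
    adj[y].add(z)
    return z

def bisect_edge(adj, y, yp):
    """'Bisect an edge': delete (y, y'), insert z with edges (y, z), (z, y')."""
    adj[y].remove(yp)
    adj[yp].remove(y)
    z = max(adj) + 1
    adj[z] = {y, yp}
    adj[y].add(z)
    adj[yp].add(z)
    return z

def stars(adj):
    """For a primitive elastic graph the stars are read off the topology:
    every node of degree >= 2 is the center of a k-star on its neighbors."""
    return {y: sorted(nb) for y, nb in adj.items() if len(nb) >= 2}

adj = {0: {1}, 1: {0}}      # a single edge
add_node(adj, 1)             # chain 0-1-2
z = bisect_edge(adj, 0, 1)   # chain 0-3-1-2
```

After the two applications the star list follows automatically from the topology, illustrating why no separate bookkeeping of the elastic structure is needed for primitive graphs.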

3 Factorization and transformation of factors

Figure 2: The Cartesian product of graphs.

If we approximate multidimensional data by a k-dimensional object, the number of points (or, more generally, elements) in this object grows exponentially with k. This is an obstacle for grammar-based algorithms even for modest k, because for the analysis of the applications of a rule A → B we should investigate all isomorphic copies of A in G. The natural way to avoid this obstacle is principal object factorization. Let us represent an elastic graph as a Cartesian product of graphs G₁ × … × G_r (Fig. 2). The Cartesian product of elastic graphs G₁, …, G_r is an elastic graph with the vertex set V₁ × … × V_r. Let 1 ≤ i ≤ r and v_j ∈ V_j (j ≠ i). For this set of vertices, {v_j}_{j≠i}, a copy of G_i in the product is defined with vertices (v₁, …, v_{i−1}, v, v_{i+1}, …, v_r) (v ∈ V_i), edges ((v₁, …, v_{i−1}, v, v_{i+1}, …, v_r), (v₁, …, v_{i−1}, v′, v_{i+1}, …, v_r)) ((v, v′) ∈ E_i) and, similarly, k-stars of the form (v₁, …, v_{i−1}, S_k, v_{i+1}, …, v_r), where S_k is a k-star in G_i. For any G_i there are ∏_{j≠i} |V_j| copies of G_i in the product. The sets of edges and k-stars for the Cartesian product are the unions of these sets over all copies of all factors. A map φ: V₁ × … × V_r → R^m maps all the copies of the factors into R^m too. The energy of the elastic graph product is the sum of the energies of all factor copies. It is, of course, a quadratic functional of φ.
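The construction of the product's vertex and edge sets can be sketched as follows (the k-stars are omitted for brevity; the `(vertices, edges)` encoding is our own):

```python
from itertools import product

def cartesian_product(graphs):
    """Cartesian product of graphs given as (vertices, edges) pairs.
    Product vertices are tuples; an edge changes exactly one coordinate
    along an edge of the corresponding factor, giving one copy of each
    factor per choice of the remaining coordinates."""
    vertex_sets = [g[0] for g in graphs]
    vertices = list(product(*vertex_sets))
    edges = []
    for i, (_, e_i) in enumerate(graphs):
        others = vertex_sets[:i] + vertex_sets[i + 1:]
        for rest in product(*others):        # one copy of G_i per choice
            for v, vp in e_i:
                a = rest[:i] + (v,) + rest[i:]
                b = rest[:i] + (vp,) + rest[i:]
                edges.append((a, b))
    return vertices, edges

# two chains with 3 and 2 nodes give a 3x2 grid:
# 6 vertices, 2*2 + 1*3 = 7 edges
g1 = ([0, 1, 2], [(0, 1), (1, 2)])
g2 = (["a", "b"], [("a", "b")])
V, E = cartesian_product([g1, g2])
```

Note how the edge count is a sum over factors (each multiplied by the number of its copies), not a product: this is what keeps the factorized representation tractable.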

The only difference between the construction of general elastic graphs and factorized graphs is in the application of transformations: for factorized graphs, we apply them to the factors. This approach significantly reduces the number of trials in the selection of the optimal application. The simple grammar with two rules, “add a node to a node or bisect an edge,” is also powerful here: it produces products of primitive elastic trees. For such a product, the elastic structure is defined by the topology of the factors.

4 Conclusion: adaptive dimension and principal cubic complexes

In the continuum representation, the factors are one-dimensional continuums; hence, a product of r factors is represented as an r-dimensional cubic complex [13] that is glued together from r-dimensional parallelepipeds (“cubes”). Thus, the factorized principal elastic graphs generate a new and, as we can estimate now, useful construction: a principal cubic complex. One of the obvious benefits of this construction is adaptive dimension: the grammar approach with energy optimization develops the necessary number of non-trivial factors, and no more. These complexes can approximate multidimensional datasets with complex, but still low-dimensional, topology. The topology of the complex is not prescribed, but adaptive. In that sense, these complexes are even more flexible than SOMs. The whole approach can be interpreted as intermediate between the absolutely flexible neural gas [14] and the significantly more restrictive elastic map [9]. It includes as simple limit cases the k-means clustering algorithm (low elasticity moduli) and classical PCA (high elasticity moduli μ_kj for the stars and λ_i for the edges).


  • [1] T. Kohonen, Self-organized formation of topologically correct feature maps, Biological Cybernetics 43 (1982), 59–69.
  • [2] T. Hastie, W. Stuetzle, Principal curves, Journal of the American Statistical Association 84(406) (1989), 502–516.
  • [3] E. Erwin, K. Obermayer, K. Schulten, Self-organizing maps: ordering, convergence properties and energy functions, Biological Cybernetics 67 (1992), 47–55.
  • [4] F. Mulier, V. Cherkassky, Self-organization as an iterative kernel smoothing process, Neural Computation 7 (1995), 1165–1177.
  • [5] H. Ritter, T. Martinetz, K. Schulten, Neural Computation and Self-Organizing Maps: An Introduction. Addison-Wesley Reading, Massachusetts, 1992.
  • [6] B. Kégl, A. Krzyzak, Piecewise linear skeletonization using principal curves, IEEE Transactions on Pattern Analysis and Machine Intelligence 24(1) (2002), 59–74.
  • [7] A.N. Gorban, A.A. Rossiev, Neural network iterative method of principal curves for data with gaps, Journal of Computer and System Sciences International 38(5) (1999), 825–831.
  • [8] A. Zinovyev, Visualization of Multidimensional Data, Krasnoyarsk State University Press Publ., 2000, pp. 1–168.
  • [9] A. Gorban, A. Zinovyev, Elastic Principal Graphs and Manifolds and their Practical Applications, Computing 75 (2005), 359–379.
  • [10] A. Gusev, Finite element mapping for spring network representations of the mechanics of solids, Phys. Rev. Lett. 93(2) (2004), 034302.
  • [11] M. Nagl, Formal languages of labelled graphs, Computing 16 (1976), 113–137.
  • [12] M. Löwe, Algebraic approach to single–pushout graph transformation, Theor. Comp. Sci. 109 (1993), 181–224.
  • [13] S. Matveev, M. Polyak, Cubic complexes and finite type invariants, In: Geometry & Topology Monographs, Volume 4: Invariants of knots and 3-manifolds, Kyoto, 2001, 215–233.
  • [14] T.M. Martinetz, S.G. Berkovich, K.J. Schulten, Neural-gas network for vector quantization and its application to time-series prediction, IEEE Transactions on Neural Networks 4(4) (1993), 558–569.