On the Reconstruction of Static and Dynamic Discrete Structures

We study inverse problems of reconstructing static and dynamic discrete structures from tomographic data (with a special focus on the `classical' task of reconstructing finite point sets in R^d). The main emphasis is on recent mathematical developments and new applications, which emerge in scientific areas such as physics and materials science, but also in inner mathematical fields such as number theory, optimization, and imaging. Along with a concise introduction to the field of discrete tomography, we give pointers to related aspects of computerized tomography in order to contrast the worlds of continuous and discrete inverse problems.

1. Introduction

We begin with an informal definition of the general field of discrete tomography. As a comprehensive treatise of the general field would, however, go far beyond the scope of the present paper, and as we want to limit the overlap with surveys in the literature as much as possible, we will focus on the most fundamental case of reconstructing finite point sets in $\mathbb{R}^d$ from some of their discrete X-rays. Our special emphasis will be on (a subjective selection of) recent developments and applications. We conclude the introduction with a few comments on the structure of the paper and its bibliography.

1.1. What is discrete tomography?

Discrete tomography deals with the problem of retrieving knowledge about an otherwise unknown discrete object from information about its interactions with certain query sets. Of course, this is not a formal definition and, as a matter of fact, the occurring terms leave room for different interpretations. For instance, the discrete object can be any set in some $\mathbb{R}^d$ that allows a finite encoding, e.g., a finite point set, a polytope or even a semialgebraic set. Also functions with finite support are included. Knowledge may mean full reconstruction, the detection of certain properties and measures of the object or just a ‘yes/no’ decision whether the object equals (or is close to) a given blueprint. The query sets may be windows, affine spaces or certain families of more general manifolds, and interaction may simply mean intersection but could also refer to a very different probing procedure.

While results which, in retrospect, belong to this area go back a long time, the name discrete tomography and the establishment of the so-named field are of more recent origin. In the past decades the focus has been on the issues of uniqueness, computational complexity, and algorithms, first under the theoretical assumption that exact X-ray data were available. Later, stability and instability questions were pursued, and the effect of noise was studied.

Discrete tomography has important applications in physics, materials science, and many other fields. It has, however, also been applied in various other contexts, including scheduling [101], data security [112], image processing [127], data compression [126], combinatorics [53, 81, 92, 95], and graph theory [89, 90]. Closely connected are also several recreational games such as nonograms [75], path puzzles [78], sudokus [87], and color pic-a-pix [70].

1.2. Scope of the present paper

In the following we will concentrate mainly on the ‘classical’ task of reconstructing a finite point set in $\mathbb{R}^d$ from the cardinalities of its intersections with the lines parallel to a (small) finite number of directions. Already in this restricted form, discrete tomography displays important features also known from the continuous world. In particular, discrete tomography is ill-posed in the sense of Hadamard [29]: the data may be inconsistent, the solution need not be unique (Thm. 1), and small changes in the data may result in dramatic changes of the solutions (Thm. 12).

As will become clear, discrete tomography is not simply the discretization of continuous tomography. It derives its special characteristics from the facts that, on the one hand, there are only data in very few directions available, but on the other hand, the classes of objects that have to be reconstructed are rather restricted. Therefore discrete tomography is based on methods from combinatorics, discrete optimization, algebra, number theory, and other more discrete subfields of mathematics and computer science.

There already exist books and articles that give detailed accounts of various aspects of discrete tomography and its applications. We single out [21, 25, 26, 28, 36, 37, 52]. The present article differs from these surveys in various ways: the mathematical focus will be on recent developments (which have not been covered in previous surveys). Further, new applications will play a significant role, i.e., applications to other scientific areas like physics and materials science, but also to inner mathematical fields such as number theory, optimization, and imaging. We will, moreover, include pointers to certain related aspects of computerized tomography in order to contrast the worlds of continuous and discrete inverse problems. As general sources for the continuous case and inverse problems, see [33, 42, 44, 45] and [7, 31, 39, 43, 58], respectively.

As a service to the reader and with a view towards a more complete picture we will restate some aspects which are basic for the present article but have been covered before. In order to limit the overlap with other surveys we will, however, neither elaborate on the tomographic reconstruction of quasicrystals (see [25]) or polyominoes (see [18]), nor on the polyatomic case (see [22, 94]) or point X-rays (see [91]). Also, we will not study general $k$-dimensional X-rays (see [25, 103]) but concentrate on the case $k = 1$. This means that our exposition will be based on the X-ray transform rather than on the Radon transform (which corresponds to the case $k = d - 1$).

1.3. Organization of the present paper and its bibliography

After introducing the basic notation in Sect. 2 we will briefly survey well-known structural results related to the ill-posedness of the problems (Sect. 3) and their computational complexity (Sect. 4). Turning to recent results and applications, Sect. 5 will illustrate some quite unexpected complexity jumps in (a related basic model of) superresolution. Particular emphasis will then be placed on new developments in dynamic discrete tomography, which involves the movement of points over time that are only accessible through very few of their X-ray images. As Sect. 6 will show, aspects of discrete tomography and particle tracking interact deeply. Another more recent issue, which comes up in materials science, is that of multi-scale tomographic imaging. Sect. 7 will indicate how different aspects of the reconstruction of polycrystalline materials based on tomographic data lead to very different techniques involving methods from the geometry of numbers, combinatorial optimization and computational geometry. Sect. 8 deals with some inner mathematical connections between discrete tomography and the Prouhet-Tarry-Escott problem from number theory, and Sect. 9 concludes with some final remarks.

Let us point out that (with the exception of some new interpretations and simple observations) the results stated here have all been published in original research papers (which are, of course, cited appropriately). Moreover, since we want to use the standard notation and, in particular, a standard framework for expressing the results, some overlap with the above-mentioned surveys is unavoidable.

Finally, let us close the introduction with a comment on the bibliography. Due to the character of the present paper we have included references of different kinds. Of course, we list all original work quoted in the main body of the paper. However, we felt that for the generally interested reader it would be worthwhile to add sources for general reading. On the other hand, in terms of the included applications we focused mainly on outlining those aspects to which discrete tomography can potentially contribute. While this is in line with the scope of the present paper, readers interested in these fields of applications may appreciate pointers to sources for additional reading. Hence we organized the bibliography in six different parts, namely general reading, papers in tomography, and further reading on particle tracking, tomographic grain mapping, macroscopic grain mapping, and the Prouhet-Tarry-Escott problem, respectively.

2. Basic notation

As pointed out before we will focus on the ‘classical’ inverse problem of reconstructing a finite point set in $\mathbb{Z}^d$ or $\mathbb{R}^d$ from the cardinalities of its intersections with the lines parallel to a finite number of directions. There are, however, certain aspects which involve weights on the points of $\mathbb{Z}^d$. Hence we will introduce the basic notions for appropriate generalizations of characteristic functions of finite point sets, partly following [25].

As usual, let $\mathbb{N}_0$, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{Q}$, and $\mathbb{R}$ denote the sets of non-negative integers, natural numbers, integers, rationals, and reals, respectively. Further, for $m \in \mathbb{N}$ we will often use the notation $[m] := \{1, \ldots, m\}$.

In the following, let $d, m \in \mathbb{N}$; $d$ denotes the dimension of the space $\mathbb{R}^d$, and $m$ is the number of directions in which images are taken. To exclude trivial cases, we will usually assume that $d, m \ge 2$.

In order to describe the objects of interest, we fix nonempty sets $D \subseteq \mathbb{R}^d$ and $C \subseteq \mathbb{R}$ and consider functions $f : D \to C$ with finite support $\operatorname{supp}(f) := \{x \in D : f(x) \neq 0\}$. In our context, the most relevant pairs of a domain and a codomain are those where $D = \mathbb{Z}^d$ or $D = \mathbb{R}^d$ and $C = \{0, 1\}$. Other standard codomains are $\mathbb{N}_0$ and $\mathbb{Z}$, and also their relaxations $[0, 1]$, $[0, \infty)$, and $\mathbb{R}$.

For any pair $(D, C)$, let $\mathcal{F}(D, C)$ denote the class of all functions $f : D \to C$ with finite support. Of course, for $C = \{0, 1\}$, such a function can be viewed as the indicator or characteristic function of a finite set and can therefore be identified with $F := \operatorname{supp}(f)$. We will write $\mathcal{F}(D)$ for $\mathcal{F}(D, \{0, 1\})$ and identify it with the set of all finite subsets of $D$. In particular, the case $D = \mathbb{Z}^d$ encodes the classical finite lattice sets. Since this case is particularly important we will often abbreviate $\mathcal{F}(\mathbb{Z}^d)$ by $\mathcal{F}^d$.

Further, let $\mathcal{L}^d$ denote the set of all $1$-dimensional subspaces of $\mathbb{R}^d$, while $\mathcal{L}^d_{\mathbb{Z}}$ is the set of $1$-dimensional lattice lines, i.e., lines through the origin spanned by an integer vector. For $S \in \mathcal{L}^d$, we use the notation $\mathcal{A}(S)$ for the set of all affine lines in $\mathbb{R}^d$ that are parallel to $S$. The situation of $D = \mathbb{Z}^d$ and lines in $\mathcal{L}^d_{\mathbb{Z}}$ will be referred to as the lattice case.

Now, let $f \in \mathcal{F}(D, C)$ and $S \in \mathcal{L}^d$. The discrete X-ray of $f$ parallel to $S$ (or, in a slight abuse of language, in the direction $S$) is the function $X_S f : \mathcal{A}(S) \to \mathbb{R}$ defined by

$$X_S f(T) := \sum_{x \in T \cap D} f(x) \qquad (T \in \mathcal{A}(S)).$$

Since $f$ has finite support, all sums are finite. In the case of $C = \{0, 1\}$, where $f$ can be identified with $F := \operatorname{supp}(f)$, we will often write $X_S F$. See Fig. 1 for an illustration.
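For concreteness, the following minimal Python sketch (ours, not from the paper) computes discrete X-rays for $d = 2$: a primitive direction $v = (a, b)$ induces the invariant $bx - ay$, which is constant exactly along each affine line parallel to $v$, so an X-ray is just a tally of this invariant over the point set.

```python
from collections import Counter

def xray(points, v):
    """Discrete X-ray of a finite lattice set along direction v = (a, b).

    Each affine line parallel to v is indexed by the invariant b*x - a*y.
    Returns a Counter mapping line index -> number of points on that line.
    """
    a, b = v
    return Counter(b * x - a * y for (x, y) in points)

def xray_diff(F1, F2, v):
    """l1-difference of the X-rays of two finite lattice sets along v."""
    d1, d2 = xray(F1, v), xray(F2, v)
    return sum(abs(d1[k] - d2[k]) for k in set(d1) | set(d2))

# Toy example (the set of Fig. 1 is not reproduced here).
F = {(0, 0), (1, 0), (1, 1), (2, 2)}
print(xray(F, (1, 0)))   # lines parallel to (1, 0): 'row sums'
print(xray(F, (0, 1)))   # lines parallel to (0, 1): 'column sums'
print(xray(F, (1, 1)))   # diagonal sums
```

The same line indexing is reused in the sketches below whenever lattice lines have to be enumerated.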

Figure 1. A finite lattice set (black dots) and its 1-dimensional X-rays in two directions.

The mapping defined on $\mathcal{L}^d$ by $S \mapsto X_S f$ is called the discrete X-ray transform of $f$. (In typical applications only very few values of this transform are available.)

Note that it is straightforward to extend this notation to $k$-dimensional X-rays. Accordingly, for $k = d - 1$, we obtain the discrete Radon transform of $f$, whose measurements come from hyperplane X-rays. We will, however, focus on the X-rays defined above, which provide $1$-dimensional measurements. The basic task of discrete tomography is then to reconstruct an otherwise unknown function $f$ from its X-rays with respect to a finite number $m$ of given lines $S_1, \ldots, S_m$.

The X-ray information is encoded by means of data functions. In fact, the lines $T \in \mathcal{A}(S)$ can be parametrized by vectors $t$ with $T = S + t$. Hence, one may regard $X_S f$ as a function of the parameter $t$. For algorithmic purposes, it is often preferable to use other representations and encode $X_S f$ as a finite set of pairs of lines $T$ and values $X_S f(T)$ with $X_S f(T) \neq 0$.

3. Ill-posedness

We begin with some results that deal with the basic issues of uniqueness and stability.

3.1. Uniqueness and non-uniqueness

Given a finite subset $\mathcal{S}$ of $\mathcal{L}^d$ and a class $\mathcal{E} \subseteq \mathcal{F}(D)$. We say that two different sets $F_1, F_2 \in \mathcal{E}$ are tomographically equivalent with respect to $\mathcal{S}$ if $X_S F_1 = X_S F_2$ for all $S \in \mathcal{S}$. The pair $(F_1, F_2)$ will then also be referred to as a switching component. Further, a set $F \in \mathcal{E}$ is uniquely determined within $\mathcal{E}$ by its X-rays parallel to the lines in $\mathcal{S}$ if there does not exist any other set in $\mathcal{E}$ that is tomographically equivalent to $F$ with respect to $\mathcal{S}$. If the context is clear we will simply say that $F$ is uniquely determined.

The following classical non-uniqueness result, usually attributed to [120], has been rediscovered several times.

Theorem 1.

For any finite subset $\mathcal{S}$ of $\mathcal{L}^d$ there exist sets in $\mathcal{F}^d$ that cannot be determined by X-rays parallel to the lines in $\mathcal{S}$.

Figure 2 gives an illustration of the typical construction process to obtain different lattice sets with equal X-rays.

Figure 2. Construction of different lattice sets (black and white points) with equal X-rays in (a) two, (b) three, and (c) four directions.
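The construction indicated in Fig. 2 can be sketched in a few lines of Python (a toy illustration with our own naming, not code from the paper): starting from a single point, each new direction $v$ adds the current ‘white’ set shifted by $v$ to the ‘black’ side and vice versa; this preserves equality of the X-rays in all directions processed so far, and multiplicities are tracked so that cancellations (which lead to small switching components for special direction sets) are handled correctly.

```python
from collections import Counter

def switching_pair(directions):
    """Two weighted lattice sets with equal X-rays along all given directions.

    Classical recursion: start with F_plus = {origin}, F_minus = {}; for each
    direction v, replace (F_plus, F_minus) by
    (F_plus + shift(F_minus, v), F_minus + shift(F_plus, v)).
    """
    plus, minus = Counter({(0, 0): 1}), Counter()
    for (a, b) in directions:
        shifted_minus = Counter({(x + a, y + b): c for (x, y), c in minus.items()})
        shifted_plus = Counter({(x + a, y + b): c for (x, y), c in plus.items()})
        plus, minus = plus + shifted_minus, minus + shifted_plus
    common = plus & minus          # cancel common mass
    return plus - common, minus - common

def xray(weighted, v):
    a, b = v
    out = Counter()
    for (x, y), c in weighted.items():
        out[b * x - a * y] += c
    return out

dirs = [(1, 0), (0, 1), (1, 1)]
P, M = switching_pair(dirs)
assert all(xray(P, v) == xray(M, v) for v in dirs)
print(sorted(P), sorted(M))   # two disjoint 3-point sets with equal X-rays
```

For the three directions above this yields the classical six-point switching component; each further direction essentially doubles the size, which matches the exponential growth of the deterministic constructions mentioned below.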

Note that Thm. 1 is in accordance with similar results in continuous or geometric tomography. In fact, let $\mathcal{S} \subseteq \mathcal{L}^d$ be finite, let $K \subseteq \mathbb{R}^d$ be compact, and let $\phi$ be infinitely often differentiable with support $K$. Then there is a function $\psi$ with support in $K$, infinitely often differentiable, but otherwise arbitrary on a nonempty open subset of $K$, such that the continuous X-rays of $\phi$ and $\psi$ with respect to all lines in $\mathcal{S}$ coincide; for a proof, see [128]. Also, characteristic functions of compact sets, i.e., functions with compact support, are not determined by their (continuous) X-rays in any finite number of directions; see [21, Thm. 2.3.3].

While non-uniqueness is an undesirable feature for many applications it will play a positive role for applications in number theory later; see Sect. 8.

In contrast to Thm. 1, uniqueness of the reconstruction can sometimes be guaranteed when certain prior knowledge is available.

Theorem 2.

There exists a line $S \in \mathcal{L}^d_{\mathbb{Z}}$ such that every set $F \subseteq \{0, 1\}^d$ is uniquely determined by its one X-ray $X_S F$.

At first glance, this result may seem surprising. However, the used a priori information is that the set is contained in $\{0, 1\}^d$. Then, indeed, the X-ray for a suitable single line in $\mathcal{L}^d_{\mathbb{Z}}$ determines $F$ uniquely, simply because no translate of $S$ can contain more than one point of $\{0, 1\}^d$. As this argument shows, Thm. 2 can easily be extended to arbitrary finite grids.

While Thm. 2 does not seem to be of great practical use, it shows nonetheless that no matter how fine the lattice discretization might be, one X-ray suffices. In the limit, however, finitely many X-rays do in general not suffice. In this sense, discrete tomography does not behave like a discretization of continuous tomography. Note that Thm. 2 is quite different in nature from a result of [128] (see also [42, Thm. 3.148]) that for almost any finite-dimensional space of objects its elements can be distinguished by a single X-ray in almost any direction. In fact, the class of sets in Thm. 2 is not a finite-dimensional space and, moreover, the same line works for every set, i.e., $S$ does not depend on the set but is given beforehand.

The next result is due to Rényi [123] for $d = 2$, who attributes an algorithmic proof to Hajós. The generalization to arbitrary $d$ was given by Heppes [108].

Theorem 3 (Rényi [123]).

Let $\mathcal{S}$ be a finite subset of $\mathcal{L}^d$. Then every set $F \in \mathcal{F}(\mathbb{R}^d)$ with $|F| \le |\mathcal{S}| - 1$ is uniquely determined by its X-rays parallel to the lines in $\mathcal{S}$.

Since the two color classes of the two-coloring of the vertices of the regular $2m$-gon in the plane are tomographically equivalent with respect to the $m$ lines parallel to its edges, Thm. 3 is best possible. A strengthening for (mildly) restricted sets of directions is given in [77]. But even more: generic directions are much better.

Theorem 4 ([122]).

There exist constants $c_1$ and $c_2 > 1$ such that for all $m \ge c_1$ the following holds: For almost all sets $\mathcal{S}$ of $m$ directions, any $F \in \mathcal{F}^2$ with $|F| \le c_2^m$ is uniquely determined by its X-rays parallel to the lines of $\mathcal{S}$.

Let us point out that in continuous tomography it is well-known that a compactly supported, infinitely differentiable function which does not contain ‘details of size $\epsilon$’ or smaller can be recovered reliably from its X-rays on special sets of $m$ directions, provided that $m$ is sufficiently large relative to $1/\epsilon$. For a precise statement and proof, see [44, Thm. 2.4]. See also [41]. Such a result can be viewed, to some extent, as an analogue to Rényi’s theorem. (Note, however, that the difference to the original function is measured in an integral norm, and hence the difference may get arbitrarily small without ever reaching $0$. For a characterization of the null-space, see [41, 121].)

For the special case $d = m = 2$, uniqueness within $\mathcal{F}^2$ is characterized by the work of Ryser [52]. See Fig. 3 for an example of a lattice set that is uniquely determined by its two X-rays in the coordinate directions. For uniqueness results from two directions in geometric tomography, see [118, 120].

Figure 3. A lattice set (black points) uniquely determined by its X-rays in horizontal and vertical directions.

In the lattice case, the specialization of Rényi’s theorem that any set of cardinality $n$ is determined by any $n + 1$ lattice lines is only best possible for certain exceptional cardinalities $n$. For the other $n$ the result can be improved at least by one.

Theorem 5 ([69]).

Let $n \in \mathbb{N}$ and let $\mathcal{E}_n$ be the class of sets in $\mathcal{F}^2$ of cardinality less than or equal to $n$, where $n$ is not one of the exceptional cardinalities (see [69]). Let $\mathcal{S} \subseteq \mathcal{L}^2_{\mathbb{Z}}$ with $|\mathcal{S}| \ge n$. Then the sets in $\mathcal{E}_n$ are determined by their X-rays parallel to the lines in $\mathcal{S}$.

The question of smallest switching components is widely open in the lattice case.

Problem 1.

What is the smallest number $n(m)$ such that there exist $\mathcal{S} \subseteq \mathcal{L}^2_{\mathbb{Z}}$ with $|\mathcal{S}| = m$ and two different lattice sets of cardinality $n(m)$ that are tomographically equivalent with respect to the lines in $\mathcal{S}$?

Probabilistic arguments of [69] show that, in the lattice case, switching components of a size that is polynomial in $m$ exist for each $m$. All deterministic constructions so far lead to switching components of exponential size. Several small switching components are depicted in Fig. 12.

We remark that switching components seem to have appeared first in the work of Ryser [52]. Later work on switching components includes [38, 86, 107, 117, 122, 125]. Computational investigations related to the explicit construction of switching components can be found in [82]. Switching components for other projection models are considered in [129, 130, 131, 132, 138]. Special types of switching components are studied in the context of superresolution imaging in [67], for hv-convex polyominoes in [73], and, in a more algebraic setting, in [117].

Quite strong uniqueness results exist for a geometrically motivated, more restricted class of lattice sets. A lattice set $F \in \mathcal{F}^d$ is called convex if $F = \operatorname{conv}(F) \cap \mathbb{Z}^d$.

Theorem 6 ([98]).

There are sets $\mathcal{S} \subseteq \mathcal{L}^2_{\mathbb{Z}}$ of four lattice lines such that every finite convex lattice set is uniquely determined by its X-rays parallel to the lines in $\mathcal{S}$. Further, every set of at least seven coplanar lattice lines always suffices.

Let us point out that the ‘good’ sets of directions with respect to Thm. 6 need not have coordinates of large absolute value; explicit examples are given in [98].

In fact, the sets of four good lattice lines in Thm. 6 are exactly those for which the cross ratio of their slopes does not lie in $\{4/3, 3/2, 2, 3, 4\}$. A converse result for the more general class of hv-convex lattice sets is given in [72]. Generalizations of Thm. 6 to so-called Q-convex lattice sets and convex algebraic Delone sets (in the context of quasicrystals) can be found in [88] and [111], respectively.

As a matter of fact, the directions in Thm. 6 are all coplanar. It is not known how exactly the situation changes if we insist that the lines are in general position, i.e., that any $d$ of them span $\mathbb{R}^d$.

Problem 2.

Let $d \ge 3$. Is there a finite set $\mathcal{S}$ of lines in $\mathcal{L}^d_{\mathbb{Z}}$ in general position such that each convex set in $\mathcal{F}^d$ is uniquely determined by its X-rays parallel to the lines in $\mathcal{S}$? If so, what is the smallest cardinality of such a set?

Is there a smallest number $m_0$ such that any set $\mathcal{S}$ of $m_0$ lattice lines in general position has the property that each convex set in $\mathcal{F}^d$ is uniquely determined by its X-rays parallel to the lines in $\mathcal{S}$?

The color classes of a $2$-coloring of the vertices of the permutahedron in $\mathbb{R}^d$ provide a lower bound on such a universal number $m_0$, which grows at least quadratically in $d$ [63]. Fig. 4 depicts the $3$-dimensional permutahedron, which is a truncated octahedron.

Figure 4. Two-coloring of the vertices of the truncated octahedron.

An analogue of Thm. 6 also holds in the realm of geometric tomography: convex bodies in the plane are determined by their continuous X-rays from sets of four ‘good’ directions; see [102].

Let us point out that it is the codomain $\{0, 1\}$ which makes the problem difficult. In fact, the case of functions in $\mathcal{F}(\mathbb{Z}^d, \mathbb{Z})$ or, equivalently, lattice sets with integer weights is much simpler, as linear diophantine equations can be solved via the Hermite normal form; see e.g. [54, Sect. 4&5]. But this also implies that the study of uniqueness for such functions is much easier. In fact, suppose we are given a finite set $\mathcal{S} \subseteq \mathcal{L}^d_{\mathbb{Z}}$ and a bounded subset $G$ of $\mathbb{Z}^d$ which will act as a superset of all supports we are allowing. Then the corresponding X-ray problem with data functions all identical to $0$ can be formulated as a homogeneous system of linear diophantine equations and solved efficiently. Let $g$ be a nontrivial solution, define $g^+, g^-$ by

$$g^+(x) := \max\{g(x), 0\}, \qquad g^-(x) := \max\{-g(x), 0\}$$

for $x \in G$, and set $g^\pm(x) := 0$ otherwise.

Then, of course, $g^+, g^- \in \mathcal{F}(\mathbb{Z}^d, \mathbb{N}_0)$, $g^+ \neq g^-$, and $X_S g^+ = X_S g^-$ for all $S \in \mathcal{S}$. Hence $g^+$ and $g^-$ are tomographically equivalent with respect to $\mathcal{S}$.

The uniqueness problem for functions in $\mathcal{F}(\mathbb{Z}^d, \mathbb{Z})$ also permits an algebraic characterization. The subsequently stated result of Hajdu and Tijdeman [30, 107] uses the following notation. A vector $v \in \mathbb{Z}^d$ is reduced if the greatest common divisor of its components is $1$. Let $v^+$ and $v^-$ denote the vectors whose $i$th component is $\max\{v_i, 0\}$ and $\max\{-v_i, 0\}$, respectively. With $X := (X_1, \ldots, X_d)$ we abbreviate the vector of indeterminates. Accordingly, for $u \in \mathbb{N}_0^d$, the monomial $X_1^{u_1} \cdots X_d^{u_d}$ is denoted by $X^u$.

Theorem 7 ([30, 107]).

Let $m \in \mathbb{N}$, let $v_1, \ldots, v_m \in \mathbb{Z}^d$ be reduced, and set $\mathcal{S} := \{\operatorname{lin}(v_1), \ldots, \operatorname{lin}(v_m)\}$. Further, let $f \in \mathcal{F}(\mathbb{N}_0^d, \mathbb{Z})$. Then $X_S f = 0$ for all $S \in \mathcal{S}$ if, and only if, the polynomial

$$P_f(X) := \sum_{u \in \operatorname{supp}(f)} f(u) \, X^u$$

is divisible by

$$\prod_{j = 1}^m \left( X^{v_j^+} - X^{v_j^-} \right).$$

Note that the assumption that the functions are defined on $\mathbb{N}_0^d$ rather than on $\mathbb{Z}^d$ is no restriction of generality.

Let $v_1, \ldots, v_m \in \mathbb{Z}^d$ be reduced, and set $\mathcal{S} := \{\operatorname{lin}(v_1), \ldots, \operatorname{lin}(v_m)\}$. A consequence of Thm. 7 is that $f_1, f_2 \in \mathcal{F}(\mathbb{N}_0^d, \mathbb{N}_0)$ are tomographically equivalent with respect to $\mathcal{S}$ if, and only if, there is a polynomial $Q \in \mathbb{Z}[X_1, \ldots, X_d]$ such that

$$P_{f_1}(X) - P_{f_2}(X) = Q(X) \cdot \prod_{j = 1}^m \left( X^{v_j^+} - X^{v_j^-} \right).$$
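This divisibility criterion can be checked symbolically. The following sketch (using sympy; the two sets form the classical six-point switching component for the directions $(1,0)$, $(0,1)$, $(1,1)$, and the helper names are ours) verifies that the difference of the generating polynomials of two tomographically equivalent lattice sets is divisible by the product of the factors $X^{v_j^+} - X^{v_j^-}$.

```python
import sympy as sp

x, y = sp.symbols('x y')

def factor_poly(v):
    """The factor X^{v+} - X^{v-} for a reduced direction v = (v1, v2)."""
    vp = tuple(max(c, 0) for c in v)
    vm = tuple(max(-c, 0) for c in v)
    return x**vp[0] * y**vp[1] - x**vm[0] * y**vm[1]

def generating_poly(points):
    """P_F(X) = sum of the monomials x^px * y^py over the points of F."""
    return sum(x**px * y**py for (px, py) in points)

F1 = [(0, 0), (2, 1), (1, 2)]          # tomographically equivalent w.r.t.
F2 = [(1, 0), (0, 1), (2, 2)]          # the directions below
dirs = [(1, 0), (0, 1), (1, 1)]

diff = sp.expand(generating_poly(F1) - generating_poly(F2))
prod = sp.expand(sp.prod([factor_poly(v) for v in dirs]))

q, r = sp.div(diff, prod, x, y)        # polynomial division in Z[x, y]
assert r == 0                          # zero remainder confirms divisibility
print(q)                               # here the quotient is simply -1
```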

The algebraic representation by polynomials can be utilized in various ways; examples will be given in Sect. 3.2 (stability) and Sect. 8 (number theory). Additional aspects of uniqueness, in particular concepts of additivity, are discussed in [60, 95, 104]. For uniqueness results for functions with several different types of bounded support, see [83] and the references cited therein.

3.2. Stability and instability

The results of the previous section were based on the assumption that the data functions are given exactly. We will now consider the case that the X-rays may contain errors.

In the following we will measure the size of a function $f \in \mathcal{F}(D, C)$ in terms of its $\ell_1$-norm, i.e.,

$$\|f\|_1 := \sum_{x \in \operatorname{supp}(f)} |f(x)|.$$

In particular, given a finite set $\mathcal{S} := \{S_1, \ldots, S_m\}$ of lines and two sets $F_1, F_2 \in \mathcal{F}^d$, their X-ray difference will be

$$\sum_{j = 1}^m \left\| X_{S_j} F_1 - X_{S_j} F_2 \right\|_1.$$

The first result in this section shows that at least some (however marginal) stability is present. In fact, the X-ray difference, if not $0$, must jump to at least $2(m - 1)$. This means that either two sets are tomographically equivalent or their X-ray difference grows at least linearly in $m$.

Theorem 8 ([64]).

Let $m \ge 2$, $\mathcal{S} := \{S_1, \ldots, S_m\} \subseteq \mathcal{L}^d$, and $F_1, F_2 \in \mathcal{F}(\mathbb{R}^d)$ with $|F_1| = |F_2|$. If $\sum_{j=1}^m \|X_{S_j} F_1 - X_{S_j} F_2\|_1 < 2(m - 1)$, then $F_1$ and $F_2$ are tomographically equivalent with respect to $\mathcal{S}$. The same statement holds in the lattice case.

As we will see in Thm. 12, this result is, in fact, best possible. First, we use it to give ‘noisy’ variants of some of the uniqueness results of the previous section. We begin with a stable version of Thm. 3.

Theorem 9 ([64]).

Let $m \ge 2$, $\mathcal{S} := \{S_1, \ldots, S_m\} \subseteq \mathcal{L}^d$, and $F_1, F_2 \in \mathcal{F}(\mathbb{R}^d)$ with $|F_1| = |F_2| \le m - 1$. If $\sum_{j=1}^m \|X_{S_j} F_1 - X_{S_j} F_2\|_1 < 2(m - 1)$, then $F_1 = F_2$. The statement persists in the lattice case.

The following result is a stable version of Thm. 6.

Theorem 10 ([64]).

There are sets $\mathcal{S} \subseteq \mathcal{L}^2_{\mathbb{Z}}$ of cardinality four for which the following is true: If $F_1, F_2 \in \mathcal{F}^2$ are convex with $|F_1| = |F_2|$ and $\sum_{S \in \mathcal{S}} \|X_S F_1 - X_S F_2\|_1 < 2(|\mathcal{S}| - 1)$, then $F_1 = F_2$. Further, the same holds for any set $\mathcal{S}$ of at least seven coplanar lattice lines.

The following theorem uses the known characterization of the (rather rare) cases of uniqueness in the special case $d = m = 2$ to quantify the deviation of solutions for noisy data; it generalizes a previous result from [61].

Theorem 11 ([135]).

Let $F_1, F_2 \in \mathcal{F}^2$ with $|F_1| = |F_2| =: n$, and let $\mathcal{S}$ consist of the two coordinate axes. Further, suppose that $F_1$ is uniquely determined by $X_S F_1$ for $S \in \mathcal{S}$, and set $\epsilon := \sum_{S \in \mathcal{S}} \|X_S F_1 - X_S F_2\|_1$. Then $|F_1 \triangle F_2|$ is bounded by a term of order $\epsilon \sqrt{n}$; see [135] for the precise bound.

Stability results in the continuous case with finitely many X-rays typically rely on bounds on the variation of the functions, measured in some weighted Sobolev norms; see [44, Sect. 4] and [42, Sect. 5.9] (and the references therein). In the realm of geometric tomography, Volčič [136] showed that the problem of reconstructing a convex body from its X-rays in four ‘good’ directions (which guarantee uniqueness) is well-posed. Some further stability estimates are given in [119].

In contrast to Thm. 11, the task of reconstructing finite lattice sets from X-rays taken along $m \ge 3$ directions is highly unstable. In particular, the following result shows that Thm. 8 is sharp.

Theorem 12 ([3, 68]).

Let $m \ge 3$, $\mathcal{S} := \{S_1, \ldots, S_m\} \subseteq \mathcal{L}^2$, and let $n \in \mathbb{N}$. Then there exist $F_1, F_2 \in \mathcal{F}(\mathbb{R}^2)$ with the following properties:

  (i) $F_1$ is uniquely determined by $X_{S_j} F_1$ for $j \in [m]$;

  (ii) $F_2$ is uniquely determined by $X_{S_j} F_2$ for $j \in [m]$;

  (iii) $|F_1| = |F_2| \ge n$;

  (iv) $\sum_{j=1}^m \|X_{S_j} F_1 - X_{S_j} F_2\|_1 = 2(m - 1)$;

  (v) $F_1 \cap F_2 = \emptyset$.

The statement also holds in the lattice case.

The proof for $m = 3$ is due to [68], while [3] extends the construction to general $m$. It is actually possible to show that not even affine transformations help much to increase the overlap of the two sets.

4. Computational aspects

Next we deal with algorithmic aspects of actually reconstructing one or all sets that are consistent with the given X-ray data. We will restrict the exposition to functions in $\mathcal{F}(\mathbb{Z}^d, C)$ with $C \subseteq \mathbb{Q}$ and to lines in $\mathcal{L}^d_{\mathbb{Z}}$, since all computational issues can then be studied in the well-known binary Turing machine model; see [23, 47] for background information. Again, emphasis will be placed on the lattice case.

4.1. Algorithmic problems

Let $\mathcal{S} := \{S_1, \ldots, S_m\} \subseteq \mathcal{L}^d_{\mathbb{Z}}$ be finite. From an algorithmic point of view the following questions are basic: Are the data consistent? If so, reconstruct a solution! Is this solution unique? We will now introduce the corresponding problems more precisely.

Consistency.

  • Data functions $f_j$ for $j \in [m]$.

  • Does there exist an $f \in \mathcal{F}(\mathbb{Z}^d, C)$ such that $X_{S_j} f = f_j$ for all $j \in [m]$?

Reconstruction.

  • Data functions $f_j$ for $j \in [m]$.

  • Determine a function $f \in \mathcal{F}(\mathbb{Z}^d, C)$ such that $X_{S_j} f = f_j$ for all $j \in [m]$, or decide that no such function exists.

Uniqueness.

  • A function $f \in \mathcal{F}(\mathbb{Z}^d, C)$.

  • Does there exist a $g \in \mathcal{F}(\mathbb{Z}^d, C)$ with $g \neq f$ such that $X_{S_j} g = X_{S_j} f$ for all $j \in [m]$?

Of course, Reconstruction cannot be easier than Consistency. Further, note that Uniqueness actually asks for non-uniqueness in order to place the problem into the class NP; see Thm. 13.

For certain codomains such as $\{0, 1\}$ it is reasonable to actually ask for the number of solutions even in the case of non-uniqueness. We will introduce the following problem for general $C$ with the understanding that the (not really interesting) answer ‘$\infty$’ is permitted.

#Consistency.

  • Data functions $f_j$ for $j \in [m]$.

  • Determine the cardinality of the set of functions $f \in \mathcal{F}(\mathbb{Z}^d, C)$ such that $X_{S_j} f = f_j$ for all $j \in [m]$.

Observe that a given instance can be consistent only if the total mass $\sum_{T \in \mathcal{A}(S_j)} f_j(T)$ does not depend on $j$. Since this condition can be checked efficiently, we will in the following often tacitly assume that this is the case and set $n := \sum_{T \in \mathcal{A}(S_1)} f_1(T)$.

Further, for any given instance $I$, the support of all solutions is contained in the grid

$$G := \bigcap_{j \in [m]} \; \bigcup_{f_j(T) \neq 0} T$$

associated with $I$. Of course, $G$ can be computed from $I$ by solving polynomially many systems of linear equations. Hence we can associate a variable $\xi_x$ with every grid point $x \in G$ and formulate Consistency as a linear (feasibility) problem with the additional constraints that $\xi_x \in C$ for all $x \in G$. This simple observation shows already that Consistency is algorithmically easy for $C = [0, 1]$, simply because linear programming can be solved in polynomial time, and also for $C = \mathbb{Z}$, since systems of linear diophantine equations can be solved in polynomial time; see e.g. [54, Sect. 4, 5, 13–15].
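As an illustration of this observation, the following sketch (Python/scipy; the function name and the toy instance are ours) sets up Consistency over the relaxation $C = [0, 1]$ as an LP feasibility problem, with one variable per grid point and one equality constraint per measured line.

```python
import numpy as np
from scipy.optimize import linprog

def consistency_lp(grid, data):
    """LP feasibility version of Consistency over the relaxation C = [0, 1].

    grid : list of lattice points (x, y) that may carry mass.
    data : dict mapping a direction v = (a, b) to {line_key: measured sum},
           where line_key = b*x - a*y indexes the lines parallel to v.
    Returns a fractional solution (one value in [0, 1] per grid point) or None.
    """
    index = {p: i for i, p in enumerate(grid)}
    rows, rhs = [], []
    for (a, b), sums in data.items():
        for key, value in sums.items():
            row = np.zeros(len(grid))
            for (px, py) in grid:
                if b * px - a * py == key:
                    row[index[(px, py)]] = 1.0
            rows.append(row)
            rhs.append(value)
    res = linprog(c=np.zeros(len(grid)), A_eq=np.array(rows),
                  b_eq=np.array(rhs), bounds=[(0.0, 1.0)] * len(grid),
                  method='highs')
    return res.x if res.success else None

# Toy instance on a 2 x 2 grid with row sums and column sums all equal to 1.
grid = [(0, 0), (1, 0), (0, 1), (1, 1)]
data = {(1, 0): {0: 1, -1: 1},    # horizontal lines, key = -y
        (0, 1): {0: 1, 1: 1}}     # vertical lines,  key = x
print(consistency_lp(grid, data))
```

For $C = \{0, 1\}$ the same system would have to be solved in integers, which is exactly where the complexity jump of Thm. 13 occurs.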

Next we turn to the other relevant codomains, with a special emphasis on the lattice case.

Theorem 13 ([52, 96, 100]).

For $C = \{0, 1\}$, Consistency and Uniqueness are both in P if $m \le 2$, whereas they are NP-complete if $m \ge 3$. Also, the problem #Consistency is #P-complete for $m = 2$.

The complexity status of the counting problem for $m \ge 3$ is still open.

Problem 3.

Is #Consistency a #P-complete problem for $m \ge 3$?

Let us now return to the Rényi setting.

Theorem 14 ([108, 123]).

Reconstruction is in P if the input is restricted to those instances with $n \le m - 1$.

A similar result holds for convex lattice sets when the lattice lines are chosen according to Thm. 6. So, let $\mathcal{C}^2$ denote the subset of $\mathcal{F}^2$ of convex lattice sets, and let Reconstruction over $\mathcal{C}^2$ signify the corresponding reconstruction task.

Theorem 15 ([79, 80]).

For any set of at least seven coplanar directions, and for suitable such sets of cardinality four, Reconstruction over $\mathcal{C}^2$ can be solved in polynomial time.

Let us now turn to the following ‘noisy’ versions of Consistency and Uniqueness.

X-Ray-Correction.

  • Data functions $f_j$ for $j \in [m]$, and $\epsilon \ge 0$.

  • Does there exist an $F \in \mathcal{F}^d$ such that $\sum_{j=1}^m \|X_{S_j} F - f_j\|_1 \le \epsilon$?

Similar-Solution.

  • A set $F_1 \in \mathcal{F}^d$, and $\epsilon \ge 0$.

  • Does there exist an $F_2 \in \mathcal{F}^d$ with $F_2 \neq F_1$ and $|F_2| = |F_1|$ such that $\sum_{j=1}^m \|X_{S_j} F_2 - X_{S_j} F_1\|_1 \le \epsilon$?

Nearest-Solution.

  • Data functions $f_j$ for $j \in [m]$.

  • Determine a set $F \in \mathcal{F}^d$ such that $\sum_{j=1}^m \|X_{S_j} F - f_j\|_1$ is minimal.

Note that X-Ray-Correction can also be viewed as the task of deciding, for given data functions, whether there exist ‘corrected’ data functions that are consistent and do not differ from the given functions by more than a total of $\epsilon$. Nearest-Solution asks for a set that fits the potentially noisy measurements best.

The computational complexity of these tasks is as follows.

Theorem 16 ([64]).

The problems X-Ray-Correction, Similar-Solution, and Nearest-Solution are in P for $m \le 2$ but are NP-hard for $m \ge 3$.

4.2. Algorithms

Several polynomial-time algorithms for Reconstruction with $m = 2$ can be found in the literature. In addition to Ryser’s algorithm [52] there are approaches based on network flows [127] or matroid intersections [99]. Moreover, the problem can be modeled as an integer linear program which involves a totally unimodular coefficient matrix and can therefore be solved as a linear program (see, for instance, [54, Sect. 16&19]). For further comments, see [36, Sect. 1].
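For illustration, here is a compact greedy scheme in the spirit of Ryser’s algorithm (a sketch, not Ryser’s original formulation): each row places its ones into the columns with the largest remaining column sums, and a standard exchange argument shows that this succeeds whenever the data are consistent.

```python
import numpy as np

def ryser(row_sums, col_sums):
    """Binary matrix with the given row and column sums (the m = 2 lattice
    case), or None if the data are inconsistent."""
    if sum(row_sums) != sum(col_sums):
        return None                       # total mass must agree
    n_rows, n_cols = len(row_sums), len(col_sums)
    residual = list(col_sums)             # remaining demand per column
    A = np.zeros((n_rows, n_cols), dtype=int)
    for i, r in enumerate(row_sums):
        if not 0 <= r <= n_cols:
            return None
        # the r columns with the largest remaining demand, largest first
        cols = sorted(range(n_cols), key=lambda j: -residual[j])[:r]
        if r > 0 and residual[cols[-1]] == 0:
            return None                   # not enough remaining demand
        for j in cols:
            A[i, j] = 1
            residual[j] -= 1
    return A if all(v == 0 for v in residual) else None

print(ryser([2, 1, 1], [2, 1, 1]))
```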

In the presence of NP-hardness, one cannot expect to find generally efficient algorithms. There are, however, various techniques from combinatorial optimization that can and have been applied to solve instances to optimality up to certain sizes; see [105]. Similarly as for $m = 2$, the problem Reconstruction can be formulated as an integer linear program for arbitrary $m$. However, for $m \ge 3$ the coefficient matrix is in general no longer totally unimodular. Of course, we can still solve the corresponding linear programming relaxation (where the codomain $\{0, 1\}$ is replaced by $[0, 1]$) efficiently. Unless P $=$ NP, Thm. 13 implies that it will, however, in general not be efficiently possible to convert the obtained fractional solution into a required integer one.

Since, in general, measured data are noisy anyway, research has focused on approximate solutions. It is quite natural to try to solve Reconstruction even for $m \ge 3$ by using the available polynomial-time algorithms for $m = 2$ in an alternating approach. First, two of the given data functions are selected and a solution $F^{(1)}$ is computed which is consistent with these. In the $k$th step, at least one of the two directions is replaced by a different one from $\mathcal{S}$, and a solution $F^{(k)}$ is constructed which satisfies the corresponding two constraints and is closest to $F^{(k-1)}$. While each step of such an alternating direction approach can be performed in polynomial time, there are severe limitations on the guaranteed quality of the produced solution. For an analysis of this and other approaches, see [105].

Despite their theoretical limitations there are several approaches that are reported to work very well in practice. Among these are BART [109] and DART [76, 137]. The former, a variant of ART as described in [34], is implemented in the open-source software SNARK14 [116] (example code can be found in [62]); the latter is implemented in the open-source ASTRA toolbox [133]. Further algorithms are discussed in [36, Sect. 8-14] and [37, Sect. 8-11]. For applets illustrating several algorithmic tasks in discrete and geometric tomography, see [93] and [97], respectively.

Figure 5. (From [27]) An idealized circuit board.

Let us, finally, point out that for certain applications full reconstructions are not needed. For instance, in quality control for circuit board production (see Fig. 5) one may want to certify that the production process actually produced a desired blueprint structure (‘verification’). One can, of course, compute data functions from the blueprint and compare them with the measured data from the produced board. If the difference is large, one would report an error. If, however, the difference is small, the produced board can still be quite different from the blueprint (particularly if the data do not determine the image uniquely). This ambiguity can be reduced by applying a (polynomial-time deterministic) reconstruction heuristic to both sets of data functions and subsequently comparing the reconstructions. In practice such checks have been shown to be able to detect production flaws even based on very limited data and quite poor (but very fast) reconstruction algorithms.

5. Superresolution and discrete tomography

Electron tomography, pioneered originally in the life sciences (see [20, 35, 46]), is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures [62, 71, 74, 114]. For various technical reasons, however, atomic-resolution tomographic imaging as envisioned in [115, 124] has not become a full reality yet (favorable instances are reported in [113, 134]; see also the surveys [10, 56]). One of the challenges faced by current technology is that tomographic tilt series need to be properly aligned (see, e.g., [57, 110]). Therefore, and also to prevent radiation damage, one might wonder whether it is possible to proceed in a multimodal scheme.

Suppose some reconstruction has been obtained from a (possibly technologically less demanding) lower-resolution data set. Can one then use limited additional high-resolution data (for instance, acquired from only two directions) to enhance the resolution in a subsequent step? As we will now see, the tractability of this approach depends strongly on the reliability of the initial lower-resolution reconstruction. Details of the presented results can be found in [67].

5.1. Computational aspects

We have already remarked that a function $f \in \mathcal{F}(\mathbb{Z}^d)$ can be viewed as a characteristic function that encodes a finite lattice set. In a different, yet equivalent, model the function $f$ can be viewed as representing a binary image. In this interpretation the points $x \in \mathbb{Z}^d$ represent the pixel/voxel coordinates while $f(x)$ denotes their colors (typically, the values 0 and 1 are considered to represent white and black pixels, respectively); see Fig. 6 for an illustration. Similarly, a function with codomain $\{0, 1, \ldots, k\}$ can be viewed as representing a gray-scale image with $k + 1$ different gray levels (the values $0$ and $k$ typically representing white and black, respectively).

Figure 6. Lattice points (left) and pixels (middle); right: pixel associated with its lattice point.

For simplicity of the exposition we restrict our discussion to the case $d = 2$. Now suppose we want to reconstruct a binary image $f$ contained in an $M \times N$ box from low-resolution gray-scale information and high-resolution X-ray data. The lower resolution is quantified by some $c \in \mathbb{N}$, and we assume that $M$ and $N$ are divisible by $c$. More precisely, we assume that an $(M/c) \times (N/c)$ low-resolution (gray-scale) image $g$ of $f$ is available, whose pixels $p$ correspond to $c \times c$ blocks $B(p)$ in a subdivision of the pixels of $f$. Hence for any such block we have

$$g(p) = \sum_{x \in B(p)} f(x).$$

For given $g(p)$ and unknown $f$ we call the above equation a block constraint. We say that, for some $\epsilon \ge 0$, a block constraint is satisfied within an error of $\epsilon$ if

$$\Bigl| \, g(p) - \sum_{x \in B(p)} f(x) \, \Bigr| \le \epsilon.$$

We may think of $g$ as being the result of some lower-resolution reconstruction of $f$. In order to increase the resolution we want to utilize additional high-resolution X-ray data $f_1, f_2$ that are available from the two coordinate directions, and we set $\mathcal{S} := \{S_1, S_2\}$ accordingly. Relative to the data $g$, these can be considered as $c$-times finer resolution X-ray data.
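In other words, the low-resolution image is just the block-sum downsampling of the unknown binary image. The following short sketch (Python/numpy; helper names are ours) makes the block constraints and their noisy variant concrete.

```python
import numpy as np

def block_sums(f, c):
    """Low-resolution gray-scale image: the sum over each c x c block of the
    binary image f (both dimensions of f must be divisible by c)."""
    M, N = f.shape
    return f.reshape(M // c, c, N // c, c).sum(axis=(1, 3))

def blocks_satisfied(f, g, c, eps=0):
    """Check whether all block constraints hold within error eps."""
    return bool(np.all(np.abs(block_sums(f, c) - g) <= eps))

f = np.array([[1, 0, 1, 1],
              [0, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
g = block_sums(f, 2)                      # exact low-resolution data
print(blocks_satisfied(f, g, 2))          # True: reliable block constraints
print(blocks_satisfied(f, g + 1, 2, 1))   # True: satisfied within error 1
```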

For given $c$ and $\epsilon$, the task of (noisy) superresolution is as follows.

nSR.

  • A gray-level image $g$,

    a subset $R$ of ‘reliable pixels’ of $g$, and

    data functions $f_1, f_2$ at a $c$-times finer resolution.

  • Determine a binary image $f$ such that

    $X_{S_1} f = f_1$ and $X_{S_2} f = f_2$, all block constraints for the pixels in $R$ are satisfied, and all other block constraints are satisfied within an error of $\epsilon$,

    or decide that no such function exists.

Since our focus in the following is on double-resolution imaging, i.e., on the case $c = 2$, let us write nDR for nSR with $c = 2$. In the reliable situation, i.e., for $\epsilon = 0$ and $R$ comprising all pixels of $g$, we simply speak of double-resolution and write DR. (Then, of course, the set $R$ can be omitted from the input.) An illustration is given in Fig. 7.

Figure 7. (From [67]) The double-resolution imaging task DR. (a) Original (unknown) high-resolution image, (b) the corresponding low-resolution gray-scale image, (c) gray levels converted into block constraints, (d) taken in combination with double-resolution row and column sum data. The task is to reconstruct from (d) the original binary image shown in (a).

As it turns out DR is tractable.

Theorem 17 ([67]).

DR and also the corresponding uniqueness problem can be solved in polynomial time.

The algorithm presented in [67] is based on a decomposition into subproblems which allows one to treat the different gray levels separately. If we view DR as the reconstruction problem for $m = 2$ with additional block constraints, we can compare Thm. 17 with Thm. 13 and see that block constraints impose fewer algorithmic difficulties than X-ray data from a third direction.

The next result, which deals with the case that some of the gray levels come with small uncertainties, exhibits (potentially unexpected) complexity jumps.

Theorem 18 ([67]).

Let $c = 2$ and $\epsilon \ge 1$. Then the following hold:

  (i) nSR is NP-hard.

  (ii) The problem of deciding whether a given instance of nSR has a non-unique solution is NP-complete.

To put it succinctly: noise in tomographic superresolution imaging does not only affect the quality of a reconstructed image but also the algorithmic tractability of the inverse problem itself.

DR without any block constraints boils down to the reconstruction problem for $m = 2$ and is hence solvable in polynomial time. DR is, however, NP-hard if several (but not all) block constraints (which are required to be satisfied with equality) are present (Thm. 18). Possibly less expectedly, if all block constraints are included, then the problem becomes polynomial-time solvable again (Thm. 17). If, on the other hand, all block constraints are present but some of the data come with noise at most $\epsilon$, then the problem becomes again NP-hard (Thm. 18). And yet again, if in all block constraints all of the data are sufficiently noisy, then the problem is in P (as this is again the problem of reconstructing binary images from X-ray data taken from two directions). Figure 8 gives an overview of these complexity jumps.

Figure 8. (From [67]) Overview of complexity jumps for the problem of reconstructing a binary image from row and column sums and additional block constraints.

It does not seem likely, but is still open, whether the tractability result of Thm. 17 persists for $c \ge 3$.

Problem 4 ([67]).

Is the problem nSR NP-hard for $c \ge 3$, even in the reliable situation?

In the realm of dynamic discrete tomography (see Sect. 6), block constraints play the role of special window constraints which can be used to encode velocity information for moving points.

For additional information on discrete tomography problems involving other kinds of constraints, see [15, Sect. 4].

5.2. Stability and instability

Let us now turn to a discussion of the stability of the solutions to DR.

Theorem 19.

Let $n \in \mathbb{N}$. Then there exist instances $I_1$ and $I_2$ of DR and sets $F_1, F_2 \in \mathcal{F}^2$ with $|F_1| = |F_2| \ge n$ and the following properties:

  (i) $F_1$ is the unique solution to $I_1$;

  (ii) $F_2$ is the unique solution to $I_2$;

  (iii) the X-ray data of $I_1$ and $I_2$ differ only slightly, yet $|F_1 \cap F_2| \le \frac{1}{2} |F_1|$.

The proof is based on a construction in [67] of an instance of DR that admits precisely two solutions which overlap in only half of their points. From these two solutions, points in one block are deleted to obtain $I_1$ and $I_2$; see Fig. 9 for an illustration.

A small X-ray error can thus lead to quite different reconstructions (again, see Fig. 9). It should be noted, however, that one of the two sets has a much larger total variation (TV) than the other (for some background information see, e.g., [85]). Regularization by total variation minimization, as proposed in [67], would therefore always favor the latter reconstruction.

Figure 9. An example illustrating the instability of DR. (a) and (b): two solutions of the same problem instance; (c) and (d): uniquely determined solutions to the two problem instances obtained by deleting points of a block from (a) and (b), respectively. (The X-rays are indicated by the numbers at the bottom and right.) The X-rays differ in the circled numbers, yielding a small X-ray error.

It is instructive to compare DR (Thm. 19) with its discrete tomography counterparts for $m = 2$ (Thm. 11) and $m \ge 3$ (Thm. 12), which do not involve any block constraints.

On the one hand, the reconstruction problem for $m = 2$ is much more stable than its double-resolution counterpart. In fact, an easy calculation shows that the bound of Thm. 11 forces $F_1$ and $F_2$ to coincide in all but a fraction of order $\epsilon / \sqrt{n}$ of their points. Thus, if the original set $F_1$ is uniquely determined by its X-rays, then any reconstruction $F_2$ from X-rays with error $\epsilon$ needs to coincide with $F_1$ in an asymptotically much larger fraction of points than the half provided by Thm. 19 for DR.

On the other hand, the instability result for $m \ge 3$ (see Thm. 12) is stronger than that of Thm. 19, as for the former an X-ray error of $4$ can already lead to disjoint reconstructions.

Hence in terms of (in-)stabilities the block constraints seem to play a somewhat weaker role than constraints modeling data from a third direction.

6. Dynamics

Let us now turn to dynamic discrete tomography, which, in fact, represents rather recent developments in the field (see [65, 66, 141, 151]). (For dynamic aspects of computerized tomography, see, e.g., [84, 106] and the references cited therein.)

We focus here on the task of tomographic particle (or point) tracking, which amounts to determining the paths of $n$ points in space over a period of $t$ moments in time from X-ray images taken from a fixed number $m$ of directions.

This problem comprises, in fact, two different but coupled basic underlying tasks: the reconstruction of a finite set of points from few of their X-ray images (discrete tomography) and the identification of the points over time (tracking). The latter is closely related to topics in combinatorial optimization including matching and multidimensional assignment problems; see [16] for a comprehensive survey on assignment problems.

Let us remark that particle tracking methods have proven useful in many different fields such as fluid mechanics, geoscience, elementary particle physics, plasma physics, combustion, and biomedical imaging [140, 144, 146, 147, 149, 151] (see also the monograph [2] and the references cited therein). Most previous tomographic particle tracking methods (such as [142, 143, 145, 150]) can be considered as particle image velocimetry (PIV) as they aim at capturing several statistical parameters of groups of particles instead of dealing with them individually. The individual tracking considered here is in the literature also sometimes referred to as particle tracking velocimetry (PTV) or low particle number density PIV [140]. For more general background information on particle tracking methods, see the monographs [2, 55].

The exposition in this section will partly follow [66].

6.1. Algorithmic problems

We want to focus here on the interplay between discrete tomography and tracking. Therefore, we will distinguish the cases that for none, some, or all of the $t$ moments in time a solution of the discrete tomography task at time $\tau$ is explicitly available (and is then considered the correct solution regardless of whether it is uniquely determined by its X-rays). The first two cases will be referred to as the totally and partially tomographic case, respectively, while we speak of the last as positionally determined. It should be noted that the positionally determined case can be viewed as being the generic case in dimensions $d \ge 3$ because there any two affine lines in general position are disjoint; hence X-ray lines meet only in the points of the set to be reconstructed.

For simplicity we assume in the following that there are no particles disappearing or reappearing within the tracked time interval. When $P$ denotes the (abstract) set of particles and $F_\tau$ the set of particle positions at time $\tau$, we are in the tracking step thus interested in one-to-one mappings $\pi_\tau : P \to F_\tau$ that identify the points of $F_\tau$ with the particles. The particle tracks are then given by $(\pi_1(p), \ldots, \pi_t(p))$ for $p \in P$. This identification is referred to as coupling.

In typical applications we would like to incorporate prior knowledge about ‘physically likely’ paths. It seems most natural to input such information in terms of the costs of the feasible particle tracks. Note, however, that the number of different particle tracks is $n^t$, hence exponential in the number $t$ of time steps. This means that already for moderate problem sizes the costs of all potential tracks cannot be encoded explicitly. There are various ways to deal with this problem. The most general approach is based on the assumption that ‘an expert knows a good solution if she or he sees it.’ More technically speaking, it is enough for an algorithm to have access to the cost of a particle track only when that track is considered. Accordingly, [66] suggests an oracular model, where such knowledge is available through an algorithm $\Phi$, called an objective function oracle, which computes for any solution its cost in time that is polynomial in all the other input data. Then the general problem of tomographic particle tracking can be formulated as follows.

TomTrac.

  • $t, n \in \mathbb{N}$, an objective function oracle $\Phi$, and data functions $f_j^{(\tau)}$ with $j \in [m]$ and $\tau \in [t]$.

  • Decide whether, for each $\tau \in [t]$, there exists a set $F_\tau \in \mathcal{F}^d$ such that $X_{S_j} F_\tau = f_j^{(\tau)}$ for all $j \in [m]$. If so, find particle tracks of minimal cost for $\Phi$ among all couplings of all tomographic solutions $F_1, \ldots, F_t$.

In the positionally determined case the problem TomTrac reduces to the following tracking problem, which can be viewed as a $t$-dimensional assignment problem.

Trac.

  • $t, n \in \mathbb{N}$, an objective function oracle $\Phi$, and sets $F_1, \ldots, F_t \in \mathcal{F}^d$ with $|F_1| = \cdots = |F_t| = n$.

  • Find particle tracks of minimal cost for $\Phi$ among all couplings of the sets $F_1, \ldots, F_t$.

A priori knowledge may be available in various ways and may then lead to different objective function oracles; see [66]. Here we focus on information that is actually explicitly available. For instance, we speak of a path value oracle if the cost is just the sum of the weights of the individual particle paths. Note that the number of different path weights is bounded by $n^t$, and they can hence be encoded explicitly for fixed (and small) $t$; see Thm. 22. If, further, the path weights are just the sums of all costs of assigning points between consecutive moments in time, the objective function can be described by just $(t - 1) n^2$, i.e., polynomially many, numbers. In this case, the objective function is of Markov type as it reflects only memoryless dependencies. Combinatorial models, which can be viewed as special choices of such parameters, are based on the knowledge that the positions of the particles in the next time step lie in certain windows. A particular such situation has been analyzed in Sect. 5. For more results on combinatorial models see [65, 66].

6.2. Algorithms and complexity

We begin with a simple tractability result for the positionally determined case.

Theorem 20 ([66]).

For Markov-type objective function oracles $\Phi$, the problem Trac decomposes into $t - 1$ uncoupled minimum-weight perfect bipartite matching problems and can hence be solved in polynomial time.
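A sketch of this decomposition (Python/scipy, with a squared-displacement stand-in for a Markov-type cost; all names are ours): each pair of consecutive frames is coupled by one minimum-weight perfect bipartite matching, and the matchings are chained into tracks.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def trac_markov(frames, cost=lambda p, q: float(np.sum((p - q) ** 2))):
    """Trac for a Markov-type objective: one minimum-weight perfect bipartite
    matching (Hungarian method) per pair of consecutive frames.

    frames : list of t arrays of shape (n, d), the point positions per time.
    Returns an (n, t) integer array: row i lists, for particle i, the index
    of its point in each frame.
    """
    n, t = len(frames[0]), len(frames)
    tracks = np.zeros((n, t), dtype=int)
    tracks[:, 0] = np.arange(n)
    for tau in range(t - 1):
        C = np.array([[cost(p, q) for q in frames[tau + 1]]
                      for p in frames[tau]])
        _, col = linear_sum_assignment(C)         # optimal pairwise coupling
        tracks[:, tau + 1] = col[tracks[:, tau]]  # chain onto existing tracks
    return tracks

# Three particles drifting to the right over three moments in time.
frames = [np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]),
          np.array([[1.0, 0.1], [1.0, 1.1], [1.0, 2.1]]),
          np.array([[2.0, 0.2], [2.0, 1.2], [2.0, 2.2]])]
print(trac_markov(frames))
```

Because a Markov-type objective is a sum of independent frame-to-frame terms, the optimal coupling is obtained by optimizing each of the $t - 1$ matchings separately, which is exactly the decomposition of Thm. 20.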

Although the reconstruction problem in discrete tomography for $m = 2$ directions can be solved in polynomial time (see Thm. 13), it turns out that there are severe limitations on extending the previous result, already for the following quite restricted, partially tomographic case. In fact, the problem becomes hard even if there is only one time step, i.e., $t = 2$, and $F_1$ is explicitly known while the set $F_2$ of particle positions for $\tau = 2$ is only accessible through its two X-rays $X_{S_1} F_2$ and $X_{S_2} F_2$.

Theorem 21 ([66]).

Even if all instances are restricted to the case $t = 2$, where the solution $F_1$ is given explicitly, TomTrac for Markov-type objective function oracles $\Phi$ is NP-hard. Also, the corresponding uniqueness problem is NP-complete and the counting problem is #P-complete.

Unless P $=$ NP there is thus, in general, no efficient algorithm that provides exact solutions to every instance of TomTrac. A possible remedy is to resort to heuristics, which aim at providing approximate solutions. Before we discuss such a heuristic let us state two additional intractability results, which concern the positionally determined case for non-Markov-type objective function oracles.

Theorem 22 ([66]).

The problem Trac is NP-hard, even if all instances are restricted to a fixed $t \ge 3$ and $\Phi$ is a path value oracle. The NP-hardness persists if the objective function values provided by $\Phi$ are all encoded explicitly.

It turns out that even if the particles are expected to move along straight lines, this a priori knowledge cannot be exploited efficiently (unless P $=$ NP).

Theorem 23 ([66]).

For every fixed $t \ge 3$ and every dimension $d \ge 2$, it is an NP-complete problem to decide whether a solution of Trac exists where all particles move along straight lines.

The proof of Thm. 23 given in [66] relies on the hardness of the particular variant A3ap of 3D-Matching established in [148]. For further results and a discussion of their practical implications see [66].

The previous complexity results show that, even for fixed $t \ge 3$ and even if there is no tomography involved, the coupling becomes hard unless it is of Markov type, i.e., unless it only incorporates information relating not more than two consecutive moments in time (Thm. 22). But even for $t = 2$, which, of course, is of Markov type, the problem is hard if tomography is involved at one point in time (Thm. 21). This means that there is not much room for efficient algorithms or ‘self-suggesting’ polynomial-time heuristics.

There are, however, quite involved heuristics for TomTrac; see [66].