1 Introduction
The following problem arises in pattern matching: given two point sets A and B in the plane, with |A| = m and |B| = n, m ≤ n, and a size parameter k ≤ m, find subsets A' ⊆ A and B' ⊆ B with |A'| = |B'| = k, and a transformation that matches A' and B' as closely as possible; see Figure 1.
We think of A as a collection of features, or interest points, of some pattern that we want to match, bijectively, with similar features in a large image B. Moreover, since the coordinate frames for A and B are not necessarily aligned, we want to translate one of the sets to get the best possible fit.
This problem comes in many variants, depending on the class of permissible transformations and on the similarity measure for the match. Here, we want to match A and B in a one-to-one manner, where the cost of a matching depends on the distances between matched points. Moreover, we only consider translations as permissible transformations, and write B + t for the set B translated by a vector t ∈ R². A feasible solution is given by a translation t and by a matching of size k (in short, a k-matching): a set M of pairs (a, b) ∈ A × B so that any point a ∈ A or b ∈ B occurs in at most one pair. The parameter k is part of the input. We consider the cost of such a solution, for some p ≥ 1:

(1)    cost(t, M) = ( Σ_{(a,b) ∈ M} ||a − (b + t)||^p )^{1/p}.
We will regard p as a fixed constant and will omit it from the notation. Noteworthy special cases arise when p = 1 (sum of distances, minimum-weight Euclidean matching), p = 2 (root-mean-square matching, in short RMS matching), and p = ∞ (bottleneck matching). In (1), we always measure the distances ||a − (b + t)|| by the Euclidean norm. It is not hard to extend the treatment to other norms, but we stick with Euclidean distances for simplicity.
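To make definition (1) concrete, here is a minimal sketch of the cost of a given matching at a given translation. The code and its names (`matching_cost`, the pair-list representation of a matching) are ours, not the paper's; distances are Euclidean, and `p` is the aggregation exponent.

```python
import math

def matching_cost(M, t, p=2.0):
    # M is a list of matched pairs ((ax, ay), (bx, by)); t = (tx, ty).
    # Returns the l_p norm of the vector of edge lengths ||a - (b + t)||.
    lengths = [math.hypot(ax - (bx + t[0]), ay - (by + t[1]))
               for (ax, ay), (bx, by) in M]
    if math.isinf(p):                       # bottleneck matching
        return max(lengths, default=0.0)
    return sum(d ** p for d in lengths) ** (1.0 / p)
```

For p = 1 this is the sum of matched distances, for p = 2 the square root of the sum of squared distances (RMS matching, up to normalization), and for p = ∞ the longest matched edge.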
One important special case occurs when we have a small point set A (the pattern) that we want to locate within a larger set B (the image), with k = m ≤ n. This problem was considered for p = 2 by Rote [11] and in subsequent work [3, 8], under the name RMS partial matching. Another important instance has m = n and k slightly smaller than m. Now, we want to discard a few outliers from each set, to allow for some erroneous data.
For a fixed translation vector t, we define σ(t) to be the cost of the minimum-cost k-matching between A and B + t. We set M_t to be an optimal matching from A to B + t, i.e., σ(t) = cost(t, M_t).
Let ℳ be the set of all k-matchings from A into B. The function σ is the lower envelope (i.e., the pointwise minimum) of the set of functions t ↦ cost(t, M), for M ∈ ℳ. The vertical projection of this lower envelope induces a planar subdivision, called the minimization diagram of σ. It is denoted by D_k(A, B). Each face of D_k(A, B) is a maximal connected set of translations t for which σ(t) is realized by the same matching M. The combinatorial complexity of D_k(A, B) is the number of its faces. We refer to D_k(A, B) as the k-matching diagram of A and B. We are interested in two questions:
 (P1) Compute an optimal translation t*, minimizing σ(t), together with a corresponding optimal matching M_{t*}.

 (P2) What is the combinatorial complexity of D_k(A, B), and how quickly can it be computed?
These questions have been studied, e.g., by Rote [11] and by Ben-Avraham et al. [3]. Two challenging, still open problems are whether the size of D_k(A, B) is polynomial in both m and n, and whether t* and M_{t*} can be computed in polynomial time. These previous works raised the questions only for the case p = 2, but they are open for arbitrary p. There is extensive work on pattern matching and on computing the similarity between two point sets. We refer the reader to [2, 15] for surveys. Here, we confine ourselves to a brief discussion of work directly related to the problem at hand.
Much work has been done on computing a minimum-cost perfect matching in geometric settings. Here, m = n = k. A minimum-cost perfect matching, for any norm, can be found in Õ(n²) time [1, 9, 10].¹ These algorithms are based on the Hungarian algorithm for a minimum-cost maximum matching in a bipartite graph, and are made more efficient than the general technique by using certain efficient geometric data structures. Thus, they also work when the two point sets A and B have different sizes, say, m and n, with m ≤ n. In this case, the running time of the algorithm is Õ(mn).

¹ The Õ-notation hides polylogarithmic factors in m and n, and also polylogarithmic factors in 1/ε, when we only seek a (1+ε)-approximate solution.
Approximation algorithms for the minimum-weight perfect matching in geometric settings have been developed in a series of papers; see, e.g., [13] and the references therein. For the case when the weight of a matching is the sum of the Euclidean lengths of its edges, a near-linear algorithm is known [13]. If the weight is the ℓ_p norm of the vector of Euclidean edge lengths, for some p > 1, then the best known algorithms are given in [12, 14]; this includes RMS matching (p = 2) and the computation of a (1+ε)-approximate optimal matching for general p. These algorithms use the scaling method of Gabow and Tarjan [6], which at each scale computes a minimum-weight matching by finding augmenting paths in phases (see also [7]). When k < m ≤ n, the minimum-weight matching is constructed, using the geometrically enhanced version of the Hungarian algorithm, in k augmenting steps. The case of computing an approximate matching is somewhat trickier. For large enough k, one can show, by adapting the technique in [12], that the running time remains essentially unchanged. For smaller values of k, one can still get a bound depending on k, but we do not treat this case in the paper. It is also much less motivated from the point of view of applications.
Cabello et al. [4] considered optimal shape matching under translations and/or rotations. They considered the more general setting of weighted point sets, where each point of A and B comes with a multiplicity or “weight”. Accordingly, the similarity criterion is the earth mover’s distance, or transportation distance, which measures the minimum amount of work necessary to transport all the weight from A to B, where transporting an amount w of weight over a distance d costs w·d. For the special case of unit weights, this reduces, via the integrality of minimum-cost flows, to one-to-one matching.
We apply several ideas from Cabello et al.’s paper: (1) the use of point-to-point translations to get constant-factor approximations, (2) the selection of a random subset of these translations to get fast Monte Carlo algorithms, and (3) tiling the vicinity of these translations in the parameter space by an ε-grid to get (1+ε)-approximations. We go beyond the results of Cabello et al. in the following aspects.

We give a greedy “disk-eating” algorithm in the space of translations to get an improved deterministic approximation (Theorem 4). This idea could be useful for other problems.

We introduce approximate matching diagrams: Such a diagram is a subdivision of the translation plane together with a matching for each cell. This matching is approximately optimal for every translation in the cell. As a consequence, this diagram provides approximate optimal matchings for all translations. We show that there is an approximate matching diagram of small size, and we describe how to compute it efficiently (Section 2.1).

Less importantly, our results cover a broader class of similarity measures: the lengths of the matching edges can be aggregated in the objective function using any ℓ_p norm, p ≥ 1, whereas Cabello et al. only dealt with the ℓ_1 norm. By identifying the crucial property that lies at the basis of the approximation, namely Lipschitz continuity (Corollary 2), this generalization comes without much additional effort. Our results are also slightly more general because we allow outliers (i.e., k < m), whereas Cabello et al. match the smaller set completely.

By using better data structures, some of our algorithms are more efficient.
We present approximate solutions for (P1) and (P2). They use approximation algorithms for matching between stationary sets as a black box. We write T(m, n) for the time that is needed to compute a (1+ε)-approximate minimum-weight matching of size k between two given (stationary) sets A and B of m and n points in the plane, where the weight is the ℓ_p norm of the vector of Euclidean edge lengths, for p ≥ 1 and for a given ε > 0 (suppressing the dependence on p and ε in the notation). Table 1 summarizes the known running times.
norm | time | reference

exact | | Hungarian method, geometric version [1, 9, 10]
(1+ε)-approximate | | [12]
(1+ε)-approximate | | [14]
We obtain two main results:

We present an algorithm for computing a translation vector t and a matching M between A and B + t such that cost(t, M) ≤ (1+ε)·min_{t'} σ(t').

We present an algorithm for computing a (1+ε)-approximate matching diagram, i.e., a planar subdivision D and a collection of matchings, one matching M_f for each face f of D, such that for each face f of D and for every t ∈ f, cost(t, M_f) ≤ (1+ε)·σ(t).
The paper is organized as follows. We start with simple solutions to (P1) and (P2) with constant-factor approximations (Section 2). We then refine them to obtain (1+ε)-approximate solutions, in Section 3. Finally, we present improved algorithms, which attain the bounds claimed in (i) and (ii), in Section 4. All our statements hold for p = ∞ as well. In some cases, the proofs require a special treatment for this case, but for brevity, we will mostly omit the treatment of p = ∞. As in [4], the techniques used here can probably be extended to also handle rotations and rigid motions. We hope to present this extension in the full version.
2 Simple Constant-Factor Approximations
The following lemma establishes a Lipschitz condition for the cost of a matching of size k.

Let M be a matching of size k, and let t₁, t₂ be two translation vectors. Then, for any p ≥ 1, the cost under the ℓ_p norm satisfies

(2)    cost(t₁, M) ≤ cost(t₂, M) + k^{1/p}·||t₁ − t₂||.
Proof.
Let M = {(a_i, b_i) : i = 1, …, k}, and define two nonnegative k-dimensional vectors u and v by u_i = ||a_i − (b_i + t₁)|| and v_i = ||a_i − (b_i + t₂)||, for i = 1, …, k. By the triangle inequality for the Euclidean norm, we have, for each i, u_i ≤ v_i + ||t₁ − t₂||. Thus, we obtain the componentwise inequality u ≤ v + ||t₁ − t₂||·𝟙, where 𝟙 denotes the k-dimensional vector in which all components are 1. Now,

cost(t₁, M) = ||u||_p ≤ ||v + ||t₁ − t₂||·𝟙||_p ≤ ||v||_p + ||t₁ − t₂||·||𝟙||_p = cost(t₂, M) + k^{1/p}·||t₁ − t₂||,

using the definition (1) of the cost, the fact that the ℓ_p norm is a monotone function of the components whenever they are nonnegative, and the triangle inequality for the ℓ_p norm. ∎
Here is an immediate corollary of Lemma 2: [Lipschitz continuity of the optimal cost] For any two translation vectors t₁, t₂, we have |σ(t₁) − σ(t₂)| ≤ k^{1/p}·||t₁ − t₂||.
Proof.
For the respective optimal matchings M_{t₁} and M_{t₂} between A and B + t₁ and between A and B + t₂, Lemma 2 gives σ(t₁) ≤ cost(t₁, M_{t₂}) ≤ cost(t₂, M_{t₂}) + k^{1/p}·||t₁ − t₂|| = σ(t₂) + k^{1/p}·||t₁ − t₂||, and symmetrically with the roles of t₁ and t₂ interchanged. ∎
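The bound in (2), and hence the corollary, can be sanity-checked numerically. The sketch below is our own illustrative code: it evaluates a fixed random matching at two translations and compares the change in cost against k^{1/p} times the distance between the translations.

```python
import math
import random

def cost(M, t, p):
    # cost (1) of a fixed matching M at translation t, under the l_p norm
    return sum(math.hypot(ax - bx - t[0], ay - by - t[1]) ** p
               for (ax, ay), (bx, by) in M) ** (1.0 / p)

random.seed(1)
k, p = 5, 2.0
M = [((random.random(), random.random()), (random.random(), random.random()))
     for _ in range(k)]
t1, t2 = (0.3, -0.2), (-0.1, 0.4)
gap = abs(cost(M, t1, p) - cost(M, t2, p))
bound = k ** (1.0 / p) * math.hypot(t1[0] - t2[0], t1[1] - t2[1])
```

The Lipschitz inequality guarantees `gap <= bound` for every matching and every pair of translations, not just for this random instance.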
Approximating by point-to-point translations.
As in [4], we consider the set T = {a − b : a ∈ A, b ∈ B} of at most mn point-to-point translations, where some point of B is moved to some point of A. The following simple observation turns out to be very useful: [4, Observation 1] Let t be an arbitrary translation vector, and let t' be the nearest neighbor of t in T. Then k^{1/p}·||t − t'|| ≤ σ(t).
Proof.
By definition, t' is the translation in T with ||t − t'|| = min_{a ∈ A, b ∈ B} ||(a − b) − t||. Thus, for any matching of size k, all summands ||a − (b + t)|| in the definition (1) of the cost are at least ||t − t'||, implying σ(t) ≥ k^{1/p}·||t − t'||. The last conclusion is trivially valid for p = ∞ as well. ∎
[4, Lemma 1] There is a translation t' ∈ T with σ(t') ≤ 2·σ(t*), where t* is an optimal translation.
Proof.
Let t* be an optimal translation and M* a corresponding optimal matching of size k. Choose a pair (a, b) ∈ M* that minimizes ||a − (b + t*)||; since the smallest of the k summands in (1) is at most the p-th power mean, we have ||a − (b + t*)|| ≤ σ(t*)/k^{1/p}. Set t' = a − b ∈ T. Then ||t' − t*|| ≤ σ(t*)/k^{1/p}, and by Lipschitz continuity (Corollary 2), σ(t') ≤ σ(t*) + k^{1/p}·||t' − t*|| ≤ 2·σ(t*). ∎
We remark that for RMS matching (p = 2), the factor 2 can be improved to √2. If we measure the cost under the ℓ₂ norm, there is a translation t' ∈ T with σ(t') ≤ √2·σ(t*).
Proof.
We need a refined version of Lemma 2 for the ℓ₂ norm. For this, let M be a matching of size k, and let t₁, t₂ be two translation vectors. Define two sequences of two-dimensional vectors by u_i = a_i − (b_i + t₁) and v_i = a_i − (b_i + t₂), for i = 1, …, k. Since the Euclidean norm is derived from a scalar product, we have, for each i,

||u_i||² = ||v_i + (t₂ − t₁)||² = ||v_i||² + 2·⟨v_i, t₂ − t₁⟩ + ||t₂ − t₁||².

Summing over i, this gives, under the Euclidean norm,

(3)    cost(t₁, M)² = cost(t₂, M)² + 2·⟨Σ_i v_i, t₂ − t₁⟩ + k·||t₂ − t₁||².
Let now t* be an optimal translation and M* a corresponding matching of size k. If σ(t*) = 0, then t* ∈ T, and the lemma follows. Thus, assume σ(t*) > 0. Let v_i = a_i − (b_i + t*) be the residual vectors of the translated points, for (a_i, b_i) ∈ M*.

Consider the vector v = Σ_i v_i. We claim that v = 0. Otherwise, by taking t_λ = t* + λ·v, for λ > 0, we would get by (3)

cost(t_λ, M*)² = σ(t*)² − 2λ·||v||² + k·λ²·||v||²,

and for small enough λ > 0, the translation vector t_λ would be strictly better than t*, a contradiction. Hence, for every translation t, we have by (3)

cost(t, M*)² = σ(t*)² + k·||t − t*||².

Now consider the translation t' ∈ T for which ||t' − t*|| is minimized, over T. By the observation above, k^{1/2}·||t' − t*|| ≤ σ(t*). Thus, we get

σ(t')² ≤ cost(t', M*)² = σ(t*)² + k·||t' − t*||² ≤ 2·σ(t*)².

Since σ(t') ≤ √2·σ(t*), the lemma follows. ∎
Lemma 2 leads to the following simple algorithm for approximating the optimum matching. Compute T, and iterate over its elements. For each t ∈ T, compute M_t (exactly), and return the matching with the minimum weight, using at most mn exact matching computations.
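For very small instances, this candidate-enumeration algorithm can be prototyped directly. The sketch below is ours and purely illustrative: it finds the best size-k matching by brute force (exponential in k), whereas the paper assumes an efficient matching algorithm as a black box.

```python
import itertools
import math

def best_k_matching_cost(A, B, t, k, p=2.0):
    # brute-force sigma(t): cheapest matching of size k between A and B + t
    Bt = [(bx + t[0], by + t[1]) for bx, by in B]
    best = math.inf
    for rows in itertools.combinations(range(len(A)), k):
        for cols in itertools.permutations(range(len(Bt)), k):
            c = sum(math.hypot(A[i][0] - Bt[j][0], A[i][1] - Bt[j][1]) ** p
                    for i, j in zip(rows, cols))
            best = min(best, c)
    return best ** (1.0 / p)

def best_p2p_translation(A, B, k, p=2.0):
    # try every point-to-point translation a - b, keep the cheapest matching
    cands = {(ax - bx, ay - by) for ax, ay in A for bx, by in B}
    return min(((best_k_matching_cost(A, B, t, k, p), t) for t in cands),
               key=lambda x: x[0])
```

If B is an exact translate of A and k = |A|, one of the candidate translations aligns the two sets perfectly and the returned cost is 0, consistent with the factor-2 guarantee.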
If we are willing to tolerate a slightly larger approximation factor, we can compute, for any ε > 0 and for each t ∈ T, a (1+ε)-approximate matching instead. This approach performs mn approximate matching computations overall.
Let A, B ⊂ R², with |A| = m and |B| = n, m ≤ n, and let k ≤ m be a size parameter. A translation vector t ∈ T can be computed, such that σ(t) ≤ 2·σ(t*), where t* is the optimum translation. Alternatively, for any constant ε > 0, one can compute a translation vector t and a matching M between A and B + t, such that cost(t, M) ≤ 2(1+ε)·σ(t*). For the case of the Euclidean norm, the factors 2 and 2(1+ε) can be improved to √2 and √2(1+ε), respectively.
2.1 An Approximate Matching Diagram
We construct a planar subdivision that approximates the matching diagram up to a factor of 3. This means that, for each face f of the subdivision, there is a single matching M_f (that we provide) so that, for each t ∈ f, we have cost(t, M_f) ≤ 3·σ(t).
We need a lemma that relates the best matching for a given translation to the closest translation in T. Let t be an arbitrary translation, and let t' be its nearest neighbor in T, i.e., the translation in T that minimizes the length of t − t'. Then,

(4)    cost(t, M_{t'}) ≤ 3·σ(t).

(Recall that M_{t'} denotes the optimal matching for t'.)
Proof.
By Lemma 2, cost(t, M_{t'}) ≤ cost(t', M_{t'}) + k^{1/p}·||t − t'|| = σ(t') + k^{1/p}·||t − t'||. By Lipschitz continuity (Corollary 2), σ(t') ≤ σ(t) + k^{1/p}·||t − t'||, and by the observation above, k^{1/p}·||t − t'|| ≤ σ(t). Combining these inequalities yields cost(t, M_{t'}) ≤ 3·σ(t). ∎
Our approximate map is simply the Voronoi diagram of T, where each cell V(t'), for t' ∈ T, is associated with the optimal matching M_{t'} at t'. Correctness follows immediately from Lemma 2.1. Since |T| ≤ mn, we have a diagram of complexity O(mn). For each point t' ∈ T, we can either directly compute an optimal matching between A and B + t' and associate the resulting matching with V(t'), or use the approximation algorithm of [12]. In the former case, we obtain a 3-approximate matching diagram, and in the latter case a 3(1+ε)-approximate matching diagram. We thus conclude the following: Let A, B ⊂ R², with |A| = m and |B| = n, m ≤ n, and let k ≤ m be a size parameter. There is a 3-approximate matching diagram of A and B of size O(mn), and it (and the matchings in each cell) can be computed with at most mn exact matching computations. Alternatively, a 3(1+ε)-approximate matching diagram, for constant ε > 0, of size O(mn) can be computed, using the same planar decomposition, with at most mn approximate matching computations.
For p = 2, there is an alternative, potentially better approximating, construction. For each t' ∈ T, define the function g_{t'}(t) = cost(t, M_{t'})², and let G be the set of these functions. We let D be the minimization diagram of the functions in G. Simple algebraic manipulations, similar to those for Euclidean Voronoi diagrams, show that D is also the minimization diagram of a set of linear functions, namely the functions g_{t'}(t) − k·||t||², for t' ∈ T. The resulting map is a 3-approximate diagram of complexity O(mn). To see this, consider a Voronoi cell V(t') of T. We divide it into the subcells induced by D, each associated with some matching. All these matchings, other than M_{t'}, have smaller weights than the matching computed for t', over their respective subcells. Note that this subdivision is only used for the analysis; the algorithm outputs the original minimization diagram. We emphasize that this construction works only for p = 2, while the Voronoi diagram applies for any p.
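Conceptually, querying the Voronoi-based diagram is just a nearest-neighbor search among the candidate translations, returning the matching stored with the cell. A minimal sketch of this (our names; a linear scan stands in for a real point-location structure, and `matcher` stands in for the fixed-translation matching algorithm):

```python
import math

def build_diagram(A, B, matcher):
    # one cell per point-to-point translation a - b; the cell stores the
    # matching computed by `matcher` for that fixed translation
    return {(ax - bx, ay - by): matcher(A, B, (ax - bx, ay - by))
            for ax, ay in A for bx, by in B}

def query(cells, t):
    # nearest candidate translation; by the lemma above, its stored
    # matching is a constant-factor approximation at t
    site = min(cells, key=lambda s: math.hypot(s[0] - t[0], s[1] - t[1]))
    return site, cells[site]
```

With a real point-location structure over the Voronoi diagram, the query would take logarithmic rather than linear time.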
3 Improved Approximation Algorithms
Computing a (1+ε)-approximation of the optimum matching.
This algorithm uses the same technique that was used by Cabello et al. [4, Section 4.1, Theorem 6] in a slightly different setting. We include the description of this algorithm as a preparation for the approximate matching diagram, and for the improved solutions in the following section.
Let t* be the optimum translation, as above. Our goal is to compute a translation t and a matching M so that cost(t, M) ≤ (1+ε)·σ(t*).
Suppose we know the translation t' ∈ T that minimizes the length of t' − t*. By Lemma 2 and Lipschitz continuity (Corollary 2), σ(t') ≤ 2·σ(t*). Using Theorem 2 with ε = 1/2, we compute a 3-approximation c of σ(t'). This allows us to choose some radius r = c/k^{1/p} with ||t* − t'|| ≤ r ≤ 6·σ(t*)/k^{1/p}. We take the disk of radius r centered at t', and we tile it with the vertices of a square grid of side length proportional to εr. We define G as the set of vertices of all grid cells that lie in the disk or that overlap it at least partially. G contains O(1/ε²) vertices.
We compute, by [12], a (1+ε)-approximate minimum-weight matching at each translation in G and return the one that achieves the smallest weight. Since t* has distance O(εr) from some grid vertex t'' ∈ G, we have, again by Lipschitz continuity (Corollary 2), σ(t'') ≤ σ(t*) + k^{1/p}·O(εr) ≤ (1 + O(ε))·σ(t*).
Since we compute a (1+ε)-approximate matching for each grid point, the best computed matching has cost at most (1+ε)·(1 + O(ε))·σ(t*) = (1 + O(ε))·σ(t*), assuming ε ≤ 1; rescaling ε by a constant gives the claimed bound.
Since we do not know t', we apply this procedure to all translations of T, for a total of O(mn/ε²) approximate matching computations for fixed sets.
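The tiling step can be sketched as follows (our code, with illustrative constants): given a candidate translation and a radius r, it returns the vertices of a square grid of side length εr whose cells overlap the disk of radius r, i.e., O(1/ε²) trial translations per candidate.

```python
import math

def grid_translations(center, r, eps):
    # vertices of a grid of side length eps*r covering the disk of
    # radius r around `center`; every point of the disk is within
    # distance eps*r/sqrt(2) of some returned vertex
    step = eps * r
    n = int(math.ceil(r / step)) + 1
    pts = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if math.hypot(i * step, j * step) <= r + math.sqrt(2) * step:
                pts.append((center[0] + i * step, center[1] + j * step))
    return pts
```

The covering property is what feeds the Lipschitz argument: any translation in the disk, in particular the unknown optimum, is within O(εr) of one of the generated vertices.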
Let A, B ⊂ R², with |A| = m and |B| = n, m ≤ n, and let k ≤ m be a size parameter and ε > 0 a constant. A translation vector t and a matching M of size k between A and B + t can be computed, with O(mn/ε²) approximate matching computations between stationary sets, such that cost(t, M) ≤ (1+ε)·σ(t*).
Cabello et al. [4, Theorem 4] give an algorithm for the weighted problem, which includes the matching problem (with k = m) as a special case. It follows the same technique: it solves O(mn/ε²) problems, each with a fixed translation, but each such problem takes longer than in our case because it uses the earth mover’s distance.
A (1+ε)-approximation of the matching diagram.
We now construct a (1+ε)-approximate matching diagram of A and B by refining the Voronoi diagram of T. Without loss of generality, we assume that 1/ε is a power of 2. We subdivide each Voronoi cell V(t'), for t' ∈ T, into smaller subcells, as follows. Fix t' ∈ T. For i ≥ 0, let S_i be the square of side length proportional to 2^i, centered at t'; set S_{−1} = ∅. For each i, we partition S_i \ S_{i−1} into a uniform grid with side length proportional to ε·2^i. We clip each grid cell to V(t'), i.e., if a grid cell C intersects V(t'), we take C ∩ V(t') as a face of the refined diagram. Let c be the center of the grid cell C. We associate the matching M_c with the face C ∩ V(t'). Finally, each connected component of the part of V(t') outside the largest square becomes a (possibly non-convex) face of the refined diagram. There are at most four such faces, and we associate M_{t'} with each of them.
The above procedure partitions V(t') into cells, and their total complexity is O(1/ε² + |∂V(t')|), where |∂V(t')| is the number of vertices on the boundary of V(t'). We repeat our procedure for all Voronoi cells of T. Since the total complexity of the Voronoi diagram of T is O(mn), the total complexity of the refined subdivision is O(mn/ε²). The refined subdivision is a (1+ε)-approximate matching diagram of A and B.
Proof.
Let be an arbitrary translation vector, and let be the nearest neighbor of in , i.e., . First, consider the case when . Then and is the matching associated with the cell of containing . Hence, using Lemmas 2 and 2, we obtain
Suppose . Then . Therefore, by Corollary 2,
Let be the grid cell inside containing , and let be the center of . Then . By Corollary 2, . Furthermore,
Finally, suppose , for some . Since , we have . Let be the grid cell of containing , and let be its center. Then . Starting with the inequality that was established above, we get
Similarly to the constant-factor approximate matching diagram, we can improve the construction time by working with a suitably reduced value of ε and computing a (1+ε)-approximate optimal matching (instead of the exact matching) for the center of every cell: Let A, B ⊂ R², with |A| = m and |B| = n, m ≤ n, and let k ≤ m be a size parameter. For any ε > 0, one can compute a (1+ε)-approximate matching diagram of A and B, of size O(mn/ε²), using one approximate matching computation per cell.
4 Improved Algorithms
We now present techniques that improve, by a factor of roughly k, both algorithms, for computing an approximate optimal matching and an approximate matching diagram. These algorithms work well when k is large, and they deteriorate when k becomes small. The first algorithm is based on an idea of Cabello et al. [4, Lemma 2]: the best matching contains a substantial number of edges whose length does not exceed the optimum cost by more than a constant factor (cf. Lemma 4). This gives a randomized constant-factor approximation algorithm that requires a reduced number of approximate matching computations between stationary sets in order to succeed with the desired probability (Theorem 4). We proceed to an improved algorithm that computes a constant-factor approximation with the same number of fixed-translation matching calculations deterministically. By tiling the vicinity of each candidate translation by an ε-grid, we then obtain a (1+ε)-approximation (Theorem 4).
Markov’s inequality bounds the number of items in a sample that are substantially above average. We will use the following consequence of it: Let be a matching of size between a (possibly translated) set and a set , with cost . Let . Then the number of pairs for which is at least .
Proof.
For p = ∞, we interpret 1/λ^p as 0, and the result is obvious because ||a − b|| ≤ c for all pairs. For p < ∞, we argue by contradiction. The total number of pairs is k. If there were more than k/λ^p pairs with ||a − b|| > λ·c/k^{1/p}, the total cost would be

c^p = Σ_{(a,b) ∈ M} ||a − b||^p > (k/λ^p)·(λ^p·c^p/k) = c^p,

a contradiction. ∎
Consider the optimal translation t* and the corresponding optimal matching M*. By the lemma, the fraction of the pairs (a, b) ∈ M* that satisfy ||a − (b + t*)|| ≤ λ·σ(t*)/k^{1/p} is at least 1 − 1/λ^p, since |M*| = k. Hence, with probability at least q = (1 − 1/λ^p)·(k/m), a randomly chosen a ∈ A will participate in such a “close” pair of M*. We do not know the partner b ∈ B with (a, b) ∈ M*, so we try all possibilities. That is, we choose a single random point a ∈ A, and we try all n translations a − b, for b ∈ B, returning the minimum-weight partial matching over these translations. With probability at least q, we get, by Lemma 2.1, a matching whose weight is at most a constant factor times σ(t*). The runtime of this procedure is that of n fixed-translation matching computations, or n approximate ones if we compute at each of the above translations a (1+ε)-approximation of the optimal matching. To boost the success probability, we repeat this drawing process s times and obtain a constant-factor approximation to the best matching, with probability at least 1 − (1 − q)^s. By setting s appropriately, we get the following theorem. Let A, B ⊂ R², with |A| = m and |B| = n, m ≤ n, and let k ≤ m and s be parameters. Then, a translation vector t and a matching M of size k between A and B + t can be computed, with s·n fixed-translation matching computations, such that with probability at least 1 − (1 − q)^s, cost(t, M) = O(σ(t*)). ∎ If q is small, the probability 1 − (1 − q)^s is approximately equal to the simpler expression 1 − e^{−qs}.
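The sampling procedure can be sketched as follows. This is our illustrative code: `fixed_cost(A, B, t)` stands in for the black-box fixed-translation matching computation, and all names are hypothetical.

```python
import random

def sampled_translations(A, B, rounds, rng):
    # each round: one random a in A, paired with every b in B
    for _ in range(rounds):
        ax, ay = rng.choice(A)
        for bx, by in B:
            yield (ax - bx, ay - by)

def randomized_match(A, B, fixed_cost, rounds, seed=0):
    # best result over all sampled candidate translations; succeeds with
    # probability about 1 - (1 - q)^rounds, where q is the chance that
    # the sampled point a participates in a short edge of the optimum
    rng = random.Random(seed)
    return min(((fixed_cost(A, B, t), t)
                for t in sampled_translations(A, B, rounds, rng)),
               key=lambda x: x[0])
```

Each round costs n fixed-translation matching computations, which is where the factor-m saving over enumerating all of T comes from.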
Cabello et al. [4] proceeded from this result to a (1+ε)-approximation by tiling the vicinity of each selected translation with an ε-grid [4, Theorem 7]. We will first replace the randomized algorithm by a deterministic one, and apply the grid refinement afterwards.
We now describe a deterministic algorithm for approximating σ(t*) and the corresponding matching. At a high level, the points of T are partitioned into clusters, and one point, not necessarily from T, is chosen to represent each cluster. We will argue that the point in the resulting set R of representatives that is nearest to t* yields a matching whose value is a constant-factor approximation of σ(t*).
Here is the main idea of how we cluster the points in T and construct R, in an incremental manner. In step j, we greedily choose the smallest disk D_j that contains a prescribed number of points of the current set (or all of it, if fewer points remain), add the center of D_j to R, delete the points of D_j from the current set, and repeat. Carmi et al. [5] have described an efficient algorithm for this clustering problem. It preprocesses the points into a data structure (consisting of three compressed quadtrees), so that in each step, the disk D_j can be computed and its points deleted from the data structure efficiently. They also present a faster approximation algorithm for this clustering problem: in step j, instead of computing the smallest enclosing disk D_j, they show that a disk of radius at most twice that of D_j that still contains the prescribed number of points can be computed faster, thereby improving the overall running time. This approximation algorithm is sufficient for our purpose. We next give a more formal description of our method:
At the beginning of step j, we have the current set T_j ⊆ T and the current set R of representatives. Initially, T_1 = T and R = ∅. We preprocess T into the data structure described by Carmi et al. [5]. We perform the following operations in step j: if T_j = ∅, the algorithm terminates. If T_j contains fewer than the prescribed number of points, we compute the smallest disk D_j containing T_j. Otherwise, let r_j be the radius of the smallest disk that contains at least the prescribed number of points of T_j. Using the algorithm in [5], we compute a disk D_j of radius at most 2·r_j containing at least that many points of T_j. We add the center of D_j to R, and we set T_{j+1} = T_j \ D_j. We remove the points of T_j ∩ D_j from the data structure, as described in [5]. Let 𝒟 be the set of disks computed by the above procedure. By construction, the disks of 𝒟 cover T, every disk (except possibly the last) swallows at least the prescribed number of points, and |R| = |𝒟|. The following two lemmas establish the correctness of our method. Let t be a translation vector, and let t' be its nearest neighbor in R. Then σ(t') = O(σ(t)).
Proof.
Let D be the disk of radius ρ = λ·σ(t)/k^{1/p} centered at t, for λ = 2^{1/p}. By Lemma 4 with this λ, the disk D contains at least k/2 points of T. Let D_j be the first disk chosen by the above procedure that contains a point u of T ∩ D; at step j, all points of T ∩ D are still present. We must have radius(D_j) ≤ 2ρ, because the smallest disk that contains the prescribed number of points of T_j is not larger than D. Hence, the center c_j ∈ R of D_j satisfies ||c_j − t|| ≤ ||c_j − u|| + ||u − t|| ≤ 2ρ + ρ = 3ρ, and by Lipschitz continuity (Corollary 2), σ(t') ≤ σ(c_j) + 0 ≤ σ(t) + 3·k^{1/p}·ρ = (1 + 3·2^{1/p})·σ(t) = O(σ(t)). ∎
Proof.
We fix a constant ε₀ > 0. We compute a (1+ε₀)-approximate matching between A and B + t', for every t' ∈ R, and choose the best among them. This will give a constant-factor approximation of the minimum-cost matching under translation. We can extend this algorithm to yield a (1+ε)-approximation algorithm following the same procedure as in Section 3: we draw a disk of a suitable radius around each point of R. We draw a uniform grid of cell size proportional to ε times this radius and look at all vertices of grid cells that overlap one of these disks at least partially. We compute a (1+ε)-approximation for the best matching of size k between A and B + t for each of the grid points t under consideration, and we choose the best matching among them. Putting everything together, we obtain the following:
Let A, B ⊂ R², with |A| = m and |B| = n, m ≤ n, and let k ≤ m and ε > 0 be parameters. Then, a translation vector t and a matching M of size k between A and B + t can be computed, with O((mn/k)·(1/ε²)) fixed-translation matching computations, such that cost(t, M) ≤ (1+ε)·σ(t*). ∎
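The greedy clustering at the heart of the deterministic algorithm can be prototyped as below. This is our own quadratic-time illustration: restricting disk centers to input points gives a factor-2 approximation of the smallest disk covering s points, in the spirit of the approximate variant of Carmi et al. [5], but without their quadtree machinery or running-time guarantees.

```python
import math

def disk_eating(points, s):
    # greedily remove groups of s nearby points; return one representative
    # (the chosen disk center) per group
    reps = []
    remaining = list(points)
    while remaining:
        s_cur = min(s, len(remaining))
        best_c, best_r = None, math.inf
        for c in remaining:
            dists = sorted(math.hypot(c[0] - q[0], c[1] - q[1])
                           for q in remaining)
            r = dists[s_cur - 1]      # radius reaching the s-th nearest point
            if r < best_r:
                best_c, best_r = c, r
        reps.append(best_c)
        remaining = [q for q in remaining
                     if math.hypot(best_c[0] - q[0], best_c[1] - q[1]) > best_r]
    return reps
```

Each iteration deletes at least s_cur points, so the number of representatives is roughly the number of points divided by s, which is what shrinks the candidate set from size mn to about mn/k.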
We show that the Voronoi diagram of R is indeed an approximate matching diagram of A and B. This is analogous to Section 2.1 (Lemma 2.1). Let t be a translation vector, and let t' be its nearest neighbor in R. Then, cost(t, M_{t'}) = O(σ(t)).
Proof.
The combinatorial complexity of the Voronoi diagram of R is O(mn/k). We can now construct a (1+ε)-approximate matching diagram by refining each Voronoi cell of this diagram, as in Section 3, but the constants have to be chosen differently. The diagram has O((mn/k)·(1/ε²)) cells, and we need one approximate matching computation per cell. We obtain the following:
Let A, B ⊂ R², with |A| = m and |B| = n, m ≤ n, and let k ≤ m and ε > 0 be parameters. There exists a (1+ε)-approximate matching diagram of A and B of size O((mn/k)·(1/ε²)), and it can be computed with one approximate matching computation per cell. ∎
For the case when k is a constant fraction of m, we can show that the bound in Theorem 4 on the size of the diagram is tight in the worst case in terms of m, n, and k (but not of ε): if A is a unit grid of size m and B is a unit grid of size n, then there are Ω(mn/k) translation vectors at which A and B are perfectly aligned and have at least k points in common. Thus, any approximate matching diagram of A and B needs to have Ω(mn/k) distinct faces.
References
 [1] Pankaj K. Agarwal, Alon Efrat, and Micha Sharir. Vertical decomposition of shallow levels in 3-dimensional arrangements and its applications. SIAM J. Comput., 29(3):912–953, 1999.

 [2] Helmut Alt and Leonidas J. Guibas. Discrete geometric shapes: Matching, interpolation, and approximation. In J.-R. Sack and J. Urrutia, editors, Handbook of Comput. Geom., pages 121–153. Elsevier, Amsterdam, 1999.
 [3] Rinat Ben-Avraham, Matthias Henze, Rafel Jaume, Balázs Keszegh, Orit E. Raz, Micha Sharir, and Igor Tubis. Partial-matching RMS distance under translation: Combinatorics and algorithms. Algorithmica, 80(8):2400–2421, 2018.
 [4] Sergio Cabello, Panos Giannopoulos, Christian Knauer, and Günter Rote. Matching point sets with respect to the Earth Mover’s Distance. Comput. Geom., 39(2):118–133, 2008.
 [5] Paz Carmi, Shlomi Dolev, Sariel Har-Peled, Matthew J. Katz, and Michael Segal. Geographic quorum system approximations. Algorithmica, 41(4):233–244, 2005.
 [6] Harold N. Gabow and Robert Endre Tarjan. Faster scaling algorithms for network problems. SIAM J. Comput., 18(5):1013–1036, 1989.
 [7] Andrew V. Goldberg, Sagi Hed, Haim Kaplan, and Robert E. Tarjan. Minimum-cost flows in unit-capacity networks. Theory Comput. Syst., 61(4):987–1010, 2017.
 [8] Matthias Henze, Rafel Jaume, and Balázs Keszegh. On the complexity of the partial least-squares matching Voronoi diagram. In Proc. 29th European Workshop Comput. Geom. (EWCG), pages 193–196, 2013.
 [9] Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. Dynamic planar Voronoi diagrams for general distance functions and their algorithmic applications. In Proc. 28th Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 2495–2504, 2017.
 [10] Jeff M. Phillips and Pankaj K. Agarwal. On bipartite matching under the RMS distance. In Proc. 18th Canad. Conf. Comput. Geom. (CCCG), pages 143–146, 2006.
 [11] Günter Rote. Partial least-squares point matching under translations. In Proc. 26th European Workshop Comput. Geom. (EWCG), pages 249–251, 2010.
 [12] R. Sharathkumar and Pankaj K. Agarwal. Algorithms for the transportation problem in geometric settings. In Proc. 23rd Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 306–317, 2012.
 [13] R. Sharathkumar and Pankaj K. Agarwal. A near-linear time approximation algorithm for geometric bipartite matching. In Proc. 44th Annu. ACM Sympos. Theory Comput. (STOC), pages 385–394, 2012.
 [14] Kasturi R. Varadarajan and Pankaj K. Agarwal. Approximation algorithms for bipartite and non-bipartite matching in the plane. In Proc. 10th Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 805–814, 1999.
 [15] Remco C. Veltkamp. Shape matching: Similarity measures and algorithms. In Proc. Intl. Conf. Shape Modeling and Applications, pages 188–197, 2001.