# An adaptive iterative/subdivision hybrid algorithm for curve/curve intersection

The behavior of the iterative/subdivision hybrid algorithm for curve/curve intersection proposed in [21] depends on the choice of the domain for its convergence test. Using either too small or too large a test domain may cause the test to fail to detect cases where Newton's method in fact converges to a solution, which results in unnecessary additional subdivisions and consequently more computation time. We propose a modification to the algorithm that adaptively adjusts the test domain size according to what happens during the test of the parent region. This is in contrast to the original algorithm, whose test domain is always a fixed multiple of the input domain under consideration. Computational results show that the proposed algorithm is slightly more efficient than the original algorithm.


## 1 Introduction

We consider the problem of finding all of the intersection points between two space curves, which is typically known as curve/curve intersection (CCI). This problem is fundamental in geometric modeling and design, computational geometry, and manufacturing [8, 15], with applications in robotics, collision avoidance, manufacturing simulation, and scientific visualization, for example. CCI is also a basis for solving more general intersection problems, such as finding intersections between two surfaces.

There are many different approaches to solving CCI, such as subdivision methods [25, 18, 9, 16, 19], iterative methods [7, 6], and algorithms based on orthogonal projections [12]. Srijuntongsiri [24] proposes an iterative/subdivision hybrid algorithm for CCI that uses a convergence test based on Kantorovich's theorem. The algorithm, which is named the Kantorovich-Test Subdivision algorithm for Curve/Curve Intersection (KTS-CC), subdivides the parametric domains of the problem recursively while performing two tests on the subdomains until it is certain that no additional subdivisions are required. The first test uses a bounding volume of a subdomain to determine whether the subdomain contains no solutions. The second test is the convergence test based on Kantorovich's theorem, which tells us whether Newton's method converges for the initial point in the current subdomain, and whether it converges quadratically. Once a subdomain passes the second test, KTS-CC invokes Newton's method to locate a solution, with quadratic convergence ensured. By appropriately combining the two tests and the subdivision scheme, the algorithm is guaranteed to find all of the intersection points between the two curves. The same idea has been used for line/surface intersection [22] and for solving a system of polynomial equations with one degree of freedom [23].

The efficiency of KTS-CC depends on the total number of subdomains that need to be considered. In particular, its convergence test requires choosing (or rather guessing) a test domain. Choosing either too small or too large a test domain may cause the current subdomain to fail the test although Newton’s method in fact converges quadratically to a solution, which results in unnecessary additional subdivisions. KTS-CC always chooses the test domain to be 1.5 times the size of the current subdomain.

In this article, we propose a modification to KTS-CC that adaptively adjusts the size of the test domain according to what happens during the test of the parent subdomain. In other words, if it appears that the parent's test fails because the test domain is too large, we decrease the size for its children. On the other hand, if it appears that the parent's test fails because the domain is too small, we increase the size for its children. We describe the adaptive algorithm in detail in Section 4. Experimental results comparing the efficiency of the adaptive algorithm to the original KTS-CC are in Section 6.

## 2 Curve/curve intersection problem

Curve/curve intersection (CCI) is the problem of finding all of the intersection points between two space curves. Among many conventional representations of space curves, we choose to use Bézier curves here. Specifically, let $Z_{i,m}(t)$ denote the Bernstein polynomials

$$Z_{i,m}(t) = \frac{m!}{i!(m-i)!}(1-t)^{m-i}t^i.$$

The two curves are represented by

$$P(u) = \sum_{i=0}^{m} p_i Z_{i,m}(u), \qquad u \in [0,1],$$

where $p_i \in \mathbb{R}^3$ denote the coefficients, also known as the control points, and

$$Q(v) = \sum_{j=0}^{n} q_j Z_{j,n}(v), \qquad v \in [0,1],$$

where $q_j \in \mathbb{R}^3$ are the control points. The intersections between the two curves $P$ and $Q$ are the solutions of

$$P(u) = Q(v), \qquad (1)$$

which is equivalent to

$$f(x) \equiv f(u,v) \equiv \sum_{i=0}^{m}\sum_{j=0}^{n} b_{ij} Z_{i,m}(u) Z_{j,n}(v) = 0, \qquad (2)$$

where $x \equiv (u,v)^T$ and $b_{ij} = p_i - q_j$ (since the Bernstein polynomials of each degree sum to one). The system (2) of three polynomials in two unknowns is the one our algorithm operates on.
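As a concrete illustration of the representation above, the following Python sketch (the paper's own implementation is in Matlab; the function names here are ours) evaluates the Bernstein basis and the residual $f(u,v) = P(u) - Q(v)$ for two degree-1 Bézier curves that cross at their parameter midpoints:

```python
import math

def bernstein(i, n, t):
    """Z_{i,n}(t) = C(n,i) (1-t)^(n-i) t^i."""
    return math.comb(n, i) * (1 - t) ** (n - i) * t ** i

def curve_point(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl (list of 3-tuples) at t."""
    n = len(ctrl) - 1
    return tuple(sum(bernstein(i, n, t) * ctrl[i][k] for i in range(n + 1))
                 for k in range(3))

def f(p_ctrl, q_ctrl, u, v):
    """The residual f(u, v) = P(u) - Q(v) whose zeros are the intersections."""
    P = curve_point(p_ctrl, u)
    Q = curve_point(q_ctrl, v)
    return tuple(P[k] - Q[k] for k in range(3))

# Two straight-line "curves" (degree-1 Bezier) that cross at (0.5, 0.5, 0):
p = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
q = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
print(f(p, q, 0.5, 0.5))  # -> (0.0, 0.0, 0.0)
```

The three components of $f$ are the three scalar equations the algorithm tests on each subdomain.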

## 3 The theorem of Kantorovich

Denote the closed ball centered at $x$ with radius $r$, $r > 0$, by

$$\bar{B}(x,r) = \{\, y \in \mathbb{R}^n : \|y - x\| \le r \,\},$$

and denote $B(x,r)$ as the interior of $\bar{B}(x,r)$. Note that $x$ here refers to an arbitrary point in $\mathbb{R}^n$ and is different from $x \equiv (u,v)^T$ in the previous section. Kantorovich's theorem in affine invariant form [4, 11] is

###### Theorem 3.1 (Kantorovich, affine invariant form [4, 11])

Let $f : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be differentiable in the open convex set $D$. Assume that for some point $x_0 \in D$, the Jacobian $f'(x_0)$ is invertible. Let $\eta$ be an upper bound

$$\left\| f'(x_0)^{-1} f(x_0) \right\| \le \eta.$$

Let there be a Lipschitz constant $\omega$ for $f'(x_0)^{-1} f'$ such that

$$\left\| f'(x_0)^{-1} \left( f'(x) - f'(y) \right) \right\| \le \omega \cdot \|x - y\| \quad \text{for all } x, y \in D.$$

If $h \equiv \eta\omega \le 1/2$ and $\bar{B}(x_0, \rho_-) \subseteq D$, where

$$\rho_- = \frac{1 - \sqrt{1 - 2h}}{\omega},$$

then $f$ has a zero $x^*$ in $\bar{B}(x_0, \rho_-)$. Moreover, this zero is the unique zero of $f$ in $\bar{B}(x_0, \rho_-) \cup \left( D \cap B(x_0, \rho_+) \right)$, where

$$\rho_+ = \frac{1 + \sqrt{1 - 2h}}{\omega},$$

and the Newton iterates $x_k$ with

$$x_{k+1} = x_k - f'(x_k)^{-1} f(x_k)$$

are well-defined, remain in $\bar{B}(x_0, \rho_-)$, and converge to $x^*$. In addition,

$$\left\| x^* - x_k \right\| \le \frac{\eta}{h} \cdot \frac{\left( 1 - \sqrt{1 - 2h} \right)^{2^k}}{2^k}, \qquad k = 0, 1, 2, \ldots \qquad (3)$$

The point $x_0$ is said to be a fast starting point if the sequence of Newton iterates starting from it converges to a solution and (3) is satisfied, which implies quadratic convergence of the iterates starting from $x_0$ [22]. Kantorovich's theorem also holds for complex functions [5].
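To make the quantities in Theorem 3.1 concrete, the following sketch checks them numerically for the scalar equation $x^2 - 2 = 0$ (a hypothetical one-dimensional example of ours; the paper applies the theorem to the two-dimensional systems of Section 4):

```python
import math

# Kantorovich's theorem checked numerically for f(x) = x^2 - 2 at x0 = 1.5.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

x0 = 1.5
eta = abs(f(x0) / df(x0))      # ||f'(x0)^-1 f(x0)|| <= eta
omega = 2.0 / abs(df(x0))      # Lipschitz constant of f'(x0)^-1 f'
h = eta * omega
assert h <= 0.5                # the Kantorovich condition holds

rho_minus = (1.0 - math.sqrt(1.0 - 2.0 * h)) / omega

# Newton iterates stay in the closed ball B(x0, rho_minus) and converge
# to the zero sqrt(2):
x = x0
for _ in range(6):
    x = x - f(x) / df(x)
    assert abs(x - x0) <= rho_minus + 1e-12

print(abs(x - math.sqrt(2.0)) < 1e-12)  # -> True
```

For this quadratic, $\rho_-$ exactly equals the distance from $x_0$ to the root, illustrating that the bound is tight.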

## 4 The adaptive Kantorovich-test subdivision algorithm for curve/curve intersection

This section introduces the adaptive Kantorovich-test subdivision algorithm for curve/curve intersection (AKTS-CC). As we are interested in solutions of (2) within the square $[0,1]^2$, and a closed ball defined in the infinity norm is a square, AKTS-CC uses the infinity norm for all of its norm computations. It also maintains a list of explored regions, defined as parts of the domain guaranteed by Kantorovich's theorem to contain only the zeros that have already been found.

Let $x^{(l)}$ denote the $l$th coordinate of the point $x$ in three-dimensional space. The Kantorovich test on a square $S$ is defined as the application of Kantorovich's theorem on the point $x_0$, the center of $S$, to the functions $f_{ij} \equiv (f^{(i)}, f^{(j)})^T$ for each pair $1 \le i < j \le 3$, using $D_{ij}$ as the set $D$ in the statement of the theorem. For $\omega$, a more easily computed upper bound $\hat{\omega}$, where $\hat{\omega}$ is defined by (5) below, is used instead as the minimal $\omega$ is too expensive to compute. The square $S$ passes the Kantorovich test if there exists a pair $(i,j)$ satisfying $h \equiv \eta\hat{\omega} \le 1/2$ and $\bar{B}(x_0, \rho_-) \subseteq D_{ij}$, where $D_{ij}$ is the chosen test domain for $f_{ij}$. We describe our adaptive scheme for determining $D_{ij}$ for each $S$ below as it involves the remaining parts of the algorithm.

If $S$ passes the Kantorovich test, $x_0$ is a fast starting point for $f_{ij}$ for the particular $(i,j)$ satisfying the conditions of the Kantorovich test, which means Newton's method starting from $x_0$ converges quadratically to a zero $x^*$ of $f_{ij}$. We therefore perform Newton's method from $x_0$ to find $x^*$ and then evaluate the remaining coordinate function of $f$ at $x^*$ to see if $x^*$ is also a zero of $f$. If it is, we have that

$$X_E = \bar{B}(x_0, \rho_-) \cup \bar{B}(x_0, \rho_+), \qquad (4)$$

where $\rho_-$ and $\rho_+$ are as in the statement of Kantorovich's theorem, is an explored region associated with $x^*$. Note that $X_E$ is defined in relation to $f_{ij}$ although it is also an explored region for $f$.

The other test used by our algorithm is the exclusion test. For a given square $S$, let $\hat{f}_S$ be the polynomial in Bernstein basis that reparametrizes $f$ over $S$. In other words, $\hat{f}_S = f \circ l_S$, where $l_S$ is a (uniquely determined) composition of a dilation and a translation such that $l_S : [0,1]^2 \to S$ is bijective (refer to [21] for an example of an efficient reparametrization algorithm). The square $S$ passes the exclusion test if the convex hull of the control points of $\hat{f}_S$ excludes the origin. It is a well-known property of the Bernstein–Bézier representation of a polynomial that the polynomial lies in the convex hull of its control points over $[0,1]^2$, its natural parametric domain. Thus, if the hull of the coefficients of a polynomial system excludes the origin, the system has no solutions in the parametric domain. Rather than explicitly computing the convex hull, we can check whether the hull excludes the origin by solving a low-dimensional linear programming problem. Megiddo [14] shows that low-dimensional linear programming problems can be solved in linear time (i.e., linear in the number of control points), although we have not used Megiddo's algorithm.
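The full exclusion test solves a small linear program; a cheaper sufficient (but not necessary) version, shown here as an illustrative sketch of ours, checks whether some single coordinate of every control point has a uniform sign, in which case the hull cannot contain the origin:

```python
def excludes_origin_coordinatewise(ctrl_pts):
    """Sufficient (not necessary) exclusion test: if, in some coordinate,
    every control point is strictly positive or strictly negative, the
    convex hull of the points cannot contain the origin."""
    dim = len(ctrl_pts[0])
    for k in range(dim):
        vals = [p[k] for p in ctrl_pts]
        if min(vals) > 0 or max(vals) < 0:
            return True
    return False  # inconclusive; fall back to the LP-based test

# Hull lies strictly to the right of the origin in the first coordinate:
print(excludes_origin_coordinatewise([(1, -2, 0), (3, 5, -1)]))   # -> True
# Points straddle the origin in every coordinate: inconclusive.
print(excludes_origin_coordinatewise([(-1, -1, -1), (1, 1, 1)]))  # -> False
```

A `False` answer here only means the cheap test is inconclusive; the hull may still exclude the origin, which is what the linear program decides exactly.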

If a square $S$ fails the exclusion test, we then subdivide $S$ along both axes into four equal smaller squares $S_1$, $S_2$, $S_3$, and $S_4$ to be further investigated regardless of whether $S$ passes the Kantorovich test (since there may be more than one zero in $S$ and passing the Kantorovich test guarantees converging to only one of them). We choose the test domains for the subsquares $S_t$ depending on how $S$ does on the Kantorovich test. Specifically, suppose $D_{ij} = \theta_{ij} S$, where $\theta_{ij} S$ denotes the square with the same center as $S$ whose width is $\theta_{ij}$ times the width of $S$. Let $\theta_{ij}^t$ denote the multiplier defining the test domain for $S_t$ with respect to $f_{ij}$, where $t = 1, 2, 3, 4$. If $S$ passes the test (for any $(i,j)$), we set $\theta_{ij}^t = \theta_{ij}$ for all $t$ and all pairs $(i,j)$. If $S$ fails the test for $f_{ij}$ by having $h \le 1/2$ but $\bar{B}(x_0, \rho_-) \not\subseteq D_{ij}$, we set $\theta_{ij}^t = c\theta_{ij}$ for all $t$, where $c > 1$ is a fixed constant. The rationale is that it is possible that $x_0$ is in fact a fast starting point but we choose $D_{ij}$ too small for the test. Specifically, although increasing $D_{ij}$ may increase $\hat{\omega}$, it may not cause $h$ to be greater than $1/2$ but may make $\bar{B}(x_0, \rho_-) \subseteq D_{ij}$. On the other hand, if $S$ fails the test because $h > 1/2$, we set $\theta_{ij}^t = \max(\theta_{ij}/c, 1)$ for all $t$. Similarly, $x_0$ may be a fast starting point but we choose $D_{ij}$ too large for the test (since decreasing the size of $D_{ij}$ may decrease $\hat{\omega}$ enough to satisfy $h \le 1/2$). We do not allow $\theta_{ij}^t$ to be smaller than $1$ as $D_{ij}$ should not be smaller than $S$ (otherwise, the Kantorovich test would not be able to detect a zero near a border of $S$). For comparison, the original (nonadaptive) KTS-CC uses $\theta_{ij} = 1.5$ for all $S$'s and all $(i,j)$'s.
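The three-case update above can be sketched as a small rule (the names `theta` and `c`, the boolean encoding of the test outcome, and the reading that a passing square keeps its multiplier are ours, not the paper's):

```python
def child_theta(theta, c, h, ball_fits, theta_min=1.0):
    """Choose the test-domain multiplier for the four child squares, given
    the parent's multiplier `theta` and the outcome of its Kantorovich test:
      - h <= 1/2 and ball fits  -> test passed: keep the current multiplier
      - h <= 1/2, ball too big  -> domain likely too small: grow by c
      - h  > 1/2                -> domain likely too large: shrink by c,
                                   but never below theta_min (= 1)."""
    if h <= 0.5 and ball_fits:
        return theta
    if h <= 0.5:                        # failed only the containment condition
        return c * theta
    return max(theta / c, theta_min)    # failed the h <= 1/2 condition

print(child_theta(1.5, 2.0, 0.3, False))  # -> 3.0 (grow)
print(child_theta(1.5, 2.0, 0.8, True))   # -> 1.0 (shrink, clipped at 1)
```

The clipping at 1 mirrors the requirement that the test domain never be smaller than the square itself.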

Lastly, our algorithm maintains a first-in-first-out queue $Q$ of squares in $[0,1]^2$, together with their three test domains, that still need investigation. The details of AKTS-CC are as follows.

Algorithm AKTS-CC:

• Initially, let the queue $Q$ contain only the square $[0,1]^2$ with its three test domains given by $\theta_{ij} = 1.5$ ($1 \le i < j \le 3$). Set the list of explored regions to be empty.

• While $Q$ is nonempty,

1. Let $S$ be the square at the front of the queue $Q$. Remove $S$ from $Q$.

2. If $S$ is not entirely contained in any of the explored regions found so far,

• Perform the exclusion test on $S$.

• If $S$ fails the exclusion test,

1. Perform the Kantorovich test on $S$.

2. If $S$ passes the Kantorovich test,

1. Perform Newton's method starting from $x_0$ to find a zero $x^*$ of $f_{ij}$, where $(i,j)$ is the pair with which $S$ passes the Kantorovich test.

2. If $f(x^*) = 0$ and $x^*$ does not lie in any explored region found so far (i.e., $x^*$ is a zero of $f$ and has not been found previously),

• Record $x^*$ as one of the intersections.

• Compute the new explored region associated with $x^*$ according to (4).

• Add this explored region to the list of explored regions.

3. Subdivide $S$ along both axes into four equal smaller squares. Add these squares to the end of $Q$ together with their test domains defined as described above.

## 5 Computation of Lipschitz constants

The Kantorovich test requires the Lipschitz constant $\omega$ for $g(x) \equiv f'_{ij}(x_0)^{-1} f'_{ij}(x)$ over $D_{ij}$. This constant is typically obtained from an upper bound of the norm of

$$h(x) \equiv g'(x) = \left[ \frac{\partial^2 \left( f'_{ij}(x_0)^{-1} f_{ij} \right)_{\iota_1}(x)}{\partial x_{\iota_2} \, \partial x_{\iota_3}} \right]$$

for all $x \in D_{ij}$. Let

$$h_{\iota_1\iota_2\iota_3}(x) = \frac{\partial^2 \left( f'_{ij}(x_0)^{-1} f_{ij} \right)_{\iota_1}(x)}{\partial x_{\iota_2} \, \partial x_{\iota_3}}$$

denote the $(\iota_1, \iota_2, \iota_3)$-entry of $h(x)$. Let $\hat{h}$ be the polynomial in Bernstein basis that reparametrizes $h$ over $D_{ij}$, that is, $\hat{h} = h \circ l_{D_{ij}}$, with $l_{D_{ij}}$ defined analogously to $l_S$ in the previous section. We have

$$\max_{x \in D_{ij}} \|h(x)\| = \max_{x \in [0,1]^2} \left\| \hat{h}(x) \right\| = \max_{x \in [0,1]^2} \max_{\|y\| = 1} \left\| \hat{h}(x) y \right\| \le \max_{x \in [0,1]^2} \max_{\iota_1} \sum_{\iota_2=1}^{2} \sum_{\iota_3=1}^{2} \left| \hat{h}_{\iota_1\iota_2\iota_3}(x) \right| \le 4 \max_{\iota_1,\iota_2,\iota_3} \max_{x \in [0,1]^2} \left| \hat{h}_{\iota_1\iota_2\iota_3}(x) \right|.$$

Note that each entry of $\hat{h}$ can be written in Bernstein basis efficiently by using the identity

$$\frac{dZ_{i,n}(t)}{dt} = n \left( Z_{i-1,n-1}(t) - Z_{i,n-1}(t) \right).$$

For this reason, an upper bound of $\max_{x \in D_{ij}} \|h(x)\|$ can be computed from the maximum absolute value of the control points of $\hat{h}$ when written in Bernstein basis. Let $\hat{\omega}$ denote the Lipschitz constant computed in this manner, that is,

$$\hat{\omega} \equiv 4 \max_{\iota_1,\iota_2,\iota_3,\zeta_1,\zeta_2} \left| \hat{h}_{\iota_1\iota_2\iota_3,\zeta_1,\zeta_2} \right|, \qquad (5)$$

where $\hat{h}_{\iota_1\iota_2\iota_3,\zeta_1,\zeta_2}$ are the control points of $\hat{h}_{\iota_1\iota_2\iota_3}$.
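The two ingredients above, the derivative identity and the control-point bound, can be verified numerically; the sketch below uses sample values of our own choosing:

```python
import math

def Z(i, n, t):
    """Bernstein basis Z_{i,n}(t); zero outside 0 <= i <= n."""
    if i < 0 or i > n:
        return 0.0
    return math.comb(n, i) * (1 - t) ** (n - i) * t ** i

# Check d/dt Z_{i,n} = n (Z_{i-1,n-1} - Z_{i,n-1}) against a central
# finite difference at a sample point.
n, i, t, eps = 4, 2, 0.3, 1e-6
fd = (Z(i, n, t + eps) - Z(i, n, t - eps)) / (2 * eps)
identity = n * (Z(i - 1, n - 1, t) - Z(i, n - 1, t))
assert abs(fd - identity) < 1e-6

# Convex-hull bound used for omega-hat in (5): |p(t)| <= max_i |b_i| on [0,1],
# since the basis functions are nonnegative and sum to one.
b = [0.5, -2.0, 1.0, 0.25, -0.75]   # hypothetical control points
p = lambda t: sum(b[k] * Z(k, len(b) - 1, t) for k in range(len(b)))
print(all(abs(p(k / 100)) <= max(abs(c) for c in b) for k in range(101)))  # -> True
```

The bound is generally not tight, which is acceptable since the Kantorovich test only needs some valid Lipschitz constant.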

## 6 Computational results

We implement the proposed adaptive algorithm AKTS-CC in Matlab and compare its efficiency, with varying values of the constant $c$, against KTS-CC on a number of test problems. Our implementation uses the reparametrization algorithm presented in [21]. The experiments use the same tolerance for the Newton's method parts of both algorithms. The test problems and their intersections computed by AKTS-CC are shown in Figures 1 to 8. Table 1 compares the efficiency of AKTS-CC and KTS-CC on the eight test problems. It reports the degrees of the two curves and the total number of squares examined by AKTS-CC, for different values of $c$, and by KTS-CC during the entire computations. Since the number of operations per While-loop iteration in AKTS-CC is only a small constant larger than in KTS-CC, the efficiency of the two algorithms depends on the total number of iterations, which is the same as the number of squares examined.

The results show that AKTS-CC is slightly more efficient than KTS-CC in three of the eight test problems and is as efficient as KTS-CC in the remaining five. Additionally, AKTS-CC saves just four squares in those cases. On the other hand, as AKTS-CC differs from KTS-CC only in the choices of the test domains for the Kantorovich test, which matter only when the current square contains a zero, we do not expect any improvement on squares that do not contain zeros.

## 7 Conclusion

We propose Algorithm AKTS-CC, a modification of KTS-CC that adaptively determines the test domains for the Kantorovich test based on the result of the same test on the parent square. The test domains for the three coordinate-pair functions $f_{ij}$ of the same square are determined independently. The algorithm is implemented in Matlab and is shown to be marginally more efficient than KTS-CC in some test cases and equally efficient in others. Future investigations of different adaptive schemes may result in larger improvements in the efficiency of the algorithm.

## Acknowledgement

The author gratefully acknowledges the financial support provided by Thammasat University Research Fund under the TU Research Scholar, Contract No. TP 2/24/2560.

## References

• [1] N. Andrei (2016) An adaptive conjugate gradient algorithm for large-scale unconstrained optimization. Journal of Computational and Applied Mathematics 292, pp. 83–91.
• [2] J. Berntsen, T. O. Espelid, and T. Sørevik (1991) On the subdivision strategy in adaptive quadrature algorithms. Journal of Computational and Applied Mathematics 35 (1), pp. 119–132.
• [3] E. C. Carson (2018) The adaptive s-step conjugate gradient method. SIAM Journal on Matrix Analysis and Applications 39 (3), pp. 1318–1338.
• [4] P. Deuflhard and G. Heindl (1980) Affine invariant convergence theorems for Newton's method and extensions to related methods. SIAM J. Numer. Anal. 16, pp. 1–10.
• [5] R. T. Farouki, B. K. Kuspa, C. Manni, and A. Sestini (2001) Efficient solution of the complex quadratic tridiagonal system for PH quintic splines. Numerical Algorithms 27, pp. 35–60.
• [6] R. N. Hawat and L. A. Piegl (1998) Genetic algorithm approach to curve-curve intersection. Mathematical Engineering in Industry 7 (2), pp. 269–282.
• [7] M. T. Heath (2005) Scientific Computing: An Introductory Survey. McGraw-Hill.
• [8] J. Hoschek and D. Lasser (1993) Fundamentals of Computer Aided Geometric Design. A. K. Peters, Wellesley, MA. Translated by L. L. Schumaker.
• [9] E. G. Houghton, R. F. Emnett, J. D. Factor, and C. L. Sabharwal (1985) Implementation of a divide-and-conquer method for intersection of parametric surfaces. Computer Aided Geometric Design 2 (1–3), pp. 173–183.
• [10] L. Ingber (1989) Very fast simulated re-annealing. Mathematical and Computer Modelling 12 (8), pp. 967–973.
• [11] L. Kantorovich (1948) On Newton's method for functional equations (Russian). Dokl. Akad. Nauk SSSR 59, pp. 1237–1240.
• [12] A. Limaiem and F. Trochu (1995) Geometric algorithms for the intersection of curves and surfaces. Computers and Graphics 19 (3), pp. 391–403.
• [13] W. M. McKeeman (1962) Algorithm 145: adaptive numerical integration by Simpson's rule. Commun. ACM 5 (12), pp. 604.
• [14] N. Megiddo (1984) Linear programming in linear time when the dimension is fixed. J. ACM 31, pp. 114–127.
• [15] M. E. Mortenson (1985) Geometric Modeling. John Wiley and Sons, New York.
• [16] N. M. Patrikalakis (1993) Surface-to-surface intersections. IEEE Computer Graphics and Applications 13 (1), pp. 89–95.
• [17] J. R. Rice (1975) A metalgorithm for adaptive quadrature. J. ACM 22 (1), pp. 61–82.
• [18] S. M. Rubin and T. Whitted (1980) A 3-dimensional representation for fast rendering of complex scenes. SIGGRAPH Comput. Graph. 14 (3), pp. 110–116.
• [19] T. W. Sederberg and T. Nishita (1990) Curve intersection using Bézier clipping. Comput. Aided Des. 22 (9), pp. 538–549.
• [20] N. Spillane (2016) An adaptive multipreconditioned conjugate gradient algorithm. SIAM Journal on Scientific Computing 38 (3), pp. A1896–A1918.
• [21] G. Srijuntongsiri and S. A. Vavasis (2009) Properties of polynomial bases used in a line-surface intersection algorithm. In Parallel Processing and Applied Mathematics.
• [22] G. Srijuntongsiri and S. A. Vavasis (2007) A condition number analysis of a line-surface intersection algorithm. SIAM Journal on Scientific Computing 30 (2), pp. 1064–1081.
• [23] G. Srijuntongsiri and S. A. Vavasis (2011) A condition number analysis of an algorithm for solving a system of polynomial equations with one degree of freedom. SIAM Journal on Scientific Computing 33 (1), pp. 433–454.
• [24] G. Srijuntongsiri (2011) An iterative/subdivision hybrid algorithm for curve/curve intersection. Visual Computer 27 (5), pp. 365–371.
• [25] T. Whitted (1980) An improved illumination model for shaded display. Communications of the ACM 23 (6), pp. 343–349.