1 Introduction
Given a convex polygon P with n edges, the minimum all-flush k-gon problem asks for the minimum (with respect to area, throughout this paper) k-gon whose edges are all flushed with edges of P (i.e. each edge of the k-gon must contain an edge of P as a subregion). In other words, it asks for the minimum k-gon that circumscribes P touching it edge-to-edge.
This problem was proposed by Aggarwal et al. [3]. They solved it using their technique for computing the minimum weight k-link path. This improves over an algorithm based on the matrix-search technique [2] (the latter is, however, better when k is regarded as a constant, as discussed below), which in turn improves over an algorithm implicitly given in [5]. All of these previous works follow the same approach: first, choose an arbitrary edge of P and find the minimum all-flush k-gon flushed with that edge, which reduces to solving an instance of the minimum weight k-link path problem; second, compute the minimum all-flush k-gon from the result of the first step. The first term of the running time comes from the first step and the second term from the second. Schieber [18] slightly improved the first term by optimizing the underlying technique for computing the minimum weight k-link path, yet the second term is unchanged. To the best of our knowledge, no one has improved this part even for the simplest case k = 3.
However, for k = 3, we can solve the dual problem, i.e. computing the maximum area triangle (MAT) inside a convex polygon, in O(n) time [11, 7]. So, it is interesting to know whether we can also compute the minimum all-flush triangle (MFT) in linear time.
We settle this question affirmatively in this paper by improving the aforementioned second term to O(n) for k = 3. In our algorithm, after an initial triangle is computed, we first derive a 3-stable triangle from it and then compute the MFT from the latter, both in linear time. (Note that the first two computations will be put together in this paper and referred to as the initial step of our algorithm. The main difficulty lies in computing the MFT from the 3-stable triangle.)
The MFT problem is as fundamental as the minimum enclosing triangle problem studied in the literature [14, 16, 7] and may find similar applications in more realistic problems. By computing the MFT, we obtain a simple container of P that can accelerate polygon collision detection. Moreover, it can be applied in finding a good packing of P into the plane. In the packing problem, we want to pack non-overlapping copies of P in the plane so that the ratio between the uncovered area and the area covered by the copies is as small as possible.
Literature on the "dual" problem. For the MAT problem, there is a well-known linear time algorithm given by Dobkin and Snyder [9], which was recently found to be incorrect by Keikha et al. [13]. Nonetheless, there is a correct but more involved linear time solution given by Chandran and Mount [7] based on the rotating-caliper technique [19]. Jin [11] recently reported another linear time algorithm, which is much simpler than the one in [7]. Yet another algorithm was reported by Kallus [12]. Although the MFT and MAT problems are often viewed as dual (from the combinatorial perspective) [3, 18], to our knowledge there is no reduction from an instance of the MFT problem to an instance of the MAT problem that would allow us to translate an algorithm for the latter into one for the former. See the discussion in Appendix B.
Rotate-and-Kill technique. Jin [11] introduced the so-called Rotate-and-Kill technique for solving polygon inclusion problems; it will be applied in this paper for finding the MFT. So, let us briefly review how this technique is applied to the MAT problem.
Consider a naïve algorithm for finding the MAT: enumerate a vertex pair (b, c) of P and compute the vertex a for which the area of triangle abc is maximum. It suffers from enumerating too many pairs (b, c). In fact, only a few of these pairs are effective, as implied by the following iterative process called Rotate-and-Kill. Let b+1 (respectively, c+1) denote the clockwise next vertex of b (respectively, c). Jin [11] designed a constant time subroutine, called the killing criterion, which returns either "kill b" or "kill c", so that "kill b" is returned only if (a) no pair in {b} × {vertices after c} forms an edge of an MAT, and "kill c" is returned only if (b) no pair in {vertices after b} × {c} forms an edge of an MAT. Now, assume a pair (b, c) is given in the current iteration. We kill b if the criterion says so and otherwise kill c, and then move on to the next iteration with (b+1, c) or (b, c+1). In this way, only O(n) pairs (b, c) are enumerated and the algorithm is thus improved to O(n) time.
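As an illustration, the skeleton of the process can be sketched as follows (the hypothetical `should_kill_b` stands in for the killing criterion, and the two cyclic vertex ranges are simplified to plain index ranges):

```python
def rotate_and_kill(nb, nc, should_kill_b, report):
    """Enumerate pairs (b, c) with b in 0..nb-1 and c in 0..nc-1, starting
    from (0, 0) and advancing one coordinate per iteration; hence at most
    nb + nc - 1 pairs are inspected instead of nb * nc."""
    b, c = 0, 0
    seen = []
    while True:
        report(b, c)
        seen.append((b, c))
        if b == nb - 1 and c == nc - 1:
            return seen
        if c == nc - 1 or (b < nb - 1 and should_kill_b(b, c)):
            b += 1  # kill b: pairs {b} x {c+1, ...} must all be dead
        else:
            c += 1  # kill c: pairs {b+1, ...} x {c} must all be dead

# With any criterion, the number of inspected pairs is linear:
seen = rotate_and_kill(4, 5, lambda b, c: (b + c) % 2 == 0, lambda b, c: None)
assert len(seen) == 4 + 5 - 1
```

A correct instance must of course supply a criterion satisfying conditions (a) and (b); the sketch only shows why the pair count is linear.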
In addition to the MAT, [11] also computes the minimum enclosing triangle optimally by this new technique. Naturally, [11] conjectures that this technique is powerful for solving other related problems. A precondition for applying the Rotate-and-Kill technique is that at least one of (a) and (b) holds at any iteration. This is indeed true for many polygon inclusion problems, since the locally optimal solutions in such problems usually admit an interleaving property (see [5] or Definition 1.1 below) implying that (a) and (b) cannot fail simultaneously.
When the above precondition is satisfied for a given problem, the biggest challenge in applying the technique is that we need an efficient killing criterion specialized to the problem. Usually, a criterion that runs in linear or even logarithmic time is easy to find, yet we wish to have an (amortized) O(1) time criterion as shown in [11]. For the MFT problem, although we can borrow the framework of [11], we must settle this main challenge by developing new ideas. In fact, our criterion is trickier than the one in [11].
Other related work. The search for extremal shapes with special properties enclosing or enclosed by a given polygon was initiated in [9, 5, 8] and has since been studied extensively. Chandran and Mount's algorithm [7] is an extension of O'Rourke et al.'s linear time algorithm [16] for computing the minimum triangle enclosing P. The latter is an improvement over an algorithm of Klee and Laskowski [14]. The minimum perimeter enclosing triangle can also be computed efficiently [4], as can the maximum perimeter enclosed triangle [5]. [5, 2, 3, 18, 8, 1, 15] studied extremal area / perimeter k-gons inside or outside a convex polygon. In particular, the maximum k-gon can be computed efficiently when k is regarded as a constant [2, 3, 18], and it remains open whether this can be optimized to linear time. [21, 20] studied extremal polytope problems in three-dimensional space. Brass and Na [6] solved another related problem: given halfplanes in arbitrary position, find the maximum bounded intersection of a prescribed number of halfplanes chosen among them. We refer the reader to the introductions of [10] and [11] for more related work.
Key motivation. The well-known rotating-caliper technique is powerful for solving many polygon enclosing problems, but it is not easy to apply to most polygon inclusion problems. To our knowledge, there was no generic technique for solving the polygon inclusion problem as claimed in [17] before the Rotate-and-Kill technique (noting that [9] is wrong). Thus, for attacking inclusion problems, there is a need to further develop the immature Rotate-and-Kill technique, especially by finding more of its applications. This motivates us to study the MFT problem in this paper (even though it is actually a polygon enclosing problem). Nonetheless, we believe that our result brings some new understanding of the technique that might be helpful for improving algorithms for other related problems.
1.1 Preliminaries
Let v_1, …, v_n be a clockwise enumeration of the vertices of the given convex polygon P. For each i, denote by e_i the directed line segment from v_i to v_{i+1} (indices taken modulo n). We call e_1, …, e_n the edges of P. Assume that no three vertices of P lie on the same line and, moreover, that all edges of P are pairwise non-parallel. Let ℓ_i denote the extended line of e_i, let H_i denote the halfplane delimited by ℓ_i and containing P, and let H_i^c denote the complementary halfplane of H_i.
When three distinct edges e_a, e_b, e_c lie in clockwise order, the region bounded by ℓ_a, ℓ_b, ℓ_c that contains P is denoted by T(a, b, c) and is called an all-flush triangle. Throughout, whenever we write T(a, b, c), we assume that e_a, e_b, e_c are distinct and lie in clockwise order.
Denote the area of T(a, b, c) by |T(a, b, c)|. This area may be unbounded. We can use the following observation to determine the finiteness of |T(a, b, c)|.
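Concretely, the area of a region bounded by three lines can be computed from the pairwise line intersections; the sketch below (function names are ours, not from the paper) treats a pair of parallel lines as giving an unbounded region, while the finiteness question for a genuinely all-flush triangle is what the chasing relation below settles:

```python
def intersect(L1, L2):
    """Intersection of lines given as coefficient triples (A, B, C) with
    A*x + B*y = C; returns None for parallel lines."""
    (a1, b1, c1), (a2, b2, c2) = L1, L2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def flush_triangle_area(La, Lb, Lc):
    """Area of the triangle bounded by three lines, or float('inf') when
    some pair is parallel (a degenerate, unbounded region)."""
    p = intersect(La, Lb)
    q = intersect(Lb, Lc)
    r = intersect(Lc, La)
    if p is None or q is None or r is None:
        return float('inf')
    # Half the absolute cross product of two side vectors.
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

# Lines y = 0, x = 0 and x + y = 2 bound a right triangle with area 2.
assert flush_triangle_area((0, 1, 0), (1, 0, 0), (1, 1, 2)) == 2.0
```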
[Chasing relation] Edge e_a is chasing another edge e_b, denoted by e_a ≺ e_b, if the intersection of ℓ_a and ℓ_b lies between e_a and e_b in clockwise order.
Observation.
|T(a, b, c)| is finite if and only if: e_a ≺ e_b ≺ e_c and e_c ≺ e_a.
Observation.
There exists a tuple (a, b, c) such that e_a ≺ e_b ≺ e_c and e_c ≺ e_a.
Proof.
Choose e_a arbitrarily. Choose e_b so that e_a ≺ e_b but e_a does not chase the clockwise next edge of e_b. Let e_c be that next edge. ∎
For the all-flush triangles with finite areas, we can define the notion of 3-stability. (Note that finiteness is a prerequisite for being 3-stable, because otherwise subsequent lemmas, e.g. Lemma 1.1, would fail or be too complicated to state; see the discussion in Appendix B.)
Consider any all-flush triangle T(a, b, c) with a finite area. Edge e_a is stable if no all-flush triangle T(a', b, c) is smaller than T(a, b, c); edge e_b is stable if no all-flush triangle T(a, b', c) is smaller than T(a, b, c); and edge e_c is stable if no all-flush triangle T(a, b, c') is smaller than T(a, b, c). Moreover, triangle T(a, b, c) is 3-stable if e_a, e_b, e_c are all stable.
Combining the two observations above, there exist all-flush triangles with finite areas. Moreover, by Definition 1.1, if a finite all-flush triangle is not 3-stable, we can find a smaller such triangle. Therefore, to find the minimum area all-flush triangle, it suffices to first compute all the 3-stable triangles and then select the minimum among them.
Below we introduce the notion of interleaving and an important property of 3-stable triangles, whose corollary shows that there are not too many such triangles.
Two all-flush triangles T(a, b, c) and T(a', b', c') are interleaving if there is a list of six edges lying in clockwise order (in a non-strict manner, so neighbors may be identical), in which the edges at odd positions equal e_a, e_b, e_c and the edges at even positions equal e_{a'}, e_{b'}, e_{c'}.
Any two 3-stable triangles are interleaving.
There are O(n) 3-stable triangles.
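As an aside, interleaving of two edge triples is easy to test mechanically: merge the six indices alternately and check whether they can be read in non-strict clockwise (cyclic) order. A sketch, under the assumption that edges are identified with indices 0..n-1 and each triple is listed in clockwise order:

```python
def interleaving(t1, t2):
    """True iff triples t1 = (a, b, c) and t2 = (a2, b2, c2) admit the
    alternating list a, a2, b, b2, c, c2 in non-strict cyclic order,
    for some rotation of the second triple."""
    a, b, c = t1
    a2, b2, c2 = t2
    for _ in range(3):  # t2 may be rotated relative to t1
        a2, b2, c2 = b2, c2, a2
        seq = [a, a2, b, b2, c, c2]
        # Non-strict cyclic order <=> at most one strict descent around the cycle.
        descents = sum(seq[i] > seq[(i + 1) % 6] for i in range(6))
        if descents <= 1:
            return True
    return False

assert interleaving((0, 2, 4), (1, 3, 5))      # perfectly alternating triples
assert not interleaving((0, 1, 2), (3, 4, 5))  # second triple entirely after the first
```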
1.2 Overview of our approach
Initial step. We first compute one 3-stable triangle by a somewhat trivial algorithm. Denote the resulting 3-stable triangle by T(a_0, b_0, c_0). Let B = [e_{b_0}, e_{c_0}] and C = [e_{c_0}, e_{a_0}], where [e_x, e_y] denotes the set of edges from e_x to e_y clockwise, including e_x and e_y.
The naïve approach. Each 3-stable triangle T(a, b, c) must interleave T(a_0, b_0, c_0) (by Lemma 1.1), so we can assume without loss of generality that e_b ∈ B and e_c ∈ C. Therefore, the following algorithm computes all the 3-stable triangles: enumerate (e_b, e_c) ∈ B × C and, for each such edge pair, compute the 3-stable triangle(s) containing it. However, this algorithm costs O(|B| · |C|) time, which is quadratic in the worst (and most) cases.
We say (e_b, e_c) is dead if there does not exist an edge e_a such that T(a, b, c) is 3-stable. Clearly, it is unnecessary to enumerate a dead pair in the above algorithm. Further, there are only O(n) pairs that are not dead, according to Corollary 1.1. Therefore, the above algorithm can be improved if those pairs that are not dead can be found efficiently.
Rotate-and-Kill. Initially, set (e_b, e_c) = (e_{b_0}, e_{c_0}), i.e. set e_b and e_c to be the first edges in B and C respectively. Iteratively, choose one of the following operations:
Kill e_b (i.e. replace e_b by its clockwise next edge); or kill e_c (i.e. replace e_c by its clockwise next edge). Obey the following rules.
e_b is killed only if (1) the pairs in {e_b} × {edges of C after e_c} are all dead, and
e_c is killed only if (2) the pairs in {edges of B after e_b} × {e_c} are all dead.
The termination condition is (e_b, e_c) = (e_{c_0}, e_{a_0}), i.e. e_b and e_c are the last edges in B and C respectively.
Suppose both rules are obeyed. Then the iteration eventually reaches the termination state, and at that moment all the pairs that are not dead have been enumerated. To see this more clearly, observe that (e_{b_0}, e_{c_0}) is not dead (because T(a_0, b_0, c_0) is 3-stable), and observe by induction that, at each iteration, a pair that is not dead either has been enumerated already or lies ahead of the current pair (e_b, e_c) in both coordinates.

The above Rotate-and-Kill process shall be finalized with a function, called the killing criterion, which guides us to kill e_b or e_c. It returns "kill e_b" only if (1) holds and "kill e_c" only if (2) holds. Above all, notice that such a criterion does exist. This is because (1) or (2) holds at each iteration. Indeed, suppose neither (1) nor (2) holds in some iteration. Then some pair (e_b, e_{c'}) with e_{c'} after e_c is not dead and some pair (e_{b'}, e_c) with e_{b'} after e_b is not dead. This yields two 3-stable triangles which definitely cannot be interleaving, contradicting Lemma 1.1.
The criterion is the kernel of the algorithm; designing it is the crucial part of the paper.
Logarithmic killing criterion. Two trivial criteria obey the rules: return "kill e_b" when (1) holds and "kill e_c" otherwise; or, return "kill e_c" when (2) holds and "kill e_b" otherwise. Yet they are not computationally efficient. Deciding (1) (or (2)) costs linear time by trivial methods, or logarithmic time by binary searches (see Appendix C). These can only lead to quadratic or O(n log n) time solutions.
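The binary searches rely only on unimodality: finding the minimum of a discretely unimodal function over an index interval takes logarithmic time, as in this generic sketch:

```python
def unimodal_argmin(f, lo, hi):
    """Index of the minimum of f over integers in [lo, hi], assuming f
    strictly decreases and then strictly increases (discrete unimodality)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) < f(mid + 1):
            hi = mid      # the minimum is at mid or to its left
        else:
            lo = mid + 1  # the minimum is strictly to the right of mid
    return lo

# f(x) = (x - 7)^2 is unimodal with minimum at x = 7.
assert unimodal_argmin(lambda x: (x - 7) ** 2, 0, 20) == 7
```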
Amortized constant time killing criterion. We design an amortized O(1) time killing criterion in Section 3. Briefly, given (e_b, e_c), we compute a specific directed line (in O(1) time) and compare it with P. Then, we return "kill e_b" or "kill e_c" depending on whether P lies on the right of this line. We make sure that the slope of the line monotonically increases throughout the entire algorithm, so it only costs amortized O(1) time to compare the convex polygon P with the line.
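The amortization argument is the classic forward-only pointer: when the query direction only rotates monotonically, the extreme vertex of a convex polygon in that direction also rotates monotonically, so one sweeping pointer answers all queries in linear total time. A self-contained sketch (counterclockwise vertex order and angle-sorted queries are our simplifying assumptions):

```python
import math

def extreme_vertices(poly, angles):
    """For a convex polygon `poly` (vertices in counterclockwise order) and a
    non-decreasing list of direction `angles` (total rotation below 2*pi),
    return, for each angle t, the vertex maximizing the dot product with
    (cos t, sin t).  The pointer k only moves forward, so all queries
    together cost O(n + len(angles)) time: amortized O(1) per query."""
    n = len(poly)
    k = 0  # forward-only pointer into poly (taken modulo n)
    result = []
    for t in angles:
        dx, dy = math.cos(t), math.sin(t)
        score = lambda i: poly[i % n][0] * dx + poly[i % n][1] * dy
        while score(k + 1) > score(k):
            k += 1
        result.append(poly[k % n])
    return result

square = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # counterclockwise
assert extreme_vertices(square, [0.0, math.pi / 2, math.pi]) == [(1, 0), (0, 1), (-1, 0)]
```

The paper's criterion compares P against a directed line rather than querying an extreme vertex, but the accounting is the same: the comparison pointer never moves backward.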
Compute the 3-stable triangle(s). It remains to specify how we compute the 3-stable triangle containing a given edge pair in (amortized) constant time. We first compute the third edge for which the triangle area is minimum (see Definition 2 below for a rigorous definition), and then check whether the resulting triangle is 3-stable and report it if so. We apply two basic lemmas here. The unimodality of the area for a fixed pair (Lemma 1) states that if the third edge is enumerated clockwise along the interval of edges for which the triangle is all-flush and its area is finite, this area first decreases and then increases. The bimonotonicity of the minimizer (Lemma 1) states that if either edge of the pair moves clockwise along the boundary of P, so does the minimizing third edge. Because (e_b, e_c) moves clockwise during the Rotate-and-Kill process, the minimizer moves clockwise by the bimonotonicity and thus can be computed using the unimodality in amortized O(1) time. Checking 3-stability (when necessary) reduces to checking whether the three edges are stable in this triangle, which only takes O(1) time, also by the unimodality.
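The interplay of unimodality and bimonotonicity can be sketched abstractly: if the cost is unimodal in the last index and its minimizer is non-decreasing in the other index, a forward-only pointer finds every minimizer in linear total time. The cost function below is synthetic, chosen only to have these two properties:

```python
def sweep_minimizers(n, cost):
    """For j = 0..n-1, return argmin over c >= j of cost(j, c), assuming
    cost(j, .) is unimodal and its minimizer is non-decreasing in j
    (bimonotonicity).  The pointer c never moves backward: O(n) total time."""
    minimizers = []
    c = 0
    for j in range(n):
        c = max(c, j)
        # Walk forward while the next position is strictly better.
        while c + 1 < n and cost(j, c + 1) < cost(j, c):
            c += 1
        minimizers.append(c)
    return minimizers

# cost(j, c) = (c - 2j)^2 is unimodal in c with minimizer min(2j, n-1),
# which is non-decreasing in j.
n = 8
mins = sweep_minimizers(n, lambda j, c: (c - 2 * j) ** 2)
assert mins == [min(2 * j, n - 1) for j in range(n)]
```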
Pseudocode for our main algorithm is given in Appendix B.
2 Compute one 3-stable triangle
To present our algorithm, we first give the two basic lemmas mentioned in the last paragraph of Subsection 1.2. Their easy proofs are deferred to Appendix A due to the space limit.
For each edge e_i, let f_i denote the vertex of P furthest from ℓ_i. Given two points x and y on the boundary of P, we denote by (x, y) the boundary portion of P that starts from x and goes clockwise to y, not containing the endpoints x and y.
Consider any edge pair (a, b) such that e_a ≺ e_b. We define c(a, b) to be the smallest (i.e. clockwise first) c that minimizes |T(a, b, c)|. For the special case where |T(a, b, c)| is infinite for all c (by Observation 1.1), we define c(a, b) to be the previous edge of e_a. See Figure 1 below.
Sometimes we adopt the convention of abbreviating c(a, b) as c when the pair (a, b) is clear from context.
[Unimodality of |T(a, b, c)| for fixed (a, b)] Given (a, b) such that e_a ≺ e_b, the function |T(a, b, c)| is unimodal in c. Specifically, this function strictly decreases when c is enumerated clockwise from the next edge of e_b to c(a, b); it attains its minimum at c = c(a, b); and it strictly increases when c is enumerated clockwise from the next edge of c(a, b) to the previous edge of e_a.
[Bimonotonicity of c(a, b)] Let c denote c(a, b) in the following claims.

Assume e_a is chasing the next edge of e_b, so that c(a, b) and c(a, b+1) are both defined. Notice that these two edges lie in the range prescribed by Definition 2. We claim that c(a, b) and c(a, b+1) lie in clockwise order.

Assume the next edge of e_a is chasing e_b, so that c(a, b) and c(a+1, b) are both defined. Notice that these two edges lie in the range prescribed by Definition 2. We claim that c(a, b) and c(a+1, b) lie in clockwise order.
Here, "lie in clockwise order" is meant in a non-strict manner, which means equality is allowed.
To find a 3-stable triangle, our first goal is to find a triangle with two stable edges. We find it as follows. Fix a = 1, enumerate an edge e_b clockwise and compute c(1, b) for each b, and then select b so that |T(1, b, c(1, b))| is minimum. In other words, we compute (b, c) so that T(1, b, c) is the smallest all-flush triangle flushed with e_1. Using the bimonotonicity of c(a, b) (Lemma 1) together with the unimodality of |T(a, b, c)| for fixed (a, b) (Lemma 1), the computation of each c(1, b) only costs amortized O(1) time, hence the entire running time is O(n).
We claim that T(1, b, c) has a finite area and, moreover, that e_b and e_c are stable in it. By the observations in Subsection 1.1 (and the proof of the second), for any given edge there exist two other edges forming a finite-area all-flush triangle with it; this easily implies the finiteness of |T(1, b, c)|. If e_b (or e_c) were not stable, we could get a smaller triangle T(1, b', c) (or T(1, b, c')), contradicting the fact that T(1, b, c) is the smallest all-flush triangle flushed with e_1.
Now, e_b and e_c are stable in T(1, b, c). If e_1 is also stable (which can be determined in O(1) time by Lemma 1), we have found a 3-stable triangle. What if e_1 is not stable? By Lemma 1, this means that replacing e_1 by either its clockwise or its counterclockwise neighboring edge yields a smaller triangle. Assume the former occurs; our subroutine for this case is given in Algorithm 1. The latter can be handled by a symmetric subroutine, as shown in Algorithm 3.
To analyze Algorithm 1, we introduce two notions: back-stable and forw-stable. Consider any all-flush triangle T(a, b, c) with a finite area. Edge e_a is back-stable if replacing it by its counterclockwise neighboring edge does not yield a smaller triangle (or yields an infinite area). Edge e_a is forw-stable if replacing it by its clockwise neighboring edge does not yield a smaller triangle (or yields an infinite area). Symmetrically, we define back-stable and forw-stable for e_b and e_c.
Note that back-stable plus forw-stable means stable; this follows by applying Lemma 1.
Observation.
A back-stable edge remains back-stable when another edge of the triangle is moved forward.
Observation.
Throughout Algorithm 1, the following hold.

1. T(a, b, c) has a finite area, which strictly decreases after every change of (a, b, c).

2. Edges e_a, e_b, e_c are back-stable in T(a, b, c).

3. Edges e_b, e_c are forw-stable after the repeat-until sentence (Line 3 to Line 6), and e_a is forw-stable when the algorithm terminates.
Proof.
Part 1 is obvious. Part 3 is easy: whenever one of e_b, e_c is not forw-stable, the algorithm moves it forward. We prove part 2 in the following. Initially, moving e_a forward strictly decreases the area, so by Lemma 1 the new e_a is back-stable. Whenever the move at Line 3 is to be executed, the same inequality holds; this means e_a will be back-stable after this sentence. Furthermore, by Observation 2, a back-stable edge remains back-stable when we move another edge forward, so e_a remains back-stable when e_b or e_c is increased. By similar arguments, e_b and e_c are always back-stable. Notice that initially e_b and e_c are back-stable since they are stable (guaranteed by the previous step). ∎
Algorithm 1 terminates eventually according to part 1 of Observation 2. Moreover, T(a, b, c) is 3-stable at the end; this follows from the other two parts of Observation 2.
Finally, observe that e_a can never return to e_1, by the fact that the initial triangle is the smallest one flushed with e_1. Moreover, observe that the three edges can only move clockwise and always lie in clockwise order. These together imply that the total number of changes of (a, b, c) is bounded by O(n), and hence Algorithm 1 runs in O(n) time. ¹ Although Algorithm 1 looks similar to the kernel step in [9] (by coincidence), our entire algorithm is essentially different from that in [9]. Most importantly, our first step for finding the "2-stable" triangle sets the initial value of (a, b, c) differently. In addition, our algorithm has an omitted subroutine symmetric to Algorithm 1 which handles the case where e_1 is forw-stable but not back-stable, whereas [9] does not. Unfortunately, some previous reviewers irresponsibly regarded our algorithm in this section as the same as the algorithm in [9] and claimed that this part of the algorithm is not original.
3 Compute all the 3-stable triangles in O(n) time
Recall the framework of our algorithm in Subsection 1.2. This section presents the kernel of our algorithm, the killing criterion. First, we give some observations and a lemma.
See Figure 5. Consider two rays originating at a common point and a hyperbola branch admitting the two rays as asymptotes. Construct an arbitrary tangent line of the branch and assume that it intersects the two rays at two points. From basic knowledge of hyperbolas, the area of the triangle formed by the common origin and these two points is a constant. This area is defined as the triangle-area of the branch.
Observation.
Let the rays and the hyperbola branch be as above, and let Q be the quadrant region bounded by the two rays and containing the branch. Consider any halfplane which contains the origin of the rays and is delimited by a line ℓ.

1. The area of the intersection of the halfplane with Q is smaller than the triangle-area if and only if ℓ is disjoint from the branch.

2. The area of the intersection is identical to the triangle-area if and only if ℓ is tangent to the branch.

3. The area of the intersection is larger than the triangle-area if and only if ℓ cuts the branch (i.e. ℓ is a secant of the branch).
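The constant triangle-area is classical: for the branch x·y = k with the coordinate axes as asymptotes, the tangent at (t, k/t) has intercepts (2t, 0) and (0, 2k/t), so the cut-off triangle has area 2k for every t. A quick numeric check of this fact:

```python
def tangent_triangle_area(k, t):
    """Area of the triangle cut from the first quadrant by the tangent to
    the hyperbola x*y = k at the point (t, k/t).
    Tangent: y - k/t = -(k/t**2) * (x - t), giving the two intercepts below."""
    x_intercept = 2 * t
    y_intercept = 2 * k / t
    return x_intercept * y_intercept / 2

# The area is 2k regardless of the tangency point t.
assert abs(tangent_triangle_area(3.0, 1.0) - 6.0) < 1e-9
assert abs(tangent_triangle_area(3.0, 5.0) - 6.0) < 1e-9
```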
Observation 3 is trivial; its proof is omitted. Recall that H_i is the halfplane delimited by ℓ_i and containing P, for each i. Let the two quadrant regions be those determined by the relevant pair of lines. (Subscripts are taken modulo n in all such places.)
Consider any vertex . See Figure 5.

For every pair such that and , define to be the hyperbola branch asymptotic to in , with triangle-area equal to the area of .

For every pair such that and , define to be the hyperbola branch asymptotic to in , with triangle-area equal to the area of .
For convenience, we use two abbreviations in the following: let "intersect" be short for "cut or be tangent to", and let "avoid" be short for "be disjoint from or tangent to".
Observation .
Proof.
Assume . Because is stable, . So the area of is at least the area of . Namely, the former area is no smaller than . Applying Observation 3, this means intersects .
Assume . This implies that ; otherwise and is infinite. Because and is stable, . Therefore, the area of is at most the area of . In other words, the former area is no larger than . This means avoids by Observation 3.
Symmetrically, because is stable, we can prove claim 2. ∎
Recall the 3-stable triangle T(a_0, b_0, c_0) and the conditions (1) and (2) in Subsection 1.2. Observation 3 characterizes "stable" by line-hyperbola intersection conditions. The following lemma provides sufficient conditions for (1) and (2) in the guise of line-hyperbola intersections.
Assume , , are distinct and . Note that , , and are defined; see Figure 6.


When some edge pair is not dead, and hence there exists so that is 3stable,
must (i) intersect both and , and (ii) belong to . 
If (I) no line in intersects both and , we can infer that are all dead, namely, (1) holds.
To be clear, throughout this paper, is empty when .



When some edge pair is not dead, and hence there exists so that is 3stable,
must (i) avoid both and , and (ii) belong to . 
If (II) no line in avoids both and , we can infer that are all dead, namely, (2) holds.
To be clear, throughout this paper, is empty when .

Proof.
Claims 1.b and 2.b are the contrapositives of 1.a and 2.a, so we only prove 1.a and 2.a. See Figure 6 (a) and Figure 6 (b) for illustrations of the proofs of 1.a and 2.a respectively.
Proof of 1.a(i). Because and is stable in , by the unimodality in Lemma 1, . Equivalently, the area of is at least . Applying Observation 3, this means intersects .
Applying Observation 3.1 on , line intersects . Moreover, is clearly contained in the area bounded by . Therefore, intersects .
Proof of 1.a(ii). Because is defined, , which implies (ii).
Proof of 2.a(i). Because and is stable in , by the unimodality in Lemma 1, . Equivalently, the area of is at most . Applying Observation 3, this means avoids .
Applying Observation 3.2 on , line avoids . Moreover, the area bounded by clearly contains . Therefore, avoids .
Proof of 2.a(ii). Because is defined, . Since this triangle is 3stable, . However, edges in are chasing , so they do not contain . So, . Because , we have . However, because is 3stable. So, . Altogether, , i.e. (ii) holds. ∎
To design a killing criterion as mentioned in Subsection 1.2, we look for a condition that, first, is easy to compute and, second, implies (1) when it holds and (2) when its negation holds. In Lemma 3, we give two sufficient conditions for (1) and (2), namely (I) and (II) respectively, and thus reduce the problem to finding an easy-to-compute condition which implies (I) and whose negation implies (II). We design such a condition (X) in the next lemma.
The assumptions made henceforth follow Lemma 3 unless otherwise stated.
Notation. Let denote for short. Denote by the common tangent of and , and denote the other three common tangents by and correspondingly; see Figure 8. We omit subscripts when they are clear from context. Assume these four common tangents are directed; the direction of such a tangent is from its intersection with to its intersection with .
See Figure 8. Choose an arbitrary directed line going from a point in (open) segment to a point in (open) segment . If (X) lies on the right of , we have (I): no line intersects both and . Otherwise, we have (II): no line avoids both and .
Proof.
We state two crucial observations.

If (I) fails, a point of lies in or on the left of .

If (II) fails, all points of lie in or on the right of .
Proof of (i). Assume (I) fails, so there is that intersects . This means that a point in (and hence in ) lies in or on the left of