 # Optimality Bounds for a Variational Relaxation of the Image Partitioning Problem

We consider a variational convex relaxation of a class of optimal partitioning and multiclass labeling problems, which has recently proven quite successful and can be seen as a continuous analogue of Linear Programming (LP) relaxation methods for finite-dimensional problems. While for the latter case several optimality bounds are known, to our knowledge no such bounds exist in the continuous setting. We provide such a bound by analyzing a probabilistic rounding method, showing that it is possible to obtain an integral solution of the original partitioning problem from a solution of the relaxed problem with an a priori upper bound on the objective, ensuring the quality of the result from the viewpoint of optimization. The approach has a natural interpretation as an approximate, multiclass variant of the celebrated coarea formula.


## 1 Introduction and Background

### 1.1 Convex Relaxations of Partitioning Problems

In this work, we will be concerned with a class of variational problems used in image processing and analysis for formulating multiclass image partitioning problems, which are of the form

$$
\begin{aligned}
\inf_{u\in\mathcal{C}_{\mathcal{E}}}\; & f(u) := \int_\Omega \langle u(x), s(x)\rangle \,dx + \int_\Omega d\Psi(Du), && (1)\\
\mathcal{C}_{\mathcal{E}} &:= \mathrm{BV}(\Omega,\mathcal{E}) && (2)\\
&\;= \{u\in \mathrm{BV}(\Omega)^l \mid u(x)\in\mathcal{E} \text{ for a.e. } x\in\Omega\}, && (3)\\
\mathcal{E} &:= \{e^1,\ldots,e^l\}. && (4)
\end{aligned}
$$

The labeling function u assigns to each point x of the image domain Ω ⊆ R^d one of l labels, represented by one of the l-dimensional unit vectors e^1, …, e^l. Since u is piecewise constant and therefore cannot be assumed to be differentiable, the problem is formulated as a free discontinuity problem in the space of functions of bounded variation; we refer to Ambrosio2000 for a general overview.

The objective function consists of a data term and a regularizer. The data term is given in terms of a function s: Ω → R^l, and assigns to the choice u(x) = e^i the “penalty” s_i(x), in the sense that

$$\int_\Omega \langle u(x), s(x)\rangle\, dx \;=\; \sum_{i=1}^{l} \int_{\Omega_i} s_i(x)\, dx, \qquad (5)$$

where Ω_i := {x ∈ Ω | u(x) = e^i} is the class region for label i, i.e., the set of points that are assigned the i-th label. The data term generally depends on the input data – such as color values of a recorded image, depth measurements, or other features – and promotes a good fit of the minimizer to the input data. While it is purely local, there are no further restrictions such as continuity or convexity; therefore it covers many interesting applications such as segmentation, multi-view reconstruction, stitching, and inpainting Paragios2006.
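To make the identity (5) concrete: on a discrete pixel grid the integral becomes a sum, and the inner product ⟨u(x), s(x)⟩ with u(x) = e^i simply selects the cost of the assigned label. The following is a minimal sketch with synthetic costs (all names and array sizes are illustrative):

```python
import numpy as np

# Discrete sketch of the data term identity (5): on a pixel grid the
# integral becomes a sum, and <u(x), s(x)> with u(x) = e^i selects the
# cost s_i(x) of the assigned label. All names and sizes are illustrative.
rng = np.random.default_rng(0)
l, h, w = 3, 4, 5                        # number of labels, grid size
s = rng.random((h, w, l))                # per-pixel label costs s(x)
labels = rng.integers(0, l, (h, w))      # an integral labeling
u = np.eye(l)[labels]                    # one-hot u(x) in E = {e^1,...,e^l}

# Left-hand side of (5): inner-product form
data_term = np.sum(u * s)

# Right-hand side of (5): sum of s_i over each class region Omega_i
data_term_regions = sum(s[..., i][labels == i].sum() for i in range(l))
assert np.isclose(data_term, data_term_regions)
```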

### 1.2 Convex Regularizers

The regularizer is defined by a positively homogeneous, continuous and convex function Ψ: R^{d×l} → R acting on the distributional derivative Du of u, and incorporates additional prior knowledge about the “typical” appearance of the desired output. For piecewise constant u, it can be seen that the definition in (1) amounts to a weighted penalization of the discontinuities of u:

$$\int_\Omega d\Psi(Du) \;=\; \int_{J_u} \Psi\bigl(\nu_u(x)\,(u^+(x) - u^-(x))^\top\bigr)\, d\mathcal{H}^{d-1}(x), \qquad (6)$$

where J_u is the jump set of u, i.e., the set of points where u has well-defined right-hand and left-hand limits u^+ and u^−, and (in an infinitesimal sense) jumps between the values u^+(x) and u^−(x) across a hyperplane with normal ν_u(x) (see Ambrosio2000 for the precise definitions).

A particular case is to set Ψ(z) := ‖z‖_F/√2, i.e., the scaled Frobenius norm. In this case the regularizer J(u) := ∫_Ω dΨ(Du) is just the (scaled) total variation of u, and, since u^+ and u^− assume values in E and cannot be equal on the jump set J_u, it holds that

$$
\begin{aligned}
J(u) &= \frac{1}{\sqrt{2}} \int_{J_u} \|u^+(x) - u^-(x)\|_2 \, d\mathcal{H}^{d-1}(x) && (7)\\
&= \mathcal{H}^{d-1}(J_u). && (8)
\end{aligned}
$$

Therefore, for this choice of Ψ the regularizer just amounts to penalizing the total length of the interfaces between the class regions as measured by the (d−1)-dimensional Hausdorff measure H^{d−1}, which is known as the uniform metric or Potts regularizer.

A more general regularizer was proposed in Lellmann2011, based on Chambolle2008: given a metric (distance) d: {1,…,l}² → R (not to be confused with the ambient space dimension), define

$$
\begin{aligned}
\Psi_d\bigl(z = (z^1,\ldots,z^l)\bigr) &:= \sup_{v \in \mathcal{D}_{\mathrm{loc}}^{d}} \langle z, v\rangle, && (9)\\
\mathcal{D}_{\mathrm{loc}}^{d} &:= \Bigl\{ (v^1,\ldots,v^l) \in \mathbb{R}^{d\times l} \;\Big|\; \|v^i - v^j\|_2 \leqslant d(i,j)\;\;\forall i,j \in \{1,\ldots,l\},\; \sum_{k=1}^{l} v^k = 0 \Bigr\}. && (10)
\end{aligned}
$$

It was then shown that

$$\Psi_d\bigl(\nu\,(e^j - e^i)^\top\bigr) \;=\; d(i,j), \qquad (11)$$

therefore, in view of (6), the corresponding regularizer is non-uniform: the boundary between the class regions Ω_i and Ω_j is penalized by its length, multiplied by the weight d(i,j) depending on the labels of both regions.

However, even for the comparably simple regularizer (7), the model (1) is a (spatially continuous) combinatorial problem due to the integral nature of the constraint set C_E; therefore optimization is nontrivial. In the context of multiclass image partitioning, a first approach can be found in Lysaker2006, where the problem was posed in a level set formulation in terms of a scalar labeling function taking values in {1,…,l}, which is subsequently relaxed to the interval [1,l]. The indicator functions of the class regions are then replaced by polynomials in the labeling function, which coincide with the indicator functions wherever the labeling function assumes integral values. However, the numerical approach involves several nonlinearities and requires solving a sequence of nontrivial subproblems.

In contrast, the representation (1) directly suggests a more straightforward relaxation to a convex problem: replace E by its convex hull, which is just the unit simplex in R^l,

$$\Delta_l := \mathrm{conv}\{e^1,\ldots,e^l\} = \Bigl\{ a\in\mathbb{R}^l \;\Big|\; a \geqslant 0,\; \sum_{i=1}^{l} a_i = 1 \Bigr\}, \qquad (12)$$

and solve the relaxed problem

$$
\begin{aligned}
\inf_{u\in\mathcal{C}}\; & f(u), && (13)\\
\mathcal{C} &:= \mathrm{BV}(\Omega, \Delta_l) && (14)\\
&\;= \{ u \in \mathrm{BV}(\Omega)^l \mid u(x) \in \Delta_l \text{ for a.e. } x \in \Omega \}. && (15)
\end{aligned}
$$
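Numerically, the constraint u(x) ∈ Δ_l is typically enforced by projecting onto the simplex, e.g., inside a primal-dual iteration. The following sketch implements the standard sort-based Euclidean projection onto Δ_l; the function name is ours, and this is a generic building block rather than part of the model itself:

```python
import numpy as np

# Standard sort-based Euclidean projection onto the unit simplex
# Delta_l = {a >= 0, sum(a) = 1}; a generic building block for enforcing
# the relaxed constraint u(x) in Delta_l numerically (name is ours).
def project_simplex(a):
    u = np.sort(a)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    # largest index rho with u_rho * (rho + 1) > css_rho - 1
    rho = np.nonzero(u * np.arange(1, a.size + 1) > css - 1)[0][-1]
    tau = (css[rho] - 1) / (rho + 1)          # shift that enforces sum = 1
    return np.maximum(a - tau, 0.0)

x = project_simplex(np.array([0.8, 0.6, -0.2]))
assert np.all(x >= 0) and np.isclose(x.sum(), 1.0)
```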

Sparked by a series of papers Zach2008 ; Chambolle2008 ; Lellmann2009 , recently there has been much interest in problems of this form, since they – although generally nonsmooth – are convex and therefore can be solved to global optimality, e.g., using primal-dual techniques. The approach has proven useful for a wide range of applications Kolev2009 ; Goldstein2009a ; Delaunoy2009 ; Yuan2010 .

### 1.3 Finite-Dimensional vs. Continuous Approaches

Many of these applications have been tackled before in a finite-dimensional setting, where they can be formulated as combinatorial problems on a grid graph and solved using combinatorial optimization methods such as α-expansion and related integer linear programming (ILP) methods Boykov2001; Komodakis2007. These methods have been shown to yield an integral labeling u′ with the a priori bound

$$f(u') \;\leqslant\; 2\,\frac{\max_{i\neq j} d(i,j)}{\min_{i\neq j} d(i,j)}\, f(u_{\mathcal{E}}^*), \qquad (16)$$

where u*_E is the (unknown) solution of the integral problem (1). They therefore permit computing a suboptimal solution of the – originally NP-hard Boykov2001 – combinatorial problem with an upper bound on the objective. No such bound has so far been available for methods based on the spatially continuous problem (13).

Despite these strong theoretical and practical results available for the finite-dimensional combinatorial energies, the function-based, spatially continuous formulation (1) has several unique advantages:

• The energy (1) is truly isotropic, in the sense that for a proper choice of Ψ it is invariant under rotation of the coordinate system. Pursuing finite-dimensional “discretize-first” approaches generally introduces artifacts due to the inherent anisotropy of the grid, which can only be avoided by increasing the neighborhood size, thereby reducing sparsity and severely slowing down the graph cut-based methods.

In contrast, properly discretizing the relaxed problem (13) and solving it as a convex problem with subsequent thresholding yields much better results without compromising sparsity (Fig. 1 and 2, Klodt2008).

Figure 1: Segmentation of an image into 12 classes using a combinatorial method. Left: input image. Right: result obtained by solving a combinatorial discretized problem with a 4-neighborhood. The bottom row shows detailed views of the marked parts of the image. The minimizer of the combinatorial problem exhibits blocky artifacts caused by the choice of discretization.

Figure 2: Segmentation obtained by solving a finite-differences discretization of the relaxed spatially continuous problem. Left: non-integral solution obtained as a minimizer of the discretized relaxed problem. Right: integral labeling obtained by rounding the fractional labels in the solution of the relaxed problem to the nearest integral label. The rounded result contains almost no structural artifacts.

This can be attributed to the fact that solving the discretized problem as a combinatorial problem in effect discards much of the information about the problem structure that is contained in the nonlinear terms of the discretized objective.

• Present combinatorial optimization methods Boykov2001; Komodakis2007 are inherently sequential and difficult to parallelize. On the other hand, parallelizing primal-dual methods for solving the relaxed problem (13) is straightforward, and GPU implementations have been shown to outperform state-of-the-art graph cut methods Zach2008.

• Analyzing the problem in a fully functional-analytic setting gives valuable insight into the problem structure, and is of theoretical interest in itself.

### 1.4 Optimality Bounds

However, one possible drawback of the spatially continuous approach is that the solution of the relaxed problem (13) may assume fractional values, i.e., values in Δ_l ∖ E. Therefore, in applications that require a true partition of Ω, some rounding process is needed in order to generate an integral labeling. This may increase the objective and lead to a suboptimal solution of the original problem (1).
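On a discrete grid, the rounding step mentioned above (assigning each point the nearest integral label, cf. Fig. 2) reduces to a componentwise argmax, since ‖u(x) − e^i‖² = ‖u(x)‖² − 2u_i(x) + 1, so the nearest simplex vertex is the one with the largest component. A minimal sketch, with illustrative array shapes:

```python
import numpy as np

# Minimal sketch of nearest-label rounding (cf. Fig. 2): since
# ||u(x) - e^i||^2 = ||u(x)||^2 - 2 u_i(x) + 1, the nearest simplex
# vertex e^i is the one with the largest component, i.e., an argmax.
rng = np.random.default_rng(1)
l, h, w = 4, 6, 6
u = rng.random((h, w, l))
u /= u.sum(axis=-1, keepdims=True)       # fractional labeling in the simplex

labels = np.argmax(u, axis=-1)           # index of the nearest unit vector
u_rounded = np.eye(l)[labels]            # integral labeling with values in E

assert np.all(u_rounded.sum(axis=-1) == 1)
```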

The regularizer Ψ_d as defined in (9) enjoys the property that it majorizes all other regularizers that can be written in integral form and satisfy (11). Therefore it is in a sense “optimal”, since it introduces as few fractional solutions as possible. In practice, this forces solutions of the relaxed problem to assume integral values at most points, so that rounding is only required in small regions.

However, the rounding step may still increase the objective and generate suboptimal integral solutions. Therefore the question arises whether this approach allows recovering “good” integral solutions of the original problem (1).

In the following, we are concerned with the question whether it is possible to obtain, using the convex relaxation (13), integral solutions with an upper bound on the objective. Specifically, we focus on inequalities of the form

$$f(\bar{u}^*) \;\leqslant\; (1+\varepsilon)\, f(u_{\mathcal{E}}^*) \qquad (17)$$

for some constant ε ⩾ 0, which provide an upper bound on the objective of the rounded integral solution ū* with respect to the objective of the (unknown) optimal integral solution u*_E of (1). Note that generally it is not possible to show that (17) holds with ε = 0. The reverse inequality

$$f(u_{\mathcal{E}}^*) \;\leqslant\; f(\bar{u}^*) \qquad (18)$$

always holds, since ū* ∈ C_E and u*_E is an optimal integral solution. The specific form (17) can be attributed to the alternative interpretation

$$\frac{f(\bar{u}^*) - f(u_{\mathcal{E}}^*)}{f(u_{\mathcal{E}}^*)} \;\leqslant\; \varepsilon, \qquad (19)$$

which provides a bound on the relative gap to the optimal objective of the combinatorial problem. Such an ε can be obtained a posteriori by actually computing (or approximating) a solution and a dual feasible point: assume that a feasible primal-dual pair (u, v) is known, where u approximates the minimizer of the relaxed problem, and assume that some integral feasible ū ∈ C_E has been obtained from u by a rounding process. Then the pair (ū, v) is feasible as well since C_E ⊆ C, and we obtain an a posteriori optimality bound of the form (19) with respect to the optimal integral solution u*_E:

$$\frac{f(\bar{u}) - f(u_{\mathcal{E}}^*)}{f(u_{\mathcal{E}}^*)} \;\leqslant\; \frac{f(\bar{u}) - f_D(v)}{f_D(v)} \;=:\; \varepsilon, \qquad (20)$$

where f_D denotes the dual objective, so that f_D(v) ⩽ f(u*_E).

However, this requires that the primal and dual objectives f and f_D can be accurately evaluated, and it requires computing a minimizer of the problem for the specific input data, which is generally difficult, especially in the spatially continuous formulation.

In contrast, true a priori bounds do not require knowledge of a solution and apply uniformly to all problems of a class, irrespective of the particular input. When considering rounding methods, one generally has to discriminate between

• deterministic vs. probabilistic methods, and

• spatially discrete (finite-dimensional) vs. spatially continuous methods.

Most known a priori approximation results only hold in the finite-dimensional setting, and are usually proven using graph-based pairwise formulations. In contrast, we assume an “optimize first” perspective due to the reasons outlined in the introduction. Unfortunately, the proofs for the finite-dimensional results often rely on pointwise arguments that cannot directly be transferred to the continuous setting. Deriving similar results for continuous problems therefore requires considerable additional work.

### 1.5 Contribution and Main Results

In this work we prove that, using the regularizer (9), the a priori bound (16) can be carried over to the spatially continuous setting. Preliminary versions of these results with excerpts of the proofs have been announced in conference proceedings Lellmann2011. We extend these results to provide the exact bound (16), and supply the full proofs.

As the main result, we show that it is possible to construct a rounding method R parametrized by a parameter γ ∈ Γ, where Γ is an appropriate parameter space:

$$R:\; \mathcal{C}\times\Gamma \to \mathcal{C}_{\mathcal{E}}, \qquad (22)$$

such that for a suitable probability distribution μ on Γ, the following theorem holds for the expectation E with respect to μ:

###### Theorem 1.1

Let Ω ⊆ R^d, l ⩾ 2, s ∈ L¹(Ω)^l, and let Ψ: R^{d×l} → R be positively homogeneous, convex and continuous. Assume there exists a lower bound λ_l > 0 such that

$$\Psi(z) \;\geqslant\; \frac{\lambda_l}{2} \sum_{i=1}^{l} \|z^i\|_2 \qquad \forall z\in\mathbb{R}^{d\times l} \text{ with } \sum_{i=1}^{l} z^i = 0. \qquad (23)$$

Moreover, assume there exists an upper bound λ_u such that, for every ν ∈ R^d with ‖ν‖₂ = 1,

$$\Psi\bigl(\nu\,(e^i - e^j)^\top\bigr) \;\leqslant\; \lambda_u \qquad \forall i,j \in \{1,\ldots,l\}. \qquad (24)$$

Then Alg. 1 (see below) generates an integral labeling ū ∈ C_E almost surely, and

$$\mathbb{E}\, f(\bar{u}) \;\leqslant\; 2\,\frac{\lambda_u}{\lambda_l}\, f(u). \qquad (25)$$

Note that λ_u ⩾ λ_l always holds if both are defined, since (23) implies, for ν ∈ R^d with ‖ν‖₂ = 1,

$$\lambda_u \;\geqslant\; \Psi\bigl(\nu\,(e^i - e^j)^\top\bigr) \;\geqslant\; \frac{\lambda_l}{2}\bigl(\|\nu\|_2 + \|\nu\|_2\bigr) = \lambda_l. \qquad (26)$$

The proof of Thm. 1.1 (Sect. 4) is based on the work of Kleinberg and Tardos Kleinberg1999, which is set in an LP relaxation framework. However, their results are restricted in that they assume a graph-based representation and rely extensively on the finite dimensionality. In contrast, our results hold in the continuous setting without assuming a particular problem discretization.

Theorem 1.1 guarantees that – in a probabilistic sense – the rounding process may only increase the energy in a controlled way, with an upper bound depending on the ratio λ_u/λ_l. An immediate consequence is

###### Corollary 1

Under the conditions of Thm. 1.1, if u* minimizes f over C, u*_E minimizes f over C_E, and ū* denotes the output of Alg. 1 applied to u*, then

$$\mathbb{E}\, f(\bar{u}^*) \;\leqslant\; 2\,\frac{\lambda_u}{\lambda_l}\, f(u_{\mathcal{E}}^*). \qquad (27)$$

Therefore the proposed approach allows recovering, from the solution u* of the convex relaxed problem (13), an approximate integral solution ū* of the nonconvex original problem (1) with an upper bound on the objective.

In particular, for the tight relaxation Ψ_d of the regularizer as in (9), we obtain (cf. Prop. 10)

$$\mathbb{E}\, f(\bar{u}^*) \;\leqslant\; 2\,\frac{\lambda_u}{\lambda_l}\, f(u_{\mathcal{E}}^*) \;=\; 2\,\frac{\max_{i\neq j} d(i,j)}{\min_{i\neq j} d(i,j)}\, f(u_{\mathcal{E}}^*), \qquad (28)$$

which is exactly the bound that has been proven for the combinatorial α-expansion method (16).
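As a concrete illustration, the factor in (28) is easy to evaluate for any given label metric. The following sketch computes it for a hypothetical truncated linear metric on four labels (both the metric and the truncation value are illustrative choices, not from the text):

```python
import itertools

# Evaluating the a priori factor in (28) for a hypothetical label metric:
# a truncated linear distance d(i, j) = min(|i - j|, 2) on l = 4 labels
# (both the metric and the truncation value are illustrative).
l = 4
def d(i, j):
    return min(abs(i - j), 2)

pairs = [(i, j) for i, j in itertools.product(range(l), repeat=2) if i != j]
d_max = max(d(i, j) for i, j in pairs)   # = 2
d_min = min(d(i, j) for i, j in pairs)   # = 1

bound_factor = 2 * d_max / d_min         # factor 2 max d / min d from (28)
assert bound_factor == 4.0
```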

To our knowledge, this is the first bound available for the fully spatially continuous relaxed problem (13). Related is the work of Olsson et al. Olsson2009; Olsson2009a, where the authors consider a continuous analogue of the α-expansion method known as continuous binary fusion Trobin2008, and claim that a bound similar to (16) holds for the corresponding fixed points when using the separable regularizer

$$\Psi_A(z) := \sum_{j=1}^{l} \|A z^j\|_2, \qquad z\in\mathbb{R}^{d\times l}, \qquad (29)$$

for some matrix A, which implements an anisotropic variant of the uniform metric. However, a rigorous proof in the BV framework was not given.

In Bae2011, the authors propose to solve the problem (1) by considering the dual problem to (13), which consists of coupled maximum-flow problems that are solved using a log-sum-exp smoothing technique and gradient descent. In case the dual solution allows to unambiguously recover an (integral) primal solution, the latter is necessarily the unique minimizer of the relaxed problem, and therefore a global integral minimizer of the combinatorial problem (1). This provides an a posteriori bound, which applies if a dual solution can be computed. While useful in practice as a certificate for global optimality, in the spatially continuous setting it requires explicit knowledge of a dual solution, which is rarely available since it depends on the regularizer as well as the input data s.

In contrast, the a priori bound (27) holds uniformly over all problem instances, does not require knowledge of any primal or dual solutions, and also covers non-uniform regularizers.

## 2 A Probabilistic View of the Coarea Formula

### 2.1 The Two-Class Case

As a motivation for the following sections, we first provide a probabilistic interpretation of a tool often used in geometric measure theory: the coarea formula (cf. Ambrosio2000). Assuming u′ ∈ BV(Ω) and u′(x) ∈ [0,1] for a.e. x ∈ Ω, the coarea formula states that the total variation of u′ can be represented by summing the boundary lengths of its super-levelsets:

$$\mathrm{TV}(u') \;=\; \int_0^1 \mathrm{TV}\bigl(\mathbf{1}_{\{u' > \alpha\}}\bigr)\, d\alpha. \qquad (30)$$

Here 1_A denotes the characteristic function of a set A, i.e., 1_A(x) = 1 iff x ∈ A and 1_A(x) = 0 otherwise. The coarea formula provides a connection between problem (1) and the relaxation (13) in the two-class case, where l = 2, Ψ(z) = ‖z‖_F/√2 and E = {e^1, e^2}: as noted in Lellmann2009a,

$$\mathrm{TV}(u) \;=\; \|e^1 - e^2\|_2\, \mathrm{TV}(u_1) \;=\; \sqrt{2}\, \mathrm{TV}(u_1), \qquad (31)$$

therefore the coarea formula (30) can be rewritten as

$$
\begin{aligned}
\mathrm{TV}(u) &= \sqrt{2} \int_0^1 \mathrm{TV}\bigl(\mathbf{1}_{\{u_1 > \alpha\}}\bigr)\, d\alpha && (32)\\
&= \int_0^1 \mathrm{TV}\bigl(e^1 \mathbf{1}_{\{u_1 > \alpha\}} + e^2 \mathbf{1}_{\{u_1 \leqslant \alpha\}}\bigr)\, d\alpha && (33)\\
&= \int_0^1 \mathrm{TV}(\bar{u}_\alpha)\, d\alpha, \qquad \bar{u}_\alpha := e^1 \mathbf{1}_{\{u_1 > \alpha\}} + e^2 \mathbf{1}_{\{u_1 \leqslant \alpha\}}. && (35)
\end{aligned}
$$

Consequently, the total variation of u can be expressed as the mean over the total variations of a set of integral labelings ū_α, obtained by rounding u at different thresholds α. We now adopt a probabilistic view of (35): we regard the mapping

$$R:\; (u,\alpha)\in\mathcal{C}\times[0,1] \;\mapsto\; \bar{u}_\alpha \in \mathcal{C}_{\mathcal{E}} \quad (\text{a.e. } \alpha\in[0,1]) \qquad (36)$$

as a parametrized, deterministic rounding algorithm that depends on the input u and on an additional parameter α. From this we obtain a probabilistic (randomized) rounding algorithm by assuming α to be a uniformly distributed random variable. Under these assumptions the coarea formula (35) can be written as

$$\mathrm{TV}(u) \;=\; \mathbb{E}_\alpha\, \mathrm{TV}(\bar{u}_\alpha). \qquad (37)$$

This has the probabilistic interpretation that applying the probabilistic rounding to an (arbitrary, but fixed) u does – in a probabilistic sense, i.e., in the mean – not change the objective. It can be shown that this property extends to the full functional f in (13): in the two-class case, the “coarea-like” property

$$f(u) \;=\; \mathbb{E}_\alpha\, f(\bar{u}_\alpha) \qquad (38)$$

holds. Functions with property (38) are also known as levelable functions Darbon2006; Darbon2006a or discrete total variations Chambolle2009 and have been studied in Strandmark2009. A well-known implication is that if u = u*, i.e., u* minimizes the relaxed problem (13), then in the two-class case almost every ū*_α is an integral minimizer of the original problem (1), i.e., the optimality bound (17) holds with ε = 0 Nikolova2006.
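The two-class coarea identity (30) can also be verified numerically for a discrete one-dimensional total variation, which is itself levelable; this is a sanity check under that discrete definition, not a proof in the BV setting:

```python
import numpy as np

# Numerical check of the coarea formula (30) for a simple 1D
# finite-difference total variation TV(u) = sum_i |u[i+1] - u[i]|
# (an illustrative discrete stand-in for the BV-space definition).
def tv(u):
    return np.abs(np.diff(u)).sum()

u = np.array([0.2, 0.7, 0.7, 0.4, 0.9, 0.1])      # values in [0, 1]

# Right-hand side of (30): average TV of the super-levelsets 1_{u > alpha}
alphas = np.linspace(0.0, 1.0, 2001)
rhs = np.mean([tv((u > a).astype(float)) for a in alphas])

assert abs(tv(u) - rhs) < 1e-2    # both sides agree up to discretization error
```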

### 2.2 The Multi-Class Case and Generalized Coarea Formulas

Generalizing these observations to more than two labels hinges on a property similar to (38) that holds for vector-valued u. In a general setting, the question is whether there exist

• a probability space (Γ, μ), and

• a parametrized rounding method, i.e., for μ-almost every γ ∈ Γ a mapping

$$R:\; \mathcal{C}\times\Gamma \to \mathcal{C}_{\mathcal{E}}, \qquad (40)$$

with ū_γ := R(u, γ), satisfying ū_γ = u for all u ∈ C_E,

such that a “multiclass coarea-like property” (or generalized coarea formula)

$$f(u) \;=\; \int_\Gamma f(\bar{u}_\gamma)\, d\mu(\gamma) \qquad (41)$$

holds. In a probabilistic sense this corresponds to

$$f(u) \;=\; \int_\Gamma f(\bar{u}_\gamma)\, d\mu(\gamma) \;=\; \mathbb{E}_\gamma\, f(\bar{u}_\gamma). \qquad (42)$$

For l = 2 and Ψ(z) = ‖z‖_F/√2, (37) shows that (42) holds with Γ = [0,1], μ the uniform distribution on Γ, ū_γ = ū_α, and R as defined in (36). Unfortunately, property (37) is intrinsically restricted to the two-class case with total variation regularizer.

In the multiclass case, the difficulty lies in providing a suitable combination of a probability space and a parametrized rounding step R. Unfortunately, obtaining a relation such as (41) for the full functional (1) is unlikely, as it would mean that solutions of the (after discretization) NP-hard problem (1) could be obtained by solving the convex relaxation (13) and subsequent rounding, which can be achieved in polynomial time.

Therefore we restrict ourselves to an approximate variant of the generalized coarea formula:

$$(1+\varepsilon)\, f(u) \;\geqslant\; \int_\Gamma f(\bar{u}_\gamma)\, d\mu(\gamma) \;=\; \mathbb{E}_\gamma\, f(\bar{u}_\gamma). \qquad (43)$$

While (43) is not sufficient to provide a bound on f(ū_γ) for a particular γ, it permits a probabilistic bound: for any minimizer u* of the relaxed problem (13), eq. (43) implies

$$\mathbb{E}_\gamma\, f(\bar{u}^*_\gamma) \;\leqslant\; (1+\varepsilon)\, f(u^*) \;\leqslant\; (1+\varepsilon)\, f(u_{\mathcal{E}}^*), \qquad (44)$$

i.e., the ratio between the (expected) objective of the rounded relaxed solution and the objective of the optimal integral solution is bounded – in a probabilistic sense – by 1 + ε.

In the following sections we construct a suitable parametrized rounding method and probability space in order to obtain an approximate generalized coarea formula of the form (43).

## 3 Probabilistic Rounding for Multiclass Image Partitions

### 3.1 Approach

We consider the probabilistic rounding approach based on Kleinberg1999 as defined in Alg. 1.

The algorithm proceeds in a number of phases. At each iteration, a label i^k and a threshold α^k,

$$\gamma^k := (i^k, \alpha^k) \;\in\; \Gamma' := \mathcal{I}\times[0,1],$$

are randomly chosen (step 3), and label i^k is assigned to all yet unassigned points x where u_{i^k}(x) > α^k holds (step 5). In contrast to the two-class case considered above, the randomness is provided by a sequence γ = (γ^1, γ^2, …) of uniformly distributed random variables, i.e., Γ = (Γ')^ℕ.

After iteration k, all points in the set U^k are still unassigned, while all points in Ω ∖ U^k have been assigned an (integral) label in iteration k or in a previous iteration. Iteration k potentially modifies points only in the set U^{k−1}. The variable c^k stores, for each label, the lowest threshold chosen for that label up to and including iteration k, and is only required for the proofs.

While the algorithm is defined using pointwise operations, it is well-defined in the sense that for fixed γ, the sequence (u^k), viewed as elements of L¹(Ω)^l, does not depend on the specific representative of u in its equivalence class. The sequences (U^k) and (c^k) depend on the representative, but are unique up to Lebesgue-negligible sets.

In an actual implementation, the algorithm could be terminated as soon as all points in Ω have been assigned a label, i.e., U^k = ∅. However, in the framework we use for the analysis the algorithm never terminates explicitly. Instead, for fixed input u we regard the algorithm as a mapping between sequences of parameters γ = (γ^k) (or instances of random variables) and sequences of states (u^k), (U^k) and (c^k). We drop the subscript γ if it does not create ambiguities. The elements of the sequence γ are independently uniformly distributed, and by the Kolmogorov extension theorem (Oksendal2003, Thm. 2.1.5) there exists a probability space and a stochastic process on the set Γ of sequences with compatible marginal distributions.
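For intuition, the core loop of Alg. 1 can be sketched on a pixel grid as follows. This is a simplified discrete analogue in the spirit of the Kleinberg–Tardos scheme (names and the explicit termination check are illustrative), not the measure-theoretic formulation used in the analysis:

```python
import numpy as np

# Simplified discrete analogue of Alg. 1 (Kleinberg-Tardos-style rounding):
# repeatedly draw a uniform label i^k and threshold alpha^k, and assign
# label i^k to all still-unassigned pixels x with u_{i^k}(x) > alpha^k.
def round_labeling(u, rng):
    h, w, l = u.shape
    labels = -np.ones((h, w), dtype=int)    # -1 marks unassigned points (U^k)
    while (labels < 0).any():               # in the analysis: never terminates
        i = rng.integers(l)                 # label i^k, uniform on {1,...,l}
        alpha = rng.random()                # threshold alpha^k, uniform on [0,1]
        hit = (labels < 0) & (u[..., i] > alpha)
        labels[hit] = i                     # assign label i^k on the hit set
    return labels

rng = np.random.default_rng(2)
u = rng.random((5, 5, 3))
u /= u.sum(axis=-1, keepdims=True)          # a fractional labeling in C
labels = round_labeling(u, rng)
assert np.all(labels >= 0)                  # every point assigned a label
```

Termination is almost sure here since every pixel has a component u_i(x) ⩾ 1/l, so each iteration assigns it with probability bounded away from zero.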

In order to define the parametrized rounding step R, we observe that once U^{k′} = ∅ occurs for some k′, the sequence (u^k) becomes stationary at u^{k′}. In this case the algorithm may be terminated, with output ū_γ := u^{k′}_γ:

###### Definition 1

Let u ∈ C and γ ∈ Γ. If U^{k′}_γ = ∅ in Alg. 1 for some k′ ∈ ℕ, we denote ū_γ := u^{k′}_γ. We define

$$f(\bar{u}_{(\cdot)}):\; \Gamma \to \mathbb{R}\cup\{+\infty\}, \quad \gamma\in\Gamma \mapsto f(\bar{u}_\gamma), \qquad (45)$$

$$f(\bar{u}_\gamma) := \begin{cases} f(u^{k'}_\gamma), & \exists k'\in\mathbb{N}:\; U^{k'}_\gamma = \emptyset \;\wedge\; u^{k'}_\gamma \in \mathrm{BV}(\Omega)^l,\\ +\infty, & \text{otherwise.} \end{cases}$$

We denote by f(ū) the corresponding random variable induced by assuming γ to be uniformly distributed on Γ.

As indicated above, f(ū_γ) is well-defined: if U^{k′}_γ = ∅ for some k′, then u^k_γ = u^{k′}_γ for all k ⩾ k′. Instead of focusing on local properties of the random sequence (u^k) as in the proofs for the finite-dimensional case, we derive our results directly for the sequence (f(u^k)). In particular, we show that the expectation of f(ū) over all sequences γ can be bounded according to

$$\mathbb{E}\, f(\bar{u}) = \mathbb{E}_\gamma\, f(\bar{u}_\gamma) \;\leqslant\; (1+\varepsilon)\, f(u) \qquad (46)$$

for some ε ⩾ 0, cf. (43). Consequently, the rounding process may only increase the average objective in a controlled way.

### 3.2 Termination Properties

Theoretically, the algorithm may produce a sequence (u^k) that does not become stationary, or becomes stationary with a limit that is not an element of BV(Ω)^l. In Thm. 3.1 below we show that this happens only with zero probability, i.e., almost surely Alg. 1 generates (in a finite number of iterations) an integral labeling function ū ∈ C_E. The following two propositions are required for the proof.

###### Proposition 1

For the sequence (c^k) generated by Algorithm 1,

$$\mathbb{P}(e^\top c^k < 1) \;\geqslant\; \sum_{p\in\{0,1\}^l} (-1)^{e^\top p} \Bigl( \sum_{j=1}^{l} \tfrac{1}{l}\bigl(1-\tfrac{1}{l}\bigr)^{p_j} \Bigr)^{k} \qquad (47)$$

holds. In particular,

$$\mathbb{P}(e^\top c^k < 1) \;\xrightarrow{k\to\infty}\; 1. \qquad (48)$$
###### Proof

Denote by n^k_j the number of k′ ⩽ k such that i^{k′} = j, i.e., the number of times label j was selected up to and including the k-th step. Then

$$(n^k_1,\ldots,n^k_l) \;\sim\; \mathrm{Multinomial}\bigl(k;\; \tfrac{1}{l},\ldots,\tfrac{1}{l}\bigr), \qquad (49)$$

i.e., the probability of a specific instance is

$$\mathbb{P}\bigl((n^k_1,\ldots,n^k_l)\bigr) = \begin{cases} \dfrac{k!}{n^k_1!\cdots n^k_l!}\Bigl(\dfrac{1}{l}\Bigr)^k, & \sum_j n^k_j = k,\\[1ex] 0, & \text{otherwise.} \end{cases} \qquad (50)$$

Therefore,

$$
\begin{aligned}
\mathbb{P}(e^\top c^k < 1) &= \sum_{n^k_1,\ldots,n^k_l} \mathbb{P}\bigl(e^\top c^k < 1 \,\big|\, (n^k_1,\ldots,n^k_l)\bigr)\, \mathbb{P}\bigl((n^k_1,\ldots,n^k_l)\bigr) && (52)\\
&= \sum_{n^k_1+\ldots+n^k_l = k} \frac{k!}{n^k_1!\cdots n^k_l!}\Bigl(\frac{1}{l}\Bigr)^k\, \mathbb{P}\bigl(e^\top c^k < 1 \,\big|\, (n^k_1,\ldots,n^k_l)\bigr).
\end{aligned}
$$

Since c^k_j < 1/l for all j ∈ I is a sufficient condition for e^⊤c^k < 1, we may bound the probability according to

$$\mathbb{P}(e^\top c^k < 1) \;\geqslant\; \sum_{n^k_1+\ldots+n^k_l = k} \frac{k!}{n^k_1!\cdots n^k_l!}\Bigl(\frac{1}{l}\Bigr)^k\, \mathbb{P}\bigl(c^k_j < \tfrac{1}{l}\;\forall j\in\mathcal{I} \,\big|\, (n^k_1,\ldots,n^k_l)\bigr). \qquad (53)$$

We now consider the distributions of the components c^k_j of c^k conditioned on the vector (n^k_1,…,n^k_l). Given n^k_j, the probability of c^k_j ⩾ 1/l is the probability that in each of the n^k_j steps where label j was selected, the threshold was randomly chosen to be at least 1/l. Therefore we conclude

$$\mathbb{P}\bigl(c^k_j < \tfrac{1}{l} \,\big|\, (n^k_1,\ldots,n^k_l)\bigr) \;=\; 1 - \bigl(1-\tfrac{1}{l}\bigr)^{n^k_j}. \qquad (56)$$

The above formulation also covers the case n^k_j = 0 (note that we assumed l ⩾ 2). For fixed k, the distributions of the c^k_j are independent when conditioned on (n^k_1,…,n^k_l). Therefore we obtain from (53) and (56)

$$
\begin{aligned}
\mathbb{P}(e^\top c^k < 1) &\overset{(53)}{\geqslant} \sum_{n^k_1+\ldots+n^k_l = k} \frac{k!}{n^k_1!\cdots n^k_l!}\Bigl(\frac{1}{l}\Bigr)^k \prod_{j=1}^{l} \mathbb{P}\bigl(c^k_j < \tfrac{1}{l} \,\big|\, (n^k_1,\ldots,n^k_l)\bigr) && (58)\\
&\overset{(56)}{=} \sum_{n^k_1+\ldots+n^k_l = k} \frac{k!}{n^k_1!\cdots n^k_l!}\Bigl(\frac{1}{l}\Bigr)^k \prod_{j=1}^{l} \Bigl(1 - \bigl(1-\tfrac{1}{l}\bigr)^{n^k_j}\Bigr).
\end{aligned}
$$

Expanding the product and swapping the summation order, we derive

$$
\begin{aligned}
\mathbb{P}(e^\top c^k < 1) &\geqslant \sum_{n^k_1+\ldots+n^k_l = k} \frac{k!}{n^k_1!\cdots n^k_l!}\Bigl(\frac{1}{l}\Bigr)^k \sum_{p\in\{0,1\}^l} \prod_{j=1}^{l} \Bigl(-\bigl(1-\tfrac{1}{l}\bigr)^{n^k_j}\Bigr)^{p_j} && (61)\\
&= \sum_{p\in\{0,1\}^l} (-1)^{e^\top p} \sum_{n^k_1+\ldots+n^k_l = k} \frac{k!}{n^k_1!\cdots n^k_l!} \prod_{j=1}^{l} \Bigl(\frac{1}{l}\bigl(1-\tfrac{1}{l}\bigr)^{p_j}\Bigr)^{n^k_j}.
\end{aligned}
$$

Using the multinomial summation formula, we conclude

$$\mathbb{P}(e^\top c^k < 1) \;\geqslant\; \sum_{p\in\{0,1\}^l} (-1)^{e^\top p} \Bigl( \underbrace{\sum_{j=1}^{l} \tfrac{1}{l}\bigl(1-\tfrac{1}{l}\bigr)^{p_j}}_{=:\,q_p} \Bigr)^{k}, \qquad (62)$$

which proves (47). Note that in (62) the n^k_j do not occur explicitly anymore. To show the second assertion (48), we use the fact that q_0 = 1, while for any p ≠ 0, q_p can be bounded by q_p < 1. Therefore

$$
\begin{aligned}
\mathbb{P}(e^\top c^k < 1) &\geqslant q_0 + \sum_{p\in\{0,1\}^l,\, p\neq 0} (-1)^{e^\top p} (q_p)^k && (63)\\
&= 1 + \underbrace{\sum_{p\in\{0,1\}^l,\, p\neq 0} (-1)^{e^\top p} (q_p)^k}_{\xrightarrow{k\to\infty}\, 0} && (64)\\
&\xrightarrow{k\to\infty} 1, && (65)
\end{aligned}
$$

which proves (48).
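The bound (47) can also be checked numerically. The sketch below simulates the sequence c^k directly (lowest threshold drawn for each label, and 1 if the label was never drawn) and compares the empirical frequency of the event e^⊤c^k < 1 against the right-hand side of (47); the sample sizes and variable names are illustrative:

```python
import numpy as np
from itertools import product

# Monte Carlo sanity check of the bound (47): c^k_j is the lowest threshold
# drawn for label j up to step k (and 1 if label j was never drawn).
# Sample sizes and names are illustrative.
l, k, trials = 3, 20, 20000
rng = np.random.default_rng(3)

hits = 0
for _ in range(trials):
    c = np.ones(l)
    for i, a in zip(rng.integers(l, size=k), rng.random(k)):
        c[i] = min(c[i], a)
    hits += c.sum() < 1                # event e^T c^k < 1
empirical = hits / trials

# Right-hand side of (47): sum over p in {0,1}^l of (-1)^(e^T p) * q_p^k
bound = sum((-1) ** sum(p)
            * sum((1 / l) * (1 - 1 / l) ** pj for pj in p) ** k
            for p in product((0, 1), repeat=l))

assert empirical >= bound - 0.02       # the bound holds up to Monte Carlo error
```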

We now show that Alg. 1 generates a sequence in BV(Ω)^l almost surely. The perimeter Per(A) of a set A ⊆ Ω is defined as the total variation of its characteristic function, Per(A) := TV(1_A).

###### Proposition 2

For the sequences (U^k), (u^k) generated by Alg. 1, define

$$A := \bigcap_{k=1}^{\infty} \bigl\{\gamma\in\Gamma \;\big|\; \mathrm{Per}(U^k_\gamma) < \infty\bigr\}. \qquad (66)$$

Then

$$\mathbb{P}(A) = 1. \qquad (67)$$

If Per(U^{k′}) < ∞ for all k′ ⩽ k, then u^{k′} ∈ BV(Ω)^l for all k′ ⩽ k as well. Moreover,

$$\mathbb{P}\bigl(u^k \in \mathrm{BV}(\Omega)^l \,\wedge\, \mathrm{Per}(U^k) < \infty \;\;\forall k\in\mathbb{N}\bigr) = 1, \qquad (68)$$

i.e., the algorithm almost surely generates a sequence of functions u^k ∈ BV(Ω)^l and a sequence of sets U^k of finite perimeter.

###### Proof

We first show that if Per(U^{k′}) < ∞ for all k′ ⩽ k, then u^{k′} ∈ BV(Ω)^l for all k′ ⩽ k as well. For k = 0, the assertion holds, since u^0 = u ∈ BV(Ω)^l by assumption. For k ⩾ 1,

$$u^k = e^{i^k}\, \mathbf{1}_{M^k} + u^{k-1}\, \mathbf{1}_{\Omega\setminus M^k}, \qquad (69)$$

where M^k = U^{k−1} ∖ U^k is the set of points newly assigned in iteration k. Since U^{k−1} and U^k are assumed to have finite perimeter, M^k also has finite perimeter. Applying (Ambrosio2000, Thm. 3.84) together with the boundedness of u^{k−1} and e^{i^k} by induction then provides u^k ∈ BV(Ω)^l.

We now denote

$$I^k := \bigl\{\gamma\in\Gamma \;\big|\; \mathrm{Per}(U^k_\gamma) = \infty\bigr\}, \qquad (70)$$

and the event that the first set with non-finite perimeter is encountered at step k by

$$B^k := I^k \setminus \bigl(I^0 \cup \ldots \cup I^{k-1}\bigr). \qquad (71)$$

Then

$$\mathbb{P}(A) = 1 - \mathbb{P}\Bigl(\bigcup_{k=0}^{\infty} B^k\Bigr). \qquad (72)$$

As the sets B^k are pairwise disjoint, and due to the countable additivity of the probability measure, we have

$$\mathbb{P}(A) = 1 - \sum_{k=0}^{\infty} \mathbb{P}(B^k). \qquad (73)$$

Now U^0 = Ω, therefore Per(U^0) < ∞ and P(B^0) = 0. For k ⩾ 1, we have

$$\mathbb{P}(B^k) \;\leqslant\; \mathbb{P}\bigl(\mathrm{Per}(U^k) = \infty \,\wedge\, \mathrm{Per}(U^{k'}) < \infty \;\;\forall k' < k\bigr).$$

By the argument from the beginning of the proof, we know that under the condition on the perimeter , therefore from (Ambrosio2000, , Thm. 3.40) we conclude that is finite for