I Introduction
Graph-based methods have become very popular in computer vision and image processing. Graph theory provides an efficient and rigorous framework for modeling complex relationships between data and has been used for various tasks such as image segmentation, motion detection and pattern matching. Typically, images are represented as graphs with the image pixels treated as the graph vertices, whereas the graph edges can be chosen from a wide range of options, depending on their suitability for each particular application. Examples of popular graph-based methods include
Graph Cuts [4, 3, 16], Random Walker [13], [2] and Power Watershed [9, 10].

Active Contours form a class of methods that minimize continuous energy functionals associated with a particular problem. As their name implies, they are curves that evolve in the domain of an image and are used, among other applications, for object detection and image segmentation. They were introduced by Kass et al. in the form of “snakes” [15]. Widely used active contour models include Geodesic Active Contours [6] and Active Contours Without Edges [7].
In this paper we will focus on the problem of active contours on graphs. We will generalize the notion of active contours on graphs and present a computational framework that allows the solution of a wide range of active contour models that can be described by level-set equations. Our approach is based on the Finite Element method, which reduces the problem from a partial differential equation to a sparse nonlinear system. We will also present an extension of our model that, under reasonable assumptions, can be used to speed up the curve evolution on large graphs. Our analysis will focus mainly on segmentation applications; however, its scope is much more general.
Our main contributions are:

We develop a novel theoretical and computational framework for the solution of general active contour models on graphs using the Finite Element method.

We generalize narrow-band level-set methods on graphs and present an efficient algorithm for active contour evolution on large graphs.

We present an extension of the Geodesic Active Contour model that incorporates statistical region information and achieves results that are competitive with the state of the art.
In Section II we present a novel approach for the solution of general active contour models on graphs using the Finite Element method. In Section III we provide an extension of our framework that can solve locally constrained active contour models and can also be used for fast contour evolution on graphs. In Section IV we propose a modification of the Geodesic Active Contours model that includes statistical region information. In Section V we provide experimental evidence for the efficiency of our method, focusing on segmentation applications, and finally, in Section VI we make some concluding remarks.
II Active Contours on Graphs
II-A Problem Formulation
In this section we will present a novel method for solving active contour evolution equations on graphs employing the Finite Element method. Finite Element Analysis is a powerful framework that enables us to solve complex partial differential equations in a simple and elegant manner. The other popular family of methods for the solution of PDEs is the Finite Difference method. The main difference between these two approaches is that Finite Difference methods approximate the differential operators using discrete schemes, whereas the Finite Element method approximates the solution of the equation with functions belonging to finite-dimensional function spaces. This can be useful, especially in the case of small graphs, where the accurate approximation of differential operators is a challenging problem. An example of a Finite Difference approach for the solution of active contour models on graphs is proposed in [23].
In this paper, we will consider general active contour models that can be described by level-set equations of the form
(1)  
(2)  
(3) 
where the coefficient functions in (1) are functionals of the level-set function, and the initial condition in (3) is the signed distance function from the initial curve.
Popular active contour models that can be expressed in the above form are

Erosion/Dilation:
(4) 
Geometric Active Contours:
(5) 
Geodesic Active Contours:
(6)
where g is the edge-stopping function; a standard choice (used here for concreteness) is

(7) g(I) = 1 / (1 + |∇(G_σ ∗ I)|²),

with G_σ ∗ I a Gaussian-smoothed version of the image.
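To make the role of the edge-stopping function concrete, the sketch below computes a commonly used variant, g = 1/(1 + |∇(G_σ ∗ I)|²), on a small synthetic image; the function name and the choice of σ are illustrative, not taken from this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_stopping(image, sigma=1.0):
    """Edge-stopping function g = 1 / (1 + |grad(G_sigma * I)|^2).

    g is close to 1 in flat regions and drops toward 0 near strong
    edges, which is what halts the contour evolution there.
    """
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx**2 + gy**2)

# A step image: g should dip near the vertical edge at column 8.
img = np.zeros((16, 16))
img[:, 8:] = 100.0
g = edge_stopping(img, sigma=1.0)
```

By construction g takes values in (0, 1], with small values exactly where the smoothed gradient is large.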
As a first step, we will convert (1) into an “equivalent” integral equation. To do this, we multiply both sides of (1) with a test function from the Sobolev space H¹(Ω), i.e. the space of all functions defined on the domain Ω whose first-order derivatives (in the distributional sense) belong to L²(Ω), the space of Lebesgue square-integrable functions, and integrate over Ω. Thus we have
(8)  
Using Green’s identity
(9) 
and the boundary condition (2) we can obtain the final form of the equation
(10)  
or more generally in functional form
(11) 
We then demand that (11) holds for all test functions in H¹(Ω). This is called the weak form of (1).
II-B Galerkin Approximation
Until now we have not made a numerical approximation to the problem; we have only converted it to an integral form that we expect to be equivalent to the original problem in some sense. We will approximate the solution using the Galerkin method. The core idea of Finite Element analysis is that we do not approximate continuous differential operators using finite differences, but instead approximate the solution of the equations with functions belonging to finite-dimensional subspaces of the Sobolev space.
Consider an N-dimensional subspace of the Sobolev space, together with a basis of this subspace. The approximation of the solution can be written as a linear combination of the basis functions and, since we have a time-dependent problem, we allow the coefficients of the linear combination to be functions of time. Thus we have
(12) 
We demand that (10) holds at least for all functions that belong to the chosen subspace. Since (11) is a linear functional of the test function and we have a basis of the subspace, this is equivalent to demanding that it holds for each of the basis functions. If we use the fact that
(13) 
and substitute in (10) we get
(14)  
where
(15) 
This is a nonlinear system of ODEs that can be written in the form
(16) 
where the matrix on the left-hand side depends on the vector of unknown coefficients, and the N-dimensional vectors involved are given by
(17) 
and
(18)  
The initial condition for the above system of differential equations can be obtained from the projection of the initial level-set function onto the chosen subspace, i.e.
(19) 
and thus
(20) 
where the inner product is the usual one defined on L²(Ω), given by
(21) 
II-C Generalization on Graphs
Consider a graph whose vertex and edge sets are given. We assume that the graph has a total of N vertices, and that each vertex is a point in the plane described by its planar coordinates. Note that every finite set of points in the plane can be mapped into the unit square by applying a translation followed by a scaling. Thus, by applying an appropriate geometric transformation, we can map the set of vertices to a subset of the unit square. For each vertex we choose a function belonging to the Sobolev space that has the following properties

the functions are linearly independent, (22)

and the function chosen for each vertex vanishes at all vertices that are not adjacent to it. (23)
We can see that these functions form a basis of some subspace of the Sobolev space. Also, for reasonably smooth functions, the boundary of the support of each function is a null set^1, and thus the overlap of the supports of functions chosen for nonadjacent vertices is also a null set. We can therefore conclude that if two vertices are nonadjacent, the corresponding functions are orthogonal. (^1 If a function is zero on an interval, then its derivative is also zero there; thus, if the function has at most countably many isolated roots, the above holds.)
Proposition 1. For every graph there exists a set of functions that satisfies the above properties.
Proof.
Let . We define the functions
(24) 
where we use the indicator function of the corresponding set. Each function is differentiable with
(25) 
Moreover, we can see that the desired property holds for all vertices and, since none of the functions is zero everywhere, the functions are linearly independent. ∎
Next we will describe how the solution of (16) can be obtained on a graph equipped with the functions defined above. First, from (17) we can see that a matrix entry vanishes whenever the corresponding vertices are not adjacent. If the number of edges incident to each vertex is small compared to the total number of vertices, then the matrix is sparse. Additionally, the summation in (18) reduces to a summation over the neighborhood of each vertex, and thus
(26)  
where is the neighborhood of vertex .
II-D Delaunay Graphs
In the previous paragraphs we described how active contour models can be generalized on graphs with arbitrary structure. In the rest of this paper we will limit our analysis to planar graphs and, more specifically, to Delaunay graphs constructed from the Delaunay triangulation of a finite set of points in the unit square.
Delaunay triangulation is a method of dividing the convex hull of a set of points into triangles that tends to avoid creating sharp triangles, a property that ensures good convergence results for the solution of PDEs. For more details about the Delaunay triangulation and the algorithms used to produce it, we refer the reader to the work of Persson and Strang [21].
In Fig.1(b) we can see an example of a Delaunay graph with 20 vertices. The vertices were chosen randomly according to the uniform distribution in the unit square. For a Delaunay graph we will denote the sets of vertices, edges and triangles of the graph accordingly.

Images can be thought of as a special case of Delaunay graphs, with the vertices being the pixels of the image. In Fig.1(a) we can see the triangulation of the domain of an image and the corresponding Delaunay graph. In this special case, i.e. a Delaunay graph created from a regular grid of points, each non-boundary pixel is connected with an edge to 6 of its adjacent pixels, forming an asymmetric hexagonal neighborhood.
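A Delaunay graph like the one described above can be built with standard tools; the sketch below uses `scipy.spatial.Delaunay` on 20 uniformly random points in the unit square and extracts the edge set from the triangles (variable names are our own).

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((20, 2))          # 20 random vertices in the unit square
tri = Delaunay(pts)

# Every triangle contributes its three sides to the edge set.
edges = set()
for a, b, c in tri.simplices:
    edges.update({tuple(sorted((a, b))),
                  tuple(sorted((b, c))),
                  tuple(sorted((a, c)))})
```

Since the resulting graph is planar, the edge count is bounded by 3n − 6, which is what makes the matrices in the previous subsection sparse.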
In the case of graphs, the image is not a function in the usual sense; instead, we define it as a function on the set of vertices. For the image gradient, however, since we will be using numerical integration, it is more convenient to define it as a 2D function on the unit square. This function will be piecewise constant, and in each triangle of the triangulation its value is the gradient of the plane formed by the values of the vertex function at the three vertices of the triangle. Similarly, quantities derived from the gradient will also be constant in each triangle of the triangulation.
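The per-triangle gradient described above can be computed by solving one 2×2 linear system per triangle; the following is our own minimal sketch of that computation.

```python
import numpy as np

def triangle_gradients(points, triangles, values):
    """Gradient of the piecewise-linear interpolant, one 2D vector per
    triangle: solve [[b-a], [c-a]] @ g = [f(b)-f(a), f(c)-f(a)]."""
    grads = np.empty((len(triangles), 2))
    for k, (i, j, l) in enumerate(triangles):
        A = np.array([points[j] - points[i], points[l] - points[i]])
        rhs = np.array([values[j] - values[i], values[l] - values[i]])
        grads[k] = np.linalg.solve(A, rhs)
    return grads

# Two triangles covering the unit square; vertex values sampled from a
# plane, so every triangle should report the same exact gradient.
pts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
f = pts[:, 0] + 2 * pts[:, 1]      # plane f(x, y) = x + 2y
grads = triangle_gradients(pts, tris, f)
```

For vertex data sampled from a plane the recovered gradient is exact, which is the property the numerical integration relies on.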
II-E Choice of Subspace and Basis
One of the most important parts of our method is the choice of a subspace of the Sobolev space and of a set of basis functions.
Let the triangles of the Delaunay triangulation be given. We will try to approximate the solution with a function belonging to the space of continuous functions on the unit square which are linear in each triangle of the triangulation. More formally,
(27) 
This choice of subspace provides good convergence properties and also simplifies calculations as we shall see later on.
After the selection of the subspace we have to choose a basis of it.
A possible choice satisfying the properties stated previously are the pyramid functions. For each vertex of the graph we define the corresponding function so that:

1) it equals 1 at its own vertex,

2) it equals 0 at every other vertex,

3) it is linear in each triangle.
It is easy to verify that the pyramid functions satisfy the required properties. It can also be proven that they are linearly independent and thus they form a basis of the subspace.
In Fig.2 we can see the form of these basis functions for the case of a rectangular grid. With this choice, each basis function overlaps only with the 6 others that correspond to the vertices adjacent to its own. Thus each row of the system matrix will contain at most 7 nonzero elements and, at the same time, only 7 different coefficients will appear in these expressions.
We can think of the pyramid functions as functions that interpolate discrete data on a rectangular grid. Assume, for example, that we have samples of a 2D function at points in the plane. By creating the Delaunay graph that corresponds to this set of points, we can construct a continuous function that approximates the sampled function as

(28)
From the above construction and the properties of the basis functions, it is easy to verify that the interpolant reproduces the samples at the vertices and that in each triangle of the triangulation we perform a linear interpolation. In Fig.3 we can see an example of an interpolation of discrete data on a rectangular grid.
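This hat-function interpolation is exactly what `scipy.interpolate.LinearNDInterpolator` performs over a Delaunay triangulation; the sketch below (the sample function and grid are our own choices) checks that the interpolant reproduces the nodal values.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Samples of f(x, y) = x * y on a 4x4 grid in the unit square.
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
pts = np.column_stack([xs.ravel(), ys.ravel()])
vals = pts[:, 0] * pts[:, 1]

# Piecewise-linear interpolation over the Delaunay triangulation: the
# expansion sum_i xi_i * v_i(x) with pyramid (hat) basis functions.
interp = LinearNDInterpolator(Delaunay(pts), vals)
```

Inside each triangle the result is the linear interpolation of the three vertex values, and at the vertices the samples are reproduced exactly.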
II-F Time Evolution
The final step that remains is the discretization in time. For a system of ODEs we have several options for the approximation of the derivatives, each one with its advantages and disadvantages, and the choice of a particular method depends on the specific properties of each problem.
We want to calculate the solution at a discrete set of points in the time interval, starting from the initial condition and continuing until we reach convergence. We assume that the solution is computed at a sequence of time points separated by a fixed time step, and we denote accordingly the approximation of the solution at each of these time points.
II-F1 Explicit Euler Method
Here we approximate the time derivative with the forward difference
(29) 
If we substitute this into (16) we obtain
(30) 
This means that we need to solve a linear system at each time step. This method is easy to implement but puts limits on the choice of the time step: a very small step is needed to ensure that the curve evolution is stable.
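A single explicit Euler step of this kind, with one sparse linear solve per step, might look like the following toy sketch; the matrix and right-hand side here are simple stand-ins for the ones defined above.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def explicit_euler_step(A, b_fun, xi, dt):
    """One explicit Euler step for a system A d(xi)/dt = b(xi):
    solve A (xi_{n+1} - xi_n) = dt * b(xi_n) for the increment."""
    return xi + spsolve(A, dt * b_fun(xi))

# Toy system: A = I (sparse), b(x) = -x, so xi decays toward 0.
A = diags([1.0] * 4, format="csc")
xi = np.ones(4)
for _ in range(10):
    xi = explicit_euler_step(A, lambda x: -x, xi, dt=0.1)
```

Each step multiplies the toy solution by (1 − dt), illustrating why the step size controls stability: for dt too large the factor leaves the unit interval and the iteration diverges.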
II-F2 Implicit Euler Method
In this method we approximate the time derivative with the backward difference
(31) 
If we substitute again the above relationship into (16) we obtain
(32) 
which is a nonlinear system of equations. This adds a further computational burden at each time step, but its main advantage is that it allows us to use a relatively larger time step than the explicit Euler method, since it has a larger region of stability.
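The stability advantage can be seen on the scalar test equation x' = −λx: with a step size above the explicit stability limit 2/λ, the explicit iterate blows up while the implicit one (solved in closed form here, since the test problem is linear) still decays. This is an illustrative toy, not the system of the paper.

```python
lam, dt, steps = 10.0, 0.3, 8    # dt is above the explicit limit 2/lam = 0.2

# Test equation x' = -lam * x; the exact solution decays to 0.
x_exp = x_imp = 1.0
for _ in range(steps):
    x_exp = x_exp + dt * (-lam * x_exp)   # explicit: x <- (1 - lam*dt) * x = -2x
    x_imp = x_imp / (1.0 + lam * dt)      # implicit: solve x_new = x + dt*(-lam*x_new)
```

The explicit iterate alternates sign and grows geometrically, while the implicit iterate shrinks by a factor 1/(1 + λ·dt) per step regardless of how large the step is.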
II-G Algorithmic Complexity
In this part we will try to provide an estimate for the complexity of the presented algorithm using the explicit Euler method for the time-derivative approximation. For the sake of simplicity, let us consider the case of Delaunay graphs that correspond to images, similar to the one depicted in Fig.1(a). Since each row of the system matrix contains at most 7 nonzero elements, the right-hand side of the linear system can be computed in time linear in the number of pixels at each time step; due to the same sparsity, matrix-vector multiplication can also be done in linear time. With an appropriate node labeling, the system matrix takes the form of a band matrix whose band length grows with the image width (for square images, with the square root of the number of pixels). For rectangular images it is safe to assume that the two dimensions are of comparable size, so the results obtained for square images remain valid in the more general case. We can see the sparsity pattern in Fig.4. For a band matrix, the solution of a linear system needs a number of operations proportional to the matrix size times the square of the band length; for an image this yields a cost per time step that grows quadratically with the number of pixels, which is prohibitive for large images.
As far as the time evolution is concerned, the total number of steps until convergence cannot be specified a priori, since it depends on the shape and position of the initial curve. If the initial curve is close to the object boundaries, only a few time steps are needed until convergence. Thus the computational complexity of the method is proportional to the total number of steps until convergence.
II-H Numerical Calculations
Here we will briefly describe how to compute the elements of the system matrix and of the right-hand-side vector. First, note that equations (17) and (18) can be rewritten as
(33) 
and
(34)  
Each basis function is zero everywhere except on the triangles formed around its vertex, where it is linear; thus its gradient is piecewise constant on each such triangle. To be mathematically precise, the function is differentiable everywhere except on the edges of the triangles, but since lines are sets of zero measure in the plane, the values on the edges do not contribute to the value of the integral.
Taking all this into account, we conclude that to calculate the matrix and vector elements we have to sum the contributions of the integrals over a small number of triangles. In the case of the diagonal elements, we have to sum over all the triangles that have the corresponding vertex as one of their vertices, whereas for the non-diagonal terms the summation is done over at most 2 triangles, namely those shared between the two vertices.
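The assembly-by-triangles idea can be sketched for mass-type integrals of products of two hat functions: each triangle contributes the standard local matrix (area/12)·[[2,1,1],[1,2,1],[1,1,2]] (exact for linear hat functions) to the rows and columns of its three vertices. The function below is our own illustration, not the paper's code.

```python
import numpy as np
from scipy.sparse import lil_matrix

def mass_matrix(points, triangles):
    """Assemble M_ij = integral(v_i * v_j) for piecewise-linear hat
    functions on a triangulation, one local contribution per triangle."""
    n = len(points)
    M = lil_matrix((n, n))
    local = np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]]) / 12.0
    for tri in triangles:
        a, b, c = points[tri]
        # Triangle area via the 2x2 determinant of the edge vectors.
        area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
        for i in range(3):
            for j in range(3):
                M[tri[i], tri[j]] += area * local[i, j]
    return M.tocsr()

# Two triangles covering the unit square.
pts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
M = mass_matrix(pts, tris)
```

The total mass equals the area of the domain, and entries between vertices that share no triangle stay exactly zero, which is the sparsity pattern discussed above.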
If T is a triangle with vertices a, b and c, and p is a polynomial of degree at most 2, then the edge-midpoint quadrature rule is exact:

(35) ∫_T p dx = (E(T)/3) [ p((a+b)/2) + p((b+c)/2) + p((a+c)/2) ],

where E(T) is the area of the triangle. Thus, using the above formula, we can obtain closed-form expressions for (33) and (34).
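Assuming the edge-midpoint rule (exact for polynomials of degree at most 2 on a triangle), the closed-form integration can be sketched as follows; names are our own.

```python
import numpy as np

def integrate_quadratic(p, a, b, c):
    """Edge-midpoint quadrature on a triangle, exact for polynomials of
    degree <= 2: integral = (area / 3) * sum of p at the edge midpoints."""
    a, b, c = map(np.asarray, (a, b, c))
    area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    mids = [(a + b) / 2, (b + c) / 2, (a + c) / 2]
    return area / 3.0 * sum(p(m) for m in mids)

# Integral of x^2 over the unit right triangle {x, y >= 0, x + y <= 1}
# is 1/12, and the rule reproduces it exactly.
val = integrate_quadratic(lambda q: q[0] ** 2, (0, 0), (1, 0), (0, 1))
```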
Finally, as far as the initial condition is concerned, we can approximate (20) by setting
(36) 
and thus
(37) 
Under reasonable assumptions, we expect that the two expressions for the initial condition give similar values, and thus we can adopt this more straightforward approach, which does not involve complex integrations.
III Locally Constrained Contour Evolution
III-A Method Description
In this section we will extend the previous framework to solve curve evolution models of the form
(38) 
where the Dirac delta is replaced by an approximation that converges to it as its width parameter ε tends to zero. A typical choice is the piecewise-constant approximation

(39) δ_ε(x) = 1/(2ε) if |x| ≤ ε, and 0 otherwise.
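A minimal sketch of this piecewise-constant approximation (the width ε = 0.5 is chosen purely for illustration):

```python
import numpy as np

def dirac_eps(x, eps=0.5):
    """Piecewise-constant approximation of the Dirac delta:
    1/(2*eps) on [-eps, eps], zero elsewhere; it integrates to 1."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= eps, 1.0 / (2.0 * eps), 0.0)

x = np.linspace(-2, 2, 401)
d = dirac_eps(x, eps=0.5)
```

As ε shrinks, the box gets taller and narrower while its integral stays 1, which is the defining property used by the constrained evolution.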
The Dirac term constrains the curve evolution to a small area near the current position of the curve, i.e. the 0-level set. One popular active contour model that can be written in the form (38) is Active Contours Without Edges (ACWE), presented by Chan and Vese in [7]. In the ACWE model, using the same notation as [7],
(40) 
The constants in (40) are the mean values of the image function inside and outside the contour respectively, and the remaining coefficients are positive constants. Also, as will be shown later, constrained curve evolution can be used to speed up the solution of active contour models that can be written in the form (1).
First, we will describe how (38) can be approximated on graphs. To this end, we define the sets
(41) 
(42) 
and
(43) 
These are the sets of points that lie outside, inside and on the curve at a given time. We also define the set of active points at that time
(44) 
where the boundary of a set of vertices can be calculated as
(45) 
The set of active points represents the points that are within “unit” distance from the current position of the active contour. In Fig.5 we depict visually how the active points are computed.
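One plausible way to realize this construction on a graph (our own sketch; the exact set definitions in the paper may differ in details): a vertex is active if the level-set function changes sign, or vanishes, across one of its incident edges.

```python
import numpy as np

def active_points(phi, edges):
    """Vertices within one edge of the zero level set: a vertex is
    active if phi is zero there or changes sign across one of its edges."""
    active = {i for i, v in enumerate(phi) if v == 0}
    for i, j in edges:
        if phi[i] == 0 or phi[j] == 0 or (phi[i] > 0) != (phi[j] > 0):
            active.update((i, j))
    return active

# Path graph 0-1-2-3-4 with the contour crossing between vertices 1 and 2.
phi = np.array([-2.0, -1.0, 1.0, 2.0, 3.0])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
act = active_points(phi, edges)
```

Only the two endpoints of the sign-changing edge are marked active, so the subsequent linear solves involve a small fraction of the vertices.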
Then, up to a positive scaling factor, we approximate the Dirac term on the graph at each time with
(46) 
Using the above definition, the evolution equation for (38) can be approximated as
(47) 
where the matrix and the vector of the constrained system have entries given by
(48) 
It is easy to observe that the corresponding terms vanish for all non-active vertices, and thus the curve evolution is indeed constrained to the set of active points.
The above formulation will help us reduce significantly the computational complexity of each time step. Consider the explicit Euler approximation of (47)
(49) 
For all vertices that are not in the set of active points, the corresponding entries of the right-hand side vanish. So, for the active vertices, the update does not depend on the values of the coefficients at non-active vertices; these values have no influence on the solution of the linear system and can therefore be set to zero.
Let the active submatrix be constructed from the full matrix by keeping only the rows and columns that correspond to active vertices, and let the active right-hand side be derived from the full vector by discarding the elements of non-active vertices. Using the corresponding projection matrix that selects the active coefficients, we obtain the equivalent linear system
(50) 
After computing the solution of (50), the update rule for the coefficients in matrix notation becomes
(51) 
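The extract-solve-scatter pattern described above might be sketched as follows; the matrix, right-hand side and active set here are toy stand-ins for the quantities defined in the text.

```python
import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import spsolve

def constrained_step(M, b, xi, active, dt):
    """Update only the active vertices: extract the sub-system
    M[a][:, a] y = dt * b[a], solve it, and scatter the increment
    back, leaving inactive coefficients unchanged."""
    a = np.asarray(sorted(active))
    Ma = M.tocsr()[a][:, a]
    y = spsolve(Ma.tocsc(), dt * b[a])
    xi_new = xi.copy()
    xi_new[a] += y
    return xi_new

M = identity(5, format="csr")      # stand-in for the sparse system matrix
b = np.ones(5)
xi = np.zeros(5)
out = constrained_step(M, b, xi, {1, 2}, dt=0.5)
```

Only the coefficients of the active vertices change, and the linear solve involves a matrix whose size is the number of active points rather than the full vertex count.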
To ensure the stability of the numerical method and make it possible to use large time steps, we normalize the level-set function after each time step. Specifically, the level-set function (and consequently the vector of coefficients) is saturated outside a small symmetric interval around zero. With the use of normalization we can choose a time step significantly larger than the one used for the solution of (1). A similar approach cannot be adopted for the solution of the full level-set equation, because at each time step the evolution domain covers the whole graph and, although the level-set function would remain bounded, large time steps would produce noisy artifacts.
III-B Estimation of Algorithmic Complexity
As in the previous section, we will provide an estimate for the computational complexity of the proposed algorithm for the case of Delaunay graphs that correspond to images. We expect that under reasonable assumptions, similar results hold for more general Delaunay graphs.
For an image with a given number of pixels, each of the three sets defined above is the union of a number of 1D curves. Typical 1D curves in a grid contain a number of pixels on the order of the square root of the image size; thus, in almost all practical cases the set of active points will also be of that order. Since the number of edges in a planar Delaunay graph is bounded by a linear function of the number of vertices, a naive calculation of the active set from the definitions of its components requires time linear in the number of vertices. Additionally, it is easy to verify that, given the set of active points, the elements of the constrained matrix and vector can be computed in time proportional to the number of active points.
Since the full system matrix is a sparse band matrix, its active submatrix will also be a band matrix, but its band length can often be large. However, using the Reverse Cuthill-McKee algorithm [12], which runs in time linear in the size of the submatrix, we can obtain a permuted matrix with a small band length on average, so the solution of the constrained linear system is expected to be fast.
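The bandwidth reduction can be demonstrated with `scipy.sparse.csgraph.reverse_cuthill_mckee` on a path graph whose vertex labels have been scrambled, an illustrative stand-in for an unfavorably labeled active set.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Largest |i - j| over the nonzero entries of a sparse matrix."""
    coo = A.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

rng = np.random.default_rng(1)
n = 30
labels = rng.permutation(n)
# Path graph with scrambled vertex labels: the adjacency matrix has a
# large band even though the graph itself is a simple chain.
rows, cols = [], []
for k in range(n - 1):
    i, j = labels[k], labels[k + 1]
    rows += [i, j]
    cols += [j, i]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]    # symmetrically permuted matrix
```

After the permutation the chain structure is recovered and the bandwidth collapses, which is what makes the banded solver cheap.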
Thus, we have shown that, on average, each time step of the algorithm requires time linear in the number of graph vertices. If we compare this result with the number of operations required for the full curve evolution, we can see that the constrained curve evolution algorithm is faster by an order of magnitude. Its linear complexity with respect to the number of graph vertices makes it feasible for practical applications.
III-C Fast Solution of Active Contour Models
The results from the previous section hint towards a fast implementation of general active contour models that can be written in the form (1). The level-set approach enables us to handle topological changes of the curve in a solid and efficient manner, but it introduces an extra computational burden because we have to evolve a 2D function instead of a curve. The answer to this problem is to limit our focus to a small area of interest near the curve (often referred to as a band) and evolve the level-set function only in this subset of the image. Similar methods, called narrow-band methods, were described for the case of images in [1, 24, 20].
In this paper we will extend the previous ideas and generalize them to graphs. Instead of evaluating the curve evolution on the whole graph, we will constrain it near the curve using the method of Section III-A. This approximation is based on the assumption that the level-set function evolves “isotropically”, so that points outside or inside the curve will not change status at least until the moment the active contour reaches them. The above assumption is valid for the majority of active contour methods, such as the simple Erosion/Dilation and the Geometric and Geodesic Active Contours.
Generally, the set of active points is comprised of vertices that are either outside, inside or on the contour. For certain active contour models, such as those mentioned above, where the curves either only expand or only shrink, the set of active points can be further reduced. In the case of expanding contours, if a point is inside the contour then it will always remain inside it, so it suffices to keep only the active points that are outside or on the contour. Following the same logic, for shrinking contours we keep only the active points that are inside or on the contour. Generally, this observation allows us to reduce the size of the active set by a factor of 2.
IV Geodesic Active Regions
The classic Geodesic Active Contours method tries to detect objects of interest using only edge information. This poses a serious limitation on the range of images where it can be used. While it gives excellent results when applied to images where the background is smooth and the objects are easily distinguishable, we cannot expect to apply it successfully to complex real-world images in which object boundaries are not always well defined. Thus we need to incorporate more features in our model, such as the foreground and background color distributions, that will help us separate the objects of interest from the background. Similar efforts to include region information in active contour methods were presented in [25] and [19].
In this section, we will discuss a modification of the GAC algorithm, resembling the Geodesic Active Regions of [19], that accounts for these issues. We have to note that GACs belong to the class of semi-supervised methods: the user only needs to specify a curve that surrounds the object. Here we propose a supervised method in which the user needs to provide additional seed points inside the object(s) to be detected. The concept of our method is similar to that of GrabCut [22]: iteratively segment and re-estimate region statistics until convergence.
Given a set of points in the plane, we form the corresponding Delaunay graph. First, the user specifies a region of interest that contains the object to be detected; the points outside this region are assumed to belong to the background. The user also specifies one or more closed curves inside the objects of interest; the points inside the regions enclosed by these curves belong to the foreground. Optionally, the user can specify an additional set of points to be labeled as background. The remaining points are temporarily classified as undefined, and our purpose is to determine their labeling.
Using the information provided above, we will try to model the color distributions of the foreground and background. We build two 3D histograms, one for the foreground and one for the background, and train the two models using the foreground and background points respectively. Histograms are used instead of Gaussian Mixture Models because they tend to be faster and, in our case, yield slightly better results.
Next, for each vertex we compute the likelihoods that it belongs to the foreground and to the background, based on the constructed histograms. Using these two likelihoods we also define the quantity

(52)
If this quantity is positive, the vertex is more likely to belong to the foreground, whereas if it is negative, the vertex is more likely to belong to the background. In order to sharpen the separation of foreground and background pixels, we apply soft thresholding to this quantity using a sigmoid function and obtain
(53) 
In this paper, we chose the sigmoid to be the common logistic function
(54) 
where the gain is a sufficiently large positive constant. Its role is to quickly push positive values toward one saturation level and negative values toward the other, producing a near-binary result which will be useful for the curve evolution.
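The pipeline of histogram likelihoods followed by sigmoid sharpening might be sketched as follows; the histogram resolution, the difference-of-likelihoods definition of the sharpened quantity, and the gain value are our own illustrative choices, standing in for the exact definitions in (52)-(54).

```python
import numpy as np

def color_model(pixels, bins=8):
    """3D RGB histogram normalized to a probability distribution."""
    hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return hist / hist.sum()

def likelihood(model, pixels, bins=8):
    """Look up each pixel's probability in the histogram model."""
    idx = np.clip((pixels // (256 // bins)).astype(int), 0, bins - 1)
    return model[idx[:, 0], idx[:, 1], idx[:, 2]]

def sharpen(d, lam=50.0):
    """Scaled logistic soft-thresholding: pushes positive values toward
    +1 and negative values toward -1; lam is the gain."""
    return 2.0 / (1.0 + np.exp(-lam * d)) - 1.0

fg = np.full((100, 3), 200.0)      # bright foreground samples
bg = np.full((100, 3), 30.0)       # dark background samples
m_fg, m_bg = color_model(fg), color_model(bg)

px = np.array([[200., 200., 200.], [30., 30., 30.]])
d = likelihood(m_fg, px) - likelihood(m_bg, px)
h = sharpen(d)
```

With a large gain the output is effectively binary, which is the near-binary map that drives the region term of the curve evolution.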
In the GAC model the initial contour can only shrink. Our model can include both shrinking and expanding contours and enables us to combine region and edge information in a straightforward way. If we choose the usual edge-stopping function defined in (7), our modified Geodesic Active Contour model becomes
(55) 
where
(56) 
The role of the region term is to stop the contour evolution at the desired regions. In the case of expanding contours, we expand the curve until it reaches the background; conversely, in the case of shrinking contours, we shrink the curve until it reaches the foreground. Concurrently, the balloon force is also controlled by the edge-stopping function, which vanishes at strong edges.
The core principle of our algorithm is to alternate between expanding and shrinking curves to successfully extract the objects of interest. In the first iteration of the algorithm we train the color models using the initial seed points and expand the object seed boundaries according to (56) until convergence. We then obtain new estimates for the foreground and background and use them to retrain the color histograms. Vertices in the undefined region are not used for model training, mainly because they could result in a biased estimation of the statistics. Next we shrink the curves using the new estimates in order to make sure that points that were falsely classified as foreground in the first iteration are discarded. This process, alternating between expanding and shrinking contours and retraining the models after each step, is repeated until the segmentation results of two successive iterations are sufficiently similar.
The last step of the algorithm is a refinement of the segmentation result. We will try to alter the position of the contour in order to better capture the object boundaries. This process is different from the various matting techniques (see for example Bayesian matting in [8]) because our goal is to produce only a binary result and not continuous alpha values. To do this, we limit our focus to a small band around the contour. For each pixel in this area we compute the foreground and background likelihoods as defined previously, but now the foreground and background models are trained using only vertices near the region boundaries. Each vertex is then assigned to the foreground or the background according to which likelihood is larger. However, since this process can create a noisy output resembling salt-and-pepper noise, we apply median filtering to the result.
V Applications to Segmentation
In this section we will provide experimental evidence for the effectiveness of the proposed framework. Our analysis will focus on segmentation applications, which is one of the main areas where active contours have traditionally been used. We will use three different active contour models as representatives: the Chan-Vese ACWE [7], the GAC [6] and the Geodesic Active Regions (GAR) variant proposed in this paper. Our framework will be tested on Delaunay graphs formed from regular grids as well as on more general Delaunay graphs. For the case of images, where the number of graph vertices is large, the evolution equation of the GAR was solved using the constrained contour evolution framework described in Section III.
V-A Image Segmentation
In this part, we will present how our framework can be used to solve active contour models for image segmentation. As an example, in Fig.6 we depict the curve evolution of the ACWE model [7] for grayscale image segmentation using the proposed framework. Also, in Fig.7 we can see an application of the GAR model, which is also solved with the computational framework of Section III.
Next we will evaluate the performance of the proposed GAR for image segmentation and provide comparisons against four state-of-the-art seeded image segmentation methods: GrabCut [22], Laplacian Coordinates [5], Power Watersheds [9] and Random Walker [13], as well as the ACWE algorithm [7]. For the benchmarks we use two different datasets. The first is the popular GrabCut database consisting of 50 images, with ground truth segmentations and foreground and background seed locations; 20 of those images are taken from the Berkeley Segmentation Dataset [17]. The second dataset consists of 40 images from the PASCAL VOC dataset [11]. For the purposes of this benchmark, the foreground and background seed images were obtained by eroding the respective regions in the ground truth segmentation image, producing seeds similar to those in the GrabCut dataset.
The metrics we use to compare the above methods are

Rand Index: Measures the similarity between the segmentation and the ground truth by calculating the fraction of point pairs that are classified in the same set in the two segmentations. In our benchmarks we use the adjusted form of the Rand Index as proposed in [14].

Intersection over Union: It is the ratio of the intersection of the segmentation region with the ground truth to their union.

Variation of Information (VOI): It is a measure of the distance between a segmentation and the ground truth in terms of their entropies and their mutual information [18].

Error rate: It is simply the percentage of pixels that were misclassified.
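Two of these metrics are simple to state in code; below is a sketch for binary masks covering IoU and the error rate (the adjusted Rand Index and VoI require the full contingency table and are omitted here).

```python
import numpy as np

def iou(seg, gt):
    """Intersection over Union of two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union

def error_rate(seg, gt):
    """Percentage of misclassified pixels."""
    return 100.0 * np.mean(seg != gt)

gt  = np.array([[0, 0, 1, 1], [0, 0, 1, 1]], dtype=bool)
seg = np.array([[0, 0, 1, 1], [0, 1, 1, 0]], dtype=bool)
```

In benchmarks such as the ones below, these would be evaluated only on the unlabeled trimap pixels rather than on the full masks.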
All the above metrics were calculated in the unlabeled regions provided in the trimap images. We also exclude the pixels in the ground truth segmentations that are marked as undefined; typically, these pixels are located near the boundaries, where the image resolution is not sufficient to classify them either as foreground or as background. By comparing our benchmark results with those presented in other papers using the same publicly available implementations, we noticed a slight decrease in performance. This difference is due to the fact that in the calculations of the segmentation metrics in those papers all image pixels are used, including those already marked as foreground and background. Also, in the case of the Rand Index, there are several different definitions in the literature, and thus its exact value depends on the particular choice.
In Table I we present the benchmark results of all six methods on the GrabCut dataset. GrabCut outperforms the other algorithms. Our method performs very well, achieving similar results with an accuracy of over 90%, while surpassing Power Watershed and Random Walker.
Additionally, in Table II we compare the methods on the subset of the PASCAL dataset. Our method outperforms all other methods except for GrabCut, and the performance margin between these two methods is small. From the results on both datasets we can therefore conclude that our algorithm can serve as a reliable alternative to the other widely used methods.
In Fig. 8 we show segmentation results for 5 images from the GrabCut database and 4 images from the PASCAL dataset. This visualization highlights some characteristics of each method. For example, GrabCut and our method perform better on sharp objects and on objects with holes or non-smooth boundaries. Laplacian Coordinates tends to produce smoother object boundaries, which is why it fails in cases similar to those of the second and fourth image.
Table I: Benchmark results on the GrabCut dataset.

Method    RI      IoU     VoI     Error
GC      0.7861  0.8796  0.5419   5.869 %
LC      0.7763  0.8671  0.5642   6.208 %
PW      0.7171  0.8358  0.6768   7.977 %
RW      0.7200  0.8343  0.6652   7.854 %
CV      0.2899  0.4833  1.2244  24.828 %
Ours    0.7268  0.8519  0.6704   7.793 %
Table II: Benchmark results on the PASCAL VOC subset.

Method    RI      IoU     VoI     Error
GC      0.6939  0.8321  0.7113   8.945 %
LC      0.5861  0.7566  0.8834  12.421 %
PW      0.5683  0.7639  0.9345  12.926 %
RW      0.3898  0.6872  1.1578  20.329 %
CV      0.2045  0.4142  1.2056  29.744 %
Ours    0.6858  0.8317  0.7266   9.309 %
V-B Graph Segmentation
In this part, we show how our active contour framework can be used for graph segmentation. First, we implemented the GAC method [6] on Delaunay graphs, as discussed earlier. In Fig. 9 we present an example of geographical data segmentation using the GAC method. The data are from the Greek referendum of 01/07/2015, and the "feature" function is the percentage of "NO" votes at each polling place. The vertices of the graph correspond to the positions of the polling stations, scaled to fit in the unit square. It is worth noting that the classic GAC method can only extract clusters that lie inside the initial curve, and the segmentation result depends greatly on the position of the initial contour.
Additionally, we explore how the proposed Geodesic Active Regions model performs on non-regular graphs by testing it on subsampled versions of the images from the GrabCut dataset. Specifically, for each image in the dataset and a chosen subsampling factor, we create a Delaunay graph whose vertices lie inside the unit square, and we assign to each vertex the value of the image at the corresponding location, which serves as the feature function. The number of vertices is usually significantly smaller than the total number of pixels in the image, so this subsampling provides a fast estimate of the segmentation regions in cases where slightly reduced accuracy is acceptable. As we can see in Table III, there is a noticeable performance drop as the graph size decreases, but for the most part this can be attributed to the loss of image detail at lower resolutions. An additional reason for the performance degradation is that active contour methods are continuous methods modeled with PDEs, and thus their accuracy may suffer under a coarse discretization. In this series of experiments, the vertices of the Delaunay graphs are irregularly spaced and are created using a variant of the method described in [21]. Another possible option for creating the graphs, used in [2], is to perform watershed segmentation on the original image and choose the vertices as the centroids of the oversegmented regions.
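The construction above can be sketched as follows. This is a simplified illustration: the experiments use an irregular sampling scheme based on a variant of [21], whereas here the vertex positions are drawn uniformly at random in the unit square and the feature function is a nearest-pixel lookup; the function name and these choices are our own assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def subsampled_delaunay_graph(image, factor, rng=None):
    """Build a Delaunay graph on points sampled in the unit square
    and attach the underlying image values as vertex features.

    image  : 2-D array of pixel values.
    factor : subsampling factor; roughly n_pixels / factor vertices.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    n = max(4, (h * w) // factor)
    pts = rng.random((n, 2))            # vertex positions in the unit square
    tri = Delaunay(pts)

    # Feature function: nearest-pixel image value at each vertex.
    r = np.minimum((pts[:, 1] * h).astype(int), h - 1)
    c = np.minimum((pts[:, 0] * w).astype(int), w - 1)
    features = image[r, c]

    # Undirected, deduplicated edge list of the triangulation.
    e = np.vstack([tri.simplices[:, [0, 1]],
                   tri.simplices[:, [1, 2]],
                   tri.simplices[:, [0, 2]]])
    edges = np.unique(np.sort(e, axis=1), axis=0)
    return pts, edges, features
```

The returned vertices, edges and feature values are exactly the ingredients the active contour framework needs to evolve a level-set function on the graph.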
Table III: Benchmark results of our method on subsampled versions of the GrabCut images.

Subsampling factor    RI      IoU     VoI     Error
64                  0.4920  0.7147  1.1019  14.368 %
32                  0.5265  0.7418  1.0438  12.771 %
16                  0.5624  0.7622  0.9832  11.236 %
8                   0.5917  0.7782  0.9375  10.173 %
Full size           0.7268  0.8519  0.6704   7.793 %
VI Conclusion
In this paper we discussed the problem of active contours on graphs. Using the Finite Element method, we generalized active contour models on graphs and developed a novel computational framework to solve the corresponding level-set equations. Our method can be implemented on arbitrary graphs; however, we focused on the family of Delaunay graphs, which under certain conditions ensures good convergence properties for the solution of PDEs. One of the main advantages of our method is that it can give more accurate results on small graphs, where Finite Difference approaches struggle to approximate the differential operators involved. Next, we extended the proposed framework to solve locally constrained active contour models and presented a generalization of narrow band level-set methods on graphs that allows fast contour evolution on large graphs. This extension reduced the computational complexity by an order of magnitude. Subsequently, we presented several applications to image and graph segmentation. To demonstrate the effectiveness of our framework, we developed a supervised extension of the Geodesic Active Contours that uses statistical region information. The proposed method achieves results comparable to the state of the art. In addition, our method is applicable both to images and to irregularly spaced graphs, while being supported by a solid theoretical model and an efficient algorithm.
References
 [1] D. Adalsteinsson and J. A. Sethian, “A fast level set method for propagating interfaces,” J. Comput. Phys., vol. 118, no. 2, pp. 269–277, 1995.
 [2] C. G. Bampis, P. Maragos, and A. C. Bovik, “Graph-driven diffusion and random walk schemes for image segmentation,” IEEE Trans. Image Process., vol. 26, no. 1, pp. 35–50, Jan. 2017.
 [3] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 11, pp. 1222–1239, 2001.
 [4] Y. Y. Boykov and M. P. Jolly, “Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images,” in Proc. Int’l Conf. on Computer Vision (ICCV ’01), vol. 1, 2001, pp. 105–112.
 [5] W. Casaca, L. G. Nonato, and G. Taubin, “Laplacian Coordinates for seeded image segmentation,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR ’14), Jun. 2014, pp. 384–391.
 [6] V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic Active Contours,” Int’l Journal of Computer Vision, vol. 22, no. 1, pp. 61–79, 1997.
 [7] T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.
 [8] Y.Y. Chuang, B. Curless, D. H. Salesin, and R. Szeliski, “A Bayesian approach to digital matting,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR ’01), vol. 2, 2001, pp. 264–271.
 [9] C. Couprie, L. Grady, L. Najman, and H. Talbot, “Power watershed: A unifying graph-based optimization framework,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 7, pp. 1384–1399, Jul. 2011.
 [10] J. Cousty, G. Bertrand, L. Najman, and M. Couprie, “Watershed cuts: Minimum spanning forests and the drop of water principle,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 8, pp. 1362–1374, Aug. 2009.
 [11] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The Pascal Visual Object Classes (VOC) challenge,” Int’l Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010.
 [12] A. George and J. W. H. Liu, Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, 1981.
 [13] L. Grady, “Random walks for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 11, pp. 1768–1783, Nov. 2006.
 [14] L. Hubert and P. Arabie, “Comparing partitions,” J. Classification, vol. 2, no. 1, pp. 193–218, 1985.
 [15] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” Int’l Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
 [16] V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 2, pp. 147–159, Feb. 2004.
 [17] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR ’01), vol. 2, 2001, pp. 416–423.
 [18] M. Meilǎ, “Comparing clusterings: An axiomatic view,” in Proc. ICML, 2005, pp. 577–584.
 [19] N. Paragios and R. Deriche, “Geodesic Active Regions: A new framework to deal with frame partition problems in computer vision,” J. Vis. Comm. Image Represent., vol. 13, no. 1, pp. 249–268, 2002.
 [20] D. Peng, B. Merriman, S. Osher, H. Zhao, and M. Kang, “A PDE-based fast local level set method,” J. Comput. Phys., vol. 155, pp. 410–438, 1999.
 [21] P.O. Persson and G. Strang, “A simple mesh generator in MATLAB,” SIAM Rev., vol. 46, no. 2, pp. 329–345, 2004.
 [22] C. Rother, V. Kolmogorov, and A. Blake, ““GrabCut”: Interactive foreground extraction using iterated graph cuts,” ACM Trans. Graph., vol. 23, no. 3, pp. 309–314, Aug. 2004.
 [23] C. Sakaridis, K. Drakopoulos, and P. Maragos, “Theoretical Analysis of Active Contours on Graphs,” SIAM J. Imaging Sciences, accepted for publication, to appear.
 [24] R. T. Whitaker, “A level-set approach to 3D reconstruction from range data,” Int’l Journal of Computer Vision, vol. 29, no. 3, pp. 203–231, 1998.
 [25] S. C. Zhu and A. L. Yuille, “Region Competition: Unifying snakes, region growing and Bayes/MDL for multiband image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., pp. 884–900, Sep. 1996.