I Introduction
Segmentation and registration are crucial tasks in medical image interpretation. While manual segmentation by an expert is accurate, it is also very time consuming and expensive, making computational techniques designed to aid in segmentation an important field of research [1, 2]. Such techniques are even more necessary in registration, where manual results are often infeasible to obtain [3].
I-A Discrete Segmentation
Discrete formulations have become popular for automated image segmentation [4, 5, 6, 7, 8], with the problem of finding an optimal segmentation cast as minimizing a Markov random field (MRF) energy, allowing powerful techniques from discrete optimization to be leveraged. The random walker (RW) algorithm is a popular, efficient, and flexible discrete technique that was initially developed by Grady for image segmentation (RWIS) [9]. Minimizing the RW MRF reduces to solving a system of equations, with the system matrix given by the Laplacian of a weighted graph defined over the image. The Laplacian is a sparse, positive semidefinite matrix (defined in detail in Sec. II-A), so RWIS is relatively straightforward to implement and provides a globally optimal solution. RWIS provides several other features that make it useful for medical segmentation applications: it extends trivially to multilabel segmentation and to higher-dimensional images; it provides a probabilistic segmentation, useful for evaluating segmentation uncertainty, which can in turn be used to identify segmentation errors [10, 11, 12]; and it allows user interaction through user-specified “seed” voxels whose labels are fixed.
Interactive segmentation has been an active research field in medical imaging, due to the difficulty of the segmentation problems and the frequent necessity of expert-verified accuracy [1, 2, 4, 13, 12]. While the listed properties make RWIS a competitive interactive segmentation algorithm, its main drawback is the cost of solving the large system of equations between each batch of user input, which can leave the user waiting, particularly for volumetric images. This issue was greatly mitigated by Grady and Sinop [14] by taking advantage of the time an image is available offline (after acquisition but before any user interaction) to precompute eigenvector/eigenvalue pairs of the Laplacian that can greatly speed up the RW algorithm online (when a user is actively interacting with the image). The number of eigenvectors controls the tradeoff between speed and accuracy, and it was shown that a significant speedup can be achieved with a minimal loss in accuracy. This technique was extended by Andrews et al. [15] to apply even when prior information was not available offline [16]. We refer to this technique as fastRWIS.
I-B Discrete Registration
The success of discrete techniques for image segmentation has led to similar discrete formulations for registration [17, 18, 19]. Some discrete registration techniques are based on multilabel optimization, with labels corresponding to displacement vectors from a discrete predefined set. An example of such a technique is the RW image registration (RWIR) technique developed by Cobzas and Sen [20]. As a globally optimal deformable registration framework, RWIR has been growing in popularity, with multiple recent extensions [21, 22, 23, 24]. As with RWIS, the uncertainty information provided by RWIR has been utilized to identify registration error [25]. While RWIR permits user interaction through the placement of landmark points, it is more often guided by prior probabilities for the displacements derived from image similarity terms. Andrews et al. demonstrated that the precomputation techniques applied to RWIS could be utilized to increase the RWIR speed even when no user interaction is used [26]. The key idea is to redefine “offline time” as the time when only one of the images being registered is available, which can often be a significantly long period (e.g., when one image is an atlas). Precomputation performed offline on the available image can then be used to speed up RWIR online, when other images become available and the prior probabilities for each displacement vector can be calculated. We refer to this technique as fastRWIR.
I-C Contributions
While the fastRW techniques have been shown to achieve an excellent increase in online speed with minimal loss in accuracy, they suffer from one fundamental drawback: they rely on data computed offline, when information regarding the problem to be solved is incomplete, thus limiting the flexibility of the RW methods. Specifically, the fastRW algorithms require the Laplacian matrix and the number of precomputed eigenvector/eigenvalue pairs to be fixed offline. In this paper, in order to mitigate this drawback, we present techniques for:

Efficient online estimation of the number of eigenvectors required for a desired accuracy. This technique is based on the available online information, specifically, user seeds for RWIS and prior displacement probabilities for RWIR.

Updating the eigenvectors/values when the image graph edge weights are altered online, e.g. to provide stronger regularization across certain image edges.
The remainder of the paper is laid out as follows. In Sec. II, we review the RW algorithm and fastRW extensions, and discuss connections to the normalized cuts segmentation framework [7] that will be relevant when introducing our techniques in Sec. III. We then demonstrate the practical usefulness of our techniques in Sec. IV and conclude the paper in Sec. V.
II Fast Random Walker Overview
II-A Random Walker Algorithm
Let $I$ be an image over the voxel set $\mathcal{V}$, with $|\mathcal{V}| = n$. The RW algorithm assigns each voxel a length-$K$ probability vector (i.e., a positive vector that sums to $1$), which we will denote collectively by the $n \times K$ matrix $U$. In RWIS, these probabilities correspond to different structures in an image. In RWIR, the probabilities correspond to different potential displacement vectors.
The RW algorithm consists of defining constraints on $U$ and then regularizing it subject to these constraints. Typical constraints are either soft, consisting of an $n \times K$ matrix $P$ of prior probabilities that $U$ should be similar to, or hard, consisting of user-specified “seed” voxels that have one label fixed to probability $1$. In RWIR, these seeds take the form of landmarks. In order to simplify the notation related to the seed constraints, we reorder the voxels so the rows corresponding to seeded voxels come first, and then divide $U$ into “s”eeded and “n”onseeded components using block matrix notation:
$$U = \begin{bmatrix} U_s \\ U_n \end{bmatrix}, \qquad (1)$$
where $U_s$ is fixed by the hard constraints and $U_n$ contains the remaining unknowns. We denote the number of seeds by $n_s$.

In RWIS, the prior probabilities can be calculated from statistics on the user input seeds, allowing spatially disjoint objects to be segmented without seeding each object individually [16]. In RWIR, the soft constraints are usually based on a local image similarity measure, and are the driving force behind the registration.
The RW regularization is done using an image graph, with vertices corresponding to voxels and weighted edges between neighboring voxels, where the weights control how similar the probabilistic labels of neighboring voxels should be. We define $W$ as the matrix of edge weights, with components
$$w_{ij} = \begin{cases} \exp\left(-\beta (I_i - I_j)^2\right) & j \in \mathcal{N}(i), \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$$
Here, $I_i$ is the image’s intensity at voxel $i$, $\mathcal{N}(i)$ is the set of voxels neighboring $i$, and $\beta$ is a scalar parameter. Letting $D$ be the diagonal matrix with diagonal entries given by the row sums of $W$ (or column sums, since $W$ is symmetric), the Laplacian matrix of the graph is defined as $L = D - W$.
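To make the graph construction concrete, here is a minimal sketch (not from the paper; the function name and the 1-D chain neighborhood are our own simplifications) of the Gaussian weights in (2) and the Laplacian $L = D - W$:

```python
import numpy as np
import scipy.sparse as sp

def build_laplacian(intensities, beta=1.0):
    """Build L = D - W for a 1-D chain of voxels, with Gaussian edge
    weights w_ij = exp(-beta * (I_i - I_j)^2) between neighbors.
    (Illustrative stand-in for a full 4/6-connected image graph.)"""
    n = len(intensities)
    rows, cols, vals = [], [], []
    for i in range(n - 1):
        w = np.exp(-beta * (intensities[i] - intensities[i + 1]) ** 2)
        rows += [i, i + 1]
        cols += [i + 1, i]
        vals += [w, w]                       # symmetric weights
    W = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())  # degree matrix
    return D - W

L = build_laplacian(np.array([0.0, 0.1, 0.9, 1.0]), beta=2.0)
```

Note the large intensity jump between the second and third voxels yields a weak edge, so the Laplacian couples those voxels only loosely, which is exactly how the regularization respects image boundaries.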
The RW algorithm calculates the label probabilities by solving, for each label,
(3) 
where $\gamma$ is a scalar parameter. Writing $L$ and $P$ in block matrix notation similar to (1),
$$L = \begin{bmatrix} L_{ss} & L_{sn} \\ L_{ns} & L_{nn} \end{bmatrix}, \qquad P = \begin{bmatrix} P_s \\ P_n \end{bmatrix}, \qquad (4)$$
the globally optimal solution to (3) for each label can be found simultaneously by solving the linear system of equations:
$$(L_{nn} + \gamma I)\, U_n = \gamma P_n - L_{ns} U_s, \qquad (5)$$
where $I$ is an appropriately sized identity matrix. This equation consists of one linear system per label, each with size equal to the number of nonseeded voxels, and solving them can be very computationally expensive, which can disrupt the workflow when a user is waiting to provide additional input. However, the system matrix is mostly known offline; the only unknowns are the rows and columns corresponding to the seeded voxels. This suggests precomputation could be performed on the matrix to quickly approximate the solution to (5) online.
II-B Fast Random Walker Through Precomputation
The precomputation scheme of Grady and Sinop [14] is based on an eigenvector decomposition of the Laplacian:
$$L = E \Lambda E^{\top}, \qquad (6)$$
where the columns of $E$ are the eigenvectors and $\Lambda$ is a diagonal matrix of the eigenvalues, ordered from smallest to largest. The Laplacian is known to be positive semidefinite, with one zero eigenvalue corresponding to the normalized constant vector g, with components $1/\sqrt{n}$. We now show how we use these eigenvectors to quickly find an approximate solution to (5). Following Andrews et al. [15], we define $Q$ as the pseudoinverse of $L$, given by
$$Q = E \Lambda^{+} E^{\top}. \qquad (7)$$
In block notation,
(8)  
(9) 
In practice it is infeasible to calculate all eigenvectors of $L$, as $E$ would then have $n^2$ nonzero elements, too large a matrix from both a computational and a memory perspective, particularly for volumetric images. Thus, we only calculate the $m$ eigenvectors corresponding to the smallest eigenvalues, since the smallest eigenvalues give the best approximation to $Q$ (their reciprocals dominate $\Lambda^{+}$).
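The offline/online split can be sketched as follows, with a small path graph standing in for the image Laplacian (our own toy setup; SciPy's `eigsh` is assumed available). The truncated eigendecomposition approximates the action of the pseudoinverse, skipping the zero eigenvalue of the constant vector:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Path-graph Laplacian, a small stand-in for an image Laplacian.
n = 50
W = sp.diags([np.ones(n - 1), np.ones(n - 1)], [1, -1], format="csr")
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W

# "Offline": compute only the m smallest eigenpairs.
m = 10
lam, E = eigsh(L, k=m, which="SM")
order = np.argsort(lam)
lam, E = lam[order], E[:, order]

# "Online": approximate the action of the pseudoinverse on a vector b
# using only the precomputed pairs (zero eigenvalue excluded).
b = np.cos(np.pi * np.arange(n) / n)
b -= b.mean()                                   # project out the null space of L
inv_lam = np.where(lam > 1e-8, 1.0 / np.where(lam > 1e-8, lam, 1.0), 0.0)
x_approx = E @ (inv_lam * (E.T @ b))
```

Increasing `m` improves the approximation at the cost of more offline work and larger online matrix products, which is exactly the tradeoff discussed below.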
We now define the unknown :
(10)  
(11)  
(12)  
(13) 
Left multiplying by gives
(14)  
(15)  
(16) 
where (15) takes only the nonseeded rows and uses the block notation from (8) and (4). Equation (16) has two unknowns; the strategy now is to eliminate one of them and solve for the other. To do so, we right multiply (15) by an appropriate matrix and subtract it from (11), giving
(17) 
which has only one unknown, which can thus be found by solving the systems of equations in (17). These systems are only of size $m$, and can be solved efficiently. Finally, we can recover the full solution directly using (15) instead of solving the large system of equations in (5).
We note that if no prior probabilities are used, (14) must be altered, since the Laplacian has the constant vector g as a zero eigenvector. A technique to address this case was presented by Grady and Sinop [14], which involves first calculating certain column sums and using those to complete the calculation.
The RW algorithm run using this precomputation technique will be referred to as “fastRW”, and without it as “basic RW”. fastRW has been shown to provide an excellent speed increase in both RWIS [14, 15] and RWIR [26] with only a minimal loss in accuracy. However, some questions still remain:

How large should $m$, the number of precomputed eigenvectors, be? A larger $m$ leads to greater accuracy, but greater runtime both offline and online, since the online phase involves multiplications between matrices with a dimension of size $m$.

What if the Laplacian is not known offline? In particular, what if one wants a different value for the weighting parameter $\beta$ in (2)?
So far, justification of the above precomputation technique is largely algebraic. In order to answer these questions, we provide a more intuitive interpretation of the eigenvectors.
II-C Relationship to Normalized Cuts
Grady and Sinop [14] showed their precomputation technique could be thought of as incorporating user interaction into the normalized cuts segmentation technique of Shi and Malik [7]. Given a weighted image graph with vertex set $\mathcal{V}$, a normalized cut seeks to find a partition of the vertices into two sets, $A$ and $B$, that minimizes the normalized cut value
$$\mathrm{NCut}(A, B) = \frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(A, \mathcal{V})} + \frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(B, \mathcal{V})}, \qquad (18)$$
$$\mathrm{cut}(A, B) = \sum_{i \in A,\ j \in B} w_{ij}, \qquad (19)$$
$$\mathrm{assoc}(A, \mathcal{V}) = \sum_{i \in A,\ j \in \mathcal{V}} w_{ij}. \qquad (20)$$
While finding the minimizing normalized cut is intractable, the relaxed solution (assigning real numbers instead of binary indicators to each vertex) is found by calculating the eigenvectors with smallest eigenvalues in the generalized eigenvector system (recall $L = D - W$):
$$L y = \lambda D y. \qquad (21)$$
Defining $\tilde{L} = D^{-1/2} L D^{-1/2}$ as the normalized Laplacian, (21) can be rewritten as a standard eigenvector system:
$$\tilde{L} z = \lambda z, \qquad (22)$$
where $z = D^{1/2} y$. Note that since the eigenvectors are orthogonal, they tend to represent complementary cuts, representing different structures in the image (Fig. 1).
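As a small hedged illustration of the relaxation (our own toy construction, not from the paper): two tightly connected vertex groups joined by one weak edge, where the second-smallest generalized eigenvector of (21) separates the groups:

```python
import numpy as np
from scipy.linalg import eigh

# Two 5-vertex cliques joined by a single weak "bridge" edge.
n = 10
W = np.zeros((n, n))
W[:5, :5] = 1.0
W[5:, 5:] = 1.0
np.fill_diagonal(W, 0.0)
W[4, 5] = W[5, 4] = 0.1

D = np.diag(W.sum(axis=1))
L = D - W

# Generalized eigenproblem L y = lambda D y, eigenvalues ascending.
lam, Y = eigh(L, D)
relaxed = Y[:, 1]                     # relaxed Ncut indicator vector
cut = relaxed > np.median(relaxed)    # threshold the relaxed solution
```

Thresholding `relaxed` recovers the two groups, illustrating how the low-eigenvalue eigenvectors encode the graph's most prominent structures.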
To complete the connection, we note that the RW algorithm can be formulated using the normalized Laplacian instead of $L$ by rescaling the variables by the vertex degrees, substituting the rescaled variables into the RW equation (5), solving, and then recovering the original variables. This results in the precomputation scheme from Sec. II-B calculating, in (6), the eigenvectors of the normalized Laplacian instead of $L$, which has been advocated because the normalized Laplacian tends to have a better behaved spectrum than the unnormalized Laplacian [14, 27]. In this formulation, the columns of $E$ correspond to relaxed normalized cuts of the image. In the subsequent sections, we assume the normalized Laplacian is used, but maintain the use of $L$, $E$, and $\Lambda$ for consistency, explicitly noting any differences that would arise when using the normalized instead of the unnormalized Laplacian.
Shi and Malik explore several techniques for using the eigenvectors to segment an image, such as thresholding the eigenvectors [7]. In the subsequent section, however, we only require the intuition that the fastRW techniques are approximating the original image with its most prominent structures as determined by normalized cuts, and, as seen in (16), the segmentations are linear combinations of the eigenvectors.
III Increasing Precomputation Robustness
III-A Determining the Number of Eigenvectors
Computing eigenvectors of the Laplacian is expensive, so there may be a limit to the number that can be computed offline, though this limit depends on factors such as the amount of computational power available and the throughput of images. More importantly, the online runtime scales linearly with $m$, the number of eigenvectors used, so with $m$ large enough, fastRW would be more computationally expensive than basic RW. Thus, we focus on choosing an appropriate number of eigenvectors online, and assume at least that many are computed offline.
The accuracy of fastRW is characterized by how close the solution it produces (using (15) and (17)) is to the basic RW solution (produced using (5)), in terms of, for example, the Dice similarity coefficient for RWIS or the mean overlap between images for RWIR (described in more detail in Sec. IV). However, the relationship between accuracy and $m$ for arbitrary prior probabilities and user input seeds is difficult to characterize. For example, in RWIS, if the user input seeds happen to correspond well to the image structures represented by the first few eigenvectors, a very small $m$ should give good accuracy. However, if a user is trying to delineate more complex structures, a large $m$ may be required before these structures are represented well by the eigenvectors. Below we develop a strategy for efficiently determining the appropriate number of eigenvectors to use online, after user input seeds and prior probabilities become known.
III-A1 User Seeds in RWIS
From (16), each column of the segmentation is a linear combination of the columns of $E$. Recall $U_s$ is a matrix of 0’s and 1’s, with a single 1 in each row, representing the probabilities of the seeded voxels. Intuitively, since the unseeded voxels are derived from the seeds (see (5)), if no linear combination of the eigenvectors gives a segmentation for the seeded voxels similar to the labeling dictated by $U_s$, we conclude the segmentation specified by these seeds cannot be represented well by the current set of eigenvectors.
To make this rigorous, if we define such that , then we expect . We note the constraint
(23) 
since the components of a probability vector are at most 1. If the normalized Laplacian is being used, the expressions must be rescaled accordingly. We now define the function measuring how well the current eigenvectors can represent the seeds for a given label:
(24) 
The minimization in (24) is of a convex function over a convex constraint set, and the problem is only of size $m$, so it can be solved efficiently. We calculate the residual for each label to ensure every label is represented well by the eigenvectors, and if any value is above a threshold, we add more eigenvectors and repeat the process. The threshold controls the tradeoff between speed and accuracy, and is thus application dependent, though since the residual is roughly a measure of the expected squared error at each voxel, it can be set from a desired mean absolute error per voxel. We use a fixed threshold in our experiments.
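A minimal sketch of this check, substituting an unconstrained least-squares fit for the constrained problem in (24) (function and variable names are ours, not the paper's):

```python
import numpy as np

def seed_residual(E_s, seed_labels):
    """Mean squared error of the best least-squares reconstruction of the
    seed labels from the seeded rows of the eigenvectors -- an
    unconstrained stand-in for the paper's constrained objective (24)."""
    a, *_ = np.linalg.lstsq(E_s, seed_labels, rcond=None)
    r = E_s @ a - seed_labels
    return (r @ r) / len(seed_labels)

rng = np.random.default_rng(0)
E_s = rng.standard_normal((30, 8))        # seeded rows of 8 "eigenvectors"
x_easy = E_s @ rng.standard_normal(8)     # seeds representable by the basis
x_hard = rng.standard_normal(30)          # arbitrary seed labeling

psi_easy = seed_residual(E_s, x_easy)     # ~0: no more eigenvectors needed
psi_hard = seed_residual(E_s, x_hard)     # large: add eigenvectors and retry
```

When the residual exceeds the threshold, more eigenvectors are appended to `E_s` and the fit is repeated, mirroring the adaptive loop described above.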
III-A2 Prior Probabilities in RWIR
The prior probabilities correspond to an unregularized probabilistic registration, so determining how well they can be represented by the eigenvectors is fairly straightforward. Since the columns of $E$ are orthonormal, we can project each column of the prior matrix onto their span and calculate the magnitude of the residual:
(25) 
Since each label corresponds to a displacement vector, labels corresponding to similar displacement vectors often have similar probabilities, so (25) need not be evaluated for every label. To increase efficiency, we evaluate the residual for only a uniformly spaced subset of the displacement labels, with the subset size chosen based on the image dimension. If any of the residuals are above a threshold, we add more eigenvectors. Similar to the segmentation case, the threshold controls the tradeoff between speed and accuracy, and is roughly proportional to the voxelwise error in the probabilities, so it can be set from a desired mean absolute error. Note that we save the residual, so we are able to update (25) efficiently when more eigenvectors are added, by projecting the saved residual onto the new eigenvectors.
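A sketch of the projection residual and its incremental update (a random orthonormal matrix stands in for the precomputed eigenvectors; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 30, 10
E, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal columns
p = rng.standard_normal(n)                        # one prior-probability column

# Residual of p after projection onto the first k eigenvectors, in the
# spirit of (25); a large residual means more eigenvectors are needed.
r = p - E[:, :k] @ (E[:, :k].T @ p)
res_k = np.linalg.norm(r)

# When eigenvectors k..m-1 are added, update the *saved* residual rather
# than reprojecting p from scratch.
E_extra = E[:, k:]
r_updated = r - E_extra @ (E_extra.T @ r)
res_m = np.linalg.norm(r_updated)
```

Because the columns are orthonormal, projecting the saved residual onto only the new eigenvectors gives exactly the residual of a full reprojection, which is what makes the incremental update cheap.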
Calculating the residual can be relatively expensive, as it requires multiplications with matrices whose larger dimension is the number of voxels. In segmentation, the priors are sometimes available offline (e.g., the expected range of HU values in CT for different tissue types); if the priors are derived from the user input seeds, calculating the residual may not be worth the time. In registration, calculating the priors often constitutes a significant overhead, so the cost of calculating the residual may be relatively insignificant.
III-B Updating the Laplacian
The RW algorithm can be significantly affected by how the edge weights are calculated in (2), but the Laplacian must be fixed when precomputation is performed. Previously, if a user wanted to change the edge weights online, they had to abandon fastRW. In this section, we propose a technique to update the precomputed data when the edge weights are changed. Specifically, we focus on changes in the key parameter $\beta$.
One potential technique is to precompute eigenvectors for multiple Laplacians with different edge weights, though there are limitations to this approach: there is no guarantee the desired Laplacian (i.e., for a specific $\beta$) will be precomputed offline, and the offline computational burden could limit the number of Laplacians that can be used. Instead, we make the observation that when $\beta$ is increased or decreased, the relative ordering of the edge weights does not change, so small relaxed normalized cuts for one value of $\beta$ should also be small for other values of $\beta$. Thus, while the eigenvectors of the Laplacian are different for different values of $\beta$, they should correspond roughly to the same prominent image structures. This suggests we may be able to compute the eigenvectors for a certain $\beta$ and then reuse them for other values of $\beta$. The key is updating the eigenvalues.
The relaxed normalized cuts function in (18) is equivalent to the Rayleigh quotient
$$\mathrm{NCut}(v) = \frac{v^{\top} L v}{v^{\top} D v}. \qquad (26)$$
If v is an eigenvector of the normalized Laplacian with corresponding eigenvalue $\lambda$, the quotient evaluates to $\lambda$. Thus, the values on the diagonal of $\Lambda$ are not only the eigenvalues corresponding to the eigenvectors, but also their normalized cut values.
When $\beta$ is updated online, we construct the new normalized Laplacian, evaluate the new normalized cut function (26) at each column of $E$ (the eigenvectors calculated from the offline Laplacian), and replace the values on the diagonal of $\Lambda$ with the new normalized cut values. That is, we define a new diagonal matrix with elements given by
(27) 
The pseudoinverse of the new Laplacian is then approximated by
(28) 
instead of using the actual eigenvectors of the new Laplacian, as in (7). This weights each column of $E$ by how well it “cuts” the updated image graph, so columns that no longer correspond to good cuts will be largely ignored. While the columns of $E$ and the updated diagonal values can no longer be considered eigenvectors and eigenvalues, their use in the fastRW algorithm does not otherwise change. While this approximation is expected to get worse for values of $\beta$ further from the offline value, we mitigate this by precomputing eigenvectors for several different values of $\beta$ and using (26) to “interpolate” between them.
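The update can be sketched as follows on a 1-D chain graph (our own stand-in for the image graph; SciPy assumed): the offline eigenvectors are kept, and each diagonal value is recomputed as a Rayleigh quotient against the new normalized Laplacian:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def chain_weights(I, beta):
    # Gaussian weights of (2) on a 1-D chain graph.
    n = len(I)
    w = np.exp(-beta * np.diff(I) ** 2)
    return sp.diags([w, w], [1, -1], shape=(n, n), format="csr")

def normalized_laplacian(W):
    d = np.asarray(W.sum(axis=1)).ravel()
    D_isqrt = sp.diags(1.0 / np.sqrt(d))
    return sp.eye(W.shape[0]) - D_isqrt @ W @ D_isqrt

rng = np.random.default_rng(2)
I = rng.random(60)

# "Offline": eigenvectors of the normalized Laplacian for beta = 1.
lam_off, E = eigsh(normalized_laplacian(chain_weights(I, 1.0)), k=8, which="SM")

# "Online": beta changes to 2; keep E, and replace each eigenvalue by the
# Rayleigh quotient of the corresponding column against the new Laplacian.
L_new = normalized_laplacian(chain_weights(I, 2.0))
lam_updated = np.einsum("im,im->m", E, L_new @ E)
```

The updated diagonal values are Rayleigh quotients of unit vectors against a positive semidefinite matrix, so they stay in the valid spectral range of the normalized Laplacian even though they are no longer exact eigenvalues.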
III-C Aggregated Image Graphs
In RWIR, as with other discrete registration techniques, there has been recent work targeted at reducing the number of vertices in the image graph, and thus the computational cost of RWIR, by aggregating vertices into “supervertices”. While this aggregation has often been done in an image-agnostic way, using grids of predefined resolutions, recent image-dependent online techniques have proven successful [21, 23, 28]. In general, vertex aggregation is performed by defining a new, smaller set of supervertices and a “projection function” encoding the influence of each supervertex on each original vertex when propagating values from the sparser graph to the denser graph. The corresponding “aggregation function” encodes the influence of each vertex on each supervertex when propagating values from the denser graph to the sparser graph. We require these functions to be normalized:
(29) 
Once RW is solved on the aggregated graph, the solution is propagated back to the original vertices. Denoting probabilities assigned to as , the corresponding probabilities for vertex are given by
(30) 
Note that for a given vertex, the projection function is zero for most supervertices, with each vertex dependent on only a few nearby supervertices.
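A toy illustration of propagating probabilities from supervertices back to vertices via a row-normalized projection function, in the spirit of (29) and (30) (values and names are illustrative only, not from the paper):

```python
import numpy as np

# Projection function as a matrix: row i gives the influence of each
# supervertex on vertex i, with rows summing to 1 as required by (29).
P_proj = np.array([[1.0, 0.0],
                   [0.8, 0.2],
                   [0.6, 0.4],
                   [0.4, 0.6],
                   [0.2, 0.8],
                   [0.0, 1.0]])        # 6 vertices x 2 supervertices

# Label probabilities solved on the aggregated graph (2 supervertices,
# 2 labels), each row a valid probability vector.
U_super = np.array([[0.9, 0.1],
                    [0.2, 0.8]])

# Propagate back to the original vertices, as in (30).
U_full = P_proj @ U_super
```

Because both the projection rows and the supervertex probabilities sum to one, the propagated per-voxel vectors remain valid probability distributions.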
Vertex aggregation performed online invalidates the eigenvectors precomputed from the original graph. We cannot apply the technique from Sec. III-B directly, since the number of vertices has changed, so the normalized cut values cannot be computed for the columns of $E$. We thus develop a technique for aggregating the columns of $E$ to the supervertices.
Let q be one of the columns of $E$, and consider its (undetermined) aggregation to the supervertices. The most straightforward technique for calculating this aggregation would be to apply the aggregation function directly:
(31) 
However, using (31) may not respect the property that the aggregated vector should have a small normalized cut value on the aggregate graph. A supervertex may “consume” some edges, if the vertices on both ends of an edge are assigned exclusively to that supervertex. The influence of consumed edges should not be considered during aggregation, as they have no effect on the normalized cut of the aggregate graph. Consider a vertex with all of its edges consumed: its value in q was based exclusively on those edges, and it thus should not be considered when performing the aggregation.
To account for this, instead of weighting the components of q by the aggregation function alone, we also weight them by the local change in q, since the edges between vertices assigned to different supervertices will still affect the normalized cut value:
(32)  
(33) 
We note that the vectors generated by (33) will not form an orthonormal set, but are made so using the Gram-Schmidt process. Combining the orthonormal vectors into a matrix and then calculating their corresponding normalized cut values allows fastRW to be run on the aggregate graph with minimal overhead and loss in accuracy. We focus on fastRWIR for graph aggregation, as we found the aggregation step was typically too expensive compared to the fastRWIS runtime, and the aggregated supervoxels risk crossing weak image boundaries and reducing segmentation accuracy.
IV Results
In this section, we test the effectiveness of the techniques presented in Sec. III. Since fastRW is an approximation to the basic RW method, our evaluations focus on measuring the loss in accuracy against the increase in speed when using fastRW (with and without the adaptive techniques presented here) instead of basic RW. Ideally, only a small fraction of the overall accuracy will be lost when using fastRW. Note that “accuracy” will be defined by comparing to manual segmentation results using the Dice similarity coefficient (DSC) in segmentation and mean overlap (MO) in registration [29]. For two images segmented into foreground regions given by $S_1$ and $S_2$,
$$\mathrm{DSC}(S_1, S_2) = \frac{2\,|S_1 \cap S_2|}{|S_1| + |S_2|}. \qquad (34)$$
Probabilistic segmentations are converted to nonprobabilistic segmentations by thresholding, and probabilistic registrations are converted to nonprobabilistic registrations by taking the expected displacement at each voxel.
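A minimal sketch of both conversions (all values illustrative):

```python
import numpy as np

# Segmentation: pick the most likely label per voxel (equivalent to a
# 0.5 threshold in the two-label case).
U = np.array([[0.9, 0.1],
              [0.3, 0.7],
              [0.6, 0.4]])             # voxels x labels
hard_labels = U.argmax(axis=1)

# Registration: expected displacement = probability-weighted average of
# the candidate displacement vectors.
disp = np.array([[0.0, 0.0],
                 [1.0, 0.0],
                 [0.0, 1.0]])          # K candidate 2-D displacements
probs = np.array([0.5, 0.25, 0.25])    # probabilities for one voxel
expected_disp = probs @ disp           # -> [0.25, 0.25]
```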
IV-A Setup
We use two 2D images and two data sets of volumetric images. The 2D images consist of a CT cardiac image slice, manually segmented into several regions, and a blood cell image, manually segmented into two regions (blood cell and background). 2D data allows a large number of tests with different seed locations. The cardiac image slice is used because it provides a multilabel segmentation, and the blood cell image is used because it requires prior probabilities calculated from the user seeds to segment every cell (Fig. 2).
The first volumetric data set consists of T1 MR images of thighs, manually segmented into multiple regions. For segmentation, we combine some regions, resulting in a five-label segmentation (muscle, fat, cortical bone, bone marrow, and background; see Fig. 2, Fig. 4).
The second volumetric data set consists of T1 MR brain images from the LONI dataset [29], manually segmented into multiple regions, skull-stripped, and with their intensity histograms normalized (see Fig. 4).
When testing segmentation, we use the 2D images and the thigh images, with seeds generated automatically from the manual segmentations. For the cardiac image slice, we randomly place seeds in each region, and for the blood cell image, we randomly place seeds inside a single blood cell and seeds nearby in the background. We generate different sets of seeds for each image, with the single blood cell chosen randomly each time. For the thigh images, we randomly place seeds in each of the muscle, fat, and background regions, and seeds in each of the two bone regions, generating multiple sets of seeds per image. All of the segmentation results in this section are averaged over the sets of seeds. Different techniques for sampling seeds would lead to different segmentation accuracies, but as our goal is to compare different segmentation methods, it is sufficient to ensure they use the same sets of seeds. Unless otherwise stated, we use fixed parameter values for RWIS on the cardiac and thigh images, and a separate fixed value for RWIS on the blood cell images. The prior probabilities are calculated by fitting a Gaussian to the intensity values of the seeds for each region.
When testing registration, we use the 3D thigh and LONI brain images, registering each pair of images within a data set. The local similarity between two voxels is evaluated using the sum of absolute intensity differences over a local patch. We use one predefined set of displacement labels for the thigh images and another for the brain images. The prior probabilities for the displacement vectors are calculated as the negative exponential of the squared local patch difference, normalized to sum to one at each voxel. We use fixed parameter values for RWIR.
Experiments use unoptimized MATLAB code run on a machine with quad-core Intel Xeon CPUs.
IV-B Determining the Number of Eigenvectors
IV-B1 User Seeds
First, we evaluate our technique from Sec. III-A1 for choosing the number of eigenvectors based on the seed locations (see (24)). Our goal is to show we can adaptively and automatically choose the number of eigenvectors when segmenting real medical images and achieve results comparable to the results for the best fixed number. For the 2D images and the 3D thigh images, we precompute a pool of eigenvectors and run RWIS and fastRWIS with several fixed numbers of eigenvectors. We then run fastRWIS with the number of eigenvectors chosen using our adaptive technique for each image. The accuracy and runtime of each segmentation algorithm are shown in Fig. 3, with example segmentations shown in Fig. 2.
We see that our technique for choosing the number of eigenvectors online provides accuracy comparable to using all the precomputed eigenvectors while running, on average, significantly faster. We emphasize that while each image class seems to have an “optimal” number of eigenvectors that gives results similar to our technique in terms of runtime and accuracy, this number is not known ahead of time and differs between the cardiac, blood cell, and thigh images. Our technique requires no prior knowledge to achieve this accuracy and, further, provides slightly lower runtime than any fixed number of eigenvectors, due to cases where fewer eigenvectors are used. Clearly the performance of fastRWIS is highly dependent on the number of eigenvectors, yet choosing an appropriate number online would be guesswork without our technique.
IV-B2 Prior Probabilities
Second, we evaluate our technique from Sec. III-A2 for choosing the number of eigenvectors based on the prior probabilities (see (25)). Again, our goal is to show we can adaptively choose the number of eigenvectors when registering a pair of images and achieve results comparable to the best fixed number. For each pair of images, we choose one randomly as the moving image and precompute its eigenvectors. We run RWIR, fastRWIR with fixed numbers of eigenvectors, and fastRWIR with the number chosen using our adaptive technique. The accuracy and runtime of each registration algorithm are shown in Fig. 5, with example registrations shown in Fig. 4. Similar to segmentation, our technique allows fastRWIR to achieve an average accuracy and runtime comparable to the best fixed number of eigenvectors, but without any prior knowledge.
Data  Algorithm  Accuracy  RunTime (sec)  

Cardiac  RWIS  
fastRWIS  
Adaptive fastRWIS  
Blood Cell  RWIS  
fastRWIS  
Adaptive fastRWIS  
Thigh  RWIS  
fastRWIS  
Adaptive fastRWIS  
RWIR  
fastRWIR  
Adaptive fastRWIR  
Brain  RWIR  
fastRWIR  
Adaptive fastRWIR 
IV-C Updating the Laplacian
We evaluate our technique from Sec. III-B (updating the precomputed values when the Laplacian edge weights are changed) by precomputing eigenvectors/eigenvalues for an offline Laplacian constructed with one value of $\beta$, then running fastRW using an online Laplacian with a doubled $\beta$ and a diagonal matrix updated using (26). We note that doubling $\beta$ corresponds to squaring the weights of all the edges (see (2)). As a baseline, we compare the results to fastRWIS run using eigenvectors/eigenvalues calculated from the online Laplacian directly. We precompute eigenvectors for the 2D images and for the thigh images. The accuracy of each technique is shown in Fig. 6. We see that when $\beta$ is not known offline, our technique for updating the precomputed values achieves very similar accuracy compared to when $\beta$ is known offline.
IV-D Aggregated Image Graphs
In this section, we evaluate our technique for aggregating the eigenvectors to supervertices, given by (33) in Sec. III-C. We use a fixed number of eigenvectors for fastRWIR. We aggregate vertices based on their prior probabilities and spatial proximity [23] (see Fig. 7), and thus the aggregation must be performed online, after the precomputation. For each pair of thigh and brain images, we run four registration schemes: basic RWIR, fastRWIR with the eigenvectors aggregated using the naïve technique in (31), fastRWIR with the eigenvectors aggregated using our proposed technique in (33), and fastRWIR using eigenvectors calculated from the Laplacian of the aggregated graph directly (used only as a baseline, since the aggregated graph is not known offline). The accuracy and runtime of each technique are shown in Fig. 8. Our proposed technique performs significantly better than the naïve technique, and only slightly worse than using the eigenvectors calculated directly from the aggregate graph.
V Discussion and Conclusion
The main drawback of fastRW algorithms is that certain algorithmic decisions must be made offline, when there is incomplete information. We have proposed multiple extensions that significantly improve the online flexibility of fastRW algorithms, while only incurring a minor overhead in runtime and implementation complexity. Since the fastRW paradigm already achieves results very similar to the popular RWIS and RWIR algorithms, removing its limitations constitutes a major improvement to RW algorithms in general. Further, while the techniques presented here were only demonstrated individually, there is nothing to preclude combining them to further increase the robustness of the online algorithm to the offline settings.
When deciding how many eigenvectors to compute offline, some applications may be limited by the offline time available and by memory constraints. For the 2D results presented here, offline computation typically took on the order of seconds per eigenvector, for total runtimes on the order of minutes. For the 3D results, offline computation typically took on the order of minutes per eigenvector, and thus the offline runtimes could be on the order of several days for large images. In these cases, the fastRW algorithms presented here may not be useful, depending on the time between image acquisition and analysis.
Another concern is the memory required to store a thousand eigenvectors of a 3D image. Fortunately, the techniques presented do not require all of the eigenvectors to be in memory simultaneously; instead, they allow the eigenvectors to be loaded into memory sequentially. This can be used to control the memory usage at a small cost in runtime.
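One way such sequential loading could work is sketched below, under the assumption that the online step needs a spectral sum of the form x ≈ Σ_k (q_kᵀb / λ_k) q_k, which can be accumulated one eigenvector at a time from a memory-mapped file. The file layout (one eigenvector per row) and function names are hypothetical.

```python
import os
import tempfile
import numpy as np

def spectral_solve_streaming(eigvec_path, eigvals, b):
    """Accumulate x = sum_k (q_k^T b / lambda_k) q_k while holding
    only one eigenvector in memory at a time."""
    Q = np.load(eigvec_path, mmap_mode="r")  # k x n, read lazily from disk
    x = np.zeros(b.shape[0])
    for qk, lam in zip(Q, eigvals):
        qk = np.asarray(qk)  # materialize this eigenvector only
        x += (qk @ b / lam) * qk
    return x

# Toy check on a small symmetric positive definite matrix: with all
# eigenvectors retained, the streamed sum reproduces the exact solve.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
M = M @ M.T + 5 * np.eye(5)
lams, V = np.linalg.eigh(M)

path = os.path.join(tempfile.gettempdir(), "eigvecs_demo.npy")
np.save(path, V.T)  # store eigenvectors as rows

b = rng.standard_normal(5)
x = spectral_solve_streaming(path, lams, b)
assert np.allclose(M @ x, b)
```

Truncating the loop to the first k eigenvectors gives the usual fastRW-style tradeoff between runtime/memory and accuracy.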
While the results shown here demonstrate the potential effectiveness of our techniques, the popularity of the RW algorithm has led to other interesting and potentially powerful extensions that are not discussed here, such as the inclusion of learnt shape priors and automatic seeding for RWIS [30, 31] and adaptive displacement labels for RWIR [22]. Further, the RW algorithm was shown by Couprie et al. to be part of a family of graph-based algorithms [5], including other popular algorithms such as Graph Cuts [4]. Random walks have also been applied to other imaging tasks, such as stereo matching [32] and shape correspondence [33]. Our future work will focus on extending our precomputation techniques to the extensions of the RW algorithm, similar graph-based algorithms, and other imaging applications mentioned above.
References
 [1] S. Olabarriaga and A. Smeulders, “Interaction in the segmentation of medical images: A survey,” MedIA, vol. 5, no. 2, pp. 127–142, 2001.
 [2] D. Freedman and T. Zhang, “Interactive graph cut based segmentation with shape priors,” in CVPR, vol. 1, 2005, pp. 755–762.
 [3] L. Tang, G. Hamarneh, and K. Iniewski, “Medical image registration: a review,” Medical Imaging: Technology and Applications, pp. 619–660, 2013.
 [4] Y. Boykov and M. Jolly, “Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images,” ICCV, vol. 1, pp. 105–112, 2001.
 [5] C. Couprie, L. Grady, L. Najman, and H. Talbot, “Power watershed: A unifying graph-based optimization framework,” IEEE TPAMI, vol. 33, no. 7, pp. 1384–1399, 2011.
 [6] H. Ishikawa, “Higher-order clique reduction in binary graph cut,” CVPR, pp. 2993–3000, 2009.
 [7] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE TPAMI, vol. 22, no. 8, pp. 888–905, 2000.
 [8] A. Delong and Y. Boykov, “Globally optimal segmentation of multi-region objects,” ICCV, pp. 285–292, 2009.
 [9] L. Grady, “Random walks for image segmentation,” IEEE TPAMI, vol. 28, no. 11, pp. 1768–1783, 2006.
 [10] J. Udupa and G. Grevera, “Go digital, go fuzzy,” Pattern Recognition Letters, vol. 23, no. 6, pp. 743–754, 2002.
 [11] A. Saad, G. Hamarneh, and T. Moller, “Exploration and visualization of segmentation uncertainty using shape and appearance prior information,” Visualization and Computer Graphics, vol. 16, no. 6, pp. 1366–1375, 2010.
 [12] A. Top, G. Hamarneh, and R. Abugharbieh, “Active learning for interactive 3D image segmentation,” MICCAI, pp. 603–610, 2011.
 [13] A. X. Falcão, J. K. Udupa, S. Samarasekera, S. Sharma, B. E. Hirsch, and R. d. A. Lotufo, “User-steered image segmentation paradigms: Live wire and live lane,” Graphical Models and Image Processing, vol. 60, no. 4, pp. 233–260, 1998.
 [14] L. Grady and A. Sinop, “Fast approximate random walker segmentation using eigenvector precomputation,” CVPR, pp. 1–8, 2008.
 [15] S. Andrews, G. Hamarneh, and A. Saad, “Fast random walker with priors using precomputation for interactive medical image segmentation,” in MICCAI, vol. 6363, 2010, pp. 9–16.
 [16] L. Grady, “Multilabel random walker image segmentation using prior models,” CVPR, vol. 1, pp. 763–770, 2005.
 [17] B. Glocker, N. Komodakis, G. Tziritas, N. Navab, and N. Paragios, “Dense image registration through MRFs and efficient linear programming,” MedIA, vol. 12, no. 6, pp. 731–741, 2008.
 [18] T. Tang and A. Chung, “Non-rigid image registration using graph-cuts,” in MICCAI, 2007, pp. 916–924.
 [19] M. P. Heinrich, M. Jenkinson, M. Brady, and J. A. Schnabel, “Globally optimal deformable registration on a minimum spanning tree using dense displacement sampling,” in MICCAI. Springer, 2012, pp. 115–122.
 [20] D. Cobzas and A. Sen, “Random walks for deformable image registration,” MICCAI, pp. 557–565, 2011.
 [21] K. Popuri, D. Cobzas, and M. Jägersand, “A variational formulation for discrete registration,” in MICCAI, 2013, pp. 187–194.
 [22] L. Y. Tang and G. Hamarneh, “Random walks with efficient search and contextually adapted image similarity for deformable registration,” in MICCAI, 2013, pp. 43–50.
 [23] L. Tang and G. Hamarneh, “Random walker image registration with cost aggregation,” in IEEE ISBI, 2014, pp. 576–579.
 [24] S. Andrews, N. Changizi, and G. Hamarneh, “The isometric log-ratio transform for probabilistic multi-label anatomical shape representation,” 2014, TMI, accepted.
 [25] T. Lotfi, L. Tang, S. Andrews, and G. Hamarneh, “Improving probabilistic image registration via reinforcement learning and uncertainty evaluation,” Machine Learning in Medical Imaging, pp. 187–194, 2013.
 [26] S. Andrews, L. Tang, and G. Hamarneh, “Fast random walker image registration using precomputation,” in IEEE ISBI, 2014, pp. 189–192.
 [27] F. R. Chung, Spectral graph theory. American Mathematical Soc., 1997, vol. 92.
 [28] S. Parisot, W. Wells III, S. Chemouny, H. Duffau, and N. Paragios, “Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs,” MedIA, vol. 18, no. 4, pp. 647–659, 2014.
 [29] A. Klein et al., “Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration,” Neuroimage, vol. 46, no. 3, pp. 786–802, 2009.
 [30] P.-Y. Baudin, N. Azzabou, P. G. Carlier, N. Paragios et al., “Manifold-enhanced segmentation through random walks on linear subspace priors,” in British Machine Vision Conference, 2012.
 [31] P.-Y. Baudin, N. Azzabou, P. G. Carlier, and N. Paragios, “Automatic skeletal muscle segmentation through random walks and graph-based seed placement,” in IEEE ISBI, 2012, pp. 1036–1039.
 [32] R. Shen, I. Cheng, X. Li, and A. Basu, “Stereo matching using random walks,” in IEEE ICPR, 2008, pp. 1–4.
 [33] C. Oh, B. Ham, and K. Sohn, “Probabilistic correspondence matching using random walk with restart.” in BMVC, 2012, pp. 1–10.