1 Introduction
Image interpolation is a fundamental problem encountered in many fields [1, 2, 3, 4, 5, 6, 7, 8, 9]. There are countless scenarios wherein images are acquired at resolutions that are suboptimal for the needs of specific applications. For example, biomedical images spanning a threedimensional (3D) volume are often acquired with inplane pixel spacings far less than the outofplane spacings between images. This can be the case with clinical images (e.g., from computed tomography (CT) and/or magnetic resonance (MR) imaging) as well as in vitro images acquired with modalities such as particle image velocimetry (PIV) [10, 11, 12, 13, 14, 15, 16, 17]
. In cases where motion estimation and registration are parts of an interpolation framework, hardware based approaches can offer solutions as well
[18, 19, 20, 21, 22, 23, 24].However, when acquired images are comprised of signals that describe the flow velocity of fluids, additional information is available to guide the interpolation process. Specifically, the flows of an incompressible fluid into and out of an interrogation volume must be equal according to conservation of mass [25]. Quantifying the deviation from zero net flow that is entering (or alternatively leaving) an interrogation volume (i.e., divergence) thus provides a means to direct interpolation in such a way as to reconstruct more physically accurate data.
Optical flow and/or other registrationbased interpolators have proven useful in interpolating velocimetry data in the past [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. Particle Image Velocimetry (PIV) is a technique that measures a velocity field in a fluid volume with the help of tracer particles in the fluid and specialized cameras [38, 39]. The default technique to determine the velocity field from the raw PIV data is a correlation analysis between two frames that were acquired by the cameras [40]
. This technique can be extended to 3D as well. Optical flowbased approaches have been widely used in computer vision
[41, 42, 43, 44], and they have been appealing to researchers because of the flexibility of variational approaches. Regularizers can be used for different constraints in the energy functional to be minimized. In the conventional optical flow method there are two constraints, brightness and smoothness [45]. Optical flowbased methods have been promising in the area of fluid flow estimation in PIV [46, 47, 48, 49, 50, 51]. For example, in [47], incompressibility of the flow is added as a constraint in the optical flow minimization problem. In [48], the vorticity transport equation, which describes the evolution of the fluid’s vorticity over time, is used in physically consistent spatiotemporal regularization to estimate fluid motion.Divergence and curl (vorticity) have been used in estimating optical flow previously [52, 53, 54, 55]. In [52], the smoothness constraint is decomposed into two parts, divergence and vorticity, in this way, the smoothness properties of the optical flow can be tuned. In [56], both incompressibility and divergencefree constraints are used in the illposed minimization problem to calculate a 3D velocity field from 3D Cine CT images. In [54], a second order divcurl spline smoothness condition is employed in order to compute a 3D motion field. In [55], a data term based on the continuity equation of fluid mechanics [25] and a second order divcurl regularizer are employed to calculate fluid flow.
Here we present an opticalflow based framework for image interpolation that also minimizes resultant divergence in the interpolated data. That is, the divergence constraint attempts to minimize divergence in interpolated velocimetry data, not the divergence of the optical flow field. To our knowledge, using divergence in this way as a constraint in an opticalflow framework for image interpolation has not been investigated prior to the preliminary work presented in [57]. The method is applied to PIV, computational fluid dynamics (CFD), and analytical data and results indicate that the tradeoff between minimizing errors in velocity magnitude values and errors in divergence can be managed such that both are decreased below levels observed for standard truncated sinc functionbased interpolators, as well as pure optical flowbased interpolators. The proposed method thus has potential to provide an improved basis for interpolating velocimetry data in applications where isotropic flow velocity volumes are desirable, but outofplane data (i.e., data in different images spanning a 3D volume) can not be resolved as highly as inplane data.
The remainder of this paper is structured as follows. In section 2, a definition of the term optical flow will be given and a canonical optical flow method will be briefly described. This will provide a basis for the following sections as most of the work described in this paper has been built on the described method. In section 2, an optical flowbased framework for interpolating minimally divergent velocimetry data is described. The new method uses flow velocity data to guide the interpolation toward lesser divergence in the interpolated data. In section 3, performance of the proposed technique is presented with experiments and simulations on real and analytical data. The results and performance of the proposed method are discussed and concluded in section 4.
2 Methods
2.1 Optical Flow
Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. In one canonical optical flow paper [45], two kinds of constraints are introduced in order to estimate the optical flow: the smoothness constraint and the brightness constancy constraint. In this section, we give a brief overview of the original optical flow algorithm (HornSchunck method) and the modified algorithm that was used in this project.
Optical flow methods estimate the motion between two consecutive image frames that were acquired at times and
. A flow vector for every pixel is calculated. The vectors represent approximations of image motion that are based in large part on local spatial derivatives. Since the flow velocity has two components, two constraints are needed to solve for it.
2.1.1 The Brightness Constancy Constraint
The brightness constancy constraint assumes that the brightness of a small area in the image remains constant as the area moves from image to image. Image brightness at the point () in the image at time is denoted here as . If the point moves by and in time , then according to the brightness constancy constraint:
(1) 
This can also be stated as:
(2) 
If we expand the left side of Eq. 2 with a Taylor series expansion, then:
(3) 
where the ellipsis (…) denotes higher order terms in the expansion. After canceling from both sides of the equation:
(4) 
We can divide this equation by , which leads to:
(5) 
Substituting:
the brightness constraint can be written in a more compact form:
(6) 
where , , and . In this form and represent the image velocity components and () represents the brightness gradients.
2.1.2 The Smoothness Constraint
Fortunately, points from an object that is imaged in temporally adjacent frames usually have similar velocities, which results in a smooth velocity field. Leveraging this property, we can express a reasonable smoothness constraint by minimizing the sums of squares of the Laplacians of the velocity components and . The Laplacians are:
(7a)  
(7b) 
2.1.3 Minimization
Optical flow assumes constant brightness and smooth velocity over the whole image. The two constraints described above are used to formulate an energy functional to be minimized:
(8) 
Using variational calculus, the EulerLagrange equations can be determined for this problem. Those equations need to be solved for each pixel in the image. Iterative methods are suitable to solve the equations since it can be very costly to solve them simultaneously. The iterative equations that minimize Eq. 8 are:
(9a)  
(9b) 
where n denotes the iteration number and and denote neighborhood averages of and . More detailed information on the method can be found in [45].
2.2 Optical Flow with Divergence Constraint
2.2.1 Continuity Equation
According to the continuity equation in fluid dynamics, the rate of mass entering a system is equal to the rate of the mass leaving the system [25]. The differential form of the equation is:
(10) 
where is the fluid density, is time and is the velocity vector field. In the case of incompressible flow, becomes constant and the continuity equation takes the form:
(11) 
This means that the divergence of the velocity field is zero in the case of incompressible flow. Figure 1 shows the change in flow velocity of a voxel.
2.2.2 Symmetric Setup
For the new method, a symmetric interpolation setup is proposed as shown in Figure 2. In the figure, upper and lower slices are from the dataset and the interpolated slice is in the middle.
(12) 
In this section, denotes the velocity magnitude image and denotes the velocity vector components (i.e., ,,). If one approximates the expressions with Taylor expansion around the points , we get:
(13a)  
(13b) 
After substituting Eqs. 13a and 13b into Eq. 12, terms can be arranged to obtain the new brightness constraint:
(14) 
In the next step, we aim to minimize the divergence of the interpolated slice. Ideally, the divergence equation of the interpolated slice should be used:
(15) 
Since this information is unavailable, to generate the middle slice with as little divergence as possible, we can use the fact that:
(16) 
which leads to the following constraint by using the divergence expressions of the two outer slices, and :
(17) 
Using Taylor expansion on Eq. 17 yields:
(18) 
In Eq. 18, we need the derivatives of and in the zdirection. Calculating these derivatives in the zdirection would require additional outer slices. To simplify this requirement, we can expand and around the points and obtain the following,
(19a)  
(19b) 
Using Eqs. 19a and 19b in Eq. 17, we obtain the new divergence constraint that doesn’t require additional slices for the zdirection derivative,
(20) 
Combining Eqs. 14, 20 and the optical flow smoothness constraint, we obtain the new energy functional that needs to be minimized,
(21) 
where
Using variational calculus, the EulerLagrange equations can be determined for this problem. They need to be solved for each pixel in the image. The iterative equations that minimize the solutions are given by,
(22a)  
(22b) 
where denotes the iteration number and and denote neighborhood averages of and . The coefficient expressions in Eqs. 22a and 22b are given as
The numerical scheme to solve the EulerLagrange equations is based on the solution laid out in [45]. More detailed information on the steps of the derivation can be found in the appendix.
There have been several studies that attempt to improve the performance of optical flow techniques and computation schemes [58, 59, 60, 61, 62, 63, 64, 65, 66]. For example, in [60] nonlinear convex penalty functions are used for the constraints in the optical flow energy functional. The approach uses numerical approximations to obtain a sparse linear system of equations from the highly nonlinear EulerLagrange equations. The resulting linear system of equations can be solved with numerical methods like GaussSeidel, which is similar to Jacobi method, or successive overrelaxation (SOR), which is a GaussSeidel variant. Another improvement to variational optical flow computation is presented in [61]. The approach uses a multigrid numerical optimization method and because of its speedup gains, it can be used in realtime. After all these advances, in [58], it was argued that the typical formulation of optical flow has changed little and most of the advances have been mainly numerical optimization and implementation techniques, and robustness functions. This is also true for the proposed method as well. The optical flow portion of this interpolation framework is closely related to the HornSchunck method. The derived numerical scheme to solve the equations enhances this notion while its implementation is straightforward and simple. For example, setting the divergence coefficient to in Eqs. 22a and 22b reduces the solutions to HornSchunck solutions. The numerical scheme is also sufficient for velocimetry data because unlike in many other types of images, stark discontinuities are unexpected in velocimetry images at Reynolds numbers on the order of biomedical flows.
2.3 PIV Setup
The testing datasets were acquired using particle image velocimetry, an optical experimental flow measurement technique. PIV data acquisition and processing generally consists of the following steps: (1) computational modeling, (2) physical model construction, (3) particle image acquisition, (4) PIV processing, and (5) data analysis. The testing datasets were acquired for an invitro model of a cerebral aneurysm. Patientspecific computed tomography (CT) images were first segmented and reconstructed to obtain the computational cerebral aneurysm model as shown in Figure 3. The computational model was then translated into an optically clear, rigid urethane model using a lostcore manufacturing methodology. The physical model was connected to a flow loop consisting of a blood analog solution seeded with 8 m fluorescent microspheres. Fluid flow through the physical model was controlled at specific flow rates (3, 4 and 5 mL/s). PIV was performed using a FlowMaster 3D Stereo PIV system (LaVision, Ypsilanti, MI), where the fluorescent particles were illuminated with a 532 nm dualpulsed Nd:YAG laser at a controlled rate, while two CCD cameras captured the images across seven parallel planes (or slices) within the aneurysmal volume. A distance of 1 mm separated the planes. Two hundred image pairs, at each flow rate and slice, were acquired at 5 Hz. The image pairs were processed using a recursive crosscorrelation algorithm using Davis software (LaVision, Ypsilanti, MI) to calculate the velocity vectors within region of interest (i.e., the aneurysm). Initial and final interrogation window sizes of 32 by 32 pixels and 16 by 16 pixels, respectively, were used. Detailed explanation of the experimental process can be found in [67]. A sample experimental model is shown in Figure 4.
The proposed algorithm was developed in MATLAB (Mathworks, Inc). Since the proposed algorithm has two separate terms for divergence and smoothness, different combinations of coefficients can be used for the terms. However, in order to get a clear idea about the performance of the method only one set of parameters were used in the simulations. The divergence term’s coefficient was set to 150. From previous tests, it was seen that the proposed method performed better when a relatively large was used while keeping the smoothness coefficient small. The smoothness coefficient
was set to 1. The same smoothness coefficient was also used for the HornSchunck based method. The iterations for both methods were set to 2000. Each PIV dataset used in testing had 7 slices. The slices were originally 154x121. They were cropped and zeropadded to reach 128x128. The size of the region where MSE and divergence were calculated is 110x110. Even though there are 7 slices in each dataset, only 3 slices were reconstructed from the datasets. These are slices 3, 4 and 5. Two different spacing steps were used between the slices. The first one is
z=2 where the neighboring slices z1 and z+1 were used to reconstruct the middle slice. The second one is z=4 where slices z2 and z+2 were used for the interpolation, e.g., slices 1 and 5 were used to reconstruct slice 3. The method was tested against linear interpolation and an implementation of HornSchunck optical flow based interpolation.2.4 Analytical Datasets
The method was tested with a 3D divergencefree analytical dataset and a CFD data set with turbulent flow. The analytical dataset is given below.
(23a)  
(23b)  
(23c) 
Outofplane distance was kept much higher than the inplane resolution. In order to assess the robustness of the proposed method, each velocity field was perturbed by Gaussian noise. The noise had zero mean and standard deviation of 10% of the maximum velocity in each velocity field.
2.5 Computational Fluid Dynamics (CFD) Simulations
The original computational aneurysm model was imported into ANSYS ICEM (ANSYS, Canonsburg, PA), where the inlet and outlets of the aneurysm model were extruded. After meshing was performed to discretize blood volumes into tetrahedrons, the final mesh was imported into ANSYS Fluent where the blood volume was modeled as an incompressible fluid with the same density and viscosity as the blood analog solution used in experiments. The vessel wall was assumed to be rigid, and a noslip boundary condition was applied at the walls. A steady flat 4ml/s flow profile was applied at the inlet of the model, and zero pressure boundary conditions were imposed at the outlets. The overall CFD approach has been described previously in [67, 68].
3 Results
Figure 5 shows divergence and MSE comparison graphs when z=2. The proposed method consistently achieves lower divergence values than the HornSchunckbased interpolation whereas the MSE values vary between better and worse values. On average, divergence values were 11% lower than the HornSchunckbased interpolation. In some cases, the proposed method achieves up to 20% lower divergence values.
Figure 6 shows divergence and MSE comparison graphs when z=4. In this case, the proposed method consistently achieves lower divergence and MSE values than the other tested methods.
Figure 7 shows original, noisy, and interpolated slices from the analytical dataset for comparison. In the figure, only and components were plotted to show the effect of the divergence term.
In Fig. 8, it can be seen that the proposed algorithm reduces divergence while the MSE is increased in the CFD dataset.
The graphs in Figure 9 show the behavior of the proposed method as the divergence coefficient increases linearly. In this simulation, the smoothness coefficient was kept constant (=1). The graphs are taken from the PIV dataset. The divergence graph profiles were consistent across different images and three datasets. The MSE graph profiles may differ slightly from the divergence graph profiles across different datasets, but MSE always increased with increasing . The coefficient values tested were from 0 to 2000. The profiles shown in the figure show that there needs to be a balance between the divergence and smoothing terms. The graphs in the figure are consistent with profiles of other published based regularization methods [69, 70]. Figure 10 shows the behavior of the proposed method as and increase linearly.
The computational cost of obtaining flow vectors with the proposed method is similar to that of the HornSchunck approach. Even though the iterative solutions of the proposed method employ several terms, these need to be computed only once and can be reduced to a simpler form that is similar to the HornSchunck solutions. Both approaches took around 0.1 seconds to obtain an optical flow field on a single core of an Intel dual core CPU (i76500U @ 2.50GHz).
Another parameter that could affect the divergence, MSE, and computational cost was the iteration number. We ran simulations with different iteration numbers and noted that the divergence and MSE results seem to stabilize after 200 iterations. Higher iteration number mostly had an effect on the computation time.
4 Discussion and Conclusions
A new optical flowbased framework for image interpolation that also reduces divergence is proposed. The new method uses flow velocity data to guide the interpolation toward lesser divergence in the interpolated data. In addition to the symmetric interpolation setup, the method introduces a new divergence term into the canonical optical flow method. The method is applied to PIV, analytical, and CFD data. The method was tested against linear interpolation and the HornSchunck optical flow method since it uses a similar formulation as the HornSchunck method. The proposed method applies a symmetric interpolation setup and considers a new divergence term in addition to the brightness and smoothness terms in the energy functional.
In order to test the effects of the divergence term, both the HornSchunck and proposed methods were subject to the same smoothness coefficient. When tested on the noisy analytical data, the proposed method achieved a smoother and less noisy interpolated velocity field.
The proposed method was also applied to the PIV data with different values of smoothness and divergence term coefficients, and , respectively. Results indicate that the tradeoff between minimizing errors in velocity magnitude values and errors in divergence can be managed such that both are decreased below levels observed for standard truncated sinc functionbased interpolators as well as pure optical flowbased interpolators. The divergence term coefficient, , needs to be large enough to reduce divergence in the interpolated data but not so large as to dominate the energy functional and introduce errors into the final interpolated velocity field. The effect of the iteration number on the divergence and MSE numbers was found to be minimal after 200 iterations. The computational cost of the method was similar to that of the HornSchunck based approach.
The method uses a numerical scheme that is wellknown and straightforward. It’s true that a more recent optical flow computation scheme may lead to performance gains in quality and/or speedup. Methods presented in [60] and [61] have become popular because of their speed, simplicity, and flexibility. Adoptation of recent numerical optimization and implementation techniques will be explored for future research.
The proposed method has potential to improve the interpolation of velocimetry data when it’s difficult achieve an outofplane resolution close to the inplane resolution. The results also indicate that the effect of the new divergence term in the optical flow functional can be appreciated better as the distance between the interpolated slice and the neighboring slices increases. It was noted that the proposed method outperforms the tested methods in both divergence and MSE values when the slice distance was increased. When the slice distance is small, the proposed method achieves lower divergence than the other methods while achieving similar MSE values.
Data Availability
The PIV data used to support the findings of this study are available from the corresponding author upon request. Sample code can be found at https://github.com/berkgithub/OF_interp.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Funding Statement
This work was supported by the National Science Foundation [1512553].
Acknowledgments
A portion of this work was presented as an abstract at the 2017 IEEE 14^{th} International Symposium on Biomedical Imaging (ISBI 2017).
Appendix A Appendix
(24) 
This can be minimized by solving the associated EulerLagrange equations.
where L is the integrand of the energy functional.
(25) 
After rearranging the terms, we get:
approximating the Laplacians of and ,
where is a proportionality constant and, and are local averages. These approximations are substituted for Laplacians and the terms in the equation are rearranged.
Determinants can be used to solve the above equations.
References

[1]
O. Oktay, W. Bai, M. Lee, R. Guerrero, K. Kamnitsas, J. Caballero,
A. de Marvao, S. Cook, D. O’Regan, and D. Rueckert,
“Multiinput cardiac image superresolution using convolutional neural networks,”
in Medical Image Computing and ComputerAssisted Intervention – MICCAI 2016. 2016, pp. 246–254, Springer International Publishing. 
[2]
Y. Liu, X. Chen, Z. Wang, Z.J. Wang, R.K. Ward, and X. Wang,
“Deep learning for pixellevel image fusion: Recent advances and future prospects,”
Information Fusion, vol. 42, pp. 158–173, 2018.  [3] R.S. Alves and J.M.R.S. Tavares, “Computer image registration techniques applied to nuclear medicine images,” in Computational and Experimental Biomedical Sciences: Methods and Applications. 2015, pp. 173–191, Springer International Publishing.
 [4] W. Witwit, Y. Zhao, K.W. Jenkins, and Y. Zhao, “Satellite image resolution enhancement using discrete wavelet transform and new edgedirected interpolation,” Journal of Electronic Imaging, vol. 26, 2017.
 [5] L. Roszkowiak, A. Korzynska, J. Zak, D. Pijanowska, Z. SwiderskaChadaj, and T. Markiewicz, “Survey: interpolation methods for whole slide image processing,” Journal of Microscopy, vol. 265, pp. 148–158, 2017.
 [6] J. Titus and S. Geroge, “A comparison study on different interpolation methods based on satellite images,” IJERT, vol. 2, June 2013.
 [7] K.K. Teoh, H. Ibrahim, and S.K. Bejo, “Investigation on several basic interpolation methods for the use in remote sensing application,” in Proc. IEEE CITISIA. IEEE, July 2008.
 [8] S.C. Park, M.K. Park, and M.G. Kang, “Superresolution image reconstruction: a technical overview,” IEEE Signal Process. Mag., vol. 20, May 2003.
 [9] T.M. Lehman, C. Gonner, and K. Spitzer, “Survey: Interpolation methods in medical image processing,” IEEE Trans. Med. Imag., vol. 18, pp. 291–294, Nov 1999.
 [10] G. Liberman, E. Solomon, M. Lustig, and L. Frydman, “Multiplecoil kspace interpolation enhances resolution in singleshot spatiotemporal mri,” Magnetic Resonance in Medicine, vol. 79, no. 2, pp. 796–805, 2018.
 [11] N. Karani, C. Tanner, S. Kozerke, and E. Konukoglu, “Reducing navigators in freebreathing abdominal mri via temporal interpolation using convolutional neural networks,” IEEE Trans. Med. Imag., 2018.
 [12] Y. Chen, Y. Xie, Z. Zhou, F. Shi, A. G. Christodoulou, and D. Li, “Brain mri super resolution using 3d deep densely connected neural networks,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), April 2018, pp. 739–742.
 [13] C. Pham, A. Ducournau, R. Fablet, and F. Rousseau, “Brain mri superresolution using deep 3d convolutional networks,” in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), April 2017, pp. 197–200.
 [14] Q. Lin, Q. Zhang, and L. Tongbin, “Slice interpolation in mri using a decompositionreconstruction method,” in 2017 4th International Conference on Information Science and Control Engineering (ICISCE), July 2017, pp. 678–681.
 [15] N.H.L.C de Hoon, A.C. Jalba, E. Eisemann, and A. Vilanova, “Temporal interpolation of 4d pcmri bloodflow measurements using bidirectional physicsbased fluid simulation,” in Eurographics Workshop on Visual Computing for Biology and Medicine. 2016, The Eurographics Association.
 [16] C. Ozturk, J.A. Derbyshire, and E.R.M. McVeigh, “Estimating motion from mri data,” in Proc. IEEE Inst. Electr. Electron Eng. IEEE, Oct 2003, pp. 1627–1648.
 [17] D.H. Frakes, C. Conrad, T. Healy, J.W. Monaco, M.J.T. Smith, M. Fogel, S. Sharma, and A.P. Yoganathan, “Application of an adaptive control grid interpolation technique to morphological vascular reconstruction,” IEEE Trans. Biomed. Eng., vol. 50, pp. 197–206, 2003.
 [18] G. Pastuszak and M. Trochimiuk, “Architecture design of the highthroughput compensator and interpolator for the h.265/hevc encoder,” Journal of RealTime Image Processing, vol. 11, no. 4, pp. 663–673, Apr 2016.
 [19] G. Botella, A. Garcia, M. RodriguezAlvarez, E. Ros, U. MeyerBaese, and M. C. Molina, “Robust bioinspired architecture for opticalflow computation,” IEEE Trans. VLSI Syst., vol. 18, no. 4, pp. 616–629, April 2010.
 [20] G. Botella, U. MeyerBaese, A. García, and M. Rodríguez, “Quantization analysis and enhancement of a vlsi gradientbased motion estimation architecture,” Digital Signal Processing, vol. 22, no. 6, pp. 1174 – 1187, 2012.
 [21] D. González, G. Botella, C. García, M. Prieto, and F. Tirado, “Acceleration of blockmatching algorithms using a custom instructionbased paradigm on a nios ii microprocessor,” EURASIP Journal on Advances in Signal Processing, vol. 2013, no. 1, pp. 118, Jun 2013.
 [22] A.H. Nguyen, M.R. Pickering, and A. Lambert, “The fpga implementation of a onebitperpixel image registration algorithm,” Journal of RealTime Image Processing, vol. 11, no. 4, pp. 799–815, Apr 2016.
 [23] Z. Xinchen, A.I. Haojun, H. Ruimin, and L. Deren, “A novel algorithm for subpixel block motion estimation [video compression applications],” in Proceedings of 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, 2004., Oct 2004, pp. 587–590.
 [24] D. Li, W. Zheng, and M. Zhang, “Architecture design for h.264/avc integer motion estimation with minimum memory bandwidth,” IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1053–1060, Aug 2007.
 [25] J. Pedlosky, Geophysical fluid dynamics, Springer, 1987.
 [26] G. Głomb, G. Świrniak, and J. Mroczka, “An optical flowbased method for velocity field of fluid flow estimation,” in Proc. SPIE 10329, Optical Measurement Systems for Industrial Inspection X, 2017, vol. 10329.
 [27] A. Baghaie and Z. Yu, “Curvaturebased registration for slice interpolation of medical images,” in Computational Modeling of Objects Presented in Images. Fundamentals, Methods, and Applications. 2014, pp. 69–80, Springer International Publishing.
 [28] A. This, L. BoilevinKayl, H.G. Morales, O. Bonnefous, P. Allain, M.A. Fernández, and J.F. Gerbeau, “One mesh to rule them all: Registrationbased personalized cardiac flow simulations,” in Functional Imaging and Modelling of the Heart. 2017, pp. 441–449, Springer International Publishing.
 [29] G. Głomb and G. Świrniak, “Particle image models for optical flowbased velocity field estimation in image velocimetry,” in Proc. SPIE 10679, Optics, Photonics, and Digital Technologies for Imaging Applications V, 2018, vol. 10679.
 [30] D.H. Frakes, K. Pekkan, L.P. Dasi, H.D. Kitajima, D.de Zelicourt, H.L. Leo, J. Carberry, K. Sundareswaran, H. Simon, and A.P. Yoganathan, “Modified control grid interpolation for the volumetric reconstruction of fluid flows,” Exp. Fluids, vol. 45, pp. 987–997, Dec 2008.
 [31] G.P. Penney, J.A. Schnabel, D. Rueckert, M.A. Viergever, and W.J. Niessen, “Registrationbased interpolation,” IEEE Trans. Med. Imag., vol. 23, pp. 922–926, 2004.

[32]
L.D.C. Casa and P.S. Krueger,
“Radial basis function interpolation of unstructured, threedimensional, volumetric particle tracking velocimetry data,”
Measurement Science and Technology, vol. 24, no. 6, pp. 065304, 2013.  [33] F. Brunet, E. Cid, A. Bartoli, E. Bouche, F. Risso, and V. Roig, “Image registration algorithm for molecular tagging velocimetry applied to unsteady flow in heleshaw cell,” Experimental Thermal and Fluid Science, vol. 44, pp. 897–904, 2013.
 [34] C.J. Elkins and M.T. Alley, “Magnetic resonance velocimetry: applications of magnetic resonance imaging in the measurement of fluid motion,” Exp. Fluids, vol. 43, no. 6, pp. 823–858, 2007.
 [35] D.H. Frakes, M.J.T. Smith, D. de Zelicourt, K. Pekkan, and A.P. Yoganathan, “Threedimensional velocity field reconstruction,” J. Biomech. Eng., vol. 126, pp. 727–735, 2004.
 [36] D.P. Melnikov and V.M. Shevtsova, “Liquid particles tracing in three dimensional buoyancy driven flows,” Fluid Dyn. Mater. Process, vol. 1, pp. 189–199, 2005.
 [37] D. Heitz, E. Mémin, and C. Schnörr, “Variational fluid flow measurements from image sequences: synopsis and perspectives,” Experiments in Fluids, vol. 48, no. 3, pp. 369–393, 2010.
 [38] M. Raffel, C. Willert, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide, Springer, 2013.
 [39] R. J. Adrian and J. Westerweel, Particle Image Velocimetry, Cambridge University Press, 2011.
 [40] F. Scarano, “Iterative image deformation methods in piv,” Measurement Science and Technology, vol. 13, no. 1, pp. R1–R19, 2002.
 [41] D. Fortun, P. Bouthemy, and C. Kervrann, “Optical flow modeling and computation: A survey,” Computer Vision and Image Understanding, vol. 134, pp. 1–21, 2015.
 [42] D. Sun, S. Roth, and M.J. Black, “A quantitative analysis of current practices in optical flow estimation and the principles behind them,” International Journal of Computer Vision, vol. 106, no. 2, pp. 115–137, Jan 2014.
 [43] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in The IEEE International Conference on Computer Vision (ICCV), December 2015.

[44]
G. Aubert and P. Kornprobst,
Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations
, Springer, 2006.  [45] B.K.P. Horn and B.G. Schunck, “Determining optical flow,” A.I. Memo 572, Massachusetts Institute of Technology, Apr 1980.
 [46] L. Alvarez, C. Castaño, M. García, K. Krissian, L. Mazorra, A. Salgado, and J. Sánchez, “A variational approach for 3d motion estimation of incompressible piv flows,” in Scale Space and Variational Methods in Computer Vision, Fiorella Sgallari, Almerico Murli, and Nikos Paragios, Eds., Berlin, Heidelberg, 2007, pp. 837–847, Springer Berlin Heidelberg.
 [47] L. Alvarez, C.A. Castaño, M. García, K. Krissian, L. Mazorra, A. Salgado, and J. Sánchez, “A new energybased method for 3d motion estimation of incompressible piv flows,” Computer Vision and Image Understanding, vol. 113, no. 7, pp. 802 – 810, 2009.
 [48] P. Ruhnau, A. Stahl, and C. Schnörr, “Variational estimation of experimental fluid flows with physicsbased spatiotemporal regularization,” Measurement Science and Technology, vol. 18, no. 3, pp. 755, 2007.
 [49] P. Ruhnau and C. Schnörr, “Optical stokes flow estimation: an imagingbased control approach,” Experiments in Fluids, vol. 42, no. 1, pp. 61–78, Jan 2007.
 [50] I. Herlin, D. Béréziat, N. Mercier, and S. Zhuk, “Divergencefree motion estimation,” in Computer Vision – ECCV 2012, Andrew Fitzgibbon, Svetlana Lazebnik, Pietro Perona, Yoichi Sato, and Cordelia Schmid, Eds., Berlin, Heidelberg, 2012, pp. 15–27, Springer Berlin Heidelberg.
 [51] Q. Zhong, H. Yang, and Z. Yin, “An optical flow algorithm based on gradient constancy assumption for piv image processing,” Measurement Science and Technology, vol. 28, no. 5, pp. 055208, 2017.

[52]
D. Suter,
“Motion estimation and vector splines,”
in
1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition
, 1994, pp. 939–942.  [53] S. N. Gupta and J. L. Prince, “Stochastic models for divcurl optical flow methods,” IEEE Signal Processing Letters, vol. 3, no. 2, pp. 32–34, 1996.
 [54] S. N. Gupta and J. L. Prince, “On divcurl regularization for motion estimation in 3d volumetric imaging,” in Proceedings of 3rd IEEE International Conference on Image Processing, 1996, vol. 1, pp. 929 – 932.
 [55] T. Corpetti, D. Heitz, G. Arroyo, E. Mémin, and A. SantaCruz, “Fluid experimental flow estimation based on an opticalflow scheme,” Experiments in Fluids, vol. 40, no. 1, pp. 80–97, Jan 2006.
 [56] S. M. Song and R. Leahy, “Computation of 3d velocity fields from 3d cine ct images of a human heart,” IEEE Trans. Med. Imag., vol. 10, no. 3, pp. 295 – 306, 1991.
 [57] B. Kanberoglu, P. Nair, and D. Frakes, “An optical flowbased approach for the interpolation of minimally divergent velocimetry data,” in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), 2017, pp. 538–542.
 [58] D. Sun, S. Roth, and M.J. Black, “Secrets of optical flow estimation and their principles,” 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2432–2439, 2010.
 [59] G. Aubert and P. Kornprobst, Mathematical problems in image processing: Partial differential equations and the calculus of variations, SpringerVerlag New York, 2006.
 [60] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in Computer Vision  ECCV 2004, T. Pajdla and J. Matas, Eds., Berlin, Heidelberg, 2004, pp. 25–36, Springer Berlin Heidelberg.
 [61] A. Bruhn, J. Weickert, T. Kohlberger, and C. Schnörr, “A multigrid platform for realtime motion computation with discontinuitypreserving variational methods,” International Journal of Computer Vision, vol. 70, no. 3, pp. 257–277, Dec 2006.
 [62] A. Bruhn, J. Weickert, and C. Schnörr, “Lucas/kanade meets horn/schunck: Combining local and global optic flow methods,” International Journal of Computer Vision, vol. 61, no. 3, pp. 211–231, Feb 2005.
 [63] A. Bruhn, J. Weickert, C. Feddern, T. Kohlberger, and C. Schnorr, “Variational optical flow computation in real time,” IEEE Trans. Image Process., vol. 14, no. 5, pp. 608–615, May 2005.
 [64] P. Gwosdek, H. Zimmer, S. Grewenig, A. Bruhn, and J. Weickert, “A highly efficient gpu implementation for variational optic flow based on the eulerlagrange framework,” in Trends and Topics in Computer Vision, K.N. Kutulakos, Ed. 2012, pp. 372–383, Springer Berlin Heidelberg.

[65]
H.A. Rashwan, M.A. García, and D. Puig,
“Variational optical flow estimation based on stick tensor voting,”
IEEE Trans. Image Process., vol. 22, no. 7, pp. 2589–2599, July 2013.  [66] D. Fortun, P. Bouthemy, and C. Kervrann, “Optical flow modeling and computation: A survey,” Computer Vision and Image Understanding, vol. 134, pp. 1–21, 2015.
 [67] B.N. Roszelle, P. Nair, L.F. Gonzalez, M.H. Babiker, J. Ryan, and D. Frakes, “Comparison among different high porosity stent configurations: Hemodynamic effects of treatment in a large cerebral aneurysm,” J. Biomech. Eng., vol. 136, 2014.
 [68] M.H. Babiker, L.F. Gonzalez, J. Ryan, F. Albuquerque, D. Collins, A. Elvikis, and D. Frakes, “Influence of stent configuration on cerebral aneurysm fluid dynamics,” J. Biomech. Eng., vol. 45, pp. 440–447, 2011.
 [69] C.B. Shaw and P.K. Yalavarthy, “Effective contrast recovery in rapid dynamic nearinfrared diffuse optical tomography using normbased linear image reconstruction method,” J. Biomed. Opt., vol. 17, 2012.
 [70] C. Habermehl, J. Steinbrink, K.R. Muller, and S. Haufe, “Optimizing the regularization for image reconstruction of cerebral diffuse optical tomography,” J. Biomed. Opt., vol. 19, 2014.
Comments
There are no comments yet.