Neural Particle Image Velocimetry

01/28/2021 ∙ by Nikolay Stulov, et al. ∙ The University of Arizona ∙ Skoltech

In the past decades, great progress has been made in the field of optical and particle-based measurement techniques for experimental analysis of fluid flows. The Particle Image Velocimetry (PIV) technique is widely used to identify flow parameters from time-consecutive snapshots of particles injected into the fluid. The computation is performed as post-processing of the experimental data via a proximity measure between particles in frames of reference. However, the post-processing step becomes problematic as the motility and density of the particles increase, since the data emerges at extreme rates and volumes. Moreover, existing algorithms for PIV either provide sparse estimations of the flow or require large computational time, preventing on-line use. The goal of this manuscript is therefore to develop an accurate on-line algorithm for estimation of the fine-grained velocity field from PIV data. As the data constitutes a pair of images, we employ computer vision methods to solve the problem. In this work, we introduce a convolutional neural network adapted to the problem, namely the Volumetric Correspondence Network (VCN), which was recently proposed for end-to-end optical flow estimation in computer vision. The network is thoroughly trained and tested on a dataset containing both synthetic and real flow data. Experimental results are analyzed and compared to those of conventional methods as well as other recently introduced methods based on neural networks. Our analysis indicates that the proposed approach provides improved efficiency while keeping accuracy on par with other state-of-the-art methods in the field. We also verify through a-posteriori tests that our newly constructed VCN schemes reproduce physically relevant statistics of velocity and velocity gradients well.





In the past decades, great progress has been made in the field of optical and particle-based measurement techniques for non-intrusive turbulent flow monitoring. Techniques like Particle Image Velocimetry (PIV) [Adrian1984] and Particle Tracking Velocimetry (PTV) [Adamczyk and Rimai1988] are widely used to identify flow parameters from time-consecutive snapshots of particles injected in the flow.

In this work we consider the following PIV experimental setting:

  • Many small particles not affecting the flow but advected by the flow are injected.

  • The flow domain is illuminated with laser rays to highlight the particles.

  • Successive snapshots of the continuous optical field, including highlighted particles, are recorded by a high-speed camera.

  • The recorded data is processed to extract information about the flow.

See [Raffel et al.2018, Adrian, Adrian, and Westerweel2011] for a review of the state of the art in PIV.

Consider the example of a stationary incompressible velocity field, $u(r)$, in a finite domain of the two-dimensional space, $\Omega \subset \mathbb{R}^2$, such that $\nabla \cdot u = 0$. Assume that multiple particles are injected and advected by the flow. We observe the particles in the form of an image, which can be considered as a scalar density field $\rho(t; r)$, where $t$ is time. We assume that the effect of molecular diffusion on the density field, controlled by the diffusion coefficient, $\kappa$, is significantly weaker than the effect of advection. In this setting the following advection-diffusion equation governs the spatio-temporal dynamics of the density field:

$\partial_t \rho + (u \cdot \nabla) \rho = \kappa \nabla^2 \rho. \qquad (1)$

The PIV problem becomes: given a pair or sequence of successive images of $\rho$, to learn the spatial distribution of $u$. This post-processing step, that is, estimation of $u$ from a pair or sequence of snapshots, sets up the main challenge in PIV.
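For intuition, the forward (data-generating) direction of this setting, transport of the density by the velocity with diffusion neglected, can be sketched as a one-step semi-Lagrangian update. The function name, periodic boundaries, and nearest-neighbour sampling below are simplifying assumptions for illustration, not the procedure used in the experiments:

```python
import numpy as np

def advect_density(rho, u, v, dt=1.0):
    """Semi-Lagrangian advection step for the diffusionless limit of Eq. (1):
    rho(t + dt, x) = rho(t, x - dt * u(x)). Periodic boundaries and
    nearest-neighbour sampling keep the sketch short."""
    n = rho.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    # trace each pixel back along the (local) velocity and sample there
    x_src = np.rint(xx - dt * u).astype(int) % n
    y_src = np.rint(yy - dt * v).astype(int) % n
    return rho[y_src, x_src]
```

For a uniform unit velocity in the x-direction, this reduces to shifting the density image by one pixel, which is the expected behavior of pure advection.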

Two families of methods to approach the challenge are documented in the literature: the Cross-Correlation (CC) based [Goodman2005] and the Optical Flow (OF) based [Horn and Schunck1981] methods. In fact the two methodologies are complementary, in the sense that CC-based methods are advantageous in terms of computational efficiency but produce a rather coarse velocity, while OF-based methods result in a much better resolved velocity field but lack efficiency. See [Liu et al.2015] for a review of applications of the CC-based and OF-based families of methods to the PIV setting in fluid mechanics.

Our Contributions

We propose to view the density field as a passive scalar, that is, a scalar advected by the velocity but not influencing the velocity field. Our PIV task becomes: given consecutive snapshots of the passive scalar (the density field), reconstruct the velocity.

The density of particles in PIV data resolved at the pixel level is discontinuous, making velocity reconstruction problematic. We resolve the problem by coarsening images with a hybrid of CC- and OF-based methods utilizing Convolutional Neural Networks (CNNs).

The essence of this work is in adapting to PIV, and experimenting with, the most recent and advanced VCN architecture, developed originally for the (non-physical) stereo matching problem. The network is thoroughly trained and tested on a dataset containing both synthetic and real flow data. Experimental results are analyzed and compared to conventional PIV methods as well as to the recently developed PIV methods mentioned above. This analysis indicates that the proposed approach delivers a gain in efficiency while keeping the accuracy on par with the state of the art in PIV.

The material in the manuscript is organized as follows: the state of the art in the field is reviewed in the Section Related Work, which includes (as Subsections) descriptions of Conventional PIV methods and Deep Learning for PIV. We discuss the data sources we use in the manuscript for training and validation in the Section of that name. Our Methodology and Experiments and Results are described in the following two major Sections of the manuscript. We conclude with the Section devoted to Conclusions and Path Forward.

State of the Art

Conventional PIV methods

CC-based methods have been around for more than twenty years [Goodman2005] and, as such, are extensively studied. The gist of the method consists in reconstructing correspondences between parts of the two consecutive images by running computations on pairs of interrogation windows (patches) of the two consecutive images separated by a unit of time. The computation on patches is as simple as a sum of products of intensities of the two patches, which is referred to as the cross-correlation operation further on. For a pair of $c$-channel, size-$K$ patches, $F_1$ and $F_2$, centered at $x$ and $x + d$ in the first and second images respectively, the cross-correlation operation is defined as

$CC(x, d) = \sum_{k} F_1(x + k) \cdot F_2(x + d + k), \qquad (2)$

where $\cdot$ denotes the channel-wise dot-product and $k$ runs over pixel offsets within the patch.

For each patch of the first image, all patches in the second image are considered, each located at a unique displacement $d$. Through computing cross-correlations between each pair, an array of values, called the cost volume $C(x, d)$, is obtained. Then, an optimal displacement for location $x$ is extracted by locating the argmaximum of the cost volume:

$d^*(x) = \arg\max_{d} C(x, d). \qquad (3)$

This operation is repeated for all $x$ associated with locations of patches in the first image. The collected displacements per unit time constitute the predicted velocity at position $x$. To aid the method, images are warped towards each other according to the current flow predictor.
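The window-matching loop above can be sketched in a few lines; the window size, search range, and function name are illustrative choices, and production CC codes use FFT-based correlation rather than this direct sum:

```python
import numpy as np

def cross_correlation_displacement(img1, img2, win=16, max_disp=8):
    """Estimate per-window displacements by locating the argmax of the
    cross-correlation cost volume (Eqs. 2-3): for each interrogation window
    of img1, score every candidate displacement of the matching window in
    img2 by the sum of intensity products."""
    H, W = img1.shape
    flow = np.zeros((H // win, W // win, 2))
    for i in range(H // win):
        for j in range(W // win):
            y, x = i * win, j * win
            patch = img1[y:y + win, x:x + win]
            best, best_d = -np.inf, (0, 0)
            for dy in range(-max_disp, max_disp + 1):
                for dx in range(-max_disp, max_disp + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + win <= H and 0 <= xx and xx + win <= W:
                        score = np.sum(patch * img2[yy:yy + win, xx:xx + win])
                        if score > best:
                            best, best_d = score, (dy, dx)
            flow[i, j] = best_d
    return flow
```

The output is one displacement vector per window, which is exactly the coarse, window-resolution velocity field the text attributes to CC-based methods.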

An exemplary CC method is a window deformation iterative multi-grid (WIDIM) method [Scarano2001], which has shown good performance in the International PIV Challenges [Stanislas, Okamoto, and Kähler2003, Stanislas et al.2005, Stanislas et al.2008].

The optical flow (OF) method, developed by Horn and Schunck in [Horn and Schunck1981], relies on solving a computationally more demanding but also more accurate global optimization,

$\min_{u} \int_{\Omega} \left[ \left( \partial_t \rho + (u \cdot \nabla) \rho \right)^2 + \alpha^2 |\nabla u|^2 \right] dr, \qquad (4)$

where the objective is the integrated norm of the mismatch between the left-hand side and the right-hand side in the brightness change constraint equation (BCCE), which is essentially the advection-diffusion equation (1) with the diffusion coefficient, $\kappa$, replaced by the regularization coefficient, $\alpha$.

If BCCE is satisfied, then under mild assumptions imposed on the sequence of images the functional in Eq. 4 is strictly convex and thus has a unique global optimal point.

To avoid aliasing and improve robustness to violations of the assumptions, the first step is to compute the velocity field at a coarse level by using only low spatial frequency components. The second step is to compensate the reconstructed motion by warping the images towards each other. Then, the higher spatial frequencies are used to estimate a correction optical flow on the warped sequence, which results in a refined optical flow estimate. This process is repeated sequentially at finer spatial scales until the original resolution is reached.
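The basic single-level Horn-Schunck update underlying this coarse-to-fine loop can be sketched as a fixed-point iteration. The gradient stencils, the four-neighbour average, and the iteration count below are illustrative simplifications, not the solver used in the cited works:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck solver: Jacobi-style updates derived from the
    Euler-Lagrange equations of the functional (4), with a four-neighbour
    average approximating the Laplacian and alpha as the regularization
    weight."""
    Ix = np.gradient(I1, axis=1)   # brightness gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal difference
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    avg = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                            + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

Applied to a smooth blob translated in the x-direction, the recovered flow points in the positive x-direction over the blob, matching the BCCE intuition.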


The original Horn and Schunck (HS) formulation has been extended and optimized in [Ruhnau et al.2005]. See also [Heitz, Mémin, and Schnörr2010] for a comprehensive discussion of the OF methods’ state-of-the-art.

Notice that both OF and CC methods were introduced originally for a problem in the field of computer vision: reconstructing the fictitious velocity flow connecting two consecutive images in a movie-type data stream. This problem is named optical flow or stereo matching, and one can view PIV, aimed at reconstructing the actual velocity from consecutive images of particles injected into a fluid flow, as a special case of it. The PIV problem is special because of constraints related to the physical conditions of the actual fluid mechanics experiment. For example, the velocity field may possess a special type/degree of compressibility, or it may show a special multi-scale structure, etc.

Deep Learning for PIV

Notice that neither basic OF nor basic CC provides a satisfactory balance of efficiency and accuracy for stereo matching. It was reported recently that novel Deep Learning based extensions of these methods are capable of filling this niche. The core idea behind these methods (expressed casually here) consists in incorporating a cross-correlation module (see Eq. 2) and then refining its coarse output through interpolation and regularization modules inspired by those of the OF-based methods and subject to adjustment (learning). This was made possible by the release of open-source labeled stereo motion estimation datasets [Menze and Geiger2015, Scharstein et al.2014, Butler et al.2012].

In [Dosovitskiy et al.2015], the authors propose two CNN architectures, FlowNetC and FlowNetS. [Ilg et al.2017] improves on the result using a stacked architecture called FlowNet2. A different direction was taken in [Hui, Tang, and Change Loy2018], proposing a lightweight yet powerful architecture named LiteFlowNet. Finally, the most recent result, namely Volumetric Correspondence Network (VCN) [Yang and Ramanan2019], sets a new state-of-the-art in the field by handling the cross-correlation operation result through volumetric 4D convolutions.

Deep Learning methodology was first applied to the PIV problem in [Rabault, Kolaas, and Jensen2017]. The next milestone in PIV was achieved in [Lee, Yang, and Yin2017], which combined cross-correlation techniques with the power of Deep Learning. Some of the above-mentioned methods were also successfully adapted to the PIV problem. For example, the authors of [Cai et al.2019b] built a PIV estimator based on the FlowNetS architecture, and the authors of [Cai et al.2019a] developed and extended it.

To recap, CNN-based methods provide a dense (per-pixel) motion field similar to OF-based methods at the efficiency of CC-based methods (10-100 ms per image pair). Even though training the model is time consuming, once the network is trained, it can be used for real-time estimation. This feature is very much in demand in fluid mechanics, as it allows real-time (online) monitoring and even active flow control, instead of the store-and-post-process workflow that currently dominates PIV experiments.


In this work, we choose to experiment largely with openly available synthetic and experimental data, as well as with synthetic data we generate ourselves.

First, we utilize synthetic data assembled in [Cai et al.2019b], which was generated by mimicking a conventional PIV data-collection procedure as follows. The velocity field is generated or obtained from a database. Consecutive images of the density field are generated by seeding particles homogeneously at random and smoothing the particle representation in the image according to a Gaussian kernel (Eq. 6). Parameters of the kernel are selected randomly from the pre-defined ranges described in Tab. 1. Then, particles are advected by the velocity both forward and backward in time to generate synthetic images. The overall procedure is illustrated in Fig. 1.

Parameter Range Unit
Seeding density 0.05 - 0.1 ppp
Particle diameter 1 - 4 pixel
Peak intensity 200 - 255 grey value
Location 1 - 256 pixel
Table 1: Ranges of the parameters in Eq. 6.
Figure 1: Particle generation procedure from [Cai et al.2019b]. First, particle positions are sampled (a) and the velocity field is prepared (b). Then, particles are advected by the flow in the forward and backward direction to obtain two snapshots.
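The rendering step can be sketched as follows. The $\exp(-8 r^2 / d^2)$ intensity profile is a common synthetic-PIV convention assumed here (the paper's Eq. 6 is not reproduced in this extract), and the function name is illustrative:

```python
import numpy as np

def render_particles(xs, ys, size=256, diameter=2.0, peak=255.0):
    """Render a synthetic particle image: each particle contributes a
    Gaussian intensity spot of the given diameter and peak grey value,
    matching the parameter ranges of Table 1."""
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for x0, y0 in zip(xs, ys):
        # Gaussian spot centered at (x0, y0); width set by the diameter
        img += peak * np.exp(-8.0 * ((xx - x0) ** 2 + (yy - y0) ** 2)
                             / diameter ** 2)
    return img
```

A pair of frames is then obtained by rendering the same particle set at its original and advected positions.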

An example of an image and the velocity field mapping the first snapshot of the density field to the second snapshot are shown in Fig. 2.

Figure 2: Left: an exemplary density distribution. Right: respective color-coded velocity field.

The velocity field is obtained from five principal sources described in detail in Table 2. In the table, we denote our simulation of a random incompressible flow as ours, computational fluid dynamics (CFD) simulations as uniform, back-step and cylinder, a 2D turbulent flow motion simulation as DNS-turbulence, a flow simulation of a surface quasi-geostrophic model as SQG, and data from the Johns Hopkins Turbulence Database as JHTDB.

We generate our random flow using PhiFlow [Holl, Koltun, and Thuerey2020] by taking random per-pixel amplitudes and phases in the Fourier domain, applying a high-pass filter, smoothing, and converting the resulting spectrum to a velocity field with the inverse Fourier transform. After that, a pressure solver is used to zero out the divergence. This allows us to model an incompressible, time-independent, multi-scale flow.
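A minimal numpy-only sketch of this generator (the smoothing step is omitted, and a spectral Leray projection stands in for PhiFlow's pressure solver; parameter names and defaults are illustrative):

```python
import numpy as np

def random_incompressible_flow(n=64, k_min=2, seed=0):
    """Random divergence-free 2D velocity field: random complex amplitudes
    in Fourier space, a high-pass cut at k_min, then projection of each
    mode onto the plane orthogonal to its wavevector (zeroing divergence)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=1.0 / n)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    K2 = KX**2 + KY**2
    spec = rng.standard_normal((2, n, n)) + 1j * rng.standard_normal((2, n, n))
    spec[:, K2 < k_min**2] = 0.0          # high-pass filter (drops the mean too)
    spec[:, n // 2, :] = 0.0              # drop Nyquist modes so the real
    spec[:, :, n // 2] = 0.0              # field stays exactly divergence-free
    # Leray projection: remove the component of each mode parallel to k
    K2_safe = np.where(K2 == 0, 1.0, K2)
    div = KX * spec[0] + KY * spec[1]
    spec[0] -= KX * div / K2_safe
    spec[1] -= KY * div / K2_safe
    return np.fft.ifft2(spec[0]).real, np.fft.ifft2(spec[1]).real
```

The projection zeroes $k \cdot \hat{u}(k)$ mode by mode, so the spectral divergence of the returned field vanishes to machine precision.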

The displacements of particles did not exceed 10 pixels anywhere in the dataset. The use of volumetric convolutions suggests that a model trained on this limited displacement range can adapt to real data.

The overall dataset size is around 15 thousand entries of image pairs and corresponding flows. We use every fifth image pair for testing and the rest for training.

Name Description Condition Size
Ours Random incompressible flow - ?
Uniform Uniform flow 1000
Back-step Backward stepping flow Re = 800 600
Re = 1000 600
Re = 1200 1000
Re = 1500 1000
Cylinder Flow over a circular cylinder Re = 40 50
Re = 150 500
Re = 200 500
Re = 300 500
Re = 400 500
DNS-turbulence Homogeneous isotropic turbulent flow - 2000
SQG SQG sea surface flow - 1500
JHTDB-channel Channel flow provided by JHTDB - 1600
JHTDB-mhd Forced MHD turbulence provided by JHTDB - 800
JHTDB-isotropic Forced isotropic turbulence provided by JHTDB - 2000
Table 2: Description of the velocity-field data sources, including entries extracted from the Johns Hopkins Turbulence Database [JHTDB].

Our Methodology

Recall that we consider the PIV problem as the problem of inferring the velocity field from a pair of filtered images by computing a matching between them, analogous to CC-based methods. A standard filter can be used; however, a learned convolutional filter is advantageous as it provides greater flexibility. A sequential coarse-to-fine framework, equivalent to the ones accumulated within the OF-based methods, yields a spatially well-resolved prediction. To assist the network in flow refinement, at every level the first feature map is warped towards the second one with the velocity field obtained at the previous step.

VCN improves on the common principal approach described above via several novelties drafted below. We refer the reader to the original VCN paper [Yang and Ramanan2019] for details. The full operational pipeline of VCN is illustrated in Fig. 3.

Figure 3: VCN pipeline.

In the recent literature, the originally 4D cost volume is reshaped into 3D by collapsing the two displacement dimensions into one, and processed with 2D convolutions. VCN instead operates on the original 4D cost volume and processes it with 4D convolutions. This allows for volumetric invariance and generalizes well to new, unseen displacements. The additional computational burden is resolved with separable 4D convolutions: application of a 4D filter to the cost volume reduces to a 2D spatial filter application followed by a 2D winner-take-all (WTA) filter application.

Another improvement of VCN is in the cost volume itself. Instead of choosing between the restricted form of the traditional cost volume [Chang and Chen2018] and the computationally prohibitive feature form of the cost volume [Kendall et al.2017], VCN introduces a compromise multi-channel cost volume. Thus, instead of a convolution over features as in Eq. 2, a feature-wise cosine similarity is used. The final prediction is computed by filtering the hypotheses with channel-wise convolutional modules and weighting the results with a soft-max distribution.
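The cosine-similarity cost volume can be sketched as follows (a numpy illustration of the idea rather than VCN's implementation; the function name, tensor layout, and search range are assumptions):

```python
import numpy as np

def cosine_cost_volume(f1, f2, max_disp=2):
    """Build a (D, D, H, W) cost volume, D = 2*max_disp + 1, from feature
    maps of shape (C, H, W): for each candidate displacement (dy, dx), the
    per-pixel cosine similarity between L2-normalized features is stored."""
    def normalize(f):
        return f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-9)
    f1n, f2n = normalize(f1), normalize(f2)
    C, H, W = f1.shape
    D = 2 * max_disp + 1
    p = max_disp
    f2p = np.pad(f2n, ((0, 0), (p, p), (p, p)))  # zero-pad the search border
    cost = np.zeros((D, D, H, W))
    for i in range(D):
        for j in range(D):
            cost[i, j] = (f1n * f2p[:, i:i + H, j:j + W]).sum(axis=0)
    return cost
```

With identical feature maps, the zero-displacement slice of the volume is identically one, the maximum of the cosine similarity, as expected.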

We denote the original VCN model, trained for the PIV problem, as PIV-VCN. We discover, through extensive experimentation, that PIV-VCN provides fast and accurate predictions, outperforming the other models error-wise. However, visually, the predicted velocity field still leaves plenty of room for improvement, especially at the finer scales and mostly localized in the areas of large vorticity.

We conjecture that the caveat in the quality of the velocity field reconstruction is due to the fact that VCN produces an output resolved only at a quarter of the original image resolution. We verify the hypothesis by adding another iteration of refinement and regularization, so that the prediction of the model is at half the resolution of the original image. The refined model is coined PIV-VCN-en in the following.

Experiments and Results

Experimental setup

Similar to PWC-Net and LiteFlowNet [Hui, Tang, and Change Loy2018], VCN uses pyramidal feature extraction through a modified PSPNet [Zhao et al.2017] with a total of 6 levels. Each level is equipped with a coarse-to-fine framework and feature warping, shown in Fig. 3. In PIV-VCN-en, we add an additional 7th level at the end. The rest of the hyperparameters for the layers (strides, kernel sizes, hidden layer sizes, number of hypotheses) are set according to the original VCN paper.

Unlike LiteFlowNet, the model is trained in a single stage, all levels simultaneously, using learning rate scheduling with restarts.

As mentioned before, we use 80% of the dataset for training and 20% for testing. Moreover, random scale, rotation and translation augmentations are applied to boost the network performance. We found, however, that scaling augmentation prevents the network from learning sub-pixel fine effects of the flow. Thus, we gradually decrease the power of scaling augmentation throughout training.

The network is built and trained in PyTorch. The loss function is a sum of norm contributions corresponding to the different scales, each evaluating the mismatch between the respective ground-truth and predicted flows.


We verify the quality of the predictions with the Root Mean Squared Error (RMSE) metric,

$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{x} \left| u_{\mathrm{pred}}(x) - u_{\mathrm{gt}}(x) \right|^2},$

where the sum runs over the $N$ pixels of a snapshot. We also use the per-pixel Squared Error (SE) metric to output pixel-wise error in the Figures presenting our results.
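The RMSE metric amounts to one line of numpy; the per-pixel end-point-error convention below is a common choice for flow fields and is assumed here:

```python
import numpy as np

def rmse(u_pred, u_true):
    """Root mean squared end-point error between predicted and ground-truth
    velocity fields of shape (H, W, 2): per-pixel squared vector error,
    averaged over pixels, then square-rooted."""
    return np.sqrt(np.mean(np.sum((u_pred - u_true) ** 2, axis=-1)))
```

For example, a constant prediction error of (3, 4) pixels per pixel yields an RMSE of 5.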


Qualitative assessment

We show results of the PIV-VCN model error analysis by plotting the squared error of the reconstructed images in Fig. 4.

Figure 4: Errors of PIV-VCN model. Left to right, top to bottom: back-step, cylinder, DNS turbulence, JHTDB-channel, SQG, uniform. Darker pixels indicate larger error. For illustration purposes, contrast was enhanced.

For the PIV-VCN model, minor differences between the predicted velocity field and the respective ground truth are visible, especially in the areas of vortices. However, by and large, the overall quality of estimation is good. Moreover, in the case of the PIV-VCN-en model the predicted flow matches the ground truth even better (almost perfectly).

Quantitative assessment

We evaluate the VCN models on the test data and compare the results to other methods in Table 3. From these estimations, we conclude that the PIV-VCN model is on par with or better than its competitors in most cases, with the only notable exception of the Uniform flow. We conjecture that the exception is due to errors in the coarse flow being magnified by the interpolation procedure. The conjecture is also supported by the improvement seen in PIV-VCN-en. We conclude that the PIV-VCN model is best suited for capturing complex flows.

Model Uniform Back-step Cylinder Channel DNS-turbulence SQG
WIDIM 0.017 0.034 0.083 0.084 0.304 0.457
HS OF 0.022 0.045 0.070 0.069 0.135 0.156
PIV-DCNN 0.018 0.049 0.100 0.117 0.334 0.479
PIV-FlowNetS 0.126 0.139 0.194 0.237 0.525 0.525
PIV-FlowNetS-en 0.059 0.072 0.115 0.155 0.282 0.294
PIV-LiteFlowNet 0.054 0.056 0.083 0.104 0.196 0.202
PIV-LiteFlowNet-en 0.026 0.033 0.049 0.075 0.122 0.126
PIV-VCN 0.265 0.024 0.025 0.085 0.099 0.103
PIV-VCN-en 0.036 0.014 0.017 0.039 0.060 0.067
Table 3: RMSE in pixels on PIV dataset. Bold font highlights the best results.

Another important feature of a PIV algorithm is the computational efficiency of the inference step, expressed as the time required for inference, shown in Table 4. We observe that the PIV-VCN models are among the top-performing models in terms of efficiency and are capable of online computations. Although the PIV-VCN-en model is twice as slow as PIV-VCN, it is also much more accurate. Let us also note that a speedup of around 20 ms for the VCN model is reported in [Yang and Ramanan2019] by implementing the cross-correlation operation in CUDA, which we have not yet implemented in our experiments.

VCN-en VCN LiteFlowNet-en FlowNetS-en HS OF WIDIM
Inference time 0.140 0.075 0.044 0.010 1.075 0.422
Table 4: Inference time (in seconds) on PIV dataset.

Physics-Informed Diagnostics

Let us now test how our newly developed ML schemes reproduce physical correlations characteristic of turbulent flows. We continue to work with the ground truth data for velocity fields, $u(r)$, corresponding to various cases of developed turbulence from the Johns Hopkins Turbulence Database [Li et al.2008, Perlman et al.2007], and apply the physics-informed diagnostics reported in [King et al.2018] to the Machine Learning schemes discussed above. These basic physics-informed tests include

  • Testing the degree of velocity compressibility, i.e. the deviation of $\nabla \cdot u$ from zero (applicable to the cases where the flow we learn is incompressible). This test may also be extended to statistics of the velocity gradient tensor, $\nabla u$ (and not only to its diagonal components controlling incompressibility).

  • Energy spectra, $E(k) = \langle |\hat{u}(k)|^2 \rangle$, where $\hat{u}(k)$ is the spatial Fourier transform of $u(r)$, over the range of scales and respective $k$-harmonics. ($\langle \cdots \rangle$ stands for spatial, over a snapshot, and temporal, over many snapshots, averaging.)

  • Testing the dependence of the $n$-th order velocity structure function, $S_n(r) = \langle |u(x + r) - u(x)|^n \rangle$, on the order $n$ and on the scale $r = |r|$. Here, $r$ is assumed to scan the entire inertial range of turbulence (ranging from the smallest, viscous scale all the way to the largest, energy containing scale).

  • Testing statistics of the velocity gradient tensor, $\nabla u$, coarse-grained over a scale $r$ from the inertial range of turbulence (i.e. spatially convoluted with a kernel, typically of Gaussian shape, of size $r$). Specifically, we usually aim to reconstruct statistics of the second, $Q$, and third, $R$, invariants of the coarse-grained velocity gradient tensor.

We report results of the first three tests below.

These tests of velocity statistics can also be extended to verify statistics of the density field, $\rho(t; r)$. We will not present results of the density statistics diagnostics in this paper, but we describe the tests below for completeness.

  • Density spectra, $\langle |\hat{\rho}(k)|^2 \rangle$, where $\hat{\rho}(k)$ is the spatial Fourier transform of $\rho(r)$, over the range of scales and respective $k$-harmonics.

  • Intermittency of density (typically much more strongly pronounced than intermittency of velocity): testing the scaling of the density increments, $\delta_r \rho = \rho(x + r) - \rho(x)$, as functions of the order and the scale, where $r$ scans scales from the inertial range of turbulence.

  • Statistics of the density gradient vector, $\nabla \rho$, with a special focus on the tails of the probability distribution.

It is also of interest to test some mixed objects, for example

  • Mean flux of density fluctuations as a function of scale, and respective higher order moments (statistics/intermittency) of the flux.

Notice that these diagnostics are focused on evaluating simultaneous (same-time) statistics of velocity and density. Even richer tests can be designed to analyze temporal, i.e. different-time, correlations in the Eulerian (not moving with the flow) and Lagrangian (evolving with the flow) frames.

The extended diagnostics of turbulence are expected to be useful not only for a-posteriori tests of the Machine Learning schemes but also for enforcing expected physical correlations a-priori. We envision incorporating (in future work) some of the physics-informed regularizations into the loss function. This approach should help to minimize deviations of those physically significant statistical characteristics which show the largest mismatch in the a-posteriori tests.

In the rest of the Section we give more details on the a-posteriori diagnostics. We present results of the tests of the simultaneous velocity statistics described above. The tests are applied to five data sets corresponding to a back-step flow (shown in Fig. 5), a cylinder flow (shown in Fig. 6), a channel flow (shown in Fig. 7), an exemplary homogeneous isotropic (compressible) turbulence flow (shown in Fig. 8) and the surface quasi-geostrophic flow (shown in Fig. 9).

Power Spectrum Test

We work with a standard 2D Discrete Fourier Transform of a spatial snapshot of the flow and then average over the wave-number orientation. The resulting power spectrum is shown on a log-log scale against the wave number. The wave-number range extends from the largest spatial scales of turbulence (smallest wave numbers), corresponding to the energy containing scales, to the smallest scales (largest wave numbers), corresponding to the viscous (Kolmogorov) scale.
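The shell-averaging step can be sketched as follows; the integer-wavenumber binning and the $1/n^4$ normalization are illustrative conventions:

```python
import numpy as np

def energy_spectrum(u, v):
    """Orientation-averaged 2D power spectrum of a velocity snapshot:
    DFT of each component, then binning of |u_hat|^2 + |v_hat|^2 by the
    (rounded) wavenumber magnitude."""
    n = u.shape[0]
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    power = (np.abs(uh) ** 2 + np.abs(vh) ** 2) / n**4  # normalized power
    k = np.fft.fftfreq(n, d=1.0 / n)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k_mag = np.rint(np.sqrt(KX**2 + KY**2)).astype(int)
    E = np.bincount(k_mag.ravel(), weights=power.ravel())
    return np.arange(len(E)), E
```

As a sanity check, a single Fourier mode $u = \cos(2\pi \cdot 3 x / n)$ produces a spectrum concentrated entirely in the $k = 3$ shell.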

Divergence Test

We utilize discrete differences (the Sobel operator) to compute the velocity Jacobian at each pixel of a snapshot and then use the result to obtain the divergence $\nabla \cdot u$. We collect statistics of the divergence over pixels of the predicted snapshot and compare them with the respective statistics collected over pixels of the ground truth. We also compute and show statistics of the velocity gradient, $\nabla u$. This is done only for the ground truth data, to set a comparative scale for the divergence of the velocity. Notice that of all the examples shown only one, corresponding to the Cylinder flow, was actually 2D divergence-free. (The majority of the JHU database examples are divergence-free, however in three dimensions; therefore, working with 2D projections we see the flows as 2D-compressible.)
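A minimal version of this computation, with `np.gradient` central differences standing in for the Sobel stencil mentioned above, looks like:

```python
import numpy as np

def divergence(u, v, dx=1.0, dy=1.0):
    """Pixel-wise divergence du/dx + dv/dy of a 2D velocity snapshot via
    central finite differences (axis 1 is x, axis 0 is y)."""
    return np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)
```

On the analytically divergence-free field $u = \sin x \cos y$, $v = -\cos x \sin y$, the central-difference errors of the two terms cancel in the interior, so the computed divergence vanishes there to machine precision.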

Structure Function Tests

We perform two structure function (average velocity increment) tests. First, we consider the second order structure function analyzed as a function of the spatial scale. (Notice that this test is an $r$-space version of the $k$-space energy spectrum test above.) Then, we fix the scale at three pixels (roughly corresponding to the viscous scale) and analyze how structure functions of different orders scale with the order $n$. In each of the tests we average within a snapshot over the position, $x$, of the pair of points and then over orientations of the radius vector connecting the two points.
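A compact sketch of the structure function estimate, with the orientation average reduced to the two axis-aligned separations and periodic wrap-around at the boundary (both simplifying assumptions):

```python
import numpy as np

def structure_function(u, v, r, order=2):
    """n-th order structure function S_n(r) = <|u(x + r) - u(x)|^n>,
    estimated over a snapshot for separations of r pixels along x and y
    (a crude stand-in for the full orientation average)."""
    du_x = np.roll(u, -r, axis=1) - u
    dv_x = np.roll(v, -r, axis=1) - v
    du_y = np.roll(u, -r, axis=0) - u
    dv_y = np.roll(v, -r, axis=0) - v
    inc_x = np.sqrt(du_x**2 + dv_x**2)   # increment magnitude, x-separation
    inc_y = np.sqrt(du_y**2 + dv_y**2)   # increment magnitude, y-separation
    return 0.5 * (np.mean(inc_x**order) + np.mean(inc_y**order))
```

For a uniform flow the increments vanish identically, so the structure function is zero at every scale and order, a useful degenerate check.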

Our results show that the ML schemes introduced in this manuscript reproduce the statistics of the physically significant characteristics well. Moreover, the performance of the algorithms in terms of passing the physical tests appears better than that of those discussed in [King et al.2018]. (Notice that this preliminary conclusion will need to be scrutinized in more quantitative future tests.) We attribute this good performance to the fact that the new ML scheme utilizes a sufficient amount of physical information through the scalar-velocity relation.

Figure 5: Backward-stepping flow.
Figure 6: Cylinder flow.
Figure 7: Channel flow.
Figure 8: DNS turbulence flow.
Figure 9: SQG flow.

Conclusion and Path Forward


In this manuscript, we propose a Deep Learning estimator for PIV that is able to extract a multi-scale velocity field from consecutive PIV images. Our method is an adaptation of the Volumetric Correspondence Network approach, developed in [Yang and Ramanan2019], to the PIV setting/data. The resulting model, PIV-VCN-en, is examined in a series of experiments and thoroughly compared to both conventional CC-based and OF-based methods, WIDIM and HS respectively, and to other Deep Learning-based methods suggested for PIV recently.

The CC-based method lacks estimation accuracy and spatial resolution, while the computation of the OF-based approach is time-consuming. The proposed approach leverages the advantages of both. In contrast to the WIDIM method, our PIV-VCN-en method provides much better spatial resolution. Moreover, it is advantageous over the HS method in computational efficiency. Improvement is also achieved in comparison with the PIV-LiteFlowNet-en approach, considered to be the current state of the art in the field.

Finally, we showed that our newly designed Deep Learning PIV estimators, PIV-VCN and PIV-VCN-en, pass advanced physics-informed turbulence diagnostic tests with flying colors.

Path forward

Although, as of now, the proposed model does not outperform the state-of-the-art methods in all cases, we intend to continue this work and achieve the goal of winning the competition across the board. What our analysis already demonstrates, however, is that there is a tremendous opportunity for practical improvements when we take advantage of the very fast pace of development of optical flow methods in computer vision. We envision that, when fully developed and validated, our methods will allow integrated solutions for on-line, that is real-time, PIV processing. (See, e.g., [Yu et al.2006, Iriarte Munoz et al.2009, Varon, Adler, and Eulalie2019] and references therein for discussions of some early attempts and challenges of using PIV in real time, in particular for active flow control.) Accomplishing this most ambitious goal will require the collection of additional experimental data and further development and testing.


  • [Adamczyk and Rimai1988] Adamczyk, A., and Rimai, L. 1988. 2-dimensional particle tracking velocimetry (ptv): technique and image processing algorithms. Experiments in fluids 6(6):373–380.
  • [Adrian, Adrian, and Westerweel2011] Adrian, L.; Adrian, R. J.; and Westerweel, J. 2011. Particle image velocimetry. Number 30. Cambridge university press.
  • [Adrian1984] Adrian, R. J. 1984. Scattering particle characteristics and their effect on pulsed laser measurements of fluid flow: speckle velocimetry vs particle image velocimetry. Applied optics 23(11):1690–1691.
  • [Butler et al.2012] Butler, D. J.; Wulff, J.; Stanley, G. B.; and Black, M. J. 2012. A naturalistic open source movie for optical flow evaluation. In A. Fitzgibbon et al. (Eds.)., ed., European Conf. on Computer Vision (ECCV), Part IV, LNCS 7577, 611–625. Springer-Verlag.
  • [Cai et al.2019a] Cai, S.; Liang, J.; Gao, Q.; Xu, C.; and Wei, R. 2019a. Particle image velocimetry based on a deep learning motion estimator. IEEE Transactions on Instrumentation and Measurement.
  • [Cai et al.2019b] Cai, S.; Zhou, S.; Xu, C.; and Gao, Q. 2019b. Dense motion estimation of particle images via a convolutional neural network. Experiments in Fluids 60(4):73.
  • [Chang and Chen2018] Chang, J.-R., and Chen, Y.-S. 2018. Pyramid stereo matching network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5410–5418.
  • [Dosovitskiy et al.2015] Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; and Brox, T. 2015. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, 2758–2766.
  • [Goodman2005] Goodman, J. W. 2005. Introduction to Fourier optics. Roberts and Company Publishers.
  • [Heitz, Mémin, and Schnörr2010] Heitz, D.; Mémin, E.; and Schnörr, C. 2010. Variational fluid flow measurements from image sequences: synopsis and perspectives. Experiments in Fluids 48(3):369–393.
  • [Holl, Koltun, and Thuerey2020] Holl, P.; Koltun, V.; and Thuerey, N. 2020. Learning to control pdes with differentiable physics. arXiv preprint arXiv:2001.07457.
  • [Horn and Schunck1981] Horn, B. K., and Schunck, B. G. 1981. Determining optical flow. In Techniques and Applications of Image Understanding, volume 281, 319–331. International Society for Optics and Photonics.
  • [Hui, Tang, and Change Loy2018] Hui, T.-W.; Tang, X.; and Change Loy, C. 2018. LiteFlowNet: A lightweight convolutional neural network for optical flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8981–8989.
  • [Ilg et al.2017] Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; and Brox, T. 2017. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2462–2470.
  • [Iriarte Munoz et al.2009] Iriarte Munoz, J. M.; Dellavale, D.; Sonnaillon, M. O.; and Bonetto, F. J. 2009. Real-time particle image velocimetry based on FPGA technology. In 2009 5th Southern Conference on Programmable Logic (SPL), 147–152.
  • [JHTDB] JHTDB. Johns Hopkins Turbulence Database.
  • [Kendall et al.2017] Kendall, A.; Martirosyan, H.; Dasgupta, S.; Henry, P.; Kennedy, R.; Bachrach, A.; and Bry, A. 2017. End-to-end learning of geometry and context for deep stereo regression. In Proceedings of the IEEE International Conference on Computer Vision, 66–75.
  • [King et al.2018] King, R.; Hennigh, O.; Mohan, A.; and Chertkov, M. 2018. From Deep to Physics-Informed Learning of Turbulence: Diagnostics. Workshop on Modeling and Decision-Making in the Spatiotemporal Domain, NeurIPS 2018; arXiv:1810.07785.
  • [Lee, Yang, and Yin2017] Lee, Y.; Yang, H.; and Yin, Z. 2017. PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry. Experiments in Fluids 58(12):171.
  • [Li et al.2008] Li, Y.; Perlman, E.; Wan, M.; Yang, Y.; Meneveau, C.; Burns, R.; Chen, S.; Szalay, A.; and Eyink, G. 2008. A public turbulence database cluster and applications to study Lagrangian evolution of velocity increments in turbulence. Journal of Turbulence (9):N31.
  • [Liu et al.2015] Liu, T.; Merat, A.; Makhmalbaf, M.; Fajardo, C.; and Merati, P. 2015. Comparison between optical flow and cross-correlation methods for extraction of velocity fields from particle images. Experiments in Fluids 56(8):166.
  • [Menze and Geiger2015] Menze, M., and Geiger, A. 2015. Object scene flow for autonomous vehicles. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • [Perlman et al.2007] Perlman, E.; Burns, R.; Li, Y.; and Meneveau, C. 2007. Data exploration of turbulence simulations using a database cluster. In Proceedings of the 2007 ACM/IEEE conference on Supercomputing, 1–11.
  • [Rabault, Kolaas, and Jensen2017] Rabault, J.; Kolaas, J.; and Jensen, A. 2017. Performing particle image velocimetry using artificial neural networks: a proof-of-concept. Measurement Science and Technology 28(12):125301.
  • [Raffel et al.2018] Raffel, M.; Willert, C. E.; Scarano, F.; Kähler, C. J.; Wereley, S. T.; and Kompenhans, J. 2018. Particle image velocimetry: a practical guide. Springer.
  • [Ruhnau et al.2005] Ruhnau, P.; Kohlberger, T.; Schnörr, C.; and Nobach, H. 2005. Variational optical flow estimation for particle image velocimetry. Experiments in Fluids 38(1):21–32.
  • [Scarano2001] Scarano, F. 2001. Iterative image deformation methods in PIV. Measurement Science and Technology 13(1):R1.
  • [Scharstein et al.2014] Scharstein, D.; Hirschmüller, H.; Kitajima, Y.; Krathwohl, G.; Nešić, N.; Wang, X.; and Westling, P. 2014. High-resolution stereo datasets with subpixel-accurate ground truth. In German conference on pattern recognition, 31–42. Springer.
  • [Stanislas et al.2005] Stanislas, M.; Okamoto, K.; Kähler, C. J.; and Westerweel, J. 2005. Main results of the second international PIV challenge. Experiments in Fluids 39(2):170–191.
  • [Stanislas et al.2008] Stanislas, M.; Okamoto, K.; Kähler, C. J.; Westerweel, J.; and Scarano, F. 2008. Main results of the third international PIV challenge. Experiments in Fluids 45(1):27–71.
  • [Stanislas, Okamoto, and Kähler2003] Stanislas, M.; Okamoto, K.; and Kähler, C. 2003. Main results of the first international PIV challenge. Measurement Science and Technology 14(10):R63.
  • [Varon, Adler, and Eulalie2019] Varon, E.; Adler, J.; and Eulalie, Y. 2019. Adaptive control of the dynamics of a fully turbulent bi-modal wake using real-time PIV. Experiments in Fluids 60:124.
  • [Yang and Ramanan2019] Yang, G., and Ramanan, D. 2019. Volumetric correspondence networks for optical flow. In Advances in Neural Information Processing Systems, 793–803.
  • [Yu et al.2006] Yu, H.; Leeser, M.; Tadmor, G.; and Siegel, S. 2006. Real-time particle image velocimetry for feedback loops using FPGA implementation. Journal of Aerospace Computing, Information, and Communication 3(2):52–62.
  • [Zhao et al.2017] Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2881–2890.