### pof-schlieren-density-estimation


This study proposes a radically different approach for extracting quantitative information from schlieren images. The method uses a scaled, derivative-enhanced Gaussian process model to obtain true density estimates from two corresponding schlieren images, with the knife-edge at horizontal and vertical orientations. We illustrate our approach on schlieren images taken from a wind tunnel sting model, and from a supersonic aircraft in flight.


Since its inception in the late-17th century by Robert Hooke, schlieren has been used in the sciences, engineering and even the arts. The types of flows captured have been diverse, ranging from the shock waves of an aircraft and the mixing of turbulent jets to plumes over a candle, and even the dissolution of particles in a solution. Scientifically, these images have shaped our understanding of fluid mechanics, whilst more generally, they have inspired, and continue to inspire, curiosity in the natural world. The schlieren method relies on the behaviour of light when passing through different media. Light slows down or refracts depending on the homogeneity of the medium. When encountering a homogeneous medium, the light rays pass through uniformly, but if they pass through regions of inhomogeneity, they refract in proportion to the gradient of the refractive index of the inhomogeneities Rienitz1975; settles2012schlieren.

Flow phenomena such as shock waves, jets and plumes can cause variations in the density of air, leading to refractions that can be captured by a focused, one-to-one optical image. Consider the simplest of schlieren set-ups: a point light source-based experiment that requires two convex lenses, a knife-edge and a camera, as shown in Figure 2. With the point light source being the initial *illuminator*, the light is collimated by the first lens, and then refocused to a single point by the second lens, where a knife-edge is located, past which the light rays are cast upon the screen to show an inverted image. The area between the two lenses is the test area where the object of interest is placed. The knife-edge, which can be a standard razor, misses the upward-deflected rays but blocks the downward-deflected rays. As a result, the upward-deflected rays appear on the screen as a bright spot, while the downward-deflected rays leave a dark spot. The knife-edge effectively transforms an invisible phase difference between the light rays into a visible amplitude change, which is recorded on the screen as a change in the refractive index (the refractive index is given by the speed of light in vacuum divided by the speed of light through the given medium). Depending on the orientation of the knife-edge, either the horizontal or the vertical gradient of the refractive index can be made visible. Although this description is of the most primitive of schlieren set-ups, it does capture the governing physics. Here, a point light source is assumed, while in general an extended light source would be used, which would require a focusing lens after the knife-edge settles2012schlieren.

The advantage of a schlieren image lies in the fact that the refractive index depends on the density of the medium the light passes through. Knowledge of this relationship is one of the steps that allows us to obtain quantitative data from such images. Let ρ(x, y) be the spatially varying density of a fluid surrounding an object of interest; it is expressed as a scalar field with a horizontal coordinate x and a vertical coordinate y. It has a linear relationship with the refractive index n, given by

\[ n = G \rho + 1 \tag{1} \]

where G is the Gladstone-Dale coefficient. A schlieren image represents the refractive-index gradient field ∇n, where the gradient direction is based on the orientation of the knife-edge: placement in the vertical direction yields ∂n/∂x, whilst placement along the horizontal direction renders ∂n/∂y hargather2012comparison.

There are two points to note regarding the overall quality of a schlieren image. First, the diffraction of light from the object of interest can degrade the quality of the schlieren image. This may be exacerbated around the edges of the test object, where halos are found to appear. However, these do not introduce a significant impediment for quantitative analysis as they do not obscure the flow detail Kumar2008. Second is the prominence of dark regions in the schlieren image where it should be bright; this is suggestive of a region with high density gradients, where the light rays are highly refracted and blocked by some component in the schlieren set-up such as the knife-edge or lens mount settles2012schlieren (e.g. Figure 1 (b)). This may limit the accuracy of quantitative (and indeed qualitative) information that may be extracted from the region and its vicinity.

When an object is in supersonic flow, thin regions of discontinuity can form across which flow properties change rapidly. Across these *shock waves*, static flow properties such as pressure, density, temperature and entropy increase, while the stagnation pressure, Mach number and velocity decrease Anderson2011Fundamentals. Oblique shock waves form across concave corners, where the shocks generally form at an oblique angle (less than 90°). In regions of convex corners, the opposite effect occurs, resulting in *expansion fans*, across which the static pressure, density, and temperature decrease, while the Mach number increases. This is shown in Figure 3, where the flow follows the path from the upstream to the downstream state across an expansion fan with an expansion angle θ.

As the flow must be tangential to the wall at all times, the streamlines will follow the direction of the wall. When considering a two-dimensional supersonic flow over a wedge, a sharp change in the direction of the streamline occurs across the shock, whereas, for the conical flow, as there is a third dimension for the air to pass through, a “three-dimensional relieving effect” facilitates a smoother transition to the new wall angle (see Figure 4). This has the added effect of creating a weaker shock wave than a wedge shock with the same wall angle θ and inflow properties. Hence, the shock angle for a cone is smaller than that for a wedge under the same conditions. Apart from the weaker shock and the less adverse streamlines, the flow behaviour is similar between the wedge and the cone.

Assuming a steady-state shock in a predominantly inviscid flow (except near the shock) under adiabatic conditions, the downstream Mach number and density ratio across an oblique shock wave for a wedge are given by (see page 623 in Anderson2011Fundamentals)

\[
M_{2} = \frac{1}{\sin(\beta - \theta)} \sqrt{ \frac{1 + \frac{\gamma - 1}{2} M_{1}^{2} \sin^{2}\beta}{\gamma M_{1}^{2} \sin^{2}\beta - \frac{\gamma - 1}{2}} }, \qquad
\frac{\rho_{2}}{\rho_{1}} = \frac{(\gamma + 1) M_{1}^{2} \sin^{2}\beta}{(\gamma - 1) M_{1}^{2} \sin^{2}\beta + 2} \tag{2}
\]

where β is the wave angle, θ is the deflection angle, γ is the specific heat ratio, M₁ is the upstream Mach number, and M₂ and ρ₂/ρ₁ are the downstream Mach number and the density ratio across the shock. These expressions, in tandem with other isentropic flow relations, may be used to estimate density from other observed thermodynamic quantities.
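These standard oblique-shock relations are straightforward to evaluate numerically; the following sketch does so in Python (the function name and interface are our own, not from the paper):

```python
import math

def oblique_shock(M1, beta, theta, gamma=1.4):
    """Downstream Mach number and density ratio across an oblique shock.

    M1    : upstream Mach number
    beta  : wave angle (radians)
    theta : flow deflection angle (radians)
    gamma : ratio of specific heats
    """
    Mn1 = M1 * math.sin(beta)  # normal Mach component upstream of the shock
    rho_ratio = ((gamma + 1) * Mn1**2) / ((gamma - 1) * Mn1**2 + 2)
    Mn2 = math.sqrt((1 + 0.5 * (gamma - 1) * Mn1**2)
                    / (gamma * Mn1**2 - 0.5 * (gamma - 1)))
    M2 = Mn2 / math.sin(beta - theta)  # rotate back to the flow direction
    return M2, rho_ratio
```

Setting β = 90° and θ = 0 recovers the normal-shock limit, a convenient sanity check.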

Although predominantly used for qualitative assessments, there is a considerable body of literature devoted to the singular task of extracting quantitative information from schlieren images hargather2012comparison; wildeman2018real; hay2019sampling; tobin2016quantitative; venkatakrishnan2005density; dalziel2000whole. While a typical qualitative analysis may study the visible shock angles and refraction patterns, a quantitative analysis focuses on extracting primitive flow field variables—e.g., density, temperature or velocity from the schlieren images. The overarching framework relies on being able to relate the pixel values across the image to a known refraction angle settles2012schlieren; Settles2017. We remark that our definition of quantitative in this study refers strictly to the extraction of the density scalar field from schlieren images.

Depending on the schlieren set-up, images may need to be processed to account for poor contrast, brightness, and sharpness (to name a few) before use in quantitative analysis. The pixel values of a schlieren image do not necessarily span the entire 256 levels of greyscale (0 to 255); instead, the brightest pixel may be a fraction of this. By setting the value of the brightest pixel to the highest greyscale level of 255, and proportionally increasing the values of the rest of the pixels, we can form a new image with a higher contrast. This is known as contrast stretching. The brightness of an image can be increased by simply increasing all the pixel values by a constant, with a maximum limit set to 255. The sharpness of the image can be improved using sharpening techniques such as unsharp masking. This method uses a mask, created from the difference between the source image and a blurred version of the image, to detect edges. The contrast of the edges covered by the mask is then increased to create a sharper image (see Section 3.2.4 in Settles settles2012schlieren for pre-processing techniques for schlieren images).
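Contrast stretching and unsharp masking can be sketched in a few lines of NumPy (function names are our own, and the 3×3 box blur stands in for whatever blur kernel a real pipeline would use):

```python
import numpy as np

def contrast_stretch(img):
    """Linearly rescale pixel values so the darkest pixel maps to 0
    and the brightest to 255 (contrast stretching)."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * 255.0

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference between the image and a
    blurred copy (here a simple 3x3 box blur, for illustration)."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    blurred = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blurred), 0, 255)
```

Note that a constant image passes through `unsharp_mask` unchanged, since the mask (image minus blur) is zero everywhere.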

The most straightforward method is to place a calibration object with a known refractive index in the test area, which allows the generation of a map between pixel intensity and the refractive-index gradient. Typically, a lens is used, with each radial position across the lens having a different refraction angle (or refractive gradient) to focus the collimated light. The focal length of the calibration lens determines the resolution of the refractive-index variations, with finer resolutions obtainable from longer focal lengths. Imaging with this set-up yields the required calibration between pixel intensity and the true refractive-index gradient. The refractive-index gradient field is integrated to obtain the refractive index, which is converted to a density field using the Gladstone-Dale relationship (1), with a known density value used to solve for the constant of integration (typically the known free-stream density). This assumes the fluid under consideration is not a mixture of gases, and hence has a constant value of G. To quantify the density for a mixture of gases, an estimate of the Gladstone-Dale constant for the mixture would be required settles2012schlieren; Settles2017.
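The integration and conversion steps above can be sketched as follows for a calibrated one-dimensional gradient profile. The function name and interface are illustrative, and the Gladstone-Dale value is the commonly quoted one for air at visible wavelengths:

```python
import numpy as np

G = 2.26e-4  # m^3/kg, approximate Gladstone-Dale coefficient for air

def density_from_gradient(dndx, dx, rho_ref):
    """Integrate dn/dx with the trapezoidal rule, then invert the
    Gladstone-Dale relation n - 1 = G*rho.

    dndx    : sampled refractive-index gradient along x
    dx      : pixel spacing in metres
    rho_ref : known density at the first sample (integration constant)
    """
    # cumulative trapezoidal integration, anchored at zero at the first pixel
    n_increment = np.concatenate(
        ([0.0], np.cumsum((dndx[1:] + dndx[:-1]) / 2 * dx)))
    return rho_ref + n_increment / G
```

A constant gradient of G per metre, for instance, yields a density that rises by one unit per pixel from the reference value.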

Rainbow schlieren (discovered by Schardin Schardin1942; Vandiver1974, but so termed by Howes Howes1983; Howes1984) replaces the knife-edge with a radial rainbow cut-off filter with a continuous spectrum to generate an image with hue variations. The light passing through the colour cut-off filter takes the hue corresponding to its path along the filter. As the rainbow filter is continuous, i.e., without any sharp edges or discontinuities, errors due to diffraction are effectively non-existent. Replacing the knife-edge with the filter leads to the refractive-index gradients being displayed as variations in colour rather than irradiance hargather2012comparison. Use of the colour filter does not by itself provide vastly more information over the standard greyscale schlieren image. However, its advantage lies in the ability to use different colour filters to colour-code targeted refraction directions and magnitudes. For example, the use of a bulls-eye pattern would highlight the refraction magnitudes that occur in the radial direction. Obtaining quantitative data from this set-up follows the calibration approach described above, but instead of the pixel intensity of a greyscale schlieren image, the hue from the colour schlieren image is mapped to the known refraction angle settles2012schlieren; Settles2017.

The premise of BOS, a *synthetic* schlieren method, is to measure the level of distortion in a background pattern with and without the refractive disturbances. This method needs little set-up and equipment, as only a high-resolution camera is required, and possibly a background pattern, depending on whether the natural background behind the object of interest has enough variation for image processing to pick up patterns and image distortion hargather2012comparison; Heineck2016; Heineck2021. Venkatakrishnan venkatakrishnan2005density demonstrated the construction of a density field using BOS for a cone-cylinder flow at Mach 2.0. Additionally, work conducted by NASA has led to the use of natural backgrounds to capture schlieren images of different aircraft by using the sun/sky as the background Heineck2019; Hill2017a; Hill2017b.

To estimate the density field using BOS, a reference image of a structured background pattern without density gradients is taken, along with another image with the refractive disturbances active. The displacements along x and y in the background pattern are computed between these two images using cross-correlation, and used to estimate a source term S(x, y), which is the sum of the derivatives of the density gradients (i.e., the Laplacian of the density). The individual density derivatives can then be obtained by substituting the empirical values of S into Poisson’s equation

\[ \frac{\partial^{2} \rho}{\partial x^{2}} + \frac{\partial^{2} \rho}{\partial y^{2}} = S(x, y) \tag{3} \]

with appropriate boundary conditions (see 6 in venkatakrishnan2004density). Note that this provides a projection of the 3D density field in the camera direction, i.e. it is the integrated density distribution across the line of sight. By using standard tomographic reconstruction techniques such as filtered back-projection, the density field across a plane of interest can be obtained venkatakrishnan2005density.
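A discretised Poisson problem of this form can be solved in many ways; the following is a minimal Jacobi-iteration sketch (production BOS codes would use faster elliptic solvers, and the function name is our own):

```python
import numpy as np

def solve_poisson(S, h, rho_boundary, iters=5000):
    """Solve d2rho/dx2 + d2rho/dy2 = S on a uniform grid of spacing h
    by Jacobi iteration, with Dirichlet values rho_boundary held fixed
    on the edges of the domain."""
    rho = rho_boundary.copy()
    for _ in range(iters):
        # five-point stencil update on interior nodes only
        rho[1:-1, 1:-1] = 0.25 * (rho[:-2, 1:-1] + rho[2:, 1:-1]
                                  + rho[1:-1, :-2] + rho[1:-1, 2:]
                                  - h**2 * S[1:-1, 1:-1])
    return rho
```

With S = 0 and a linear boundary condition, the solver recovers the linear (harmonic) field, which makes a convenient correctness check.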

More quantitative methods such as absolute and standard photometry methods are described in detail in Settles settles2012schlieren, Chapter 10. We omit an exposition of these methods here for brevity. It is important to note that all prior quantitative methods are limited, i.e., they require either a special apparatus with a reference calibration within the image, or multiple images of the same object with control over a refractive distortion. This makes it particularly difficult to re-create density fields of past schlieren images—especially those that were taken with only a qualitative analysis in mind. Additionally, none of these methods offer a comprehensive pathway to account for the uncertainties in the density (or other thermodynamic quantities).

Our approach to schlieren imaging in this paper takes a decidedly alternate route from the prior quantitative efforts referenced above. We leverage ideas from machine learning, namely Gaussian process regression rasmussen2006, to facilitate the spatial estimation of density. To set the stage, we begin by defining a few quantities. Let ρ(x, y) be the spatially varying density field, d_x be the schlieren image taken with a vertical knife-edge, and d_y be the schlieren image taken with a horizontal knife-edge.

A Gaussian process is a distribution over spatially varying functions whose values at any finite set of locations admit a joint Gaussian distribution. As our focus will be on bivariate functions, we introduce a Gaussian process as

\[ f(\mathbf{x}) \sim \mathcal{GP}\left( m(\mathbf{x}), \, k(\mathbf{x}, \mathbf{x}') \right) \tag{4} \]

which is entirely defined by its mean and covariance functions. The covariance function imbues a spatial relationship across locations and is set by a kernel function k(x, x′), where the two arguments correspond to two different spatial locations. Standard choices for the kernel function include the linear, squared exponential, and Matérn kernels, to name a few (see Chapter 4 in rasmussen2006 for more). These and others can be tuned to characterise some of the physics-driven phenomena that define how the value of a quantity of interest varies from location to location. Kernel functions are in turn parameterised by certain *hyperparameters* θ, whose values are typically the outcome of an optimisation (via maximum likelihood estimation or cross-validation) or of an extensive sampling procedure (via Markov chain Monte Carlo (MCMC)), as elucidated further below.

In a regression context, the problem is to determine (4) given observations (X, y), where X denotes the input spatial coordinates at which the output function values y are observed. Further, one typically assumes that each output function value is corrupted by a small observational or sensor noise ε, where for simplicity we assume that ε ∼ N(0, σₙ²). This data can be used to define a *joint distribution*

\[
\begin{bmatrix} \mathbf{y} \\ f(\mathbf{x}_{\ast}) \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} m(\mathbf{X}) \\ m(\mathbf{x}_{\ast}) \end{bmatrix}, \begin{bmatrix} \mathbf{K} + \boldsymbol{\Sigma} & \mathbf{k}_{\ast} \\ \mathbf{k}_{\ast}^{T} & k_{\ast\ast} \end{bmatrix} \right) \tag{5}
\]

where the entries of the kernel matrices are given by

\[ \left[ \mathbf{K} \right]_{ij} = k(\mathbf{x}_{i}, \mathbf{x}_{j}) \tag{6} \]

\[ \left[ \mathbf{k}_{\ast} \right]_{i} = k(\mathbf{x}_{i}, \mathbf{x}_{\ast}) \tag{7} \]

and

\[ k_{\ast\ast} = k(\mathbf{x}_{\ast}, \mathbf{x}_{\ast}) \tag{8} \]

In (5), Σ corresponds to the noise matrix, which we assume to be σₙ²I, where I is the identity matrix. Expressions for the mean and covariance of f at any x∗, conditioned upon the input-output data pairs, are then written as

\[ \mu(\mathbf{x}_{\ast}) = m(\mathbf{x}_{\ast}) + \mathbf{k}_{\ast}^{T} \left( \mathbf{K} + \boldsymbol{\Sigma} \right)^{-1} \left( \mathbf{y} - m(\mathbf{X}) \right) \tag{9} \]

\[ \sigma^{2}(\mathbf{x}_{\ast}) = k_{\ast\ast} - \mathbf{k}_{\ast}^{T} \left( \mathbf{K} + \boldsymbol{\Sigma} \right)^{-1} \mathbf{k}_{\ast} \tag{10} \]

which are standard expressions, as found in rasmussen2006.
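These posterior expressions translate directly into a few lines of linear algebra. Below is a minimal one-dimensional sketch with a zero mean function; the kernel choice, noise level, and data are illustrative, not those used in the paper:

```python
import numpy as np

def sq_exp(X1, X2, sigma_f=1.0, ell=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior(X, y, Xs, sigma_n=1e-3):
    """Posterior mean and variance of a zero-mean GP at test inputs Xs,
    following the standard expressions for conditioning a Gaussian."""
    K = sq_exp(X, X) + sigma_n**2 * np.eye(len(X))  # K + noise matrix
    ks = sq_exp(X, Xs)                              # cross-covariances
    kss = sq_exp(Xs, Xs)                            # test covariances
    alpha = np.linalg.solve(K, y)
    mu = ks.T @ alpha
    cov = kss - ks.T @ np.linalg.solve(K, ks)
    return mu, np.diag(cov)
```

With a small noise level, the posterior mean interpolates the training data and the posterior variance collapses there, as expected.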

Gaussian process regression can be interpreted as the maximisation of the probability of observing the (noisy) data, given certain structural assumptions about the data encoded into the kernel function. In fact, within a Bayesian context, these two ingredients are referred to as the likelihood and the prior respectively.

This subsection describes the changes needed to the joint distribution to account for the schlieren image inputs. Let (x, y) denote the horizontal and vertical locations of pixels in a schlieren image, and let d_x and d_y be the intensity values at these pixels for the vertical and horizontal knife-edge images respectively. From the Gladstone-Dale expression (1), we can define the linear relationship between the density gradient and the refractive-index gradient field captured by the schlieren apparatus

\[ \frac{\partial n}{\partial x} = G \frac{\partial \rho}{\partial x} \tag{11} \]

\[ \frac{\partial n}{\partial y} = G \frac{\partial \rho}{\partial y} \tag{12} \]

We define another joint distribution, which is also Gaussian, by applying the linear differential operators ∂/∂x and ∂/∂y to (5), yielding

\[
\begin{bmatrix} \mathbf{d}_{x} \\ \mathbf{d}_{y} \\ \rho(\mathbf{x}_{\ast}) \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ m(\mathbf{x}_{\ast}) \end{bmatrix}, \begin{bmatrix} \mathbf{K}^{(xx)} + \boldsymbol{\Sigma} & \mathbf{K}^{(xy)} & \mathbf{k}^{(x)}_{\ast} \\ \mathbf{K}^{(yx)} & \mathbf{K}^{(yy)} + \boldsymbol{\Sigma} & \mathbf{k}^{(y)}_{\ast} \\ \mathbf{k}^{(x)T}_{\ast} & \mathbf{k}^{(y)T}_{\ast} & k_{\ast\ast} \end{bmatrix} \right) \tag{13}
\]

where, assuming a constant mean function (so that the gradient observations have zero mean), the entries of the kernel matrices are given by

\[
\left[ \mathbf{K}^{(ab)} \right]_{ij} = \frac{\partial^{2} k(\mathbf{x}_{i}, \mathbf{x}_{j})}{\partial a_{i} \, \partial b_{j}}, \qquad \left[ \mathbf{k}^{(a)}_{\ast} \right]_{i} = \frac{\partial k(\mathbf{x}_{i}, \mathbf{x}_{\ast})}{\partial a_{i}}, \qquad a, b \in \{x, y\} \tag{14}
\]

from which the posterior distribution of the density can be written as

\[ \rho(\mathbf{x}_{\ast}) \mid \mathbf{d}_{x}, \mathbf{d}_{y} \sim \mathcal{N}\left( \mu_{\rho}(\mathbf{x}_{\ast}), \, \sigma^{2}_{\rho}(\mathbf{x}_{\ast}) \right) \tag{15} \]

where, writing \(\mathbf{d} = [\mathbf{d}_{x}^{T}, \mathbf{d}_{y}^{T}]^{T}\) for the stacked image data, \(\mathbf{K}_{\nabla}\) for the upper-left \(2 \times 2\) block of the covariance matrix in (13), and \(\mathbf{k}_{\nabla\ast} = [\mathbf{k}^{(x)T}_{\ast}, \mathbf{k}^{(y)T}_{\ast}]^{T}\),

\[ \mu_{\rho}(\mathbf{x}_{\ast}) = m(\mathbf{x}_{\ast}) + \mathbf{k}_{\nabla\ast}^{T} \left( \mathbf{K}_{\nabla} + \boldsymbol{\Sigma} \right)^{-1} \mathbf{d} \tag{16} \]

\[ \sigma^{2}_{\rho}(\mathbf{x}_{\ast}) = k_{\ast\ast} - \mathbf{k}_{\nabla\ast}^{T} \left( \mathbf{K}_{\nabla} + \boldsymbol{\Sigma} \right)^{-1} \mathbf{k}_{\nabla\ast} \tag{17} \]

Note that in constructing (15) we have effectively estimated ρ from observations of ∂ρ/∂x and ∂ρ/∂y.

A few additional comments are in order here. First, although the noise matrix Σ was defined to be diagonal, it need not be, and correlations can be imposed. Furthermore, its entries can have different variances, depending on the corresponding pixel values in the schlieren images. There is some rationale behind such a proposal: given that pixel values in schlieren images are clipped to fall within [0, 255], regions with very low or very high refractive gradients will likely look similar. Thus, there will be greater uncertainty in the pixel values in such regions, compared to others. Second, from the block matrices in (13), it should be readily apparent that we are limited to twice-differentiable kernel functions k. Throughout this paper we adopt the squared exponential kernel, which in two dimensions is given by

\[ k(\mathbf{x}, \mathbf{x}') = \sigma_{f}^{2} \exp\left( -\frac{(x - x')^{2}}{2 l_{x}^{2}} - \frac{(y - y')^{2}}{2 l_{y}^{2}} \right) \tag{18} \]

where θ = {σ_f, l_x, l_y} are the tunable hyperparameters, which control the signal variance and the length scales respectively. As alluded to before, suitable values of the hyperparameters can be obtained by solving an optimisation problem.
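The gradient blocks in (13) are built from derivatives of this kernel, which for the squared exponential are available in closed form. Below is a one-dimensional sketch (hyperparameter values are arbitrary), checked against a central finite-difference approximation:

```python
import numpy as np

def k(x, xp, sf=1.2, ell=0.7):
    """1-D squared-exponential kernel."""
    return sf**2 * np.exp(-0.5 * ((x - xp) / ell)**2)

def d2k_dx_dxp(x, xp, sf=1.2, ell=0.7):
    """Analytic cross-derivative d^2 k / dx dx' of the SE kernel; entries
    of this kind fill the gradient-gradient blocks of the covariance."""
    r = x - xp
    return sf**2 / ell**2 * (1 - r**2 / ell**2) * np.exp(-0.5 * (r / ell)**2)

# central finite-difference approximation of the cross-derivative
h = 1e-4
x, xp = 0.3, -0.1
fd = (k(x + h, xp + h) - k(x + h, xp - h)
      - k(x - h, xp + h) + k(x - h, xp - h)) / (4 * h**2)
```

Agreement between `fd` and the analytic expression to several decimal places confirms the derivation.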

A subset of the joint distribution defined in (13) can be used if an image is available for only one gradient direction, with the caveat that the posterior mean will not fully represent the density field unless the flow is highly directional and the gradient in the other direction is negligible. The joint distribution for a case where only d_x is available would be

\[
\begin{bmatrix} \mathbf{d}_{x} \\ \rho(\mathbf{x}_{\ast}) \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \mathbf{0} \\ m(\mathbf{x}_{\ast}) \end{bmatrix}, \begin{bmatrix} \mathbf{K}^{(xx)} + \boldsymbol{\Sigma} & \mathbf{k}^{(x)}_{\ast} \\ \mathbf{k}^{(x)T}_{\ast} & k_{\ast\ast} \end{bmatrix} \right) \tag{19}
\]

with d_x replaced by d_y if only the horizontal knife-edge image is available instead.

The free hyperparameters θ are random variables that are unknown. By using knowledge of their potential range or of the physical nature of the model, it is possible to define a set of prior probability distributions p(θ). If little is known about the observed data, this can be captured by a prior distribution with a large variance. In this study, prior distributions are defined for θ with reduced model complexity to allow the model to generalise to multiple cases. To ensure a positive semi-definite covariance matrix, the hyperparameters defined here must be positive. Additionally, to ensure that the length scales and kernel noise terms are not wildly different at the gradient and functional levels, we introduce a re-parameterisation of the form

(20)

where the re-parameterised hyperparameters are assigned half-normal prior distributions. When observation data for only one gradient is available, the hyperparameters simplify to the original set θ.

These free hyperparameters require tuning to find the values that yield the expected posterior mean. This can be done via a straightforward grid or random search, but with the downside of becoming intractable when working with a larger number of hyperparameters Claesen2015. There are instead automated methods that approach this search from a more analytical perspective.

A well-worn approach is to maximise the logarithm of the marginal likelihood, conditioned on the hyperparameters rasmussen2006. The values found by such a method maximise the likelihood that the user-specified model produces the data that is observed:

\[ \log p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) = -\frac{1}{2} \left( \mathbf{y} - m(\mathbf{X}) \right)^{T} \left( \mathbf{K} + \boldsymbol{\Sigma} \right)^{-1} \left( \mathbf{y} - m(\mathbf{X}) \right) - \frac{1}{2} \log \left| \mathbf{K} + \boldsymbol{\Sigma} \right| - \frac{N}{2} \log 2\pi \tag{21} \]

where the mean and covariance terms are evaluated at the observed data only. A more developed approach is maximum a posteriori (MAP) estimation, which in addition to the log-likelihood (21) takes advantage of knowledge in the prior distribution through the term log p(θ). The prior distribution biases the probability mass towards regions that are preferred a priori Goodfellow2016, which can be advantageous when trusted prior knowledge of the observed data is available:

\[ \boldsymbol{\theta}_{\mathrm{MAP}} = \operatorname*{arg\,max}_{\boldsymbol{\theta}} \; \log p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) + \log p(\boldsymbol{\theta}) \tag{22} \]
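Both objectives can be sketched for a zero-mean GP with a one-dimensional squared exponential kernel; the Cholesky factorisation gives the log-determinant cheaply. The half-normal prior scale below is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def log_marginal_likelihood(theta, X, y):
    """Log marginal likelihood of a zero-mean GP with a 1-D
    squared-exponential kernel; theta = (sigma_f, ell, sigma_n)."""
    sf, ell, sn = theta
    d = X[:, None] - X[None, :]
    K = sf**2 * np.exp(-0.5 * (d / ell)**2) + sn**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log|K| = 2 * sum(log diag L), hence -0.5*log|K| = -sum(log diag L)
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

def log_posterior(theta, X, y, scale=1.0):
    """MAP objective: log-likelihood plus half-normal log-priors on
    each (positive) hyperparameter, up to an additive constant."""
    lp = sum(-0.5 * (t / scale)**2 for t in theta)
    return log_marginal_likelihood(theta, X, y) + lp
```

Either function can be handed to a generic optimiser (e.g. negated and passed to `scipy.optimize.minimize`) to tune the hyperparameters.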

Both the marginal likelihood and MAP make predictions based on a point estimate of θ, which can be advantageous from a performance standpoint, but has the side effect of focusing on a local maximum rather than widely exploring the probability space. Ideally, a fully Bayesian approach would be used where, instead of a point estimate, predictions are made with a full distribution over θ, so that every value of θ with positive posterior probability contributes to the prediction, and any uncertainty in θ is accounted for in the predictions made Goodfellow2016. However, it is generally intractable to make predictions using the exact posterior distribution over θ.

One approach is to approximate the posterior distribution by random sampling. A popular method is Markov chain Monte Carlo (MCMC), which can generate an estimate of the posterior distribution where it cannot be calculated analytically. MCMC performs approximate integration by drawing samples from the posterior distribution and computing sample averages. The Markov chain refers to the drawing of samples, where the next sample drawn depends on the previous one, and Monte Carlo is the technique used for random sampling of the posterior distribution for integration Goodfellow2016. By running multiple chains concurrently with varying starting points, it is possible to capture multiple states of equilibrium, unlike the point-estimate approaches.
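The simplest MCMC variant, random-walk Metropolis, can be sketched as follows (a generic illustration, not the sampler used in this study):

```python
import numpy as np

def metropolis(log_post, x0, n_samples=20000, step=1.0, seed=0):
    """Random-walk Metropolis sampler. Each Gaussian proposal is accepted
    with probability min(1, p(x')/p(x)), computed in log space, so the
    chain's samples approximate the target posterior."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + step * rng.standard_normal()  # symmetric proposal
        lpp = log_post(xp)
        if np.log(rng.uniform()) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return np.array(samples)
```

Run against a standard normal log-density, the sample mean and standard deviation should approach 0 and 1 respectively as the chain lengthens.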

This paper offers a radically different way to think about schlieren, whilst staying true to its fundamental physics, building upon our preliminary work towards quantitative schlieren using machine learning ubald2021quantitative. An overview of the present workflow is illustrated in Figure 5.

Succinctly stated, we build a Gaussian process rasmussen2006 model that ingests schlieren images as proxy density gradients, along with three discrete spatial locations where an estimate of the density can be obtained. Thus, the predicted density field is characterised by a spatially varying *posterior* distribution

\[ \rho(\mathbf{x}_{\ast}) \mid \mathbf{d}_{x}, \mathbf{d}_{y} \sim \mathcal{N}\left( \mu(\mathbf{x}_{\ast}), \, \sigma^{2}(\mathbf{x}_{\ast}) \right) \tag{23} \]

defined entirely by its mean μ and covariance σ² functions. We remark here that our work represents one of the first forays of utilising a well-worn machine learning tool towards quantitative schlieren. In the results that follow, we report the mean and standard deviation, i.e., μ and σ, output by our model. As these moments are for the density field, both have units of kg/m³.

Consider the asymmetrical sting model in supersonic flow (Mach 2.0) published by Ota et al. ota2011schlieren. The reconstructed density published in that paper, using BOS, is assumed to be representative of the *true* density, and thus serves as our benchmark for comparison. Figure 6 captures the experimental results with the vertical knife-edge schlieren image in (a), the horizontal knife-edge schlieren image in (b), and the BOS density reconstruction in (c). Note that for subfigures (a) and (b) the colours denote greyscale pixel values between [0, 255], whilst the colours in (c) represent the density in units of kg/m³. Another point to note is that only the upper half of the schlieren images is used for this reconstruction, as the shock captured across the lower half sees excessive clipping.

The reconstructed density is shown in Figure 7, with the mean in (a) and the standard deviation in (b). The mean follows the expected trend of a uniform density upstream of the shock, followed by a rapid increase in density across the shock, before decreasing across the expansion fan. Due to clipping of the input schlieren images, the gradients in the region downstream of the expansion fan are not captured (the gradients are close to zero); hence this region is represented as the mean of the observed densities instead. We remark that the BOS density in Figure 6(c) does not follow the behaviour expected of oblique shocks in theory, where the density upstream of the shock must be uniform with a sharp increase across the shock. This may be caused by optical distortion incurred when performing BOS. Additionally, the shock itself should be captured as a very thin band, which is what our reconstruction captures.

Next, we study schlieren images taken by NASA of a T-38 training jet at a Mach number of 1.05. The horizontal and vertical knife-edge schlieren images captured in Heineck2016; Heineck2021 are shown in Figure 8(a) and (b) respectively. Additionally, based on estimates of the ambient conditions of the flight and estimates of shock and wedge angles, we can estimate the density at three points as shown in Figure 8(c). The two freestream points are assigned the same value of density, whilst the density at the third (remaining) point is estimated via the shock relations above.

As true density fields are not available for this case, and at least one known density is required from a non-freestream region for this method, the density after the first oblique shock wave is estimated. Two quantities are used to calculate the density: (i) the known experimental altitude of 30,000 ft and thus the freestream density; (ii) the shock relations in Equation 2, accounting for uncertainties in the Mach number and angles. However, note that this is a simplification, as the wedge angle calculation does not account for the three-dimensional relief effect. The flow conditions at this altitude and the estimated uncertainties are listed in the appendix in Table 5. This density data, along with horizontal and vertical knife-edge schlieren images are sub-sampled (to reduce the computational overhead), and fed into the machine learning workflow.

We have demonstrated the development of a method to obtain quantitative density data from pre-existing Schlieren images through the use of Gaussian processes. The central novelty with this method lies in the fact that it does not require any pre-calibration or special set-up as required by existing methods discussed in Section I.2, and is widely applicable—even if only one knife-edge orientation is available. Our workflow is not intended to upend or subvert the modern practicalities of BOS, but rather complement it, by enabling scientists to be able to retrospectively reconstruct density fields from the plethora of schlieren images that exist in literature.

The data and all aspects of the code that support the findings of this study, including the figure generation are openly available in the ‘pof-schlieren-density-reconstruction’ repository at ‘github.com’ (https://github.com/bnubald/pof-schlieren-density-reconstruction).


In this section we present a validation of our approach along with supplemental particulars for the two examples provided above.

A simple analytical function is used to define a model density field and its corresponding density gradient. The density gradients are normalised to pixel values in [0, 255] to emulate a schlieren image with an unknown range, and 200 random samples are selected from this image for use as observation data (white circles in Figure 10), with three random density sampling locations also used (green circles in Figure 10). The density and its partial derivatives w.r.t. x and y for this case are given as

(24)

| Parameter | Distribution | Moments |
| --- | --- | --- |
|  | half-normal |  |
|  | half-normal |  |
|  | deterministic | 1.0 |
|  | half-normal |  |
|  | deterministic | 1E-4 |
|  | deterministic | 0.1 |

The prior definitions for this case are listed in Table 1, with Figure 11 showing a comparison of the true density (a) against the posterior predicted mean density (d), and the corresponding prediction of the gradients. Figure 12 (a) shows the relative error between the true and predicted density fields, with subplot (b) plotting this data as a histogram. The majority of the relative error lies around 0.0, and the posterior standard deviation shown in (c) is generally uniform and small, demonstrating a high level of model confidence.

If the flow is highly directional and the gradients in one direction dominate, as is the case in this analytical function, the joint distribution can be simplified to the one in Equation 19, with simpler prior definitions as listed in Table 2. A comparison between the true and partially reconstructed density field shown in Figure 13 shows good agreement, with the relative error (Figure 14 (a)) being higher than with the full reconstruction, as one would expect since the gradients in the other direction are omitted. The method is still, however, able to capture the correct trends and provide an indicative result.

| Parameter | Distribution | Moments |
| --- | --- | --- |
|  | half-normal |  |
|  | half-normal |  |
|  | half-normal |  |
|  | deterministic | 1E-4 |
|  | deterministic | 0.1 |

Priors for the wind tunnel sting model Gaussian process are given in Table 3.

| Parameter | Distribution | Moments |
| --- | --- | --- |
|  | half-normal |  |
|  | half-normal |  |
|  | half-normal |  |
|  | half-normal |  |
|  | deterministic | 2.5E-4 |
|  | deterministic | 0.25 |

Priors for the T-38 flight-test Gaussian process are given in Table 4; flow properties used to estimate the density are provided in Table 5.

| Parameter | Distribution | Moments |
| --- | --- | --- |
|  | half-normal |  |
|  | half-normal |  |
|  | deterministic | 1.0 |
|  | half-normal |  |
|  | deterministic | 0.06 |
|  | deterministic | 1.0 |

| Flow property | Value |
| --- | --- |
|  | 0.459 ± 0.005 |
|  | 0.5 ± 0.06 |
|  | 1.401 ± 0.001 |
|  | 1.05 ± 0.05 |
|  | 77 ± 2 |
