Dimensional analysis is based on the simple idea that physical laws do not depend on the units of measurement. As a consequence, any function that expresses a physical law has the fundamental property of so-called generalized homogeneity and does not depend on the observer. Although such concepts of dimensional analysis go back to the time of Newton and Galileo, they were formalized mathematically by the pioneering contributions of Edgar Buckingham in 1914. Specifically, Buckingham proposed a principled method for extracting the most general form of physical equations by simple dimensional considerations of the seven fundamental units of measurement: length (meter), mass (kilogram), time (second), electric current (ampere), temperature (kelvin), amount of substance (mole), and luminous intensity (candela). From electromagnetism to gravitation, measurements can be directly related to the seven fundamental units; for instance, force is measured in newtons (kg·m/s²) and electric potential in volts (kg·m²/(s³·A)). The resulting Buckingham Pi theorem was originally contextualized in terms of physically similar systems, or groups of parameters that relate similar physics. In modern times, rapid advancements in measurement and sensor technologies have produced data of diverse quantities at almost any spatial and temporal scale. This has engendered a renewed consideration of the Buckingham Pi theorem in the context of data-driven modeling [4, 5, 6, 7]. Specifically, as shown here, physics-informed machine learning algorithms can be constructed to automate the principled approach advocated by Buckingham for the discovery of the most general and parsimonious form of physical equations possible from measurements alone.
Buckingham Pi theory was critically important in the pre-computer era. Indeed, the discovery of key dimensionless quantities was considered a fundamental result, as it often uncovered the self-similarity structure of the solution along with parametric dependencies and symmetries. In fluid dynamics alone, for instance, there are numerous named dimensionless parameters discovered through the Buckingham Pi reduction, including the Reynolds, Prandtl, Froude, Weber, and Mach numbers. Scaling and dimensional analysis continue to provide a robust starting point for understanding both complex and basic physical phenomena, like the physics of wrinkling, chaotic patterns in Rayleigh-Bénard convection, the structure of drops falling from a faucet, spontaneous pattern formation by self-assembly, and osmotic spreading of biofilms. In practice, many dimensionless groups naturally arise as ratios of domain-averaged terms in physically meaningful equations that are required to be dimensionally homogeneous. When equations are non-dimensionalized, dimensionless terms that are small on average relative to others can be treated as perturbations, thus providing insights into the qualitative structure of the solution and simplifying the problem of finding it. More importantly, Buckingham Pi provides the potential for an order parameter description that determines the qualitative structure of the solution and its bifurcations without recourse to the full-state equations. These order parameters form the basis for casting problems into normal forms that reveal physical symmetries and can potentially be solved analytically [15, 16, 17]. Even when governing equations are unknown, Buckingham Pi establishes a dimensionally consistent relationship between the input parameters/variables and output predictions, which helps constrain models and prevent over-fitting.
The rise of scientific computing in the 1980s made it possible to numerically integrate highly nonlinear ordinary and partial differential equations without the analytic simplifications provided by dimensional analysis. More recently, machine learning algorithms have shown promise in discovering scientific laws, differential equations [19, 20, 21], and deep network input-output function approximations [22, 23, 24] from simulation and experimental data alone, with considerable progress in the field of fluid mechanics [25, 26, 27, 28]. Unsupervised learning techniques, in particular, have recently made significant progress in distilling physical data into interpretable dynamical models that automate the scientific process [29, 30, 31, 32, 33]. However, these methods do not take into account the units of their training data, even though units can significantly constrain the hypothesis class to physically meaningful solutions. Principal component analysis (PCA), a leading data analysis method, sidesteps the principle that physical laws do not depend on the units of measurement by centering each measurement variable to zero mean and scaling it to unit variance. More sophisticated methods, however, explicitly account for measurement units. The active subspaces method provides a principled algorithmic approach for extracting dimensionless numbers from experimental and simulation data, and has been successfully demonstrated on particle-laden turbulent flows. In a related line of work, statistical null-hypothesis testing has been used alongside the Buckingham Pi theorem to find hidden variables in dimensional experimental data. Dimensional analysis has also been used in a physics-inspired symbolic regression algorithm that discovers physics formulas from data, and to improve neural network models. Data-driven dimensional analysis has also been applied to model turbulence data. Machine learning algorithms that use sparse identification of differential equations to discover dimensionless groups and scaling laws that best collapse experimental data have also recently been proposed.
In this study, we address the problem of automatic discovery of dimensionless groups from available experimental or simulation data using the Buckingham Pi theorem as a constraint. Importantly, although the Buckingham Pi theorem provides a step-by-step method for finding dimensionless groups directly from the input variables and parameters, its solution is not unique; finding the right groups thus relies on intuition and experience with the physical problem at hand. If the underlying equations are available, finding dimensionless groups is constrained by the form of the equation, and they are computed by dividing the system-averaged magnitude of every term by a reference term. Even then, the problem of determining the relevant dimensionless numbers depends on experience. Our aim is to leverage symmetries in the data to inform the algorithm of which set of dimensionless groups controls the behavior (e.g. the Reynolds number controls the turbulence of a flow field). We develop three methods that find the most physically meaningful Pi-groups (see Fig. 1), assumed to be the most predictive of the dependent variables. The first method is a constrained optimization that fits independent variable and parameter inputs to predictions while satisfying the Buckingham Pi theorem as a hard constraint. The second method uses a Buckingham Pi (nullspace) constraint on the first hidden layer of a deep network that fits input parameters to output predictions. The third method uses SINDy [19, 20] to constrain candidate dimensionless groups to be coefficients in a sparse differential equation. The latter method has the added benefit of discovering a parametric (dimensionless) differential equation that describes the data, thus simultaneously generalizing the SINDy method. We apply these methods to three problems: the rotating hoop, the Blasius boundary layer, and the Rayleigh-Bénard problem.
2 Buckingham Pi Problem Formulation
Let q = f(p) be a mapping, where p is a vector of input measurement parameters and variables, and f is a function that maps p to a set of dependent quantities of interest q. One generally seeks a predictor that maximizes the size of the set of possible measurable parameters and variables for which the prediction error is minimized. The function f is typically the solution of an initial and/or boundary value problem with known variables and parameters p and unknown solution q, all of which have physical dimensions [units].
The Buckingham Pi theorem states that there exists a set of dimensionless quantities, known as Pi-groups, such that the dimensionless input parameters/variables span the full dimensionless solution space, with fewer dimensionless groups than original dimensional quantities. Furthermore, the theorem determines the number of such groups given the units of the measurement variables and parameters.
A probabilistic corollary of the Buckingham Pi theorem is that the input-output data can be projected onto a lower-dimensional space of dimensionless coordinates without compromising the prediction error, where the dimensionless output is an unknown function of the dimensionless inputs that can be approximated from available data. However, the Buckingham Pi theorem does not provide any information on the properties of this function or its relationship with the Pi-groups. The theorem generally assumes the inputs, outputs, and mapping to be deterministic, without considering the optimality of a dimensionless basis or the generalization properties of the mapping.
In a data-driven statistical setting, we assume that the inputs and outputs are sampled from a joint distribution, and that f is an optimal mapping in a given class of functions. In this context, we generally seek the most physically meaningful transformation that provides the best fit, assuming f to be an optimal predictor over a pre-determined hypothesis class. Accordingly, we posit the following hypothesis:
Given a set of input and output measurement pairs , the most physically meaningful dimensionless basis is the optimal coordinate transformation that satisfies the minimization problem
where physical meaningfulness is defined as the dimensionless group’s ability to simplify an equation in relevant regimes and provide a scaling that collapses the input-output solution space to a lower dimension.
Note that defining physical meaningfulness often depends on the historical and scientific context, which makes the above hypothesis subject to interpretation. In some cases, theoretical and experimental tradition narrows a field of study down to a set of well-known dimensionless numbers with which known regimes and behaviors are associated (e.g. the Reynolds number quantifying turbulence in fluid mechanics). In particular, when analytical models and equations are available, dimensionless numbers that arise naturally from the relative measure of different terms are physically significant because they quantify changes in the qualitative nature of the solution. Hypothesis 1 proposes a new approach to defining physical meaningfulness, for which we provide supporting evidence in Sec. 3. The purpose of the proposed methods is to discover dimensionless groups in contexts where data are available but analytical models are not known.
This hypothesis is further constrained by the functional relationship between the dimensional inputs and the Pi-groups, in which the unknown powers of the input parameters are chosen to make each group dimensionless.
Let there be a function that maps a measurable quantity to a vector containing the powers of its basic dimensions (i.e. mass M, length L, time T, etc.). For example, if one chooses (M, L, T) as the basic dimensions, then the units of a force are M·L·T^-2, corresponding to the powers vector (1, 1, -2). Similarly, let there be a function that maps a dimensionless group to a vector w of the powers of the input parameters, such that the group equals the product of the input parameters raised to the powers in w.
The Buckingham Pi theorem constrains the powers vector w to lie in the null-space of the units matrix D, whose columns contain the dimension powers of the input parameters. Here, we assume that the number of dimensional and dimensionless predictions are equal, which is often the case, and we later propose methods for relaxing this assumption. The Buckingham Pi theorem determines the number of Pi-groups to be the number of inputs minus the rank of the units matrix. The main shortcoming of the theorem is its inability to provide a unique set of dimensionless numbers; in practice, additional heuristic constraints are required to solve for the unknowns.
We present three methods for discovering dimensionless groups from data. The main features that differentiate them are
Imposing hard, soft, or no constraints on the null-space condition to ensure that the Pi-groups are dimensionless according to equation (5).
The type of input-output function approximation: e.g. a neural network, a non-parametric function, or a differential equation.
Let there be a set of N measurement pairs of the input vectors and their corresponding output predictions, indexed by i = 1, ..., N. We define the input parameter matrix P and the output prediction matrix Q as the row-wise concatenations of all input and output measurements, respectively, and the full units matrix as the column-wise concatenation of the input and output units matrices. The Pi-groups powers matrix W is constructed so that each of its columns satisfies the null-space condition of Eq. (5). Similarly, we define the dimensionless groups corresponding to the input parameters/variables. Equation (3) can then be written in the matrix form Pi = exp(log(P) W), where each row of Pi contains the values of the dimensionless groups for a given experiment, and the exponential and logarithm are both taken element-wise. We also define data matrices for the output Pi-groups and the input Pi-groups.
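For concreteness, the element-wise matrix form can be sketched in a few lines of pure Python (function name hypothetical): each row of P holds one experiment's input parameters and each column of W holds the powers of one candidate group.

```python
import math

def pi_values(P, W):
    """Evaluate Pi = exp(log(P) W) element-wise: each column of W
    gives the powers of one candidate dimensionless group."""
    cols = list(zip(*W))  # columns of W, one per Pi-group
    return [[math.exp(sum(math.log(p) * w for p, w in zip(row, col)))
             for col in cols]
            for row in P]
```

For the pendulum inputs (g, m, l, t) = (9.81, 2.0, 1.0, 3.0) and the single powers column (1, 0, -1, 2), this returns g·t²/l = 88.29, as expected.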
For example, consider the pendulum shown in Fig. 1. If we define the inputs to be gravity g, mass m, length l, and time t, and the output to be the angle θ, then the dimensional input and output parameter vectors are p = (g, m, l, t) and q = (θ). The corresponding units matrices, constructed with basic dimensions mass, length, and time, have one column per quantity and one row per dimension: the columns of the input units matrix are (0, 1, -2) for g, (1, 0, 0) for m, (0, 1, 0) for l, and (0, 0, 1) for t. Note that the angle θ is already dimensionless, so the output units matrix is the zero vector. A possible powers vector spanning the nullspace of the input units matrix is w = (1, 0, -1, 2), giving the dimensionless group g·t²/l. The input/output relationship is then some function θ = F(g·t²/l). We solve this example using a data-driven method in Sec. 4.1.
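The nullspace computation for this example can be sketched with exact rational arithmetic in pure Python (a Gauss-Jordan reduction; function name hypothetical). The matrix layout follows the pendulum example: rows are the dimensions (M, L, T) and columns are the quantities (g, m, l, t).

```python
from fractions import Fraction

def nullspace(D):
    """Rational basis for the nullspace of a dimension matrix
    (rows = base dimensions, columns = physical quantities)."""
    A = [[Fraction(x) for x in row] for row in D]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):                       # reduce to RREF
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]   # normalize pivot row
        for i in range(m):
            if i != r and A[i][c] != 0:
                factor = A[i][c]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:                          # one basis vector per free column
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for pr, pc in enumerate(pivots):
            v[pc] = -A[pr][fc]
        basis.append(v)
    return basis

# Pendulum: rows M, L, T; columns g, m, l, t.
D_pendulum = [[0, 1, 0, 0],
              [1, 0, 1, 0],
              [-2, 0, 0, 1]]
```

The nullspace is one-dimensional, and scaling its basis vector to integer powers recovers (1, 0, -1, 2), i.e. the group g·t²/l.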
In this case we know from basic mechanics that, for small angles, θ is a sinusoidal function of t√(g/l), but for more complicated problems it may not be obvious what the appropriate dimensionless groups and input-output function are. This section introduces three methods to simultaneously identify the nondimensionalization and an approximation of the input-output function from a set of experimental or simulation data. In this data-driven context, (7) can be combined with the nullspace constraint to optimize over a pre-defined hypothesis class given the measurement data.
3.1 Constrained optimization
Equation (2) can be cast as a constrained optimization problem, with a fitting function that can be either parametric or non-parametric. The input Pi-groups powers matrix maps the input parameters/variables to the dimensionless groups through equation (7), and the constraint of the optimization is given by equation (5). The choice of the hypothesis class for the fitting function is arbitrary in this formulation, notwithstanding its effect on the accuracy of the results and the success of the method.
In this setup, we assume that the outputs can be non-dimensionalized by known constants of motion (i.e. the output powers are known). Accordingly, the resulting optimization problem is given by
where the sparsity regularization enforces sparse exponents (typical of dimensionless numbers) and an additional regularization encourages smaller powers. An example application of this approach that uses kernel ridge regression as a non-parametric approximation of the fitting function is given in Sec. 4.3.
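One simple way to enforce the dimensional constraint exactly, as an alternative to passing Eq. (5) to a constrained solver, is to parameterize the powers as linear combinations of a precomputed nullspace basis, so that any unconstrained choice of coefficients yields a dimensionless group. A minimal sketch (function names hypothetical):

```python
def combine(basis, theta):
    """w = B theta: any powers vector built from nullspace basis vectors
    is automatically dimensionless, so theta can be optimized freely."""
    n = len(basis[0])
    return [sum(b[i] * t for b, t in zip(basis, theta)) for i in range(n)]

def is_dimensionless(D, w, tol=1e-12):
    """Verify D w = 0, i.e. the group has no residual dimensions."""
    return all(abs(sum(d * x for d, x in zip(row, w))) < tol for row in D)
```

For the pendulum units matrix, the basis vector (1/2, 0, -1/2, 1) scaled by theta = 2 gives the powers (1, 0, -1, 2), which pass the dimensionless check for any theta.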
3.2 BuckiNet: non-dimensionalization with a neural network
The unknown function is generally nonlinear and can be high dimensional. Given input-output data, in the absence of a governing equation, a deep neural network is a natural candidate for approximating it. Equation (7) can be naturally integrated into a deep learning architecture by first applying a logarithmic transform to the inputs, then using an exponential activation function at the output of the first layer. This makes the Pi-group powers the fitting weights of the first layer, as shown in Fig. 2. We call this layer the BuckiNet. When negative input data are given, they must be pre-processed so that the logarithm is well-defined; in most cases, a simple shift of the data to the positive domain is sufficient.
This technique offers many advantages. First, the BuckiNet layer implicitly performs an unsupervised dimensionality reduction without any adjustment to the overall architecture of the deep network. This results in better generalization properties and faster optimization thanks to a loss-less reduction in the number of fitting parameters; the number of nodes in the first hidden layer is the number of Pi-groups determined by the Buckingham Pi theorem. Second, the optimal weights of the first layer correspond to the powers of the dimensionless numbers that are most predictive of the solution. Third, the BuckiNet can easily be added to deep learning algorithms that fit input-output measurement data with a few lines of code in TensorFlow or PyTorch.
The BuckiNet architecture on its own does not guarantee a dimensionless combination of the input parameters. Whether the network discovers a dimensionless basis without any constraint is discussed in Sec. 4.1. To explicitly account for the constraint in equation (5), we add a null-space penalty on the first-layer weights to the loss function. In contrast to the constrained optimization problem proposed in the previous section, this method minimizes the null-space residual according to equation (5) but does not satisfy it exactly. However, this proves to be sufficient for finding Pi-groups with sparse and approximately rational exponents, as shown in Sec. 4.2. The regularization term includes a loss that promotes dimensionless groups with lower powers and a loss that promotes simple groups involving as few input variables/parameters as possible.
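The two ingredients of this approach, the log-linear-exp first layer and the null-space penalty, can be prototyped framework-free; in TensorFlow or PyTorch they become a log transform, a linear layer, an exp activation, and one extra loss term. A pure-Python sketch (function names hypothetical):

```python
import math

def buckinet_layer(p, W):
    """First hidden layer: exp(W^T log p), i.e. the values of the
    candidate Pi-groups whose powers are the columns of W."""
    logp = [math.log(x) for x in p]
    return [math.exp(sum(w * lp for w, lp in zip(col, logp)))
            for col in zip(*W)]

def nullspace_penalty(D, W, lam=1.0):
    """Soft Buckingham Pi constraint: lam * ||D W||_F^2, added to the
    prediction loss so the trained weights approach dimensionless powers."""
    return lam * sum(sum(d * w for d, w in zip(row, col)) ** 2
                     for row in D for col in zip(*W))
```

For the pendulum inputs (g, m, l, t) with the powers column (1/2, 0, -1/2, 1), the layer outputs t·√(g/l) and the penalty vanishes, since those powers lie in the nullspace of the units matrix.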
3.3 Sparse identification of dimensionless equations
The previous two methods addressed the problem of data-driven non-dimensionalization by simultaneously optimizing for the fitting function and the dimensionless groups. Since the output is usually the solution of a differential equation, we propose casting the problem as sparse identification of nonlinear dynamics (SINDy) [19, 20] with candidate dimensionless groups as coefficients. This constrains the dimensionless groups to be physically meaningful according to their associated differential operators acting on the unknown output variables, which is also the intuition often used in classical non-dimensionalization.
In the following formulation, we assume that time is the only independent variable, and we non-dimensionalize it separately from the rest of the input parameters. However, the method can be generalized to any number of independent variables. For a dimensionless quantity of interest, our goal is to learn an equation of the form
where τ is a characteristic time scale, t/τ is the corresponding dimensionless time, the dimensionless parameters appear as coefficients, and the right-hand side is a differential operator. In the absence of a governing equation, SINDy approximates this operator by a sum of candidate terms with fitting coefficients that are optimized for both sparsity and prediction error. That is, given input-output pairs sampled from a time series, SINDy minimizes the loss
where Ξ contains the unknown fitting coefficients and Θ is a pre-determined library of potential candidate functions and differential operators, a linear combination of which makes up the approximation. For dimensional consistency, derivatives in the independent variable are scaled by an unknown problem-specific timescale τ, which we assume can be expressed as a function of the input parameters. In essence, the timescale is treated in the same manner as the dimensionless groups, except that its dimensions are constrained to be those of time.
Assuming that the approximation, and its corresponding dictionary, are separable, we can write the full library as a Kronecker product, where one dictionary contains derivatives of the dimensionless quantity of interest, the other is a dictionary of polynomial functions of the dimensionless input parameters excluding time, and the Kronecker product is a vectorization of the outer product of two vectors. The assumption of separability is not strictly necessary; candidate functions may be chosen that depend jointly on the state and the parameters. However, SINDy libraries are often constructed as multinomials, so that the variables are separable. Moreover, the parameters typically appear as coefficients of the state variable in both bifurcation normal forms and Taylor series approximations of dynamical systems, so the assumption of separability is natural.
The resulting feature space dimension is the product of the sizes of the two dictionaries. Given N measurements, the full dictionary has one row per example (i.e. per sample in time) and one column per potential term in the differential equation.
While equation (15) takes dimensionless pairs as input data, we would also like to optimize over candidate non-dimensionalizations. That is, we would like to minimize the following loss function
where the input-output pairs are sampled from the dimensional data, and the Pi-groups powers matrix defines the mapping to dimensionless variables given by equation (7).
Equation (17) does not include the constraint of Eq. (5) to ensure that the groups are dimensionless. To address this issue, we generate a finite number of candidate dimensionless numbers, up to a predetermined fractional power, that satisfy equation (5). The resulting set of non-dimensionalizations corresponds to a set of power matrices over which we minimize the loss. The size of this candidate set depends on the range of predetermined powers from which dimensionless numbers are sampled according to the null-space condition in (5).
To avoid this “brute-force” search, the dimensionless SINDy method could be combined with the constrained optimization approach described in Sec. 3.1. In general, that tactic would be more flexible, since it also allows for non-integer powers in the dimensionless groups. We use brute-force enumeration in this work because we expect integer powers, there are relatively few nullspace vectors, and the sparse regression problem can be solved efficiently, so the method scales relatively well in this particular case; we will explore a more efficient approach in future work. The computational efficiency of the brute-force search is discussed further in the results.
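The brute-force generation of candidate power vectors can be sketched as follows, here restricted to integer exponents (function name hypothetical); fractional powers follow by enumerating numerators at a fixed denominator.

```python
from itertools import product
from math import gcd
from functools import reduce

def candidate_groups(D, pmax=2):
    """Enumerate integer power vectors w with |w_i| <= pmax satisfying
    D w = 0, deduplicated up to sign and common integer factors."""
    n = len(D[0])
    seen, out = set(), []
    for w in product(range(-pmax, pmax + 1), repeat=n):
        if not any(w):
            continue  # skip the trivial group
        if any(sum(d * x for d, x in zip(row, w)) != 0 for row in D):
            continue  # not in the nullspace of the units matrix
        g = reduce(gcd, (abs(x) for x in w))
        key = tuple(x // g for x in w)
        lead = next(x for x in key if x != 0)
        if lead < 0:               # fix an overall sign convention
            key = tuple(-x for x in key)
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out
```

For the pendulum units matrix, this enumeration returns the single group with powers (1, 0, -1, 2), i.e. g·t²/l, since its nullspace is one-dimensional.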
In summary, this method solves two problems at once, providing
Dimensionless input parameters and dimensionless dependent variables.
A sparse and parametric dynamical system.
While we only consider time as the independent variable in this section, a generalization to spatial and other independent variables is straightforward. The method also allows for combining known dimensionless numbers with unknown ones, as is often encountered in practical problems.
In this section, we apply the three methods presented above to four non-dimensionalization problems: the harmonic oscillator in Sec. 4.1, the bead on a rotating hoop in Sec. 4.2, the Blasius boundary layer in Sec. 4.3, and the Rayleigh-Bénard problem in Sec. 4.4. We discuss the advantages and shortcomings of each method in terms of accuracy, robustness, and speed in the context of our proposed hypothesis.
4.1 Non-dimensionalization as optimal change of variables: harmonic oscillator
Dimensionless groups can act as scaling parameters (i.e. revealing self-similarity), as they often arise in analytical solutions. In some problems, Pi-groups can be deduced from an optimal change of variables without the need for a Buckingham Pi constraint. While this is not always the case, it supports the hypothesis that an optimal mapping between input and output data gives rise to a physically meaningful change of variables. We demonstrate this point on the simple pendulum problem described in Sec. 3 and shown in Fig. 1.
We evaluate the output prediction θ at 100 parameter combinations of the inputs sampled from a uniform distribution, and at 100 time steps. In this problem, we employ the BuckiNet architecture without the Buckingham Pi constraint (i.e. without the null-space loss), with four inputs (g, m, l, t), one perceptron in the first hidden layer, a dimensionless output, 3 hidden layers with 8 perceptrons each, and an exponential linear unit (ELU) activation function. The resulting dimensionless group in the BuckiNet layer is approximately t√(g/l), which appears in the solution of the linear approximation of a harmonic oscillator as θ = θ₀ cos(t√(g/l)), where θ₀ is a constant that depends on the initial condition. This is a case where the change of variables that gives the simplest analytical solution is dimensionless. However, this is not generally the case, especially in higher dimensional problems. To take full advantage of the Buckingham Pi constraint, we apply the three methods proposed in Sec. 3 in the following examples.
4.2 Bead on a rotating hoop
Consider a wire hoop of radius r rotating at angular velocity ω about a vertical axis coinciding with its diameter, as shown in Fig. 2. A bead of mass m slides along the wire with tangential damping coefficient b. The equation governing the dynamics of the angular position of the bead with respect to the vertical axis is given by (Strogatz, sec. 3.5)
A traditional dimensional analysis leads to two Pi-groups: one that controls the inertial term, and one that acts as a pitchfork bifurcation parameter accounting for two additional fixed points when it exceeds unity. Non-dimensionalizing equation (21) gives a dimensionless equation in which the inertial group multiplies the highest-derivative term. When the inertial group is small, the system is overdamped and approximately first-order.
To test the BuckiNet and the constrained optimization algorithms, we solve the governing equations numerically to obtain 3000 solutions with mass, radius, damping coefficient, and angular velocity sampled from a uniform distribution. In order to recover the Pi-groups without explicitly accounting for the time scale, we use a principal component analysis of the time-series solution matrix (where each row corresponds to a different parameter combination, and each column to a different time sample), so that the output is the set of coefficients of the leading principal components, as shown in Fig. 2. The output is already dimensionless because the angle is dimensionless. A sample of the time-series solutions for different parameters is shown in Fig. 3.
After optimizing over neural network hyperparameters (given an optimal architecture with enough modes to represent the solution), BuckiNet recovers the correct dimensionless numbers shown in Table 1. The discovered Pi-groups are scaled so that one of the powers is unity. In order to understand the limitations of the method, it is informative to look at sub-optimal solutions. For instance, for some hyperparameters (e.g. network architecture, regularization, etc.) BuckiNet and the constrained optimization method return products and close approximations of the true Pi-groups as a solution. This shows that a product of multiple Pi-groups is a local minimum that satisfies the Buckingham Pi theorem and spans the solution space in a different coordinate system. However, a hyperparameter optimization over multiple trials consistently results in Pi-groups that closely approximate the true groups. This motivates the use of SINDy to further constrain the dimensionless numbers to be coefficients in a differential equation, where they acquire a concrete mathematical meaning as controllers of the dominant balance.
Using the dimensionless SINDy approach, we generate 20 parameter combinations sampled from a uniform distribution, in this case accounting for time as an input. The library contains low-order polynomials that can approximate the trigonometric nonlinearity in (21) for relatively small angles. For each candidate nondimensionalization generated from the nullspace of the units matrix, the nonconvex optimization problem (15) is approximated with the simple sequentially thresholded least-squares algorithm, where the only tuning parameter is a threshold that approximates the sparsity regularization.
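Sequentially thresholded least squares is simple enough to sketch in pure Python (a plain normal-equations variant, not the paper's exact implementation): alternate an ordinary least-squares fit with zeroing out coefficients below the threshold.

```python
def solve(A, b):
    """Solve the square system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]          # partial pivoting
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def stlsq(Theta, y, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: alternate a least-squares
    fit (via normal equations) with zeroing small coefficients."""
    n = len(Theta[0])
    active = list(range(n))
    xi = [0.0] * n
    for _ in range(iters):
        cols = [[row[j] for j in active] for row in Theta]
        k = len(active)
        G = [[sum(r[i] * r[j] for r in cols) for j in range(k)]
             for i in range(k)]
        b = [sum(r[i] * yy for r, yy in zip(cols, y)) for i in range(k)]
        sol = solve(G, b)
        xi = [0.0] * n
        keep = []
        for j, s in zip(active, sol):
            if abs(s) >= threshold:          # hard-threshold small terms
                xi[j] = s
                keep.append(j)
        if keep == active or not keep:
            break
        active = keep
    return xi
```

On synthetic data generated by y = 2x - 0.5x² with a library of [x, x², 1], the iteration zeroes out the irrelevant constant column and recovers the two true coefficients.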
Fig. 4 shows that the method identifies the same time scale as that proposed by Strogatz in equation (22), along with ratios of the two Pi-groups, consistent with equation (23).
Depending on the sparsity threshold in SINDy, models of varying fidelity can be identified. For example, a larger threshold selects a cubic model, while reducing the threshold yields a seventh-order model that closely matches the Taylor-series approximation of the true dynamics (23). Accordingly, an appropriate model can be selected based on a desired tradeoff between accuracy and simplicity; higher-order models increase the accuracy of the discovered dimensionless groups.
Fig. 3 tests the generalization of the two models on a set of parameters chosen to be outside the range of the training data (i.e. an extrapolation task). Although the cubic model captures the qualitative behavior reasonably well, the seventh-order approximation closely tracks the true solution.
4.3 Laminar boundary layer: identifying self-similarity
Non-dimensionalization often arises in the context of scaling and collapsing experimental results to lower dimensions by revealing the self-similarity structure of the solution space. The discovery of self-similarity is considered an important result in many applications because it reveals universality properties. There are plenty of such examples in the history of fluid dynamics, particularly in understanding turbulence [1, 39].
Boundary layer theory is an example where nondimensionalization and self-similarity have played a central role in this understanding. Using scaling analysis, Prandtl showed that the Navier-Stokes equations describing an incompressible laminar boundary layer flow can be simplified to the boundary layer equations in the streamfunction ψ, where the subscripts denote partial differentiation in the streamwise coordinate x and the wall-normal coordinate y, and ν is the kinematic viscosity (with units of m²/s). The streamfunction is defined so that the streamwise and wall-normal velocities are given by u = ψ_y and v = -ψ_x, respectively.
Although Prandtl’s boundary layer equations are themselves a significant simplification and triumph of scaling analysis, they can be further simplified and expressed as an ordinary differential equation with the help of self-similarity. Blasius took the scaling one step further by showing that if ψ(x, y) is a solution of (28), then so is c ψ(x/c², y/c) for any dimensionless constant c. This in turn implies that ψ is not an independent function of x and y, but depends on them only through the combination y/√x.
Defining the dimensionless similarity variable η = y√(U/(νx)) and streamfunction f(η) = ψ/√(νUx), with freestream velocity U, the boundary layer equations reduce to the nonlinear boundary value problem f''' + (1/2) f f'' = 0, with boundary conditions f(0) = f'(0) = 0 and f'(η) → 1 as η → ∞. Although there is no known closed-form solution to this problem, it can be analyzed with asymptotic perturbation methods or solved numerically (e.g. with a shooting method to identify an appropriate initial value for f''(0)).
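Under the convention f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1 (one common normalization of the Blasius problem), the shooting method amounts to a one-dimensional root find on f''(0). A self-contained sketch with a hand-rolled RK4 integrator and bisection:

```python
def rhs(y):
    """Blasius system: f''' = -0.5 * f * f'' with state (f, f', f'')."""
    f, fp, fpp = y
    return (fp, fpp, -0.5 * f * fpp)

def fp_at_infinity(s, eta_max=10.0, h=0.01):
    """Integrate with classical RK4 from f(0)=f'(0)=0, f''(0)=s
    and return f' at eta_max (a proxy for the freestream limit)."""
    y = (0.0, 0.0, s)
    for _ in range(int(eta_max / h)):
        k1 = rhs(y)
        k2 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6.0 * (b1 + 2 * b2 + 2 * b3 + b4)
                  for a, b1, b2, b3, b4 in zip(y, k1, k2, k3, k4))
    return y[1]

# Bisect on s = f''(0) so that f'(eta_max) matches the freestream value 1.
lo, hi = 0.1, 1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if fp_at_infinity(mid) < 1.0:
        lo = mid
    else:
        hi = mid
fpp0 = 0.5 * (lo + hi)
```

The classical value under this normalization is f''(0) ≈ 0.332057, which the bisection converges to.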
In contrast to the rotating hoop example, in this case the velocity profile is more important than the differential equation itself. For instance, once the Blasius solution is known, the boundary layer profile can be found by undoing the scaling, e.g. u(x, y) = U f'(η). The most natural method to directly identify a dimensionless group associated with the boundary layer is therefore the constrained optimization approach introduced in Sec. 3.1.
Defining the output quantity of interest as the nondimensional streamwise velocity u/U, the problem is to learn a model u/U = F(Π), where Π is an input dimensionless group that can depend on x, y, U, and ν. In this problem we seek to rediscover the Blasius scaling Π = η, which was derived analytically by Blasius and Prandtl.
We use kernel ridge regression (KRR) with a non-parametric radial basis function kernel to approximate the input-output map. Accordingly, the constrained optimization (11) must learn a dimensionless group in the nullspace of the units matrix that leads to a good KRR approximation of the output as a function of this group.
We generate data for this example by solving the boundary layer equations (28) via a shooting method applied to (31). The free-stream velocity and kinematic viscosity are chosen close to those of water at room temperature. The resulting two-dimensional velocity profile is shown in Fig. 5 (top), along with profiles at selected streamwise locations. 100 points in this field are selected randomly as training data, and (11) is solved with a constrained trust-region method implemented in SciPy.
Specifically, for each candidate nondimensionalization, the kernel ridge regression model with radial basis function kernels (implemented in scikit-learn) is trained and evaluated on the dimensionless parameters computed from the 100 sampled points. The KRR model combines a ridge ($\ell_2$) penalty, an $\ell_1$ penalty, and a scale factor in the radial basis functions. Note that in this method, generalizing to a test case is not crucial because the model is not used to make predictions but only as a proxy for the mutual information between the candidate nondimensionalization and the quantity of interest. In other words, the method does not have to address problems of overfitting. Moreover, since only 100 points are used in training, the performance of the final model can be evaluated against the entire field. The optimization problem is inexpensive but non-convex and sensitive to the initial guess, and thus prone to converging to local minima. To address this issue, we run multiple optimizations with different initial guesses (around 20) and return the solution with the minimum cost.
Fig. 5 compares the discovered nondimensionalization to the Blasius solution. When scaled to make one of the exponents unity, the discovered dimensionless number is
The constrained optimization learns a different but equivalent model, in the sense that the discovered group is one-to-one with the Blasius similarity variable $\eta$.
In contrast to the brute-force search over candidate nondimensionalizations generated from vectors of integers, the exponents in this method are floating-point numbers and are arbitrary up to an overall constant. The scale of the solution is therefore set by a balance between the $\ell_1$ and $\ell_2$ penalties and the scale factor in the radial basis functions. Setting the scale factor equal to one with small penalties thus biases the algorithm towards order-one exponents.
4.4 Rayleigh-Bénard convection: learning a normal form
Characterizing the onset and behavior of instabilities is crucial for understanding large-scale dynamical systems. Near a critical point, a reduced-order amplitude equation called the normal form can model the qualitative change in dynamics associated with a bifurcation. Although the normal form can be deduced analytically, numerically [42, 43], or via symmetry arguments, these methods become progressively more challenging for more complex systems. With the exception of the symmetry analysis, they are also invasive because they require direct access to the (discretized) governing equations. In this example, we use the SINDy method introduced in Sec. 3.3 to learn a dimensionless normal form from limited time-series data, along with the corresponding dimensional parameters.
Rayleigh-Bénard convection is a prototypical example of a system with a global instability that has been used in a range of studies of nonequilibrium dynamics, including pattern formation, chaos, and coherent structures in turbulence. The system typically consists of a Boussinesq fluid between two plates, where the lower plate is at a higher temperature than the upper plate. Heat transfer can take place via either conduction or convection, whose relative strength is encapsulated in the dimensionless Rayleigh number. Below a critical Rayleigh number, the only stable solution is steady conduction between the plates; above it, the density gradient becomes unstable to convection, as shown in the lower panels of Fig. 6.
For a fluid in the Boussinesq approximation linearized about reference density $\rho_0$ and temperature $T_0$, the governing equations consist of the conservation of momentum, mass, and energy:
$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} = -\frac{1}{\rho_0} \nabla p + \nu \nabla^2 \mathbf{u} + g \alpha (T - T_0)\, \hat{\mathbf{y}}, \qquad \nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa \nabla^2 T,$$
along with the boundary conditions that the velocity vanishes at the plates and the plate temperatures are held fixed, with temperature difference $\Delta T$ across a gap of height $h$. Besides the primitive variables, the system includes gravity $g$, the coefficient of thermal expansion $\alpha$, kinematic viscosity $\nu$, and thermal diffusivity $\kappa$. The flow is typically analyzed in terms of the Rayleigh number $\mathrm{Ra} = g \alpha \Delta T h^3 / (\nu \kappa)$ and Prandtl number $\mathrm{Pr} = \nu / \kappa$. Although the Prandtl number has a significant impact on the behavior of the flow above the threshold of instability, the onset of instability itself is not sensitive to the Prandtl number.
We simulate the nondimensional flow in two dimensions with no-slip boundary conditions on the walls and periodic boundary conditions in the wall-parallel ($x$) direction using the Shenfun spectral Galerkin library. We use a Fourier-Chebyshev discretization at fixed Prandtl number. The critical point in this simulation is significantly above the true value of $\mathrm{Ra}_c \approx 1708$, likely due to differences in boundary conditions, wall-parallel domain extent, and the restriction to two dimensions.
We simulate the system at a range of Rayleigh numbers (shown in Fig. 6) at fixed Prandtl number. For each simulation, we dimensionalize the data as follows. We randomly select values for several of the dimensional parameters, each within a fixed range. The plate separation is then computed for consistency with the simulation Rayleigh number. Likewise, the thermal diffusivity $\kappa$ is computed for consistency with the Prandtl number. Finally, time is dimensionalized based on a free-fall time scale $t_f = \sqrt{h / (g \alpha \Delta T)}$. From eight initial simulation cases, we retain five points close to the bifurcation (black dots in Fig. 6) to solve the dimensionless SINDy problem and withhold the other three points (red crosses) as test data.
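This redimensionalization step can be sketched as follows (a minimal example; the sampling ranges, variable names, and which quantities are sampled versus solved for are illustrative placeholders, not the values used here):

```python
import numpy as np

def dimensionalize(Ra, Pr, rng):
    """Assign random but dimensionally consistent parameters to a simulation
    run at given (Ra, Pr). Sampling ranges are illustrative placeholders."""
    nu = rng.uniform(1e-6, 1e-4)            # kinematic viscosity [m^2/s]
    g_alpha = rng.uniform(1e-3, 1e-1)       # g * thermal expansion [m/(s^2 K)]
    dT = rng.uniform(1.0, 50.0)             # temperature difference [K]
    kappa = nu / Pr                         # thermal diffusivity from Pr = nu/kappa
    # Ra = g*alpha*dT*h^3 / (nu*kappa)  =>  solve for the plate separation h
    h = (Ra * nu * kappa / (g_alpha * dT)) ** (1.0 / 3.0)
    t_ff = np.sqrt(h / (g_alpha * dT))      # free-fall time scale
    return dict(nu=nu, g_alpha=g_alpha, kappa=kappa, dT=dT, h=h, t_ff=t_ff)
```

By construction, any parameter set returned this way reproduces the prescribed Rayleigh and Prandtl numbers exactly.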
One option for learning a model of the flow near the critical point would be to input the spatiotemporally varying data and learn a PDE model. This might yield something similar to a Ginzburg-Landau or Swift-Hohenberg model. Another approach is to approximate some quantity of interest with a linear modal decomposition, such as proper orthogonal decomposition, and fit an ODE model of the temporal coefficients. Given the simple, globally coherent spatial structure of the flow, we proceed with the latter.
To account for translational invariance, we first represent the temperature field with a Fourier series in the $x$-direction:
$$T(x, y, t) = \sum_k \hat{T}_k(y, t)\, e^{ikx}.$$
Recognizing that the weakly supercritical solution has only a few active Fourier components (see Fig. 6), we define a real-valued observable $a(t)$ as the wall-normal integral of the negative temperature gradient (proportional to heat flux) at the dominant wavenumber:
Note that this choice is arbitrary; we could, for instance, model the coefficient of a particular Fourier-Chebyshev basis function, or that of a leading POD mode. All would exhibit qualitatively similar behavior, but this definition of $a$ is a simple and representative global observable.
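A sketch of how such an observable might be computed from a temperature snapshot (an illustrative reconstruction with our function name; the exact mode index and normalization used here may differ):

```python
import numpy as np

def heat_flux_observable(T, y, k=1):
    """Given a temperature snapshot T[j, i] on wall-normal points y[j] and
    uniformly spaced periodic wall-parallel points x[i], project onto Fourier
    mode k in x and integrate the negative temperature gradient across the gap."""
    That = np.fft.rfft(T, axis=1) / T.shape[1]     # Fourier coefficients in x
    mode = That[:, k]                              # wall-normal profile of mode k
    grad = np.gradient(mode, y)                    # wall-normal temperature gradient
    # trapezoidal rule for the wall-normal integral of -dT/dy
    return np.real(np.sum(-0.5 * (grad[1:] + grad[:-1]) * np.diff(y)))
```

Evaluating this on each stored snapshot yields the scalar time series $a(t)$ used in the regression below.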
We set up the dimensionless SINDy problem as described in Sec. 3.3. Since we expect Landau-type dynamics for the symmetry-breaking behavior, we choose a linear-cubic polynomial library in $a$ and search for one dimensionless group $\pi$, along with an appropriate dimensionless time. Since there are only six independent dimensional quantities in this problem, it is inexpensive to perform the optimization over all dimensionless numbers composed of small integer powers. This has the advantage of producing a simple number suitable for comparison to the classical result, but it is not strictly necessary; the brute-force search could be avoided with a constrained optimization approach, as described in Sec. 3.1 and the boundary layer example in Sec. 4.3.
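The brute-force enumeration of integer-exponent candidates in the nullspace of the units matrix can be sketched as (our helper name; implementation details of the search used here are not shown):

```python
import itertools
import numpy as np

def integer_pi_candidates(D, max_power=2):
    """Enumerate nontrivial integer exponent vectors p with |p_j| <= max_power
    that lie in the nullspace of the units matrix D (i.e. D @ p = 0), so that
    prod_j q_j^{p_j} is dimensionless. Groups related by an overall sign flip
    are reported once."""
    n_vars = D.shape[1]
    seen, candidates = set(), []
    for p in itertools.product(range(-max_power, max_power + 1), repeat=n_vars):
        p = np.array(p)
        if not p.any() or not np.all(D @ p == 0):
            continue  # skip the trivial vector and dimensional combinations
        # normalize sign so the first nonzero exponent is positive
        key = tuple(p) if next(v for v in p if v) > 0 else tuple(-p)
        if key not in seen:
            seen.add(key)
            candidates.append(np.array(key))
    return candidates
```

The cost grows as $(2\,p_{\max}+1)^{n}$ in the number of quantities $n$, which is why this approach is practical here with only six dimensional quantities but is replaced by continuous constrained optimization in higher-dimensional settings.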
The optimal dimensionless number selected by the algorithm is the inverse of the Rayleigh number, $\pi = \mathrm{Ra}^{-1}$, with dimensionless time scaled by the free-fall time. The dynamics model is a normal form for a pitchfork bifurcation:
$$\frac{da}{d\tau} = \lambda \left( 1 - \mathrm{Ra}_c\, \pi \right) a - \gamma a^3,$$
with an estimated critical Rayleigh number $\mathrm{Ra}_c$, growth rate $\lambda$, and Landau parameter $\gamma$.
The fixed points of this model are easily found and compared to the steady states of the simulation, as shown in Fig. 6. The model closely matches the steady states not only of the training points, but also of the withheld test points much farther from the bifurcation (shown as red crosses). Of course, this is a simple model of a well-understood instability, but the ability to derive normal forms directly from dimensional data opens the possibility of modeling more complex bifurcations, including cases where the control parameters are not well established.
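For a pitchfork normal form of the Landau type, the fixed-point calculation is elementary; writing the model generically as $\dot{a} = \mu a - \gamma a^3$ with $\mu \propto (\mathrm{Ra} - \mathrm{Ra}_c)$:

```latex
\dot{a} = \mu a - \gamma a^3 = 0
\quad \Longrightarrow \quad
a^* = 0 \quad \text{or} \quad a^* = \pm \sqrt{\mu / \gamma}.
```

The nontrivial branches exist only for $\mu > 0$, i.e. above the critical Rayleigh number, and grow as the square root of the distance from the bifurcation, which is the scaling visible in the steady-state amplitudes of Fig. 6.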
Finally, we have shown that data can be used to discover an “optimal” set of dimensionless numbers that honor the Buckingham Pi theorem according to hypothesis 1. However, the definition of that optimality is non-unique, leaving open the question of how best to integrate data with the Buckingham Pi theorem. In particular, the methods introduced in Sec. 3 all seek to identify dimensionless groups for which the corresponding function approximation (KRR, BuckiNet, or SINDy) can best represent the input-output relationship. Choosing a SINDy representation of this relationship encodes an inductive bias towards low-order polynomials, which is often appropriate in dynamical systems applications or asymptotic expansions. In these contexts, the identified parameters are more likely to have a familiar structure, such as the ratio of different terms in a governing equation. On the other hand, the optimal dimensionless parameters as seen by more flexible and general representations, such as neural networks or kernel regression, may be completely different from those derived by classical analytic approaches, giving insight into scaling Pi-groups that cannot be derived by hand. For instance, the Blasius solution cannot easily be represented with low-order polynomials, but at the cost of interpretability, kernel regression is able to accurately model the boundary layer with the nonstandard nondimensionalization in equation (32).
More generally, dimensional analysis is a widely used method for physics discovery and characterization. Importantly, it highlights the fact that physical laws do not depend on any specific units of measurement. Rather, physical laws are fundamental and have the property of generalized homogeneity; thus they are independent of the observer. Advancements across scientific disciplines have used such concepts to develop governing equations, from gravitation to electromagnetism. Specifically, dimensionality reduction has identified critical symmetries, parametrizations, and bifurcation structures underlying a given model. As such, dimensionality reduction has always been a critical component of establishing the canonical models across disciplines. However, when encountering new physical systems where the governing equations are unknown or their parametrizations undetermined, dimensionality reduction is once again critical for determining the general form to which any physical equation is reducible.
Edgar Buckingham provided the rigorous mathematical foundations on which dimensional analysis rests. It is a constructive procedure that significantly constrains the space of hypothesized models. Although it greatly constrains the allowable model space, Buckingham Pi theory does not produce a unique model, but rather a small subset of parametrizations, which are then typically chosen from expert knowledge or asymptotic considerations. With modern machine learning, we have shown that the Buckingham Pi procedure can be automated to discover a diversity of important physical features and properties. In particular, we have shown that, depending on the objective, there are three distinct methods by which we can produce a model through dimensional analysis. Each method is framed as an optimization problem which either imposes hard, soft, or no constraints on the nullspace of the units matrix, or optimizes the robustness, accuracy, and speed of the selected model. Specifically, we develop three data-driven techniques that use the Buckingham Pi theorem as a constraint: (i) a constrained optimization problem with a non-parametric input-output fitting function, (ii) a deep learning algorithm (BuckiNet) that projects the input parameter space to a lower dimension in the first layer, and (iii) a technique based on the sparse identification of nonlinear dynamics (SINDy) to discover dimensionless equations whose coefficients determine important features of the dynamics, such as inertia and bifurcations. Such regularizations resolve the ill-posedness of Buckingham Pi theory and its non-unique solution space. We demonstrated each method on a set of well-known physics problems, showing that, without any prior knowledge of the underlying physics, the various architectures are able to extract the key concepts from the data alone.
The suite of algorithms introduced provides a set of physics-informed learning tools that can help characterize new systems and the underlying general form to which the physical system is reducible, regardless of whether governing equations are known. This includes extracting, in an automated fashion, the system’s symmetries, parametric dependencies, and potential bifurcation parameters. Although modern machine learning can simply learn accurate representations of input-output relations, the imposition of Buckingham Pi theory allows for interpretable and explainable models. Explainable AI, especially in the context of physics-based systems, has grown in importance, as it is imperative to understand the feature space on which modern AI-empowered autonomy, for instance, makes decisions with guaranteed safety margins.
The authors acknowledge support from the Army Research Office (ARO W911NF-19-1-0045) and the National Science Foundation AI Institute in Dynamic Systems (Grant No. 2112085). JLC acknowledges funding support from the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program.
-  Grigory Isaakovich Barenblatt. Scaling, self-similarity, and intermediate asymptotics: dimensional analysis and intermediate asymptotics. Number 14. Cambridge University Press, 1996.
-  Susan G Sterrett. Physically similar systems-a history of the concept. In Springer handbook of model-based science, pages 377–411. Springer, 2017.
-  Edgar Buckingham. On physically similar systems; illustrations of the use of dimensional equations. Physical Review, 4(4):345, 1914.
-  Zachary del Rosario, Minyong Lee, and Gianluca Iaccarino. Lurking variable detection via dimensional analysis. SIAM/ASA Journal on Uncertainty Quantification, 7(1):232–259, 2019.
-  Lluís Jofre, Zachary R del Rosario, and Gianluca Iaccarino. Data-driven dimensional analysis of heat transfer in irradiated particle-laden turbulent flow. International Journal of Multiphase Flow, 125:103198, 2020.
-  Kai Fukami and Kunihiko Taira. Robust machine learning of turbulence through generalized Buckingham Pi-inspired pre-processing of training data. In APS Division of Fluid Dynamics Meeting Abstracts, pages A31–004, 2021.
-  Xiaoyu Xie, Wing Kam Liu, and Zhengtao Gan. Data-driven discovery of dimensionless numbers and scaling laws from experimental measurements. arXiv preprint arXiv:2111.03583, 2021.
-  E. Cerda and L. Mahadevan. Geometry and physics of wrinkling. Phys. Rev. Letters, 90(7):074302, 2003.
-  S. W. Morris, E. Bodenschatz, D. S. Cannell, and G. Ahlers. Spiral defect chaos in large aspect ratio Rayleigh-Bénard convection. Physical Review Letters, 71(13):2026, 1993.
-  X. D. Shi, M. P. Brenner, and S. R. Nagel. A cascade of structure in a drop falling from a faucet. Science, 265(5169):219–222, 1994.
-  B. Grzybowski, H. A. Stone, and G. M. Whitesides. Dynamic self-assembly of magnetized, millimetre-sized objects rotating at a liquid-air interface. Nature, 405:1033–1036, 2000.
-  A. Seminara, T. E. Angelini, J. N. Wilking, H. Vlamakis, S. Ebrahim, R. Kolter, D. A. Weitz, and M. P. Brenner. Osmotic spreading of bacillus subtilis biofilms driven by an extracellular matrix. Proceedings of the National Academy of Sciences, 109(4):1116–1121, 2012.
-  M. C. Cross and P. C. Hohenberg. Pattern formation outside of equilibrium. Reviews of modern physics, 65(3):851, 1993.
-  Jared L Callaham, James V Koch, Bingni W Brunton, J Nathan Kutz, and Steven L Brunton. Learning dominant physical processes with data-driven balance models. Nature communications, 12(1):1–10, 2021.
-  Philip Holmes and John Guckenheimer. Nonlinear oscillations, dynamical systems, and bifurcations of vector fields, volume 42 of Applied Mathematical Sciences. Springer-Verlag, Berlin, Heidelberg, 1983.
-  Or Yair, Ronen Talmon, Ronald R Coifman, and Ioannis G Kevrekidis. Reconstruction of normal forms by learning informed observation geometries from data. Proceedings of the National Academy of Sciences, page 201620045, 2017.
-  Manu Kalia, Steven L Brunton, Hil GE Meijer, Christoph Brune, and J Nathan Kutz. Learning normal form autoencoders for data-driven discovery of universal, parameter-dependent governing equations. arXiv preprint arXiv:2106.05102, 2021.
-  Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81–85, 2009.
-  Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016.
-  Samuel H Rudy, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4):e1602614, 2017.
-  Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, 2021.
-  Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
-  George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. Nature Reviews Physics, 3(6):422–440, 2021.
-  Frank Noé, Simon Olsson, Jonas Köhler, and Hao Wu. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, 365(6457):eaaw1147, 2019.
-  MP Brenner, JD Eldredge, and JB Freund. Perspective on machine learning for advancing fluid mechanics. Physical Review Fluids, 4(10):100501, 2019.
-  Karthik Duraisamy, Gianluca Iaccarino, and Heng Xiao. Turbulence modeling in the age of data. Annual Review of Fluid Mechanics, 51:357–377, 2019.
-  Steven L. Brunton, Bernd R. Noack, and Petros Koumoutsakos. Machine learning for fluid mechanics. Annual Review of Fluid Mechanics, 52:477–508, 2020.
-  M. Sonnewald, R. Lguensat, D. C. Jones, P. D. Dueben, J. Brajard, and V. Balaji. Bridging observations, theory, and numerical simulation of the ocean using machine learning. Environmental Research Letters, 16(7):073008, 2021.
-  B. E. Kaiser, J. A. Saenz, M. Sonnewald, and D. Livescu. Objective discovery of dominant dynamical processes with intelligible machine learning. arXiv:2106.12963, 2021.
-  M. Sonnewald, C. Wunsch, and P. Heimbach. Unsupervised learning reveals geography of global ocean regimes. Earth and Space Science, 6:784–794, 2019.
-  Hao Wu, Andreas Mardt, Luca Pasquali, and Frank Noé. Deep generative Markov state models. 32nd Conference on Neural Information Processing Systems (NeurIPS), 2018.
-  K. Champion, B. Lusch, J. Nathan Kutz, and Steven L. Brunton. Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences, 116(45):22445–22451, 2019.
-  Joseph Bakarji, Kathleen Champion, J Nathan Kutz, and Steven L Brunton. Discovering governing equations from partial measurements with deep delay autoencoders. arXiv preprint arXiv:2201.05136, 2022.
-  S. L. Brunton and J. N. Kutz. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press, 2019.
-  Paul G Constantine, Zachary del Rosario, and Gianluca Iaccarino. Data-driven dimensional analysis: algorithms for unique and relevant dimensionless groups. arXiv preprint arXiv:1708.04303, 2017.
-  Silviu-Marian Udrescu and Max Tegmark. AI Feynman: A physics-inspired method for symbolic regression. Science Advances, 6(16):eaay2631, 2020.
-  David J Gunaratnam, Taivas Degroff, and John S Gero. Improving neural network models of physical systems through dimensional analysis. Applied Soft Computing, 2(4):283–296, 2003.
-  Steven H Strogatz. Nonlinear dynamics and chaos: With applications to physics, biology, chemistry, and engineering. CRC press, 2018.
-  S. B. Pope. Turbulent Flows. Cambridge University Press, 2000.
-  H. Schlichting. Boundary-Layer Theory. McGraw-Hill, 1955.
-  J. Guckenheimer and P. Holmes. Nonlinear oscillations, dynamical systems, and bifurcations of vector fields. Springer, 1983.
-  P. Meliga, J.-M. Chomaz, and D. Sipp. Global mode interaction and pattern selection in the wake of a disk: a weakly nonlinear expansion. Journal of Fluid Mechanics, 633:159–189, 2009.
-  M. Carini, F. Auteri, and F. Giannetti. Centre-manifold reduction of bifurcating flows. J. Fluid Mech., 767:109–145, feb 2015.
-  M. Golubitsky and W. Langford. Pattern formation and bistability in flow between counterrotating cylinders. Physica D, 32:362–392, 1988.
-  E. N. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2):130–141, Mar 1963.
-  A. Pandey, J. D. Scheel, and J. Schumacher. Turbulent superstructures in Rayleigh-Bénard convection. Nature Communications, 9(2118), 2018.
-  S. Chandrasekhar. Hydrodynamic and Hydromagnetic Stability. Clarendon Press, Oxford, 1961.
-  Shenfun: High performance spectral Galerkin computing platform. Journal of Open Source Software, 3(31):1071, 2018.
-  J.-C. Loiseau and S. L. Brunton. Constrained sparse Galerkin regression. Journal of Fluid Mechanics, 838:42–67, 2018.