Integrated Intelligent Jaya Runge-Kutta Method for Solving Falkner-Skan Equations for Various Wedge Angles

10/09/2020
by   Hongwei Guo, et al.
Ton Duc Thang University
University of Hannover

In this work, a hybrid intelligent computing method, which combines the efficient Jaya algorithm with the classical Runge-Kutta method, is applied to solve the Falkner-Skan equations with various wedge angles, a fundamental problem in computational fluid mechanics. Through a coordinate transformation, the Falkner-Skan boundary layer problem is converted into a free boundary problem defined on a finite interval. Using order reduction, the whole problem then boils down to solving a system of coupled differential equations with prescribed initial and boundary conditions. The hybrid Jaya Runge-Kutta method is found to yield stable and accurate results and to extract the unknown parameters reliably. The sensitivity of the classical shooting method to the guess of initial values is easily overcome by the integrated robust optimization method. In addition, the Jaya algorithm, which requires no tuning of algorithm-specific parameters, proves effective and stable in minimizing the fitness function. Compared with particle swarm optimization (PSO), the genetic algorithm (GA), Hyperband, and classical analytical methods, the hybrid Jaya Runge-Kutta method yields more stable and accurate results, which shows great potential for solving more complicated multi-field and multiphase flow problems.



1 Introduction

Boundary layer flow problems over a stretching or shrinking surface have significant applications in many industrial and technological fields, such as the cooling of an infinite metallic plate in a cooling bath, the aerodynamic extrusion of plastic sheets, and the extraction of polymers, rubber, and glass fiber. The Falkner-Skan problem is a notable similarity solution of the steady two-dimensional laminar boundary layer equations, and it includes the Blasius and stagnation point solutions as special cases.

Historically, the Falkner-Skan equation was first developed by Falkner and Skan in 1931 [falkneb1931lxxxv], and it plays an essential role in the fluid mechanics of viscous boundary layer flow. It is derived from the two-dimensional incompressible Navier-Stokes equations. It exists in many forms with varying values of the parameters α and β, and its solution describes the steady two-dimensional laminar boundary layer that forms on a wedge. Over the past years, many numerical and analytical methods have been investigated for the Falkner-Skan equation. Existence and uniqueness results were established by Rosenhead (1963), Weyl (1942), Hartman (1972), and Tam (1970) [duque2011numerical]. Shooting and invariant imbedding, proposed by Hartree, Smith, Cebeci and Keller, and Na, are the earliest computational approaches [asaithambi1998finite]. Other numerical methods include the finite-element method of Asaithambi [asaithambi2005solution] and the modification of the classical Newton's method by Zhang and Chen, among others [zhang2013particle].

Because of limitations in the accuracy and efficiency of these classical numerical methods for solving the Falkner-Skan equations, heuristic algorithms have been successfully applied to nonlinear problems in engineering and applied science [malik2015numerical, ullah2018evolutionary]. Heuristic algorithms, which are essentially trial-and-error methods, are used to find good solutions effectively and quickly. Many researchers have devoted their attention to heuristic algorithms and have developed a series of them. The genetic algorithm (GA) has been applied by Vakil-Baghmisheh et al. [vakil2008crack] and Liu and Jiao [liu2011application], and particle swarm optimization (PSO) by Yu and Chen [yu2010bridge] and Nanda, Maity, and Maiti [nanda2014crack]. Zare Hosseinzadeh et al. proposed a cuckoo search (CS) algorithm [hosseinzadeh2014flexibility], and evolutionary intelligence algorithms were proposed by Bagheri, Razeghi, and Ghodrati Amiri [bagheri2012detection]. Recently, Li et al. [li2017hyperband] developed the Hyperband optimization method based on a novel bandit-based approach. Evolutionary computing algorithms have also proved effective for solving PDEs: Tsoulos and Lagaris [tsoulos2006solving] and Saini et al. [saini2013genetic] applied genetic algorithms to partial differential equations, while Babaei [babaei2013general] and Yadav et al. [yadav2017efficient] applied particle swarm methods. Also, deep neural network based energy and collocation methods have been applied to solving partial differential equations [guo2019deep, lin2020deep, nguyen2019deep, anitescu2019artificial, samaniego2020energy]. However, the major drawback of those previous soft computing techniques, as well as of meta-heuristic and evolutionary optimizations, is the possibility of converging to solutions that are not globally optimal but instead are trapped at a local optimum [wadood2019application].

Recently, an advanced optimization algorithm called Jaya (a Sanskrit word meaning victory) was proposed by Venkata Rao in 2016 [rao2016Jaya]; it is easy to implement and does not need any algorithm-specific control parameters. The Jaya algorithm can solve both constrained and unconstrained optimization problems, and it has been shown to converge to the global optimum in fewer iterations and less computational time [wadood2019application]. The results in this work confirm that the Jaya algorithm is more effective, accurate, and stable than the other optimization algorithms considered.

In this work, we first describe the mathematical model of the Falkner-Skan problem and apply a transformation and order reduction to the Falkner-Skan equation to obtain a system of coupled differential equations with prescribed boundary conditions. Then the optimization algorithms involved in the application are briefly presented. The Jaya method is integrated with the classical Runge-Kutta method and used to solve the Falkner-Skan equation with various wedge angles. As a comparison, the newly proposed Hyperband method is also applied to partial differential equations for the first time. The Runge-Kutta method combined with the Jaya algorithm is used to solve the Falkner-Skan equation in the following manner:

1. Using a coordinate transformation, the semi-infinite domain of the Falkner-Skan equation is mapped onto the finite unit interval.

2. Using the Runge-Kutta method, the solution of the Falkner-Skan equation is reduced to finding the unknown initial value f''(0) and the free boundary η∞.

3. Based on the fitness function, the optimization algorithm finds the optimal values of f''(0) and η∞.

4. The optimal values of f''(0) and η∞ are then used to compute the solution of the Falkner-Skan equation by marching the Runge-Kutta scheme over the whole domain.

2 Mathematical models for Falkner-Skan equation

In the case of the steady boundary layer flow of an incompressible viscous fluid over a wedge, the two-dimensional continuity and momentum equations and their boundary conditions [schlichting2016boundary] are given by:

∂u/∂x + ∂v/∂y = 0   (1)
u ∂u/∂x + v ∂u/∂y = U dU/dx + ν ∂²u/∂y²   (2)

where u and v are the velocity components in the x and y directions of the fluid flow, ν is the kinematic viscosity, and U is the velocity at the edge of the boundary layer. Assuming that the velocity of the ambient flow and of the moving wedge follow a power-law free stream velocity, U(x) = U∞ (x/L)^m, where U∞ is the uniform free stream velocity, L is the length of the wedge, x is measured from the tip of the wedge, and m is the Falkner-Skan power-law parameter.

The relevant boundary conditions are given by:

(3)

The continuity Equation 1 is automatically satisfied by introducing the stream function ψ(x, y), which gives:

u = ∂ψ/∂y,   v = -∂ψ/∂x   (4)

and substituting into the momentum Equation 2 yields

(5)

Using the similarity transformation, two dimensionless variables are obtained:

(6)

, with f(η) a dimensionless stream function and η a dimensionless distance. The partial differential equation can then be reduced to a third-order nonlinear ordinary differential equation:

(7)

The associated boundary conditions are given by:

(8)

and

(9)

where α and β are constants and βπ is the angle of the wedge; β is related to the Falkner-Skan power-law parameter m by β = 2m/(m+1).

Figure 1: Geometry of Falkner-Skan flow over a wedge. (a) Domain of the potential flow solution with nonlinear free stream velocity U(x). (b) Domain of the Falkner-Skan boundary layer. (c) and (d) show further cases.

The geometry of the Falkner-Skan flow over a wedge is shown in Figure 1, which depicts the potential flow and the thin viscous boundary layer regions. The Falkner-Skan Equation 7 governing this phenomenon is a nonlinear, third-order, free boundary value problem, where f' defines the dimensionless velocity component in the flow direction and f'' the dimensionless shear stress in the boundary layer. Moreover, the solution of this equation is sought in a semi-infinite domain, and one of the boundary conditions is assigned asymptotically on the first derivative at infinity. It is therefore not easy to construct a closed-form solution for this two-point boundary value problem.
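Since the precise form of Equation 7 matters for everything that follows, it is useful to record the widely used two-parameter statement of the similarity problem in Equations 7 to 9. The version below is the standard form consistent with the constants α and β and with the f''(0) values reported in Section 6; the normalization in the original derivation may differ slightly:

\[ f'''(\eta) + \alpha\, f(\eta)\, f''(\eta) + \beta \left[ 1 - \big( f'(\eta) \big)^{2} \right] = 0 , \qquad 0 < \eta < \infty , \]
\[ f(0) = 0 , \qquad f'(0) = 0 , \qquad \lim_{\eta \to \infty} f'(\eta) = 1 , \qquad \beta = \frac{2m}{m+1} . \]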

3 Transformation and order reduction

First of all, we replace the boundary condition at infinity in Equation 9 with a free boundary condition, as Asaithambi did in [asaithambi2005solution]:

(10)

where η∞ is an unknown truncated boundary (a free boundary). The whole problem is then converted into a free boundary problem defined on a finite interval, where the “sufficiently large” value of η∞ is determined as part of the solution.

To ensure that the asymptotic boundary condition is imposed properly, an additional boundary condition needs to be added for the semi-infinite physical domain [asaithambi2005solution]:

(11)

If a shooting algorithm is employed to solve this free boundary problem, governed by Equation 7 subject to the boundary conditions in Equations 8, 10, and 11, an initial value condition is added at η = 0:

(12)

It is obvious that different values of f''(0) will lead to different values of f' as η approaches infinity. The value f''(0) is generally used to characterize solutions of the Falkner-Skan equation for different α and β, and it will be used for model evaluation. It is well known that the shooting method can fail to converge for problems whose solutions are very sensitive to the initial conditions, and many strategies have been proposed to tackle this problem [holsapple2004new]. Here, a new, simple, and straightforward approach is adopted for the whole scheme, which handles the unknown initial boundary condition and the free boundary with a robust optimization method.

For simplicity, a coordinate transformation is first applied:

(13)

which transforms the physical domain from [0, η∞] to [0, 1] and yields the transformed Falkner-Skan equation:

(14)

and the boundary conditions are changed correspondingly:

(15)

In order to solve the Falkner-Skan equation, the reduction of order technique is then applied:

(16)

and the Falkner-Skan equation becomes

(17)

with the initial conditions:

(18)

and the boundary conditions:

(19)
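To make the reduced problem concrete, the following minimal Python sketch encodes the first-order system on ξ ∈ [0, 1], assuming the standard form f''' + α f f'' + β(1 - f'²) = 0 and the scaling ξ = η/η∞; the names u1, u2, u3 (for f, f', f'') and eta_inf (for η∞) are illustrative and not taken from the paper.

import numpy as np

def falkner_skan_rhs(xi, u, alpha, beta, eta_inf):
    """Right-hand side of the reduced Falkner-Skan system on xi in [0, 1].

    u = (u1, u2, u3) = (f, f', f'') evaluated at eta = eta_inf * xi.
    The factor eta_inf enters through the chain rule d/dxi = eta_inf * d/deta.
    Initial conditions: u1(0) = 0, u2(0) = 0, u3(0) = f''(0) (unknown).
    Target conditions at xi = 1: u2(1) = 1, u3(1) = 0.
    """
    u1, u2, u3 = u
    return np.array([
        eta_inf * u2,                                         # d u1 / d xi
        eta_inf * u3,                                         # d u2 / d xi
        -eta_inf * (alpha * u1 * u3 + beta * (1.0 - u2**2)),  # d u3 / d xi
    ])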

4 The Optimization Algorithm

In this part, we will briefly introduce some classical heuristic algorithms and newly developed optimization algorithms, which will be used in the numerical example comparisons.

4.1 PSO

Inspired by the social intelligence of flocking birds, particle swarm optimization (PSO) was proposed by Kennedy and Eberhart [kennedy1995particle]. PSO is a heuristic search process. The search space of the optimization problem is analogous to the flying space of birds, and the search for the optimal solution can be described as the birds' search for food. In the process, the local optimum position of each particle and the global optimum position of the population are tracked, and with the help of the current optimal particles the velocity of each particle is adjusted to search for the optimal solution. PSO is a classical swarm intelligence approach with several advantages: the calculation is relatively simple, few control parameters are needed, and it can obtain an accurate optimal solution. However, PSO also has a drawback: due to the lack of individual diversity, it easily falls into local optima as the search iterations proceed.

Assume that the swarm has P particles with position and velocity vectors at iteration t; each particle is then updated by the following equation:

(20)

where i = 1, 2, …, P indexes the particles, j = 1, 2, …, n the dimensions, and w is the inertia weight, a positive constant. The process of PSO [le2019hybrid] is presented in Fig. 2.

Figure 2: The Particle Swarm Optimization
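As an illustration of the update rule in Equation 20, the sketch below performs one PSO iteration; the inertia weight w and the acceleration coefficients c1 and c2 are standard ingredients of PSO, and the default values used here are illustrative rather than those of the paper.

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO iteration for a swarm of P particles in n dimensions.

    x, v, pbest : arrays of shape (P, n) with positions, velocities, personal bests
    gbest       : array of shape (n,) with the best position found by the swarm
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # random scaling factors in [0, 1]
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new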

4.2 Hyperband Algorithm

Hyperband is built on the SuccessiveHalving algorithm. The process can be described as follows: in the search space, we sample hyper-parameter sets randomly; after a given number of iterations the validation loss is evaluated; then we discard the lowest-performing half of the configurations and keep the better half. In the next round, the surviving configurations are run for more iterations, and again half are discarded. The process is repeated until only one model remains. Hyperband is a variation of random search that adaptively assigns a pre-defined resource budget to each randomly sampled configuration. In the process of optimization, Hyperband finds a good time allocation for each configuration. Compared with Bayesian optimization methods on several hyperparameter optimization problems, Hyperband can find a near-optimal solution faster, its optimization performance is good, and it can run configurations in parallel. The process of the Hyperband algorithm [kainz2019efficient] is presented in Table 1.

input: R, η
1  initialization: s_max = ⌊log_η(R)⌋ and B = (s_max + 1)R;
2  for s ∈ {s_max, s_max - 1, …, 0} do
3    n = ⌈(B/R)·η^s/(s + 1)⌉, r = R·η^(-s);
4    T = get_hyperparameter_configuration(n);
5    for i ∈ {0, …, s} do
6      n_i = ⌊n·η^(-i)⌋, r_i = r·η^i;
7      L = {run_then_return_val_loss(t, r_i) : t ∈ T};
8      T = top_k(T, L, ⌊n_i/η⌋);
9    end;
10 end;
11 output: configuration with the lowest validation loss seen so far;
Table 1: Hyperband algorithm
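The core of Hyperband is the successive-halving bracket executed in the inner loop of Table 1. A minimal sketch of one such bracket is given below; sample_config and run_config stand for steps 4 and 7 of the table and are user-supplied callables with illustrative names, and eta = 3 is a commonly used default rather than a value stated in the paper.

import numpy as np

def successive_halving(sample_config, run_config, n, r, eta=3):
    """One SuccessiveHalving bracket: start n random configurations with budget r,
    keep the best 1/eta of them after every rung and grow the budget by eta."""
    configs = [sample_config() for _ in range(n)]
    budget = r
    while True:
        losses = [run_config(c, budget) for c in configs]
        order = np.argsort(losses)                       # lowest loss first
        if len(configs) <= eta:                          # final rung
            return configs[order[0]]
        configs = [configs[j] for j in order[: len(configs) // eta]]
        budget *= eta                                    # survivors get more budget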

4.3 Genetic Algorithm

The GA is based on the genetic and evolutionary processes of nature. With a fitness function, the GA selects parameters through selection, crossover, and mutation. In the process, parameters that yield good solutions are retained and parameters that yield bad solutions are eliminated, and the new populations inherit from them. The solutions of the new generation are better than those of the previous generation. The process is repeated until the optimization criteria are satisfied, at which point the optimal solution is found. The process of the genetic algorithm is shown in Fig. 3.

Figure 3: Genetic Algorithm

The genetic algorithm has several advantages: compared with traditional methods it is more efficient, it has good parallel capabilities, it can optimize both continuous and discrete functions as well as multi-objective problems, and it is suitable for problems with a large search space and a large number of parameters. There are also some disadvantages: for some problems the computation is expensive because the fitness value is evaluated repeatedly, so the computation time is relatively long, and the genetic algorithm can hardly guarantee the optimality or quality of the solution.
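For comparison with the other optimizers, the sketch below implements one generation of a simple real-coded GA; the operators chosen here (binary tournament selection, arithmetic crossover, Gaussian mutation) are common textbook choices and not necessarily those used in the paper.

import numpy as np

def ga_generation(pop, fitness, mut_sigma=0.1, rng=None):
    """One generation of a real-coded GA on a population of shape (N, n).
    Lower fitness is better (minimization)."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([fitness(ind) for ind in pop])
    N = len(pop)
    children = []
    while len(children) < N:
        # binary tournament selection of two parents
        i1 = rng.integers(0, N, size=2)
        i2 = rng.integers(0, N, size=2)
        p1 = pop[i1[np.argmin(scores[i1])]]
        p2 = pop[i2[np.argmin(scores[i2])]]
        w = rng.random()
        child = w * p1 + (1.0 - w) * p2                          # arithmetic crossover
        child = child + rng.normal(0.0, mut_sigma, child.shape)  # Gaussian mutation
        children.append(child)
    return np.array(children)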

4.4 Jaya Algorithm

Recently, the Jaya algorithm has become a popular optimization method that can solve different types of optimization problems; for example, Rao used the Jaya algorithm to solve both constrained and unconstrained problems. The word Jaya comes from Sanskrit and means victory. When a particular problem is solved using the Jaya algorithm, the idea is to move toward the best result and to avoid the worst result. The Jaya algorithm is a population-based method: individual solutions are modified repeatedly, the best individual solutions are used to update the population, and in the end the best solution is obtained. The Jaya algorithm is a gradient-free optimization algorithm. Compared with other algorithms, it is not burdened by algorithm-specific hyper-parameters; it only has two common parameters, namely the population size and the number of iterations. The Jaya algorithm is therefore simple to use for finding the optimal solution and does not need any algorithm-specific parameter tuning. It has been applied in many fields: Chattopadhyay applied it in the area of modern machining processes [chattopadhyay1996line], Warid and Hizam used it to obtain an optimal power flow solution [warid2016optimal], and R. Venkata Rao applied it to heat exchangers [rao2018multi].

The Jaya algorithm obtains the optimal values by minimizing an objective function. Suppose the objective function is defined over n decision variables and the population contains a number of candidate solutions, where i indexes the position of a candidate solution; each candidate provides an estimated value of the objective. The variables are updated using Eq. (21).

X'_{j,i} = X_{j,i} + r_1 (X_{j,best} - |X_{j,i}|) - r_2 (X_{j,worst} - |X_{j,i}|)   (21)

where X_{j,best} and X_{j,worst} are the values of the jth variable in the best and worst solutions of the current population, and r_1 and r_2 are random numbers in the range [0, 1] used as scaling factors. The scaling factors pull the update toward the best solution and away from the worst solution in each iteration. Through this procedure, the solutions move closer to the best result and away from the worst solution.

With the Jaya algorithm, the objective function value gradually approaches the optimal solution as the values of the variables are updated. In the process, the fitness of each candidate solution in the population is improved. The process of the Jaya algorithm is presented in the following flowchart.

Figure 4: Flowchart of Jaya algorithm

The performance of the Jaya algorithm is reflected by the minimum value of the objective function that it attains.
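A minimal sketch of one Jaya iteration is given below; the update toward the current best and away from the current worst, the absolute-value terms, and the greedy acceptance of improved candidates follow Rao's original formulation, while the function and variable names are illustrative.

import numpy as np

def jaya_iteration(pop, fitness, rng=None):
    """One Jaya iteration on a float population of shape (N, n), minimizing fitness.
    Each candidate moves toward the current best and away from the current worst;
    the move is kept only if it improves the candidate's fitness."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([fitness(x) for x in pop])
    best = pop[np.argmin(scores)].copy()
    worst = pop[np.argmax(scores)].copy()
    for k in range(len(pop)):
        r1 = rng.random(pop.shape[1])
        r2 = rng.random(pop.shape[1])
        trial = pop[k] + r1 * (best - np.abs(pop[k])) - r2 * (worst - np.abs(pop[k]))
        if fitness(trial) < scores[k]:        # greedy acceptance
            pop[k] = trial
    return pop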

5 The hybrid Jaya-Runge-Kutta Algorithm

As mentioned above, the shooting method can be sensitive to initial boundary conditions or to unknown parameters in the equations [butcher2016numerical], in particular to the guesses of the unknown initial value f''(0) and the free boundary η∞ in Equations 17 to 19. The advancement of heuristic algorithms, however, opens a new door for dealing with this type of problem easily and accurately.

5.1 The Runge–Kutta method

The Runge-Kutta method is a popular numerical method that solves a differential equation using a one-step nonlinear approximation. Owing to its simplicity and efficiency, the method is widely used for solving initial-value problems of differential equations. The fourth-order Runge-Kutta method, chosen here as the illustration, is shown in Fig. 5. In each step the derivative is evaluated four times: once at the initial point, twice at trial midpoints, and once at a trial endpoint; the final function value, shown as a filled dot, is then calculated from these derivatives.

Figure 5: Fourth-order Runge-Kutta method

Compared with other methods, the fourth-order Runge-Kutta method is a simple and robust scheme.

Considering the initial-boundary value problem of Equations 17 to 19 with a step size h, the following recurrence formula is obtained:

k_1 = h F(x_n, y_n),   k_2 = h F(x_n + h/2, y_n + k_1/2),
k_3 = h F(x_n + h/2, y_n + k_2/2),   k_4 = h F(x_n + h, y_n + k_3),
y_{n+1} = y_n + (k_1 + 2k_2 + 2k_3 + k_4)/6   (22)

where the right-hand side F can be retrieved from Equation 17.
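In code, the recurrence in Equation 22 is the classical fourth-order Runge-Kutta step; the sketch below marches a generic system y' = F(x, y) and will later be fed the reduced Falkner-Skan right-hand side.

import numpy as np

def rk4_solve(rhs, y0, x0, x1, n_steps):
    """Integrate y' = rhs(x, y) from x0 to x1 with n_steps classical RK4 steps
    and return the state at x1."""
    h = (x1 - x0) / n_steps
    x = x0
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(x, y)
        k2 = rhs(x + 0.5 * h, y + 0.5 * h * k1)
        k3 = rhs(x + 0.5 * h, y + 0.5 * h * k2)
        k4 = rhs(x + h, y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x += h
    return y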

5.2 The hybrid Jaya-Runge–Kutta method

Unlike the classical shooting method, we start from a parameter interval rather than a single initial guess to find appropriate values of f''(0) and η∞. We first define the objective function for the Jaya algorithm.

(23)

Since the boundary values at the far end are only approximated numerically, the objective function for the Jaya algorithm can be reduced to:

(24)
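Concretely, using the falkner_skan_rhs and rk4_solve sketches from the previous sections, a plausible reading of Equation 24 is the squared deviation of the integrated solution from the target boundary values f'(η∞) = 1 and f''(η∞) = 0, with the decision variables f''(0) and η∞ packed into a single parameter vector:

def fitness(params, alpha, beta, n_steps=200):
    """Boundary residual used as the Jaya objective: integrate the reduced
    Falkner-Skan system for a candidate (f''(0), eta_inf) and penalize the
    deviation from f'(1) = 1 and f''(1) = 0 at the far end xi = 1."""
    fpp0, eta_inf = params
    y_end = rk4_solve(
        lambda xi, u: falkner_skan_rhs(xi, u, alpha, beta, eta_inf),
        y0=[0.0, 0.0, fpp0], x0=0.0, x1=1.0, n_steps=n_steps)
    return (y_end[1] - 1.0) ** 2 + y_end[2] ** 2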

The Jaya algorithm is then deployed to minimize the objective function, with an initial population of size 20 and a maximum of 100 iterations as the termination criterion.

The objective function depends on the two decision variables. Suppose X_{j,k} is the value of the jth variable for the kth candidate solution, where j = 1, 2 and k = 1, 2, …, N; thus X_k is the position of the kth candidate solution. Let the best candidate solution give the best value of the objective function in the present population and the worst candidate solution the worst value. Then each solution is modified based on the best and worst solutions as:

X'_{j,k} = X_{j,k} + r_1 (X_{j,best} - |X_{j,k}|) - r_2 (X_{j,worst} - |X_{j,k}|)   (25)

where X'_{j,k} is the updated value of X_{j,k}. With the Jaya optimization technique, the solution moves closer to the best result and away from the worst solution.

In what follows, the pseudo-code of the hybrid Jaya-Runge-Kutta algorithm used to solve the proposed optimization problem is given in Table 2.

Initialize
 randomly initialize the candidates X_i = (f''(0), η∞)_i, i = 1, 2, …, N (population size)
while the termination criterion is not met do
  calculate f'(ξ = 1) and f''(ξ = 1) by the Runge-Kutta method
  evaluate the fitness function in Equation 24, i = 1, 2, …, N
   for i in range(N) do
    find the best and worst candidate solutions.
    for j in range(2) do
     update the variables by Eq. 25
     next j (dimension of variables)
    end
    next i
   end
  next generation until termination criterion satisfied
end while
Table 2: Pseudo-code of the hybrid Jaya-Runge-Kutta algorithm
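Putting the pieces together, a minimal end-to-end driver for the Blasius case, reusing the falkner_skan_rhs, rk4_solve, jaya_iteration, and fitness sketches above, might look as follows; the population size of 20 and the 100 iterations follow Section 5.2, while the search bounds for f''(0) and η∞ are illustrative.

import numpy as np

alpha, beta = 0.5, 0.0                        # Blasius case
obj = lambda p: fitness(p, alpha, beta)

rng = np.random.default_rng(0)
# population of 20 candidates (f''(0), eta_inf); the bounds are illustrative
pop = rng.uniform([0.05, 2.0], [2.0, 15.0], size=(20, 2))

for _ in range(100):                          # 100 Jaya iterations
    pop = jaya_iteration(pop, obj, rng=rng)

best = min(pop, key=obj)
print("f''(0) = %.6f, eta_inf = %.3f, residual = %.2e" % (best[0], best[1], obj(best)))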

6 Numerical results

Next, the hybrid method is used to solve the transformed optimization problem; the Jaya algorithm is used to find the global minimum of the fitness function. With the best parameters f''(0) and η∞ identified by the Jaya algorithm, the Runge-Kutta method is then applied to solve the coupled differential equations. In order to test the ability of the Jaya algorithm, the parameters identified by the optimization methods are compared with results from the open literature [zhang2009iterative, asaithambi2005solution]. The numerical results for the stream function f, the velocity profile f', and the skin friction coefficient f'' are illustrated for specific flow problems. The velocity profiles along the coordinate are compared with reference solutions from [ahmad2017stochastic] to show the accuracy of the present method relative to other robust optimization methods.

For different coefficients α and β, the Falkner-Skan problem can be divided into the following four categories:

  • Blasius equation: α = 0.5, β = 0

  • Homann problem: α = 2, β = 1

  • Accelerating flows: β > 0

  • Decelerating flows: β < 0

6.1 Case 1: Blasius equation with α = 0.5 and β = 0

Figure 6: Convergence of the fitness for α = 0.5 and β = 0

In this case, the Falkner-Skan equation is known as the Blasius equation. The solution is obtained using the Runge-Kutta method integrated with the Jaya algorithm, the PSO algorithm, the Hyperband algorithm, and the GA, and is compared with the reference solution from [ahmad2017stochastic]. First, the convergence history of the fitness function over 100 iterations is presented in Figure 6. It is clear that the fitness function reaches its global minimum after a number of iterations, which demonstrates the ability of the Jaya algorithm as a global optimizer. Fig. 7 shows the stream function f, its velocity f', and the skin friction coefficient f''; the velocity profile goes asymptotically to 1 and the skin friction coefficient goes asymptotically to 0, which is exactly what Equations 9 and 11 require. In Table 3, the solutions from the different optimization methods, including Jaya, PSO, Hyperband, and GA, are compared with the reference solution. It is clear that our method agrees very well with the reference solution. To make this clearer, the absolute errors of the listed optimization methods along the coordinate are presented in Fig. 8. It can be observed that the Jaya algorithm obtains the most stable and accurate results.

Figure 7: The stream function of the Blasius equation and its derivatives corresponding to α = 0.5, β = 0
ξ Reference [ahmad2017stochastic] Jaya PSO Hyperband GA
0.1 0.3892354 0.38924351 0.38922149 0.38925129 0.38922613
0.2 0.72239923 0.72241324 0.72237523 0.72235928 0.72238323
0.3 0.91901667 0.91903214 0.91899015 0.91903766 0.91899899
0.4 0.98646615 0.9864808 0.98644101 0.98647555 0.98644939
0.5 0.99877239 0.99878654 0.99874812 0.99877457 0.99875621
0.6 0.9999285 0.99994256 0.9999044 1.00002937 0.99991243
0.7 0.99998388 0.99999793 0.99995978 1.00008464 0.99996781
0.8 0.99998522 0.99999928 0.99996113 1.00008598 0.99996916
0.9 0.99998524 0.99999929 0.99996115 1.000086 0.99996918
1 0.99998524 0.99999929 0.99996115 1.000086 0.99996918
Table 3: Comparison of proposed results with reference solution for Blasius equation
Figure 8: Comparison of velocity of proposed results for Blasius equation

6.2 Case 2: Homann flow with α = 2 and β = 1

In this section, the Falkner-Skan problem is considered as the Homann steady flow. Again, the solution is obtained with the Jaya, PSO, Hyperband, and GA integrated Runge-Kutta methods. First, the convergence history of the fitness function over 100 iterations is presented in Figure 9; the value of the fitness function again drops significantly. Fig. 10 shows the stream function f, its velocity f', and the skin friction coefficient f'' obtained with the Jaya optimization method. In Table 4, the solutions from the different optimization methods, including Jaya, PSO, Hyperband, and GA, are compared with the reference solution [ahmad2017stochastic]. Once again, our method gives the closest agreement. Moreover, the absolute errors of the listed optimization methods along the coordinate are presented in Fig. 11. It can be concluded that the Jaya algorithm obtains the most stable and accurate results and clearly outperforms the other optimization methods.

Figure 9: Convergence of the fitness for α = 2 and β = 1
Figure 10: The stream function for Homann flow and its derivatives corresponding to α = 2, β = 1
ξ Reference [ahmad2017stochastic] Jaya PSO Hyperband GA
0.1 0.51841208 0.51841111 0.5183927 0.51837284 0.51842661
0.2 0.81685809 0.81685613 0.81681886 0.81677866 0.81688751
0.3 0.9483164 0.94831334 0.94825528 0.94819263 0.94836223
0.4 0.98969146 0.98968714 0.98960504 0.98951645 0.98975628
0.5 0.99859956 0.99859383 0.998485 0.99836758 0.99868548
0.6 0.99988001 0.99987281 0.99973603 0.99958845 0.999988
0.7 1.00000237 0.99999368 0.99982862 0.99965052 1.00013269
0.8 1.00001151 1.00000133 0.99980793 0.99979926 1.00016419
0.9 1.00001353 1.00000186 0.99978012 0.99964087 1.00018858
1 1.00001527 1.00000211 0.99975203 0.9996822 1.00021269
Table 4: Comparison of proposed results with reference solution for Homann flow
Figure 11: Comparison of velocity of proposed results for Homann flow

6.3 Case 3: Accelerating flows with α = 1 and 0 ≤ β ≤ 2

The accelerating flows are studied in this section, and several values of β are selected. First, the convergence histories of the fitness function over 100 iterations for the different parameters are presented in Figures 12 to 15. Again, the value of the fitness function reaches a very low level at the end of the iterations.

Figure 12: Convergence of the fitness for α = 1 and β = 0
Figure 13: Convergence of the fitness for α = 1 and β = 0.5
Figure 14: Convergence of the fitness for α = 1 and β = 1
Figure 15: Convergence of the fitness for α = 1 and β = 2

Fig. 16 shows the velocity profiles for the selected cases obtained with the Jaya optimization method. With increasing η, the horizontal velocity profiles go asymptotically to 1, which verifies the asymptotic boundary condition shown in Equation 9. In addition, for accelerating flows, we can observe that as the parameter β increases, the boundary layer thickness increases, and the velocity eventually tends to one as the distance from the initial boundary increases.

Figure 16: The velocity profile corresponding to different β when α = 1

6.3.1 Hiemenz flow problem for α = 1 and β = 1

To be more specific, two classic cases, the Hiemenz flow and the Homann axisymmetric stagnation flow, are compared across Jaya, PSO, Hyperband, GA, and the reference solution [ahmad2017stochastic]. In Table 5, the results obtained with the Jaya method agree very well with the reference solution [ahmad2017stochastic]. To make this clearer, the absolute errors of the listed optimization methods along the coordinate are presented in Fig. 18. For this case, although the Jaya optimizer is not the best at every point, it still gives very accurate results at most points.

Figure 17: The stream function for Hiemenz flow and its derivatives corresponding to α = 1, β = 1
ξ Reference [ahmad2017stochastic] Jaya PSO Hyperband GA
0.1 0.73864188 0.73864962 0.73861094 0.73855389 0.73863705
0.2 0.9583553 0.95837589 0.95827293 0.95812107 0.95834243
0.3 0.99623432 0.99627939 0.99605404 0.99572166 0.99620615
0.4 0.99975182 0.99983548 0.99941718 0.99880024 0.99969953
0.5 0.99987682 1.00001317 0.99933143 0.99892602 0.9997916
0.6 0.99982269 1.00002568 0.9990108 0.99901422 0.99969582
0.7 0.99975239 1.00003594 0.99911834 0.99912817 0.99957517
0.8 0.99966988 1.00004792 0.99962806 0.99937203 0.99943362
0.9 0.99957522 1.00006166 0.99973003 0.99944605 0.99927122
1 0.9994684 1.00007717 0.99983429 0.99995043 0.99908797
Table 5: Comparison of proposed results with reference solution for Hiemenz flow
Figure 18: Comparison of velocity of proposed results for Hiemenz flow

6.3.2 Homann axisymmetric stagnation flow for α = 1 and β = 0.5

For the Homann axisymmetric stagnation flow, the stream function f, its velocity f', and the skin friction coefficient f'' obtained with the Jaya optimization method are first shown in Figure 19. It can again be observed that with increasing η the velocity profile goes asymptotically to 1, which verifies the asymptotic boundary condition shown in Equation 9, and the skin friction coefficient goes asymptotically to 0, which is exactly the condition in Equation 11. In detail, the velocity profiles obtained by all optimization methods are compared with the reference solution [ahmad2017stochastic] in Table 6. The results obtained by the Jaya method are in excellent agreement with the reference solution. For better comparison, the absolute errors of the listed optimization methods along the coordinate are shown in Fig. 20. The hybrid Jaya Runge-Kutta method yields results with absolute errors of almost zero, which can hardly be seen in the error bar graph.

Figure 19: The stream function for Homann axisymmetric stagnation flow and its derivatives corresponding to α = 1, β = 0.5
ξ Reference [ahmad2017stochastic] Jaya PSO Hyperband GA
0.1 0.52720714 0.52720713 0.52718683 0.52717143 0.52718263
0.2 0.82558612 0.82558615 0.82554498 0.82551377 0.82553647
0.3 0.95299636 0.95299639 0.9529321 0.95288335 0.9529188
0.4 0.99120424 0.99120421 0.9911132 0.99104412 0.99109436
0.5 0.998896 0.998895 0.99877527 0.99868369 0.9987503
0.6 0.99990906 0.99990904 0.99975742 0.99964238 0.99972605
0.7 0.999995 0.999994 0.99981215 0.99967344 0.99977432
0.8 0.99999962 0.9999996 0.99978552 0.9996231 0.99974123
0.9 0.99999975 0.99999979 0.9997544 0.99956826 0.99970363
1 0.99999972 0.99999979 0.99972311 0.99951327 0.99966588
Table 6: Comparison of proposed results with reference solution for Homann axisymmetric stagnation flow
Figure 20: Comparison of velocity of proposed results for Homann axisymmetric stagnation flow

6.4 Case 4: Decelerating flows with α = 1 and β < 0

Finally, the decelerating flows are studied in this section, again for several values of β. First, the convergence histories of the fitness function over 100 iterations for the different parameters are presented in Figures 21 to 24. The fitness function value decreases rapidly over the 100 iterations.

Figure 21: Convergence of the fitness for α = 1 and β = -0.1
Figure 22: Convergence of the fitness for α = 1 and β = -0.15
Figure 23: Convergence of the fitness for α = 1 and β = -0.18
Figure 24: Convergence of the fitness for α = 1 and β = -0.1988

Fig. 25 illustrates the velocity profiles for the different values of β and shows that the velocity increases with an increase in the value of β, and that the velocity profile approaches 1 as η grows. According to these results, the Runge-Kutta method combined with the Jaya algorithm has good numerical performance for solving the Falkner-Skan equation.

Figure 25: The velocity profile corresponding to different β when α = 1

For decelerating flows, we can observe that as the parameter β decreases, the boundary layer thickness decreases, and the velocity eventually tends to one as the distance from the initial boundary increases.

6.4.1 Decelerating flows for α = 1 and β = -0.15

To be more specific, the stream function f, its velocity f', and the skin friction coefficient f'' obtained with the hybrid Jaya Runge-Kutta method are illustrated in Fig. 26. With increasing η, f'' approaches 0 and f' approaches 1, which agrees well with the boundary conditions in Equations 10 and 11. Moreover, the solutions obtained with the different methods (Jaya, PSO, Hyperband, and GA) are compared with the reference solution [ahmad2017stochastic] in Table 7; the results obtained with the Jaya method agree very well with the reference solution. The absolute errors of the listed optimization methods along the coordinate are presented in Fig. 27. From the graph, the Jaya algorithm obtains stable and accurate results compared with the other methods.

Figure 26: The stream function for decelerating flow and its derivatives corresponding to α = 1, β = -0.15
ξ Reference [ahmad2017stochastic] Jaya PSO Hyperband GA
0.1 0.25214592 0.25214504 0.25214416 0.25215822 0.25212746
0.2 0.57886782 0.57886633 0.57886485 0.57888861 0.57883663
0.3 0.84962283 0.84962143 0.84962002 0.84964252 0.84959331
0.4 0.97107618 0.97107523 0.97107427 0.97108955 0.97105613
0.5 0.9972818 0.9972811 0.99728041 0.99729153 0.9972672
0.6 0.99987996 0.99987935 0.99987874 0.99988845 0.99986722
0.7 0.99999732 0.99999676 0.99999619 1.00000521 0.99998549
0.8 0.99999974 0.99999921 0.99999868 1.0000072 0.99998856
0.9 0.99999978 0.99999927 0.99999876 1.00000689 0.99998911
1 0.99999979 0.9999993 0.99999881 1.00000662 0.99998954
Table 7: Comparison of proposed results with reference solution for Decelerating flow
Figure 27: Comparison of velocity of proposed results for Decelerating flow

In summary, to show the parameter identification ability of the Jaya optimizer, the f''(0) and η∞ results identified with the different methods, namely the Runge-Kutta method combined with the Jaya algorithm, the PSO algorithm, the Hyperband algorithm, and the GA, and the classical methods of Zhang [zhang2009iterative] and Asaithambi [asaithambi2005solution], are compared in Table 8. By comparing the results of the different methods, we find that the results using the Runge-Kutta method combined with the different algorithms agree well with the results of the classical methods, and that the results using the Runge-Kutta method combined with the Jaya algorithm are closer to the reference solutions than those of the other heuristic optimization methods. It can be concluded that the Runge-Kutta method combined with the Jaya algorithm is well suited to solving the Falkner-Skan equation. With the hybrid Jaya Runge-Kutta method, once the unknown parameters f''(0) and η∞ are determined, we can easily compute the velocity profiles, skin friction coefficient, and so on.

α  β  Jaya: f''(0)  Jaya: η∞  Jaya: residual  Zhang [zhang2009iterative]  Asaithambi [asaithambi2005solution]  PSO  Hyperband  GA
0.5 0 0.332057 11.856964 2.73E-24 0.33205 0.33205 0.33204 0.33142 0.33215
2 1 1.311938 4.840246 1.26E-18 1.31194 1.31194 1.31185 1.31222 1.31199
1 2 1.687218 4.547123 2.18E-12 1.68721 1.68721 1.68772 1.68723 1.68688
1 1 1.232588 9.078257 5.44E-18 1.23258 1.23258 1.23254 1.23228 1.23257
1 0.5 0.927680 6.995320 9.64E-20 0.92768 0.92768 0.92764 0.92674 0.92797
1 0 0.469600 10.746206 6.64E-27 0.46960 0.46960 0.46957 0.47009 0.46973
1 -0.1 0.319270 8.181430 5.49E-21 0.31927 0.31927 0.31925 0.31815 0.31945
1 -0.15 0.216361 8.975579 3.75E-19 0.21636 0.21636 0.21636 0.21646 0.21377
1 -0.18 0.128636 11.999854 1.59E-45 0.12863 0.12863 0.12864 0.13208 0.12884
1 -0.1988 0.005218 11.999793 1.77E-41 0.00522 0.00522 0.00559 0.00509 0.00513
Table 8: Comparison of the computed f''(0) and η∞ corresponding to different α and β

7 Conclusion

The hybrid Jaya Runge-Kutta method is presented in this paper to solve the Falkner-Skan boundary value problem, which involves the identification of unknown parameters in partial differential equations. Further, this application also shows the ability of the hybrid method to solve coupled differential equations with prescribed boundary conditions. The original problem can be sensitive to the guess of initial values; with the help of the Jaya algorithm, the whole scheme yields stable and accurate results. The incompressible flow over a stretching/shrinking wedge with various wedge angles is examined. The Jaya algorithm searches for the optimal parameters f''(0) and η∞ by finding the minimal value of the fitness function, as reflected in the convergence history graphs. Convergence and initial-guess issues that trap the classical methods can be overcome through this simple but effective methodology.

Based on the results obtained by the hybrid Jaya Runge-Kutta method, it can be concluded that the present method provides a reliable, effective, and accurate solution for the Falkner-Skan free boundary value problem, which includes the Blasius equation, the Homann problem, accelerating flows, and decelerating flows. The Jaya algorithm has also been verified to be effective in identifying the unknown parameters. In the future, this general methodology can be further applied to other multi-field coupled boundary layer flow problems and to other time-dependent partial differential equations.

References