1 Introduction
Boundary layer flow problems over a stretching or shrinking surface have significant applications in many industrial and technological fields, such as the cooling of an infinite metallic plate in a cooling bath, the aerodynamic extrusion of plastic sheets, and the extraction of polymer, rubber, and glass fiber. The Falkner–Skan problem is a notable similarity solution of the steady two-dimensional laminar boundary layer equations, including the Blasius and stagnation-point solutions as special cases.
Historically, the Falkner–Skan equation was first derived by Falkner and Skan in 1931 [falkneb1931lxxxv], and it plays an essential role in the fluid mechanics of viscous boundary layer flow. It is obtained from the two-dimensional incompressible Navier–Stokes equations. It exists in many forms with varying values of $\alpha$ and $\beta$, and its solutions describe the steady two-dimensional laminar boundary layer over a wedge. Over the past years, many numerical and analytical investigations of the Falkner–Skan equation have been carried out. Existence and uniqueness results were established by Rosenhead (1963), Weyl (1942), Hartman (1972), and Tam (1970) [duque2011numerical]. Shooting and invariant imbedding, proposed by Hartree, Smith, Cebeci and Keller, and Na, are among the earliest computational methods [asaithambi1998finite]. Other numerical methods include the finite-element method by Asaithambi [asaithambi2005solution] and the modification of the classical Newton's method by Zhang and Chen [zhang2013particle].
Because of limitations in the accuracy and efficiency of these classical numerical methods for solving Falkner–Skan equations, heuristic algorithms have been successfully applied to nonlinear problems in engineering and applied science [malik2015numerical, ullah2018evolutionary]. Heuristic algorithms, which are essentially trial-and-error methods, are used to find a good solution effectively and quickly. Many researchers have devoted their attention to heuristic algorithms and have developed a series of them. The genetic algorithm (GA) has been applied by Vakil-Baghmisheh et al. [vakil2008crack] and by Liu and Jiao [liu2011application]. Particle swarm optimization (PSO) has been employed by Yu and Chen [yu2010bridge] and by Nanda, Maity, and Maiti [nanda2014crack]. Zare Hosseinzadeh et al. proposed a cuckoo search (CS) algorithm [hosseinzadeh2014flexibility]. Evolutionary intelligence algorithms were proposed by Bagheri, Razeghi, and Ghodrati Amiri [bagheri2012detection]. Recently, Li et al. [li2017hyperband] developed the Hyperband optimization method based on a novel bandit-based approach. Evolutionary computing algorithms have been proved effective for solving PDEs: Tsoulos and Lagaris [tsoulos2006solving] and Saini et al. [saini2013genetic] applied genetic algorithms to solving partial differential equations, while Babaei [babaei2013general] and Yadav et al. [yadav2017efficient] applied particle swarm methods. The deep neural network based energy method and collocation method have also been applied to solving partial differential equations [guo2019deep, lin2020deep, nguyen2019deep, anitescu2019artificial, samaniego2020energy]. However, the major drawback of these soft computing techniques, as well as of metaheuristic and evolutionary optimizations, is the possibility of converging to solutions that are not optimal but instead are trapped at a local optimum [wadood2019application]. Recently, an advanced optimization algorithm called Jaya (a Sanskrit word meaning victory) was proposed by Venkata Rao in 2016 [rao2016Jaya]; it is easy to implement and does not need any algorithm-specific control parameters. The Jaya algorithm can solve both constrained and unconstrained optimization problems, and it has been shown to converge to the global optimum with fewer iterations and less computational time [wadood2019application]. Our results indicate that the Jaya algorithm is more effective, accurate, and stable than the other optimization algorithms considered.
In this work, we first describe the mathematical model of the Falkner–Skan problem and apply a transformation and order reduction to the Falkner–Skan equation to obtain a system of coupled differential equations with prescribed boundary conditions. The optimization algorithms involved in the application are then briefly presented. The Jaya method is integrated with the classical Runge–Kutta method and used to solve the Falkner–Skan equation with various wedge angles. As a comparison, the newly proposed Hyperband method is also applied for the first time to solve partial differential equations. The Runge–Kutta method combined with the Jaya algorithm solves the Falkner–Skan equation in the following manner:
1. Using a coordinate transformation, the semi-infinite domain of the Falkner–Skan equation is mapped onto the finite unit interval.
2. Using the Runge–Kutta method, solving the Falkner–Skan equation is converted into finding the initial values $f''(0)$ and $\eta_\infty$.
3. Based on the fitness function $\Phi$, the optimization algorithm finds the optimal values of $f''(0)$ and $\eta_\infty$.
4. The optimal initial values of $f''(0)$ and $\eta_\infty$ are used to compute the solution of the Falkner–Skan equation with the Runge–Kutta method.
2 Mathematical model for the Falkner–Skan equation
In the case of the steady boundary layer flow of an incompressible viscous fluid over a wedge in two dimensions, the continuity and momentum equations with their boundary conditions [schlichting2016boundary] read:
(1) $\dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} = 0$
(2) $u\dfrac{\partial u}{\partial x} + v\dfrac{\partial u}{\partial y} = U\dfrac{dU}{dx} + \nu\dfrac{\partial^2 u}{\partial y^2}$
where $u$ and $v$ are the velocity components in the $x$ and $y$ directions of the fluid flow, $\nu$ is the kinematic viscosity, and $U(x)$ is the velocity at the edge of the boundary layer. The velocity of the ambient flow over the moving wedge is assumed to be a power-law free stream velocity, $U(x) = U_\infty (x/L)^m$, where $U_\infty$ is the uniform free stream velocity, $L$ is the length of the wedge, $x$ is measured from the tip of the wedge, and $m$ is the Falkner–Skan power-law parameter.
The relevant boundary conditions are given by:
(3) $u(x, 0) = 0, \quad v(x, 0) = 0, \quad u(x, y) \to U(x) \ \text{as} \ y \to \infty$
The continuity Equation 1 is automatically satisfied by introducing the stream function $\psi(x, y)$, thus obtaining:
(4) $u = \dfrac{\partial \psi}{\partial y}, \quad v = -\dfrac{\partial \psi}{\partial x}$
and the momentum Equation 2 then yields
(5) $\dfrac{\partial \psi}{\partial y}\dfrac{\partial^2 \psi}{\partial x \partial y} - \dfrac{\partial \psi}{\partial x}\dfrac{\partial^2 \psi}{\partial y^2} = U\dfrac{dU}{dx} + \nu\dfrac{\partial^3 \psi}{\partial y^3}$
Using the similarity transformation, two dimensionless variables are introduced:
(6) $\eta = y\sqrt{\dfrac{(m+1)U}{2\nu x}}, \quad \psi = \sqrt{\dfrac{2\nu x U}{m+1}}\, f(\eta)$
with $f(\eta)$ a dimensionless stream function and $\eta$ a dimensionless distance. The partial differential equation can then be reduced to a third-order nonlinear ordinary differential equation:
(7) $f''' + \alpha f f'' + \beta\left(1 - f'^2\right) = 0$
The associated boundary conditions are given by:
(8) $f(0) = 0, \quad f'(0) = 0$
and
(9) $f'(\eta) \to 1 \ \text{as} \ \eta \to \infty$
where $\alpha$ and $\beta$ are constants, and $\beta\pi$ is the angle of the wedge. $\beta$ is related to the Falkner–Skan power-law parameter by $\beta = 2m/(m+1)$.
The geometry of the Falkner–Skan flow over a wedge is shown in Figure 1, which depicts the potential flow and the thin viscous boundary layer regions. The Falkner–Skan equation 7 governing this phenomenon is a nonlinear, third-order, free boundary value problem, where $f'(\eta)$ defines the dimensionless velocity component in the $x$-direction and $f''(\eta)$ the dimensionless shear stress in the boundary layer. Moreover, the solution of this equation is sought on a semi-infinite domain, and one of the boundary conditions is assigned asymptotically on the first derivative at infinity. Thus, it is not easy to construct a closed-form solution for this two-point boundary value problem.
3 Transformation and order reduction
First of all, we replace the boundary condition at infinity in Equation 9 with a free boundary condition, as Asaithambi did in [asaithambi2005solution]:
(10) $f'(\eta_\infty) = 1$
where $\eta_\infty$ is an unknown truncated boundary (a free boundary). The whole problem is then converted into a free boundary problem defined on a finite interval, where the “sufficiently large” $\eta_\infty$ is determined as part of the solution.
To enforce the asymptotic boundary condition, an additional boundary condition needs to be imposed for the semi-infinite physical domain [asaithambi2005solution]:
(11) $f''(\eta_\infty) = 0$
If a shooting algorithm is used to solve this free boundary problem, governed by Equation 7 subject to the boundary conditions of Equations 8, 10, and 11, an initial value condition is added at $\eta = 0$:
(12) $f''(0) = \lambda$
with $\lambda$ an unknown constant to be determined.
It is obvious that different values of $f''(0)$ lead to different values of $f'(\eta)$ as $\eta$ approaches infinity. The value $f''(0)$ is generally used to characterize solutions of the Falkner–Skan equation with different $\alpha$ and $\beta$, and it will be used for model evaluation. It is well known that the shooting method can fail to converge for problems whose solutions are very sensitive to the initial conditions. Many strategies have been proposed to tackle this problem [holsapple2004new]. Here, a new, simple, and straightforward approach is adopted for the whole scheme, which handles the unknown initial boundary conditions and the free boundary with a robust optimization method.
For simplicity, a coordinate transformation is first applied:
(13) $\xi = \dfrac{\eta}{\eta_\infty}, \quad g(\xi) = \dfrac{f(\eta)}{\eta_\infty}$
which transforms the physical domain from $[0, \eta_\infty]$ to $[0, 1]$ and yields the Falkner–Skan equation:
(14) $g''' + \eta_\infty^2\left[\alpha g g'' + \beta\left(1 - g'^2\right)\right] = 0$
and the boundary conditions are changed correspondingly:
(15) $g(0) = 0, \quad g'(0) = 0, \quad g'(1) = 1, \quad g''(1) = 0$
where $g(\xi) = f(\eta)/\eta_\infty$.
In order to solve the Falkner–Skan equation numerically and remove the second and third derivatives from the above equations, the reduction-of-order technique is applied:
(16) $y_1 = g, \quad y_2 = g', \quad y_3 = g''$
and the Falkner–Skan equation becomes
(17) $y_1' = y_2, \quad y_2' = y_3, \quad y_3' = -\eta_\infty^2\left[\alpha y_1 y_3 + \beta\left(1 - y_2^2\right)\right]$
where $0 \le \xi \le 1$, and the initial conditions are:
(18) $y_1(0) = 0, \quad y_2(0) = 0, \quad y_3(0) = \eta_\infty f''(0)$
and the boundary conditions are:
(19) $y_2(1) = 1, \quad y_3(1) = 0$
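The reduced system can be written directly as a right-hand-side function for a numerical integrator. The sketch below assumes the rescaled form $y_3' = -\eta_\infty^2\left[\alpha y_1 y_3 + \beta\left(1 - y_2^2\right)\right]$ on $0 \le \xi \le 1$; the function name and argument order are illustrative:

```python
import numpy as np

def falkner_skan_rhs(xi, y, alpha, beta, eta_inf):
    """Right-hand side of the reduced first-order Falkner-Skan system:
    y1' = y2, y2' = y3, y3' = -eta_inf^2 * (alpha*y1*y3 + beta*(1 - y2^2))."""
    y1, y2, y3 = y
    return np.array([
        y2,
        y3,
        -eta_inf**2 * (alpha * y1 * y3 + beta * (1.0 - y2**2)),
    ])
```

Any standard ODE integrator can march this system from $\xi = 0$ to $\xi = 1$ once trial values of $f''(0)$ and $\eta_\infty$ are supplied.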
4 The Optimization Algorithm
In this part, we will briefly introduce some classical heuristic algorithms and newly developed optimization algorithms, which will be used in the numerical example comparisons.
4.1 PSO
Inspired by the social intelligent behavior of gregarious birds, particle swarm optimization (PSO) was proposed by Kennedy and Eberhart [kennedy1995particle]. PSO is a heuristic search process: the search space of the optimization problem is analogous to the flying space of the birds, and the search for the optimal solution can be described as the birds' search for food. During the process, the local optimum position of each particle and the global optimum position of the population are recorded, and with the help of the current optimal particles the velocity of each particle is adjusted to search for the optimal solution. PSO is a classical swarm intelligence approach. It has many advantages: its computation is relatively simple, it needs few control parameters, and it can obtain an accurate optimal solution. However, PSO also has a drawback: due to the lack of individual diversity, it easily falls into local optima as the search iterates.
Assuming that a swarm has $P$ particles, the position vector and the velocity vector of particle $i$ at iteration $t$ are $X_i^t = (x_{i1}^t, \ldots, x_{in}^t)$ and $V_i^t = (v_{i1}^t, \ldots, v_{in}^t)$; the vectors are updated by the following equations:
(20) $v_{ij}^{t+1} = w\,v_{ij}^{t} + c_1 r_1\left(p_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2\left(g_{j}^{t} - x_{ij}^{t}\right), \quad x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}$
where $i = 1, 2, \ldots, P$ indexes the particles and $j = 1, 2, \ldots, n$ the dimensions, $w$ is the inertia weight, a positive constant, $p_{ij}^t$ and $g_j^t$ are the personal and global best positions, $c_1$ and $c_2$ are acceleration coefficients, and $r_1, r_2$ are random numbers in $[0, 1]$. The process of the PSO [le2019hybrid] is presented in Fig. 2.
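As a concrete illustration of the update rule in Eq. (20), one PSO iteration can be sketched as follows; the inertia weight and acceleration coefficients are illustrative defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: pull velocities toward the personal and global best
    positions, then move the particles. x, v, pbest: (P, n) arrays; gbest: (n,)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

After each step, `pbest` and `gbest` would be refreshed from the new fitness values before the next call.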
4.2 Hyperband Algorithm
Hyperband builds on the successive halving algorithm. The process can be described as follows: in the search space, we sample $n$ hyperparameter configurations randomly; after $r$ iterations of training, the validation loss of each configuration is evaluated; then we discard the worst-performing configurations and keep a fraction of the best. In the next step, we run the surviving configurations for more iterations and again discard the worst. The process is repeated until only one model remains. Hyperband is a variation of random search that automatically assigns an adaptive, predefined budget to each randomly sampled configuration. In the process of optimization, Hyperband can find the best time allocation for each configuration. Compared with Bayesian optimization methods on several hyperparameter optimization problems, Hyperband can find the optimal solution faster, its performance is good, and it can evaluate configurations in parallel. The Hyperband algorithm [kainz2019efficient] is presented in Table 1.
input: $R$, $\eta$ 
1 initialization: $s_{\max} = \lfloor \log_{\eta} R \rfloor$ and $B = (s_{\max} + 1)R$; 
2 for $s \in \{s_{\max}, s_{\max} - 1, \ldots, 0\}$ do 
3 $n = \lceil \frac{B}{R} \frac{\eta^{s}}{s+1} \rceil$, $r = R\,\eta^{-s}$; 
4 $T$ = gethyperparameterconfiguration($n$); 
5 for $i \in \{0, \ldots, s\}$ do 
6 $n_i = \lfloor n\,\eta^{-i} \rfloor$, $r_i = r\,\eta^{i}$; 
7 $L$ = {run_then_return_val_loss($t$, $r_i$) : $t \in T$}; 
8 $T$ = top_k($T$, $L$, $\lfloor n_i/\eta \rfloor$); 
9 end; 
10 end; 
11 output: configuration with lowest validation loss seen so far; 
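The successive halving loop at the core of Hyperband can be illustrated on a toy problem. This sketch assumes a simple quadratic "validation loss" and a geometric budget schedule; both are illustrative, not part of the paper's setup:

```python
# Successive halving: repeatedly evaluate all surviving configurations on a
# growing budget r and keep only the best 1/eta fraction each round.
def successive_halving(configs, loss, R=81, eta=3):
    T = list(configs)
    r = 1
    while len(T) > 1 and r <= R:
        scores = [(loss(c, r), c) for c in T]      # evaluate on budget r
        scores.sort(key=lambda p: p[0])            # best (lowest loss) first
        T = [c for _, c in scores[: max(1, len(T) // eta)]]
        r *= eta                                   # grow the budget
    return T[0]
```

In full Hyperband, this inner loop is wrapped in an outer loop over several aggressiveness levels $s$, trading off the number of configurations against the budget per configuration.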
4.3 Genetic Algorithm
The GA is based on the genetic and evolutionary processes of nature. Guided by a fitness function, the GA evolves the parameters through selection, crossover, and mutation. In the process, parameters yielding good solutions are retained and parameters yielding bad solutions are eliminated, and the new populations inherit from the survivors, so the solutions of the new generation are better than those of the previous generation. The process is repeated until the optimization criteria are satisfied, at which point the optimal solution is found. The process of the genetic algorithm is shown in Fig. 3.
The genetic algorithm has several advantages: compared to traditional methods it is more efficient, it has good parallel capabilities, it can optimize both continuous and discrete functions as well as multi-objective problems, and it is suitable for problems with a large search space and a large number of parameters. There are also disadvantages: for some problems the computation is expensive, because the fitness value is evaluated repeatedly and the run time is relatively long, and the genetic algorithm can hardly guarantee the optimality or quality of the solution.
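The selection–crossover–mutation cycle of one GA generation can be sketched as follows; tournament selection, blend crossover, and Gaussian mutation are generic illustrative operator choices, not the paper's specific configuration:

```python
import random

random.seed(0)

def ga_generation(pop, fitness, mut_rate=0.1, mut_scale=0.1):
    """Produce the next generation of a population of real-valued genomes
    for a minimisation problem (lower fitness is better)."""
    def tournament():
        a, b = random.sample(pop, 2)               # binary tournament selection
        return a if fitness(a) < fitness(b) else b

    new_pop = []
    while len(new_pop) < len(pop):
        p1, p2 = tournament(), tournament()
        w = random.random()
        child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]  # blend crossover
        if random.random() < mut_rate:
            i = random.randrange(len(child))
            child[i] += random.gauss(0.0, mut_scale)           # Gaussian mutation
        new_pop.append(child)
    return new_pop
```

Iterating `ga_generation` until a stopping criterion is met implements the repeat-until-satisfied loop described above.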
4.4 Jaya Algorithm
Recently the Jaya algorithm has become a popular optimization method; it can solve different types of optimization problems. For example, Rao used the Jaya algorithm to solve constrained and unconstrained problems. The word Jaya comes from Sanskrit and means victory. When a particular problem is solved using the Jaya algorithm, the candidate solutions move toward the best result and avoid the worst result. The Jaya algorithm is a population-based method: it repeatedly modifies the individual solutions, using the best and worst individuals in the population to update each candidate, until the best solution is obtained. The Jaya algorithm is a gradient-free optimization algorithm. Compared with other algorithms, the Jaya algorithm is not burdened by hyperparameters: it has only two common control parameters, namely the population size and the number of iterations. Hence the Jaya algorithm makes it simple to find the optimal solution without tuning any algorithm-specific parameters. The Jaya algorithm has been applied in many fields: Chattopadhyay used it in the area of modern machining processes [chattopadhyay1996line], Warid and Hizam used it to obtain an optimal power flow solution [warid2016optimal], and Rao applied it to heat exchangers [rao2018multi].
The Jaya algorithm obtains the optimal values by minimizing the objective function. Suppose the objective function $F$ is defined over $n$-dimensional variables, and let $x_{i,j}$ be the estimated value of the $j$th variable of the $i$th candidate solution, so that $i$ indexes the position of the candidate solution. The variables are updated using Eq. (21):
(21) $x_{i,j}^{\mathrm{new}} = x_{i,j} + r_1\left(x_{\mathrm{best},j} - \left|x_{i,j}\right|\right) - r_2\left(x_{\mathrm{worst},j} - \left|x_{i,j}\right|\right)$
where $x_{\mathrm{best},j}$ and $x_{\mathrm{worst},j}$ are the best and worst solutions in the current population, and $r_1$ and $r_2$ are random numbers in the range $[0, 1]$, used as scaling factors. The scaling factors pull each update toward the best solution and away from the worst one in each iteration. In this entire procedure, the solutions move closer to the best result and away from the worst solution.
With the Jaya algorithm, the objective function value gradually approaches the optimal solution through updates of the variable values; in the process, the fitness of each candidate solution in the population is improved. The process of the Jaya algorithm is presented in the following flowchart. The performance of the Jaya algorithm is reflected by the minimum objective function value attained.
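One iteration of the Jaya update with greedy acceptance can be sketched as follows; the population layout (one row per candidate) is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def jaya_step(X, objective):
    """One Jaya iteration on a population X of shape (P, n): move every
    candidate toward the current best and away from the current worst,
    keeping a move only if it improves the objective."""
    f = np.array([objective(x) for x in X])
    best, worst = X[np.argmin(f)], X[np.argmax(f)]
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
    f_new = np.array([objective(x) for x in X_new])
    keep = f_new < f
    return np.where(keep[:, None], X_new, X)
```

Because a move is kept only when it improves the objective, the best objective value in the population never increases between iterations.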
5 The hybrid Jaya–Runge–Kutta Algorithm
As mentioned above, the shooting method can be sensitive to the initial boundary conditions or unknown parameters in the equations [butcher2016numerical], especially the guesses of the unknown initial values $f''(0)$ and $\eta_\infty$ in Equations 17 to 19. The advancement of heuristic algorithms, however, opens a new door for dealing with this type of problem easily and accurately.
5.1 The Runge–Kutta method
The Runge–Kutta method is a popular numerical analysis method, which advances the solution of a differential equation using a one-step nonlinear approximation. With its simplicity and efficiency, the method is widely used for solving initial-value problems of differential equations. The fourth-order Runge–Kutta method, chosen here as the illustration, is shown in Fig. 5. In each step, the derivative is evaluated four times: first at the initial point, then twice at trial midpoints, and finally at a trial endpoint; the final function value, shown as a filled dot, is then calculated from these four derivatives. Compared with other methods, the fourth-order Runge–Kutta method is a simple and robust scheme.
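The four derivative evaluations described above can be sketched as a generic fourth-order Runge–Kutta integrator:

```python
def rk4_step(f, t, y, h):
    """One classical RK4 step for y' = f(t, y)."""
    k1 = f(t, y)                        # slope at the initial point
    k2 = f(t + h / 2, y + h / 2 * k1)   # first trial midpoint
    k3 = f(t + h / 2, y + h / 2 * k2)   # second trial midpoint
    k4 = f(t + h, y + h * k3)           # trial endpoint
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_solve(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n equal RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For example, integrating $y' = y$ from $y(0) = 1$ to $t = 1$ recovers $e$ to high accuracy.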
5.2 The hybrid Jaya–Runge–Kutta method
Unlike the classical shooting method, we start with a parameter interval as an initial guess to find appropriate values of $f''(0)$ and $\eta_\infty$. We first define the objective function for the Jaya algorithm:
(23) $\Phi = \left(f'(\eta_\infty) - 1\right)^2 + \left(f''(\eta_\infty)\right)^2$
Since $y_2(1)$ and $y_3(1)$ are the numerical approximations of these boundary values, the objective function for the Jaya algorithm can be reduced to:
(24) $\Phi = \left(y_2(1) - 1\right)^2 + \left(y_3(1)\right)^2$
Then the Jaya algorithm is deployed to minimize the objective function, with an initial population of size 20 and a maximum of 100 iterations as the termination criterion. $\Phi$ is an objective function of $\left(f''(0), \eta_\infty\right)$. Suppose that $x_{i,j}$ is the estimated value of the $j$th variable for the $i$th candidate solution, where $j = 1, 2$; then $i$ is the position of the $i$th candidate solution. Let the best candidate solution give the best value of $\Phi$ in the present population and the worst candidate solution give the worst value of $\Phi$. Then each solution is modified based on the best and worst solutions as:
(25) $x_{i,j}^{\mathrm{new}} = x_{i,j} + r_1\left(x_{\mathrm{best},j} - \left|x_{i,j}\right|\right) - r_2\left(x_{\mathrm{worst},j} - \left|x_{i,j}\right|\right)$
where $x_{i,j}^{\mathrm{new}}$ is the updated value of $x_{i,j}$. With the Jaya optimization technique, the achieved solution moves closer to the best result and away from the worst solution.
In what follows, the pseudocode of the hybrid Jaya–Runge–Kutta algorithm is illustrated in Table 2; the method is used to solve the proposed optimization problem.
Initialize 
randomly initialize $x_i = \left(f''(0)_i,\ \eta_{\infty,i}\right)$, i=1,2,…,N (population size) 
while termination criterion is not met do 
calculate $y_2(1)$ and $y_3(1)$ by the Runge–Kutta method 
evaluate the fitness function $\Phi_i$ in Equation 24, i=1,2,…,N 
for i in range(N) do 
find $x_{\mathrm{best}}$ and $x_{\mathrm{worst}}$. 
for j in range(2) do 
update the variables by Eq. 25 
next j (dimension of variables) 
end 
next i 
end 
next generation until termination criterion satisfied 
end while 
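Putting the pieces together, the loop of Table 2 can be sketched end to end for the Blasius case ($\alpha = 0.5$, $\beta = 0$). This is a minimal sketch assuming the rescaled reduced system $y_3' = -\eta_\infty^2\left[\alpha y_1 y_3 + \beta\left(1 - y_2^2\right)\right]$ with $y(0) = \left(0, 0, \eta_\infty f''(0)\right)$; the search bounds, population size, and step counts are illustrative choices, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(params, alpha=0.5, beta=0.0, steps=200):
    """Fitness of Eq. (24): integrate the reduced system on [0, 1] by classical
    RK4 from y(0) = (0, 0, eta_inf * fpp0) and penalise the mismatch at xi = 1."""
    fpp0, eta_inf = params

    def rhs(y):
        y1, y2, y3 = y
        return np.array([y2, y3,
                         -eta_inf**2 * (alpha * y1 * y3 + beta * (1.0 - y2**2))])

    y = np.array([0.0, 0.0, eta_inf * fpp0])
    h = 1.0 / steps
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2)
        k4 = rhs(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return (y[1] - 1.0) ** 2 + y[2] ** 2

def jaya_optimize(obj, bounds, pop=20, iters=100):
    """Jaya search (Eq. (25)) with greedy acceptance, clipped to the bounds."""
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + rng.random((pop, len(lo))) * (hi - lo)
    f = np.array([obj(x) for x in X])
    for _ in range(iters):
        best, worst = X[np.argmin(f)], X[np.argmax(f)]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        X_new = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                        lo, hi)
        f_new = np.array([obj(x) for x in X_new])
        keep = f_new < f
        X = np.where(keep[:, None], X_new, X)
        f = np.where(keep, f_new, f)
    i = np.argmin(f)
    return X[i], f[i]
```

For the Blasius case, the fitness evaluated at the reference parameter $f''(0) \approx 0.332057$ with a sufficiently large $\eta_\infty$ is close to zero, while a poor guess yields a large fitness; this contrast is exactly what the Jaya search exploits.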
6 Numerical results
Next, the hybrid method is used to solve the transformed optimization problem: the Jaya algorithm finds the global minimum of the fitness function, and for the best parameters $f''(0)$ and $\eta_\infty$ identified by the Jaya algorithm, the Runge–Kutta method is then applied to solve the coupled differential equations. To test the ability of the Jaya algorithm, the parameters identified by the optimization methods are compared with results from the open literature [zhang2009iterative, asaithambi2005solution]. The numerical results for the stream function $f(\eta)$, the velocity profile $f'(\eta)$, and the skin friction coefficient $f''(\eta)$ are illustrated for specific flow problems. The velocity profiles along the $\eta$ coordinate are compared with reference solutions from [ahmad2017stochastic] to show the accuracy of the present method relative to other robust optimization methods.
For different coefficients $\alpha$ and $\beta$, the Falkner–Skan problem can be divided into the following four categories:

Blasius equation: $\beta = 0$

Homann problem: $\alpha = 2$, $\beta = 1$

Accelerating flows: $\beta > 0$

Decelerating flows: $\beta < 0$
6.1 Case 1: Blasius equation with $\alpha = 0.5$ and $\beta = 0$
In this case, the Falkner–Skan equation is known as the Blasius equation. The solution is obtained using the Runge–Kutta method integrated with the Jaya algorithm, the PSO algorithm, the Hyperband algorithm, and the GA, together with the reference solution from [ahmad2017stochastic]. First, the convergence history of the fitness function over 100 iterations is presented in Figure 6. It is clear that the fitness function reaches its global minimum after some iterations, which demonstrates the ability of the Jaya algorithm as a global optimizer. Fig. 7 shows the graphs of the stream function $f(\eta)$, its velocity $f'(\eta)$, and the skin friction coefficient $f''(\eta)$, where the velocity profile tends asymptotically to 1 and the skin friction coefficient tends asymptotically to 0, exactly as required by Equations 9 and 11. In Table 3, the solutions from the different optimization methods, including Jaya, PSO, Hyperband, and GA, are compared with the reference solution. Clearly, our method agrees very well with the reference solution. For further clarity, the absolute errors of the listed optimization methods along $\eta$ are presented in Fig. 8. It can be observed that the Jaya algorithm obtains the most stable and accurate results.
$\eta$  Ref. [ahmad2017stochastic]  Jaya  PSO  Hyperband  GA 
0.1  0.3892354  0.38924351  0.38922149  0.38925129  0.38922613 
0.2  0.72239923  0.72241324  0.72237523  0.72235928  0.72238323 
0.3  0.91901667  0.91903214  0.91899015  0.91903766  0.91899899 
0.4  0.98646615  0.9864808  0.98644101  0.98647555  0.98644939 
0.5  0.99877239  0.99878654  0.99874812  0.99877457  0.99875621 
0.6  0.9999285  0.99994256  0.9999044  1.00002937  0.99991243 
0.7  0.99998388  0.99999793  0.99995978  1.00008464  0.99996781 
0.8  0.99998522  0.99999928  0.99996113  1.00008598  0.99996916 
0.9  0.99998524  0.99999929  0.99996115  1.000086  0.99996918 
1  0.99998524  0.99999929  0.99996115  1.000086  0.99996918 
6.2 Case 2: Homann flow with $\alpha = 2$ and $\beta = 1$
In this section, the Falkner–Skan problem is considered as Homann steady flow. The solution is again obtained with the Jaya-, PSO-, Hyperband-, and GA-integrated Runge–Kutta method. First, the convergence history of the fitness function over 100 iterations is presented in Figure 9; again, the value of the fitness function drops significantly. Fig. 10 shows the graphs of the stream function $f(\eta)$, its velocity $f'(\eta)$, and the skin friction coefficient $f''(\eta)$ obtained with the Jaya optimization method. In Table 4, the solutions from the different optimization methods, including Jaya, PSO, Hyperband, and GA, are compared with the reference solution [ahmad2017stochastic]. Again our method obtains the closest agreement. Moreover, the absolute errors of the listed optimization methods along $\eta$ are presented in Fig. 11. It can be concluded that the Jaya algorithm obtains the most stable and accurate results and far outperforms the other optimization methods.
$\eta$  Ref. [ahmad2017stochastic]  Jaya  PSO  Hyperband  GA 
0.1  0.51841208  0.51841111  0.5183927  0.51837284  0.51842661 
0.2  0.81685809  0.81685613  0.81681886  0.81677866  0.81688751 
0.3  0.9483164  0.94831334  0.94825528  0.94819263  0.94836223 
0.4  0.98969146  0.98968714  0.98960504  0.98951645  0.98975628 
0.5  0.99859956  0.99859383  0.998485  0.99836758  0.99868548 
0.6  0.99988001  0.99987281  0.99973603  0.99958845  0.999988 
0.7  1.00000237  0.99999368  0.99982862  0.99965052  1.00013269 
0.8  1.00001151  1.00000133  0.99980793  0.99979926  1.00016419 
0.9  1.00001353  1.00000186  0.99978012  0.99964087  1.00018858 
1  1.00001527  1.00000211  0.99975203  0.9996822  1.00021269 
6.3 Case 3: Accelerating flows with $\beta > 0$
The accelerating flows are studied in this section, with several values of $\beta$ selected. First, the convergence histories of the fitness function over 100 iterations for the different parameters are presented in Figures 12 to 15; in each case the value of the fitness function reaches a very low level by the end of the iterations.
Fig. 16 shows the velocity profiles for the selected cases obtained with the Jaya optimization method. With increasing $\eta$, the horizontal velocity profiles tend asymptotically to 1, which verifies the asymptotic boundary condition shown in Equation 9. In addition, for accelerating flows, we observe that as the parameter $\beta$ increases the boundary layer becomes thinner, while the velocity still tends to one as the distance from the initial boundary increases.
6.3.1 Hiemenz flow problem for $\alpha = 1$ and $\beta = 1$
To be more specific, two classic cases, the Hiemenz flow and the Homann axisymmetric stagnation flow, are compared across Jaya, PSO, Hyperband, GA, and the reference solution [ahmad2017stochastic]. In Table 5, the results obtained with the Jaya method agree well with the reference solution [ahmad2017stochastic]. For further clarity, the absolute errors of the listed optimization methods along $\eta$ are presented in Fig. 18. For this case, though the Jaya optimizer is not the best, it still gives very accurate results at most points.
$\eta$  Ref. [ahmad2017stochastic]  Jaya  PSO  Hyperband  GA 
0.1  0.73864188  0.73864962  0.73861094  0.73855389  0.73863705 
0.2  0.9583553  0.95837589  0.95827293  0.95812107  0.95834243 
0.3  0.99623432  0.99627939  0.99605404  0.99572166  0.99620615 
0.4  0.99975182  0.99983548  0.99941718  0.99880024  0.99969953 
0.5  0.99987682  1.00001317  0.99933143  0.99892602  0.9997916 
0.6  0.99982269  1.00002568  0.9990108  0.99901422  0.99969582 
0.7  0.99975239  1.00003594  0.99911834  0.99912817  0.99957517 
0.8  0.99966988  1.00004792  0.99962806  0.99937203  0.99943362 
0.9  0.99957522  1.00006166  0.99973003  0.99944605  0.99927122 
1  0.9994684  1.00007717  0.99983429  0.99995043  0.99908797 
6.3.2 Homann axisymmetric stagnation flow for $\alpha = 1$ and $\beta = 0.5$
For the Homann axisymmetric stagnation flow, the stream function $f(\eta)$, its velocity $f'(\eta)$, and the skin friction coefficient $f''(\eta)$ obtained with the Jaya optimization method are first shown in Figure 19. Again, as $\eta$ increases the velocity profile tends asymptotically to 1, verifying the asymptotic boundary condition of Equation 9, and the skin friction coefficient tends asymptotically to 0, as required by Equation 11. In detail, the velocity profiles obtained by all the optimization methods are compared with the reference solution [ahmad2017stochastic] in Table 6. The results obtained by the Jaya method are in excellent agreement with the reference solution. For better comparison, the absolute errors of the listed optimization methods along $\eta$ are shown in Fig. 20. The hybrid Jaya–Runge–Kutta method attains absolute errors of almost zero, hardly visible in the error bar graph.
$\eta$  Ref. [ahmad2017stochastic]  Jaya  PSO  Hyperband  GA 
0.1  0.52720714  0.52720713  0.52718683  0.52717143  0.52718263 
0.2  0.82558612  0.82558615  0.82554498  0.82551377  0.82553647 
0.3  0.95299636  0.95299639  0.9529321  0.95288335  0.9529188 
0.4  0.99120424  0.99120421  0.9911132  0.99104412  0.99109436 
0.5  0.998896  0.998895  0.99877527  0.99868369  0.9987503 
0.6  0.99990906  0.99990904  0.99975742  0.99964238  0.99972605 
0.7  0.999995  0.999994  0.99981215  0.99967344  0.99977432 
0.8  0.99999962  0.9999996  0.99978552  0.9996231  0.99974123 
0.9  0.99999975  0.99999979  0.9997544  0.99956826  0.99970363 
1  0.99999972  0.99999979  0.99972311  0.99951327  0.99966588 
6.4 Case 4: Decelerating flows with $\beta < 0$
Finally, the decelerating flows are studied in this section, again for various values of $\beta$. First, the convergence histories of the fitness function over 100 iterations for the different parameters are presented in Figures 21 to 24; the fitness function value decreases rapidly over the 100 iterations.
Fig. 25 illustrates the velocity profiles for the different values of $\beta$ and shows that the velocity increases with an increase in the value of $\beta$, with each velocity profile approaching 1 as $\eta$ grows. According to these results, the Runge–Kutta method combined with the Jaya algorithm has good numerical performance in solving the Falkner–Skan equation.
For decelerating flows, we observe that as the parameter $\beta$ decreases the boundary layer becomes thicker, while the velocity still tends to one as the distance from the initial boundary increases.
6.4.1 Decelerating flows for $\alpha = 1$ and $\beta = -0.15$
To be more specific, the graphs of the stream function $f(\eta)$, its velocity $f'(\eta)$, and the skin friction coefficient $f''(\eta)$ obtained by the hybrid Jaya–Runge–Kutta method are illustrated in Fig. 26. As $\eta$ increases, $f'(\eta)$ approaches 1 and $f''(\eta)$ approaches 0, in agreement with the boundary conditions in Equations 10 and 11. Moreover, the solutions of the different methods (Jaya, PSO, Hyperband, and GA) are compared with the reference solution [ahmad2017stochastic] in Table 7; the results obtained with the Jaya method agree well with the reference solution. The absolute errors of the listed optimization methods along $\eta$ are presented in Fig. 27. From the graph, the Jaya algorithm obtains stable and accurate results compared with the other methods.
$\eta$  Ref. [ahmad2017stochastic]  Jaya  PSO  Hyperband  GA 
0.1  0.25214592  0.25214504  0.25214416  0.25215822  0.25212746 
0.2  0.57886782  0.57886633  0.57886485  0.57888861  0.57883663 
0.3  0.84962283  0.84962143  0.84962002  0.84964252  0.84959331 
0.4  0.97107618  0.97107523  0.97107427  0.97108955  0.97105613 
0.5  0.9972818  0.9972811  0.99728041  0.99729153  0.9972672 
0.6  0.99987996  0.99987935  0.99987874  0.99988845  0.99986722 
0.7  0.99999732  0.99999676  0.99999619  1.00000521  0.99998549 
0.8  0.99999974  0.99999921  0.99999868  1.0000072  0.99998856 
0.9  0.99999978  0.99999927  0.99999876  1.00000689  0.99998911 
1  0.99999979  0.9999993  0.99999881  1.00000662  0.99998954 
In summary, to show the parameter-identification ability of the Jaya optimizer, the $f''(0)$, $\eta_\infty$, and residual results identified with the different methods, which include the Runge–Kutta method combined with the Jaya algorithm, the PSO algorithm, the Hyperband algorithm, and the GA, as well as the classical methods of Zhang [zhang2009iterative] and Asaithambi [asaithambi2005solution], are compared in Table 8. By comparing the results of the different methods, we find that the results using the Runge–Kutta method combined with the various algorithms agree well with the results of the classical methods, and that the optimal results of the Runge–Kutta method combined with the Jaya algorithm are closer to the reference solutions than those of the other heuristic optimization methods. It can be concluded that the Runge–Kutta method combined with the Jaya algorithm is well suited to solving the Falkner–Skan equation. With the hybrid Jaya–Runge–Kutta method, once the unknown parameters $f''(0)$ and $\eta_\infty$ are determined, we can easily compute the velocity profiles, skin friction coefficient, etc.
$\alpha$  $\beta$  Jaya: $f''(0)$  Jaya: $\eta_\infty$  Jaya: Residual  Zhang [zhang2009iterative]  Asaithambi [asaithambi2005solution]  PSO  Hyperband  GA 
0.5  0  0.332057  11.856964  2.73E-24  0.33205  0.33205  0.33204  0.33142  0.33215 
2  1  1.311938  4.840246  1.26E-18  1.31194  1.31194  1.31185  1.31222  1.31199 
1  2  1.687218  4.547123  2.18E-12  1.68721  1.68721  1.68772  1.68723  1.68688 
1  1  1.232588  9.078257  5.44E-18  1.23258  1.23258  1.23254  1.23228  1.23257 
1  0.5  0.927680  6.995320  9.64E-20  0.92768  0.92768  0.92764  0.92674  0.92797 
1  0  0.469600  10.746206  6.64E-27  0.46960  0.46960  0.46957  0.47009  0.46973 
1  -0.1  0.319270  8.181430  5.49E-21  0.31927  0.31927  0.31925  0.31815  0.31945 
1  -0.15  0.216361  8.975579  3.75E-19  0.21636  0.21636  0.21636  0.21646  0.21377 
1  -0.18  0.128636  11.999854  1.59E-45  0.12863  0.12863  0.12864  0.13208  0.12884 
1  -0.1988  0.005218  11.999793  1.77E-41  0.00522  0.00522  0.00559  0.00509  0.00513 
7 Conclusion
The hybrid Jaya–Runge–Kutta method is presented in this paper to solve the Falkner–Skan boundary value problem, which involves the identification of unknown parameters in partial differential equations. This application also shows the ability of the hybrid method to solve coupled differential equations with prescribed boundary conditions. The original problem can be sensitive to the guess of initial values; with the help of the Jaya algorithm, the whole scheme yields stable and accurate results. The incompressible flow over a stretching/shrinking wedge with various wedge angles is examined. As the convergence history graphs show, the Jaya algorithm can find the optimal parameters $f''(0)$ and $\eta_\infty$ by minimizing the fitness function. The convergence and initial-guess issues that trap the classical methods can be overcome by this simple but effective methodology.
Based on the results obtained by the hybrid Jaya–Runge–Kutta method, it can be concluded that the present method provides a reliable, effective, and accurate solution for the Falkner–Skan free boundary value problem, which includes the Blasius equation, the Homann problem, the accelerating flows, and the decelerating flows. The Jaya algorithm has also been verified to be effective in identifying the unknown parameters. In the future, this general method can be further applied to other multi-field coupled boundary layer flow problems and to other time-dependent partial differential equations.