1 Introduction
Solving nonlinear ordinary differential equations (ODEs) is one of the classical and practically important research areas in applied mathematics. In practice, such equations are mostly solved numerically or by approximate analytical methods, since obtaining an explicit solution is usually very difficult or even impossible. One important approach to solving a nonlinear ODE explicitly is to establish the existence of an invertible linearizing transformation of the variables and to construct it. The reduction of a nonlinear ODE to a linear one makes its explicit integration much easier and often allows for obtaining an exact solution.
The linearization problem for a second-order ODE
(1) 
was solved by Lie ([1], Sect. 1), who applied his general theory of integration of ODEs by means of a group of point transformations. He proved that is at most cubic in for a linearizable equation and derived necessary and sufficient conditions of linearizability. These conditions have the form of two explicit and easily verifiable equalities (21) containing differential polynomials in the coefficients of as a polynomial in :
(2) 
Lie’s ideas and methods were extended and applied to third-order equations [2] and later to fourth-order equations [3]. In these contributions, all possible structures of the candidates for linearization were found, and the explicit form of the necessary and sufficient linearizability conditions on the coefficients of those structures was derived. Therefore, given an ODE of second or third order, to check whether it is linearizable by a point transformation, it is sufficient to verify whether the relevant explicit linearizability conditions are satisfied. In practice, such a verification typically needs computer-based symbolic algebraic computation for the simplification of the resulting expressions. An additional point to emphasize is that if the ODE contains parameters and/or arbitrary functions, then the linearizability conditions imply algebraic and/or differential constraints on these parameters and/or functions that provide the linearization. Generally, however, these constraints may include the point transformation functions, and it may be highly nontrivial to solve the constraints and to find a linearizing point transformation.
In the present paper we suggest two algorithmic linearization tests applicable to a quasilinear ODE (solved for the highest derivative) of any order greater than or equal to two with a rational dependence on the other derivatives and the independent variable. The first linearization test is applicable to ODEs which contain neither parameters nor arbitrary functions. This test is based on the construction of the Lie point symmetry algebra for the input ODE. The relevant mathematical methods are described in several textbooks (see, for example, [4]–[9]). To detect linearizability, we compute the maximal abelian dimension of the Lie symmetry algebra and make use of the results of Mahomed and Leach [10]. Unlike the first test, our second test exploits the differential Thomas decomposition ([12]–[17]), a universal algorithmic tool for the algebraic analysis of polynomially nonlinear systems of partial differential equations (PDEs), and allows not only for the detection of linearizability but also for the derivation of necessary and sufficient linearizability conditions on arbitrary functions or parameters occurring in the equations. An example of such a problem is given by Eq. (1), whose linearizability conditions are given by Eq. (2). Therefore, the second test can reproduce the above-mentioned results of [1]–[3]. Moreover, via the second linearization test one can generate differential equations for a linearizing point transformation and for the coefficients of the linearized equation that are suitable for finding the transformation and the coefficients. However, the first linearization test is computationally more efficient, and it is therefore advisable to apply it first when considering higher-order equations, and then, in the case of linearizability, apply the second test in order to construct the linearizing point transformation and the reduced linear form of the ODE.
This paper is organized as follows. In Sect. 2 we briefly describe the mathematical objects we deal with before presenting our algorithms in Sect. 3. The implementation of these algorithms in Maple is then described in Sect. 4 and its application is illustrated in Sect. 5 using several examples. Finally, we provide a conclusion in Sect. 6.
2 Underlying Equations
In this paper we consider ODEs of the form
(3) 
with solved with respect to the highest order derivative. (Wherever it is necessary from the computational point of view, the field is assumed to be considered instead of the field .) As additional arguments, the function may also include parameters and/or arbitrary functions in and/or . Given an ODE of the form (3), our aim is to check the existence of an invertible transformation (hereafter, all functions we deal with are assumed to be smooth)
(4) 
which maps (3) into a linear th order homogeneous equation
(5) 
The invertibility of (4) is provided by the inequation
(6) 
If such a transformation exists for , then it can always be chosen (cf. [6], Thm. 6.54; [9], Thm. 6.6.3) in such a way that (5) takes the Laguerre–Forsyth normal form
(7) 
A first-order ODE is always linearizable, but its linearization procedure is as hard as the integration of the equation (cf. [18], Ch. 2, Thm. 1). Any homogeneous linear equation
can be transformed by a substitution
to the simplest second-order equation ([9], Thm. 3.3.1)
(8) 
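As a sanity check of this classical reduction, the following SymPy sketch (our own illustration, not part of the original text) uses the standard substitution built from two fundamental solutions y1, y2 — new independent variable t = y1/y2 and new dependent variable u = y/y2 — for the concrete equation y'' = y:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# y'' = y has fundamental solutions y1 = exp(x), y2 = exp(-x);
# the general solution is their linear combination
y1, y2 = sp.exp(x), sp.exp(-x)
y = c1*y1 + c2*y2

# Classical substitution: t = y1/y2 (new independent variable),
# u = y/y2 (new dependent variable)
t = y1/y2
u = y/y2

# Chain rule: d/dt = (1/t'(x)) d/dx
du_dt = sp.simplify(sp.diff(u, x)/sp.diff(t, x))
d2u_dt2 = sp.simplify(sp.diff(du_dt, x)/sp.diff(t, x))

assert d2u_dt2 == 0   # the image satisfies u'' = 0
```

Here u = c1*t + c2 as a function of t, so its second derivative vanishes identically, as the theorem predicts.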
One way to check the linearizability of Eq. (3) is to follow the classical approach of Lie [1] and study the symmetry properties of Eq. (3) under the infinitesimal transformation
(9) 
The invariance condition for Eq. (3) under the transformation (9) is given by the equality
(10) 
where the symmetry operator reads
(11) 
and is the total derivative operator with respect to .
The invariance condition (10) means that its left-hand side vanishes when Eq. (3) holds. The application of (11) to the left-hand side of Eq. (3) and the substitution of with in the resulting expression then leads to the equality with polynomial dependence of on the derivatives . Since, by (9), the functions and do not depend on these derivatives, the equality holds if and only if all coefficients in are equal to zero. This leads to an overdetermined system of linear PDEs in and , called the determining system. Its solution yields a set of symmetry operators whose cardinality we denote by . This set forms a basis of the dimensional Lie symmetry algebra
(12) 
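To make the construction of the determining system concrete, the following SymPy sketch (our own illustration, not the paper's Maple code) derives it for the textbook case y'' = 0: the second prolongation of the vector field with coefficients xi and eta is evaluated on the equation, and collecting the coefficients of the powers of y' yields four linear PDEs, whose solution space is the well-known 8-dimensional algebra sl(3, R):

```python
import sympy as sp

# Jet variables: y, y', y'' treated as independent symbols
x, Y, yp, ypp = sp.symbols('x Y yp ypp')
xi = sp.Function('xi')(x, Y)
eta = sp.Function('eta')(x, Y)

def D(f):
    # Total derivative with respect to x: D = d/dx + y' d/dy + y'' d/dy'
    return sp.diff(f, x) + yp*sp.diff(f, Y) + ypp*sp.diff(f, yp)

eta1 = D(eta) - yp*D(xi)    # prolongation coefficient for y'
eta2 = D(eta1) - ypp*D(xi)  # prolongation coefficient for y''

# Invariance of y'' = 0: eta2 must vanish on the equation, i.e. at y'' = 0
det_expr = sp.expand(eta2.subs(ypp, 0))

# det_expr is cubic in y'; its coefficients form the determining system
det_sys = [det_expr.coeff(yp, k) for k in range(4)]

assert det_sys[3] == -sp.diff(xi, Y, 2)   # (y')^3-coefficient: -xi_yy = 0
assert det_sys[0] == sp.diff(eta, x, 2)   # (y')^0-coefficient: eta_xx = 0
```

The two remaining coefficients give 2*eta_xy - xi_xx = 0 and eta_yy - 2*xi_xy = 0, so the determining system is an overdetermined linear PDE system in xi and eta, exactly as described above.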
We denote the Lie symmetry algebra by and . Its derived algebra is the subalgebra spanned by all commutators of pairs of elements in .
Lie showed ([11], Ch. 12, p. 298, “Satz” 3) that the Lie point symmetry algebra of an ODE of order has a dimension satisfying
Later, interrelations between and were established that provide the linearizability of (3) by a point transformation (4) in the absence of parameters and arbitrary functions. Here we present the two theorems that describe these interrelations and form the basis of our first linearization test.
Theorem 1
Theorem 2
These theorems show that the verification of the third condition requires, in addition to the determination of , a computation to check the existence of an abelian Lie symmetry subalgebra of dimension . To our knowledge, the only algorithm described in the literature [21] computes the maximal abelian dimension , i.e. the dimension of a maximal abelian subalgebra of a finite-dimensional Lie algebra given by its structure constants. The algorithm reduces to solving a quadratically nonlinear system of multivariate polynomial equations expressing the vanishing of the Lie bracket of two arbitrary vectors in the Lie algebra. Clearly, the runtime of the algorithm is at least exponential in the dimension of the algebra. Instead, to verify the third condition in Thm. 2 we devise a much more efficient algorithm. Our algorithm relies on the following statement, which is a corollary to Thms. 1 and 2.
Corollary 1
The third condition is equivalent to

, and the derived algebra of (12) is abelian of dimension .
Under the third condition, since , Eq. (3) is linearizable by Thm. 1. Conversely, let Eq. (3) be linearizable. The symmetry Lie algebra of (5) is similar and hence isomorphic to that of (3) (cf. [4], Ch. 2, §7.9). It is easy to see that a linear th order equation (5) with variable coefficients admits the Lie point symmetry group
(13) 
where are constants (the group parameters) and are the fundamental solutions of (5). The Lie group (13) has the dimensional Lie algebra (cf. [8], Thm. 5.19)
(14) 
If a linear th order Eq. (5) has constant coefficients, then the Lie point symmetry group (13) additionally includes the translation , and hence its Lie algebra has one more element in addition to (14):
(15) 
Furthermore, , and for all :
where are constants. Therefore, both Lie algebras (14) and (15) have abelian derived algebras of dimension . It is important to emphasize that can be algorithmically computed without solving the determining system, which is generally impossible. It suffices to complete the latter system to involution (for the theory of completion to involution we refer to [22]) and to construct power series solutions of the involutive system [19, 20]. For instance, as we do in our implementation (Sect. 4) of the algorithm LinearizationTest I described in Sect. 3.1, one can apply to the determining system the differential Thomas decomposition [16, 17] for a degree-reverse lexicographical ranking and then compute the differential dimension polynomial [23] for the output Janet basis.
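The commutation relations behind this statement can be checked directly: the superposition generators built from the fundamental solutions commute with one another, while bracketing with the scaling generator reproduces each of them. A small SymPy sketch (our own illustration, with y1, y2 kept as abstract solutions):

```python
import sympy as sp

x, Y = sp.symbols('x Y')

def bracket(V, W):
    # Lie bracket of planar vector fields V = (xi, eta) acting on (x, y)
    ap = lambda U, f: U[0]*sp.diff(f, x) + U[1]*sp.diff(f, Y)
    return tuple(sp.simplify(ap(V, W[i]) - ap(W, V[i])) for i in range(2))

# Abstract fundamental solutions y1(x), y2(x) of the linear equation
y1 = sp.Function('y1')(x)
y2 = sp.Function('y2')(x)

S  = (0, Y)     # scaling symmetry: y -> (1 + eps) y
X1 = (0, y1)    # superposition symmetry: y -> y + eps*y1(x)
X2 = (0, y2)

assert bracket(X1, X2) == (0, 0)    # the X_i commute: abelian subalgebra
assert bracket(S, X1) == (0, -y1)   # [S, X_i] = -X_i, so the derived
                                    # algebra is spanned by the X_i alone
```

Since every bracket lands in the span of the superposition generators, the derived algebra is abelian, in agreement with the statement above.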
The differential Thomas decomposition was suggested in [12, 13] as a generalization of the Riquier–Janet theory of passive linear and orthonomic PDE systems (see also [22] and the references therein) to polynomially nonlinear systems of general form. The Thomas decomposition provides a universal algorithmic tool [16, 17] to study a differential system, which is defined as follows.
[12]–[17] A differential system is a system of differential equations and (possibly) inequations of the form
where is a positive integer, as well as if , and are elements of the differential polynomial ring in finitely many differential indeterminates (dependent variables) over the differential field of characteristic zero.
The Thomas decomposition applied to a differential system yields a finite set of passive (involutive) and differentially triangular differential systems, called simple (see [12]–[17]), that partition the solution set of the input differential system. Algebraically, this provides a characterizable decomposition [27] of the radical differential ideal , where is the differential ideal generated by the polynomials in .
Unlike LinearizationTest I, where one can use, due to the linearity of determining systems, any procedure of completion to involution (e.g. the standard form algorithm [19]), our second algorithm LinearizationTest II (Sect. 3.2) is oriented towards the Thomas decomposition.
To apply it, we need to formulate the conditions for the functions , in (4) and for the coefficients in (7) (if ) such that these conditions hold if and only if (3) is linearizable. In addition to the input differential system, the Thomas decomposition is determined by a ranking, that is, a linear ordering on the partial derivatives compatible with derivations ([12]–[17]) (in our case with and ).
By differentiating the equality , which follows from (4), times with respect to , we obtain the following equalities:
(16)  
Here is the Jacobian (6), and are polynomials in their arguments whose coefficients are differential polynomials in and ; for example,
P₂(y′) = (ψ_x + ψ_y y′)(φ_xx + φ_xy y′ + φ_yy (y′)²) − (φ_x + φ_y y′)(ψ_xx + ψ_xy y′ + ψ_yy (y′)²).
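The structure of (16) for k = 2 is easy to verify symbolically: the numerator of d²v/du² is linear in y'' with the Jacobian as its coefficient, and the remainder P₂ depends on y' only. A SymPy sketch (our own check, with the jet variables treated as independent symbols):

```python
import sympy as sp

x, Y, yp, ypp = sp.symbols('x Y yp ypp')   # x, y, y', y'' as jet variables
phi = sp.Function('phi')(x, Y)
psi = sp.Function('psi')(x, Y)

def D(f):
    # Total derivative with respect to x
    return sp.diff(f, x) + yp*sp.diff(f, Y) + ypp*sp.diff(f, yp)

u1, v1 = D(phi), D(psi)    # du/dx and dv/dx
d2v = D(v1/u1) / u1        # d^2 v/du^2, using d/du = (1/u_x) d/dx

# Numerator of d^2 v/du^2 over the denominator (phi_x + phi_y*y')^3
num = sp.expand(sp.cancel(d2v*u1**3))

# Its y''-coefficient is exactly the Jacobian J = phi_x psi_y - phi_y psi_x
J = sp.diff(phi, x)*sp.diff(psi, Y) - sp.diff(phi, Y)*sp.diff(psi, x)
assert sp.expand(num.coeff(ypp, 1) - J) == 0

# The remainder P2 = num - J*y'' is free of y'': the P_2(y') of (16)
P2 = sp.expand(num - J*ypp)
assert P2.coeff(ypp, 1) == 0
```

The same pattern extends to higher k: each d^k v/du^k is linear in y^(k) with the Jacobian as coefficient, modulo a polynomial in the lower-order derivatives.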
Now we replace the derivatives occurring in (7) (or the second-order derivative in (8) if ) with the appropriate right-hand sides of Eqs. (16) and solve the obtained equality with respect to (or ). As a result, we obtain the equality
(17) 
where is a polynomial in whose coefficients for are differential polynomials not only in and but also in , the coefficients in Eq. (7).
Denote by and the numerator and denominator of the function in Eq. (3). Then, after elimination of from the equation system (3), (17) and clearing denominators in the rational functions of the obtained equality, we obtain the equation
(18) 
This equation is a polynomial in , and there are no constraints on these variables. Therefore, the equation holds if and only if all coefficients of the polynomial on the left-hand side vanish. This condition gives a partial differential equation system in , and . If the function in Eq. (3) depends on parameters and/or undetermined functions in , then Eq. (18) contains these parameters/functions. (One can always consider parameters as functions of and with zero derivatives.)
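This "clear denominators, then equate coefficients" step is mechanical. As a toy illustration (with hypothetical functions a, b, not taken from the paper), the coefficient equations of a rational identity in the derivative variable can be extracted with SymPy:

```python
import sympy as sp

x, yp = sp.symbols('x yp')          # yp stands for y'
a = sp.Function('a')(x)             # hypothetical unknown functions,
b = sp.Function('b')(x)             # not from the paper

# A rational identity in yp that must hold for all values of yp
lhs = (a*yp**2 + b)/(yp + 1) - (yp - 1)

# Clear denominators, then equate the coefficients of the numerator
# (a polynomial in yp) to zero: this yields the equation system
num, _ = sp.fraction(sp.together(lhs))
eqs = [sp.expand(num).coeff(yp, k) for k in range(3)]

assert eqs == [b + 1, 0, a - 1]     # forces a(x) = 1, b(x) = -1
```

In the linearization setting the coefficients are differential polynomials in the transformation functions, so equating them to zero produces the PDE system described above rather than algebraic conditions.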
Let be the set of equations obtained from Eq. (18) by equating the coefficients of the polynomial (in ) on the left-hand side to zero. We enlarge with the set of equations
(19) 
The equation means that is a function of , in accordance with (7). This is easily seen by differentiating the equality :
Since we admit only invertible transformations (4), one has to add to the enlarged equation set the inequation
(20) 
where is the Jacobian (6).
Thus, the main object of our construction and the statements on its relation to linearization are given as follows.
The differential system (see Def. 2) made up of the above-constructed PDE set and of the inequation set will be called the linearizing differential system.
Theorem 3
Corollary 2
3 Linearization Tests
In this section we present our algorithms LinearizationTest I and LinearizationTest II. These algorithms, given an input equation (3), verify its linearizability by the point transformation (4). The first test is applicable only to an ODE without parameters and undetermined functions in the variables and . The second algorithm admits a rational dependence of the function in Eq. (3) on such parameters and functions.
3.1 Linearization test I
Our first test, presented below, is based on the computation of the Lie symmetry algebra and its analysis. In line 2 we compute the determining system for (3). This is a straightforward procedure outlined in the preceding section and described in most textbooks on Lie symmetry analysis, in particular in [4]–[9]. This procedure is available as a routine in most computer algebra packages specialized in such analysis, for example, in the Maple packages DESOLV [24], DESOLVII [25], and SADE [26].
Since the determining system is linear, one can use any algorithm for its completion to involution in line 3, (cf. [19] and [22], Sect. 10.7). However, we prefer to use the differential Thomas algorithm here ([16], Sect. 3 and [17] Sect. 2.2).
The dimension of the Lie algebra (12) (line 4) is the dimension of the solution space of the determining system and can be computed in several ways (cf. [22], Sect. 8.2 and 9.3). Having computed the Janet involutive form of the determining system, it is easy to compute the dimension of its solution space via an algorithmic construction of the differential dimension polynomial [23].
Algorithm: LinearizationTest I ()
Input: , a nonlinear differential equation of form (3)
Output: True, if is linearizable, and False otherwise
1: ;
2: DeterminingSystem ();
3: InvolutiveDeterminingSystem ();
4: (LieSymmetryAlgebra) ();
5: if then
6:  return True;
7: elif then
8:  LieSymmetryAlgebra ();
9:  DerivedAlgebra ();
10:  if is abelian and then
11:   return True;
12:  fi
13: fi
14: return False;
We refer to [20] for the subalgorithm providing the computation of the Lie symmetry algebra (line 8), i.e. for the computation of the structure constants in Eq. (12). The last subalgorithm, DerivedAlgebra, in line 9 performs the straightforward computation of the derived algebra via the structure constants.
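This derived-algebra computation amounts to a single rank computation over the span of all basis brackets. A minimal Python sketch (our own illustration, not the Maple routine used in the paper):

```python
import numpy as np
from itertools import combinations

def derived_algebra_dim(c):
    """Dimension of the derived algebra L' = [L, L] for a Lie algebra with
    structure constants c[i][j][k], i.e. [e_i, e_j] = sum_k c[i][j][k] e_k."""
    n = len(c)
    # By antisymmetry it suffices to take brackets of basis pairs i < j
    brackets = [c[i][j] for i, j in combinations(range(n), 2)]
    return int(np.linalg.matrix_rank(np.array(brackets))) if brackets else 0

# sl(2) in the basis (h, e, f): [h,e] = 2e, [h,f] = -2f, [e,f] = h
Z = [0, 0, 0]
sl2 = [[Z, [0, 2, 0], [0, 0, -2]],
       [[0, -2, 0], Z, [1, 0, 0]],
       [[0, 0, 2], [-1, 0, 0], Z]]
assert derived_algebra_dim(sl2) == 3   # sl(2) is perfect: L' = L

# Two-dimensional non-abelian algebra: [e0, e1] = e1
aff = [[[0, 0], [0, 1]],
       [[0, -1], [0, 0]]]
assert derived_algebra_dim(aff) == 1
```

Checking whether the derived algebra is abelian then requires only that the brackets of a spanning set of L' vanish, which is again a finite computation on the structure constants.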
Correctness and termination. For the subalgorithms, both properties are either obvious (as for DerivedAlgebra) or shown in the papers referred to in the description of the subalgorithms above. Therefore, the whole algorithm LinearizationTest I terminates, and its correctness is provided by Thms. 1 and 2 and Cor. 1.
3.2 Linearization test II
Our second test is based on the differential Thomas decomposition [16, 17]. It admits a rational dependence of Eq. (3) on a finite set of parameters (constants) and/or undetermined functions in . In the absence of parameters/functions the corresponding sets are input as empty.
Algorithm: LinearizationTest II ()
Input: , a nonlinear differential equation of form (3) of order ; P, a set of parameters; H, a set of undetermined functions in
Output: A set of differential systems for the functions and in (4) and (possibly) for elements of and , if (3) is linearizable, and the empty set otherwise
1: ;
2: ;
3: ; ;
4: ;  // Jacobian (6)
5: if then
6:  ;  // ODE (8)
7:  ;
8: else
9:  ;  // ODE (7)
10:  ;
11: fi
12: ;  // Eq. (17)
13: ;  // Eq. (18)
14: ;
15: ;
16: ;  // Eq. (19)
17: ;  // Ineq. (20)
18: ThomasDecomposition ();
19: return ;
In lines 3–17 of the algorithm LinearizationTest II the input linearizing differential system (Def. 2) is constructed for the Thomas decomposition computed in line 18. This construction follows formulas (6)–(8) and (17)–(20). If the output set of the Thomas decomposition is nonempty, then Eq. (3) is linearizable by Thm. 3. In this case the simple systems in the decomposition provide a partition of the solution space of the linearizing differential system, and their solutions determine the invertible point transformation (4) and the coefficients of the linearized form (7) or (8). In addition, if there are parameters and/or undetermined functions in (3), then the output differential systems of the Thomas decomposition provide the compatibility conditions for these parameters/functions imposed by the linearization.
Correctness and termination are provided by those of the Thomas decomposition ([16], Sect. 3.4; [17], Thm. 2.2.57).
4 Implementation
We implemented both linearization tests in Maple. Our implementation runs on Maple 16 and later versions.
First, we describe our implementation of the algorithm LinearizationTest I. Given an ordinary differential equation of the form (3), to generate the determining system, denoted by in line 2, we use the routine gendef of the Maple package DESOLV [24, 25]. Then, to complete the system to involution (line 3), we choose the orderly (“DegRevLex”) ranking (cf. [17], Def. A.3.2) such that
and apply the routine DifferentialThomasDecomposition of the package DifferentialThomas. This package is freely available [28]. To compute the dimension of the Lie symmetry algebra (line 4), we invoke the routine DifferentialSystemDimensionPolynomial. Since in our case the solution space of the determining system is finite-dimensional, this routine outputs just the dimension of the solution space.
The subalgorithm LieSymmetryAlgebra of line 8 was previously implemented in Maple (see [20], Sect. 6). That implementation is based on two other algorithms: the standard form algorithm for the completion of the determining system to involution and the algorithm for calculating power series solutions [19]. Since that implementation, done in Maple V, has not been adapted to subsequent versions of Maple, we decided to make our own implementation of the algorithmic approach suggested in [20] to compute the structure constants in (12). Our implementation takes the Janet involutive form of the determining system output by the package DifferentialThomas and exploits its routine PowerSeriesSolution.
To construct the derived algebra (line 9) we invoke the routine DerivedAlgebra, which is part of the built-in package DifferentialGeometry:-LieAlgebras.
In our implementation of LinearizationTest II we compute the expressions (16) to obtain the left-hand side of (18) (line 13), which is a polynomial in . Equating all coefficients of the polynomial to zero (line 14), enlarging the resulting set with the additional equations (lines 15 and 16) and the Jacobian inequation (line 17) yields the input linearizing differential system for the subroutine DifferentialThomasDecomposition (line 18). By default, we choose the orderly ranking on the partial derivatives of the functions and , and of those in the sets (line 10):
If the input ODE (3) contains (nonempty) sets of parameters and/or undetermined functions, then their elements are ranked lower than the derivatives of and , in order to derive the compatibility conditions for the parameters and functions.
5 Examples
In this section we demonstrate our algorithmic linearization tests using several examples. All timings given below were obtained with Maple 16 running on a desktop computer with an Intel(R) Xeon(R) X5680 CPU clocked at 3.33 GHz and 48 GB RAM.
Consider the second-order Eq. (1) [1], in which is given by (2), with undetermined functions , . Algorithm LinearizationTest I is not applicable to this case, so we apply the algorithm LinearizationTest II with an orderly ranking such that
Then the routine DifferentialThomasDecomposition of the package DifferentialThomas [28] outputs three differential systems with disjoint solution spaces in about 0.4 sec.:
Cor. 2 guarantees that there are linearizable equations among the equations of family (1)–(2). One of the output differential systems, namely , is a generic simple system (see [17], Def. 2.2.67). It has eight equations, and the last two of them, which contain solely the functions , are the compatibility conditions for these functions whose solutions admit linearization. These conditions have the following form:
(21)  
These equations are exactly the linearizability conditions for (1)–(2) obtained by Lie in [1] (cf. [9], Thm. 6.5.2). The inequations in the system are
The two other differential systems and have the following inequations:
(22) 
Each of these systems has eight equations, as does . Every equation in as well as in is valid on all common solutions of the equations in (cf. [17], Cor. 2.2.66). Moreover,
(23) 
and hence each of (22) and (23) implies . Therefore, algorithm LinearizationTest II reproduces Lie’s classical results on the necessary and sufficient conditions for the linearization of the second-order ODEs of family (1)–(2).
We consider the third-order ODE
(24) 
This equation is linearizable by the generalized Sundman transformation (a kind of nonlocal transformation, which is in general not a point transformation) [29]. Here we check its linearizability via the point transformation (4). Eq. (24) admits both our tests since it contains neither parameters nor undetermined functions. Our implementation of algorithm LinearizationTest I returns False in 0.05 sec., and that of LinearizationTest II returns the empty set in 0.4 sec.
We consider the fourth-order ODE
2x²y y′′′′ + (x²y² + h(x,y)) y′y′′′ + 16xy y′′′ + 6x²(y′′)² + 48x y′y′′ + 24y y′′ + 24(y′)² = 0,
where h(x,y) is an undetermined function. To find all values of this function providing linearization, we again apply algorithm LinearizationTest II. The package DifferentialThomas for the orderly ranking satisfying
outputs in 3.3 sec. two differential systems and (see (26) and (5)). Each system has only one equation containing :
(25) 
The linearizability of (5) under condition was established in [3], and our computation shows that there are no other linearizable equations in family (5). Moreover, the simple systems and allow for the explicit construction of the linearizing point transformation (4) and of the coefficients and in the Laguerre–Forsyth form (7) of the image of (5) under the mapping (4):
To show this, consider first the equations in :
(26) 
and its inequations
(27) 
The equation system (26) can easily be integrated by hand or using the Maple routine pdsolve. The general solution to (26), in addition to (25), reads
where are arbitrary constants and the subscript 1 indicates that this is the solution obtained for the differential system . The inequations (27) imply and .
The second differential system is generic, and its set of equations is given by
(28)  
Table 1. CPU times (in sec.) of the linearization tests for ODE (31) of orders 3–15; “OOM” means the computation ran out of memory.

Test            3     4     5      6      7       8       9       10     11     12      13      14      15
I               0.20  0.61  1.27   2.54   4.18    6.49    10.20   23.21  39.79  63.38   91.54   119.42  150.13
I (modified)    0.28  0.83  1.51   3.01   5.28    9.52    16.83   45.40  80.72  150.19  291.13  484.35  751.20
II              0.65  2.33  13.28  80.76  376.40  1525.1  7512.9  OOM    OOM    OOM     OOM     OOM     OOM
The set of inequations in consists of three elements:
(29) 
The differential system (5) is also easily solvable, and its general solution reads
(30) 
Here is the above-presented solution of (26) for , and are arbitrary constants. The constraints that follow from (29) are those in , , and the additional inequation , which rules out a singularity in (30). The obtained explicit solutions of and form disjoint sets, since for a solution of and for a solution of . The disjointness of the solution sets of the output simple systems is guaranteed by the Thomas decomposition algorithm ([12]–[17]). In the given case a solution of provides a mapping of (5) into the linear ODE
with constant coefficients, whereas a solution to maps (5) into an equation with variable coefficients
In [3], the simplest form of the linearizing transformation (4) was found:
which maps (5) and (25) into and corresponds to the solution of with
As a serial example, we consider
(31) 
Obviously, Eq. (31) becomes via a transformation (4) of the form and . We use this example as a benchmark for a comparative experimental analysis of the time behavior of our algorithms as the order of the ODE grows. Additionally, we measure the CPU time for the algorithm LinearizationTest I with its subalgorithm DerivedAlgebra (in line 9) replaced by the Maple implementation [21] for the detection of an dimensional abelian subalgebra of the Lie symmetry algebra (line 8). By Thm. 1, the existence of such a subalgebra yields the criterion of linearization. Table 1 presents the CPU times, where “OOM” means that the run went out of memory. The timings in the table correspond to LinearizationTest I (upper row), to its above-described modification (middle row), and to LinearizationTest II (bottom row). As one can see, our first test (I) is the fastest and the second test (II) is the slowest. However, the latter, unlike the other two, outputs much more information on the linearization. This fact was illustrated by Example 5.
6 Conclusions
For the first time, the problem of algorithmically testing linearizability for a wide class of ordinary differential equations of arbitrary order has been solved. In doing so, we have restricted ourselves to quasilinear equations with a rational dependence on the other variables and to point transformations, and designed two algorithmic tests to check linearizability. The main benefits of these restrictions are (i) the algorithmic construction of the Lie symmetry algebra for the input equation and (ii) the reduction of the number of coefficients in the linearized equation due to the Laguerre–Forsyth canonical form (7).
Benefit (i) allowed us to design an efficient algorithm, LinearizationTest I, which checks the linearizability of the equations. Benefit (ii) provides the feasibility of the algorithm LinearizationTest II because of the overdeterminacy (cf. [22], Sect. 7.5) of a linearizing differential system. This overdeterminacy simplifies the consistency analysis of the linearizing system, which answers the same question as the first test.
Moreover, due to the finite-dimensionality of the solution space (cf. [6], Prop. 6.57) of a linearizing system, the Thomas decomposition algorithm outputs overdetermined subsystems, such as those in Example 5. In practice, the overdeterminacy of the output simple systems of the Thomas decomposition of a linearizing differential system makes them easily solvable, much like the determining systems in Lie symmetry analysis. Thereby, with the algorithm LinearizationTest II one can not only detect linearizability, but also find the linearizing transformation (4) and the coefficients of the linear form of the equation.
The Thomas decomposition of linearizing differential systems, even in the case of inconsistency, may be time and space consuming, especially for higher-order ODEs. That is why, in practice, it is advisable to check the linearizability of the equation under consideration with the first algorithm before applying the second one. In the case when Eq. (3) contains parameters and/or arbitrary functions, there is no choice, and one has to use the second algorithm.
The second algorithm may also improve the built-in Maple solver dsolve for differential equations. For example, dsolve applied to the equation
(32) 
outputs its solution implicitly in the complicated form of double integrals including the Maple symbolic representation RootOf for the roots of expressions. On the other hand, Eq. (32) admits the linearization (cf. [9], Eqs. 6.6.57–6.6.59)
which is easily obtained by our algorithm LinearizationTest II and provides the explicit form of the solution to (32).
7 Acknowledgments
The authors are grateful to Daniel Robertz and Boris Dubrov for helpful discussions and to the anonymous reviewers for several insightful comments that led to a substantial improvement of the paper.
This work has been partially supported by the King Abdullah University of Science and Technology (KAUST baseline funding; D. A. Lyakhov and D. L. Michels), by the Russian Foundation for Basic Research (grant No. 16-01-00080; V. P. Gerdt), and by the Ministry of Education and Science of the Russian Federation (agreement 02.a03.21.0008; V. P. Gerdt).
References
 [1] S. Lie. Klassifikation und Integration von gewöhnlichen Differentialgleichungen zwischen x, y, die eine Gruppe von Transformationen gestatten. III. Archiv for Matematik og Naturvidenskab, 8(4), 1883, 371–458. Reprinted in Lie’s Gesammelte Abhandlungen, 5, paper XIV, 1924, 362–427.
 [2] N. H. Ibragimov, S. V. Meleshko. Linearization of third-order ordinary differential equations by point and contact transformations. J. Math. Anal. Appl., 308, 2005, 266–289.
 [3] N. H. Ibragimov, S. V. Meleshko, S. Suksern. Linearization of fourth-order ordinary differential equations by point transformations. J. Phys. A: Math. Theor., 41(235206), 2008, 19 pages.
 [4] L. V. Ovsyannikov. Group Analysis of Differential Equations. Academic Press, New York, 1982.
 [5] P. J. Olver. Applications of Lie Groups to Differential Equations, 2nd Edition. Graduate Texts in Mathematics, Vol. 107, Springer-Verlag, New York, 1993.
 [6] P. J. Olver. Equivalence, Invariants, and Symmetry. Cambridge University Press, 1995.
 [7] G. W. Bluman, S. C. Anco. Symmetry and Integration Methods for Differential Equations. Springer-Verlag, New York, 2001.
 [8] F. Schwarz. Algorithmic Lie Theory for Solving Ordinary Differential Equations. Chapman & Hall/CRC, Boca Raton, 2008.
 [9] N. H. Ibragimov. A Practical Course in Differential Equations and Mathematical Modelling. Classical and New Methods. Nonlinear Mathematical Models. Symmetry and Invariance Principles. Higher Education Press, World Scientific, New Jersey, 2009.
 [10] F. M. Mahomed, P. G. L. Leach. Symmetry Lie Algebras of th Order Ordinary Differential Equations. J. Math. Anal. Appl., 151, 1990, 80–107.
 [11] S. Lie. Vorlesungen über kontinuierliche Gruppen mit geometrischen und anderen Anwendungen. Bearbeitet und herausgegeben von Dr. G. Scheffers. Teubner, Leipzig, 1893.
 [12] J. M. Thomas. Differential Systems. AMS Colloquium Publications, Vol. XXI, AMS, New York, 1937.
 [13] J. M. Thomas. Systems and Roots. William Byrd Press, Richmond, VA, 1962.
 [14] Z. Li, D. Wang. Coherent, Regular and Simple Systems in Zero Decompositions of Partial Differential Systems. Systems Science and Mathematical Sciences, 1999, 43–60.
 [15] V. P. Gerdt. On decomposition of algebraic PDE systems into simple subsystems. Acta Appl. Math., 101, 2008, 39–51.
 [16] T. Bächler, V. Gerdt, M. Lange-Hegermann, D. Robertz. Algorithmic Thomas decomposition of algebraic and differential systems. J. Symb. Comput., 47(10), 2012, 1233–1266.
 [17] D. Robertz. Formal Algorithmic Elimination for PDEs. Lect. Notes Math., Vol. 2121, Springer, Cham, 2014.
 [18] V. I. Arnold. Ordinary Differential Equations. Springer-Verlag, Berlin, 1992.
 [19] G. Reid. Algorithms for reducing a system of PDEs to standard form, determining the dimension of its solution space and calculating its Taylor series solution. Eur. J. Appl. Math., 2(4), 1991, 293–318.
 [20] G. Reid. Finding abstract Lie symmetry algebras of differential equations without integrating determining equations. Eur. J. Appl. Math., 2(4), 1991, 319–340.
 [21] M. Ceballos, J. Núñez, A. F. Tenorio. Algorithmic method to obtain abelian subalgebras and ideals in Lie algebras. Int. J. Comput. Math., 89(10), 2012, 1388–1411.
 [22] W. M. Seiler. Involution: The Formal Theory of Differential Equations and its Applications in Computer Algebra. Algorithms and Computation in Mathematics, Vol. 24, Springer, Heidelberg, 2010.
 [23] M. Lange-Hegermann. The Differential Dimension Polynomial for Characterizable Differential Ideals. arXiv:1401.5959 [math.AC]
 [24] J. Carminati, K. T. Vu. Symbolic Computation and Differential Equations: Lie Symmetries. J. Symb. Comput., 29, 2000, 95–116.
 [25] K. T. Vu, G. F. Jefferson, J. Carminati. Finding higher symmetries of differential equations using the MAPLE package DESOLVII. Comput. Phys. Commun., 183(4), 2012, 1044–1054.
 [26] T. M. Rocha Filho, A. Figueiredo. [SADE] a Maple package for the symmetry analysis of differential equations. Comput. Phys. Commun., 182(2), 2011, 467–476.
 [27] E. Hubert. Notes on triangular sets and triangulation-decomposition algorithms. II. Differential systems. In: F. Winkler, U. Langer (Eds.), Symbolic and Numerical Scientific Computation, Hagenberg, 2001. Lect. Notes Comput. Sci., Vol. 2630, Springer, Berlin, 2003, pp. 40–87.
 [28] M.LangeHegermann. DifferentialThomas: Thomas decomposition of differential systems, http:// wwwb.math.rwthaachen.de/thomasdecomposition/.
 [29] N.Euler, T.Wolf, P.G.L.Leach, M.Euler. Linearizable ThirdOrder Ordinary Differential Equations and The Generalised Sundman Transformations: The Case . Acta Appl. Math., 76, 2003, 89–115.
2 Underlying Equations
In this paper we consider ODEs of the form
(3) \; y^{(n)} = f\left(x, y, y', \ldots, y^{(n-1)}\right), \qquad n \geq 2,
solved with respect to the highest order derivative. (Wherever it is necessary from the computational point of view, the field $\mathbb{C}$ is considered instead of the field $\mathbb{R}$.) As additional arguments, the function $f$ may also include parameters and/or arbitrary functions of $x$ and/or $y$. Given an ODE of the form (3), our aim is to check the existence of an invertible transformation (hereafter, all functions we deal with are assumed to be smooth)
(4) \; u = \varphi(x, y), \qquad t = \psi(x, y),
which maps (3) into a linear $n$-th order homogeneous equation
(5) \; u^{(n)}(t) + a_{n-1}(t)\, u^{(n-1)}(t) + \cdots + a_1(t)\, u'(t) + a_0(t)\, u(t) = 0.
The invertibility of (4) is provided by the inequation
(6) \; \varphi_x \psi_y - \varphi_y \psi_x \neq 0.
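For instance, condition (6) is easy to check symbolically for a concrete pair $\varphi$, $\psi$. The following minimal sketch (an illustration, not part of the source; it assumes SymPy, and the transformation chosen is hypothetical) computes the Jacobian-type expression of (6):

```python
import sympy as sp

x, y = sp.symbols('x y')
# a sample point transformation (hypothetical choice for illustration)
phi = y/x          # u = phi(x, y)
psi = 1/x          # t = psi(x, y)

# left-hand side of the invertibility condition (6)
jac = sp.simplify(sp.diff(phi, x)*sp.diff(psi, y)
                  - sp.diff(phi, y)*sp.diff(psi, x))
print(jac)         # nonzero for x != 0, so the transformation is invertible there
```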
If such a transformation exists for $n \geq 3$, then it can always be chosen (cf. [6], Thm. 6.54; [9], Thm. 6.6.3) in such a way that (5) takes the Laguerre–Forsyth normal form
(7) \; u^{(n)} + a_{n-3}(t)\, u^{(n-3)} + \cdots + a_0(t)\, u = 0.
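The first step toward a normal form of this kind is the elimination of the coefficient $a_{n-1}(t)$ in (5). As a hedged illustration (not taken from the source; it assumes SymPy, and the names $u$, $v$, $w$ are hypothetical), the sketch below verifies for $n = 2$ that the classical substitution $u = v\,\exp\!\big({-\tfrac{1}{2}\int a_1\, dt}\big)$ removes the $u'$ term:

```python
import sympy as sp

t = sp.symbols('t')
a1 = sp.Function('a1')(t)
a0 = sp.Function('a0')(t)
v = sp.Function('v')(t)

# substitution u = v * exp(-1/2 * Integral(a1)) applied to u'' + a1*u' + a0*u
w = sp.exp(-sp.Integral(a1, t)/2)
u = v*w
expr = sp.expand(sp.diff(u, t, 2) + a1*sp.diff(u, t) + a0*u)

# the coefficient of v'(t) in the transformed equation must vanish
coeff = sp.simplify(expr.coeff(sp.diff(v, t)))
print(coeff)   # 0
```

The remaining reduction steps (removing $a_{n-2}$ for $n \geq 3$) additionally require a change of the independent variable $t$.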
A first-order ODE is always linearizable, but its linearization procedure is as hard as the integration of the equation itself (cf. [18], Ch. 2, Thm. 1). For $n = 2$, any homogeneous linear equation (5) can be transformed by a suitable substitution of the variables $t$ and $u$ to the simplest second-order equation ([9], Thm. 3.3.1)
(8) \; u'' = 0.
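As a concrete sketch (an illustration, not taken from [9]; it assumes SymPy), for $u'' + u = 0$ the classical transformation $\tilde{t} = u_2/u_1$, $\tilde{u} = u/u_1$ built from two independent solutions $u_1 = \cos t$, $u_2 = \sin t$ reduces the equation to (8):

```python
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
# general solution of u'' + u = 0
u = C1*sp.cos(t) + C2*sp.sin(t)

# transformation built from the two independent solutions u1 = cos t, u2 = sin t:
# new independent variable T = u2/u1, new dependent variable U = u/u1
T = sp.sin(t)/sp.cos(t)
U = u/sp.cos(t)

# second derivative of U with respect to T via the chain rule
dU_dT = sp.simplify(sp.diff(U, t)/sp.diff(T, t))
d2U_dT2 = sp.simplify(sp.diff(dU_dT, t)/sp.diff(T, t))
print(d2U_dT2)   # 0, i.e. the transformed equation is U'' = 0
```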
One way to check the linearizability of Eq. (3) is to follow the classical approach by Lie [1] and to study the symmetry properties of Eq. (3) under the infinitesimal transformation
(9) \; \bar{x} = x + \varepsilon\, \xi(x, y) + O(\varepsilon^2), \qquad \bar{y} = y + \varepsilon\, \eta(x, y) + O(\varepsilon^2).
The invariance condition for Eq. (3) under the transformation (9) is given by the equality
(10) \; X\!\left(y^{(n)} - f\right)\Big|_{y^{(n)} = f} = 0,
where the symmetry operator reads
(11) \; X = \xi\, \partial_x + \eta\, \partial_y + \sum_{k=1}^{n} \eta^{(k)}\, \partial_{y^{(k)}}, \qquad \eta^{(k)} := D\big(\eta^{(k-1)}\big) - y^{(k)}\, D(\xi), \quad \eta^{(0)} := \eta,
and $D = \partial_x + y'\, \partial_y + y''\, \partial_{y'} + \cdots$ is the total derivative operator with respect to $x$.
The invariance condition (10) means that its left-hand side vanishes when Eq. (3) holds. Then the application of (11) to the left-hand side of Eq. (3) and the substitution of $y^{(n)}$ with $f$ in the resulting expression leads to an equality $P = 0$, where $P$ depends polynomially on the derivatives $y', y'', \ldots, y^{(n-1)}$. Since, by the definition (9), the functions $\xi$ and $\eta$ do not depend on these derivatives, the equality holds if and only if all coefficients of this polynomial are equal to zero. This leads to an overdetermined system of linear PDEs in $\xi$ and $\eta$ called the determining system. Its solution yields a set of symmetry operators whose cardinality we denote by $m$. This set forms a basis of the $m$-dimensional Lie symmetry algebra
(12) \; \mathcal{L} := \langle X_1, X_2, \ldots, X_m \rangle.
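The derivation of the determining system can be reproduced for the simplest case $y'' = 0$ with a minimal SymPy sketch (an illustration, not the software used in this work; the variable `p` stands for $y'$):

```python
import sympy as sp

x, y, p = sp.symbols('x y p')          # p stands for y'
xi = sp.Function('xi')(x, y)
eta = sp.Function('eta')(x, y)

# total derivative along solutions of y'' = 0:  D = d/dx + y' d/dy
D = lambda F: sp.diff(F, x) + p*sp.diff(F, y)

eta1 = D(eta) - p*D(xi)                # first prolongation coefficient
eta2 = D(eta1)                         # second one, restricted to y'' = 0

# the invariance condition eta2 = 0 must hold identically in y',
# so every coefficient of the polynomial in p has to vanish:
determining = [sp.expand(eta2).coeff(p, k) for k in range(4)]
for eq in determining:
    print(eq, '= 0')
```

The four printed equations $\eta_{xx} = 0$, $2\eta_{xy} - \xi_{xx} = 0$, $\eta_{yy} - 2\xi_{xy} = 0$, $\xi_{yy} = 0$ admit an 8-parameter solution, in agreement with the well-known maximal dimension $m = 8$ for a second-order ODE.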
We denote the Lie symmetry algebra by $\mathcal{L}$ and its dimension by $m = \dim \mathcal{L}$. Its derived algebra $\mathcal{L}' := [\mathcal{L}, \mathcal{L}]$ is the subalgebra spanned by all commutators of pairs of elements in $\mathcal{L}$.
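Commutators of symmetry generators, and hence the derived algebra, are straightforward to compute symbolically. The following sketch (an illustration assuming SymPy, not from the source) represents each operator by its coefficient pair $(\xi, \eta)$:

```python
import sympy as sp

x, y = sp.symbols('x y')

def commutator(X1, X2):
    """Commutator [X1, X2] of vector fields given as coefficient pairs
    (a, b) representing a*d/dx + b*d/dy."""
    apply_op = lambda X, F: X[0]*sp.diff(F, x) + X[1]*sp.diff(F, y)
    return (sp.simplify(apply_op(X1, X2[0]) - apply_op(X2, X1[0])),
            sp.simplify(apply_op(X1, X2[1]) - apply_op(X2, X1[1])))

X1 = (sp.Integer(1), sp.Integer(0))    # d/dx (translation)
X2 = (x, y)                            # x d/dx + y d/dy (scaling)
print(commutator(X1, X2))              # (1, 0), i.e. [X1, X2] = d/dx
```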
Lie showed ([11], Ch. 12, p. 298, “Satz” 3) that the Lie point symmetry algebra of an $n$-th order ODE has a dimension $m$ satisfying
m \leq 8 \;\; \text{for} \; n = 2, \qquad m \leq n + 4 \;\; \text{for} \; n \geq 3.
Later, interrelations between $m$ and $n$ were established that provide the linearizability of (3) by a point transformation (4) in the absence of parameters and arbitrary functions. Here we present the two theorems that describe such interrelations and form the basis of our first linearization test.