1. Introduction
The Schrödinger equation is used in many fields of physics. In dimensionless units, it takes the form

(1) $\mathrm{i}\,\partial_t\psi \,=\, -\Delta\psi \,+\, V\,\psi,$
where $\psi$ is a function of the spatial coordinates $\boldsymbol{x}$ and of the time $t$, $\Delta$ is the Laplace operator, and the potential $V$ is generally a function of space and time and, possibly, a functional of $\psi$. In this paper, we focus more specifically on the nonlinear Schrödinger equation ($V$ proportional to $|\psi|^2$) and on the Schrödinger–Newton (or Schrödinger–Poisson) equation, in which the Laplacian of the potential is proportional to $|\psi|^2$. These special cases were chosen for clarity and because they are of practical interest, but the method presented here can be extended to more general potentials (and equations).
With the nonlinear Schrödinger equation (NLS) considered here, the (local nonlinear) potential is

(2) $V \,=\, g\,|\psi|^2,$

where $g$ is a coupling constant. For $g > 0$ the interaction is repulsive, while, for $g < 0$, it is attractive. The NLS describes various physical phenomena, such as Bose–Einstein condensates [1], laser beams in some nonlinear media [2], water wave packets [3], etc.
With the Schrödinger–Poisson equation, the potential is given by the Poisson equation

(3) $\Delta V \,=\, g\,|\psi|^2,$

where $g$ is another coupling constant, the interaction being attractive or repulsive depending on the sign of $g$. The potential is therefore nonlinear and nonlocal, giving rise to collective phenomena [4], appearing for instance in optics [5, 6, 7], Bose–Einstein condensates [8], cosmology [9, 10, 11] and theories describing the quantum collapse of the wave function [12, 13]. It is also used as a model to perform cosmological simulations in the semiclassical limit [14].
The above equations cannot be solved analytically (except in very special cases), so numerical methods must be employed. In this paper, we focus on spectral methods for the spatial resolution, i.e., methods based on fast Fourier transform (FFT) techniques, which are especially efficient and accurate [15]. For the temporal resolution, two families of methods are commonly employed to solve Schrödinger-like equations: integrating factors [16] and split-step integrators [17]. The latter methods have been used to integrate both the SN and NLS equations, while the former is used essentially to solve the NLS, with excellent performance [18, 19]. In this note, we focus on the former technique, which consists in integrating analytically the linear part of the equation and integrating numerically the remaining nonlinear part with a classical method [16]. The principle of the method is described as follows. Writing the Schrödinger equation in the generic form
(4) $\partial_t\psi \,=\, \mathcal{F}(\psi, t),$

the right-hand side is split into linear and nonlinear parts

(5) $\partial_t\psi \,=\, \mathcal{L}\,\psi \,+\, \mathcal{N}(\psi, t),$
where $\mathcal{L}$ is an easily computable autonomous linear operator and $\mathcal{N}$ is the remaining (usually) nonlinear part. At the $n$-th timestep $t_n$, with $\psi_n \equiv \psi(t_n)$, considering the change of dependent variable

(6) $\phi(t) \,=\, \mathrm{e}^{-(t - t_n)\mathcal{L}}\,\psi(t),$

so that $\phi = \psi_n$ at $t = t_n$, the equation (5) is rewritten

(7) $\partial_t\phi \,=\, \mathrm{e}^{-(t - t_n)\mathcal{L}}\,\mathcal{N}\!\left(\mathrm{e}^{(t - t_n)\mathcal{L}}\,\phi,\; t\right).$
The operator $\mathcal{L}$ being well chosen, the stiffness of (5) is considerably reduced and the equation (7) is (hopefully) well approximated by algebraic polynomials for $t_n \leqslant t \leqslant t_{n+1}$. Thus, standard time-stepping methods, such as an adaptive Runge–Kutta method [20, 21], can be used to efficiently solve (7). To do so, the solution is evaluated at two different orders and a local error is estimated as the difference between those quantities. Popular integrators can be found in [20, 22]. It is straightforward to apply this strategy to the Schrödinger equation (1), since the Laplacian term and the potential term are, respectively, linear and nonlinear operators of $\psi$. By switching to Fourier space in position, the equation becomes
(8) $\partial_t\hat\psi \,=\, -\mathrm{i}\,k^2\,\hat\psi \,-\, \mathrm{i}\,\widehat{V\psi},$

where hats denote the Fourier transform of the quantity underneath and $k^2 \equiv |\boldsymbol{k}|^2$ ($\boldsymbol{k}$ being the wave vector). The equation is now in a form where the application of the integrating factor (IF) method is straightforward, i.e., (8) becomes

(9) $\partial_t\hat\phi \,=\, -\mathrm{i}\,\mathrm{e}^{\mathrm{i}k^2(t - t_n)}\,\widehat{V\psi},$
where $\hat\phi \equiv \mathrm{e}^{\mathrm{i}k^2(t - t_n)}\,\hat\psi$. If the nonlinear part of the equation is zero, then $\hat\phi$ is constant and any (reasonable) temporal scheme will produce the exact solution $\hat\psi = \hat\psi_n\,\mathrm{e}^{-\mathrm{i}k^2(t - t_n)}$. In other words, the integrating factor technique is exact for linear equations. This indicates that the numerical errors depend on the magnitude of the nonlinear part. Therefore, in order to minimise these errors, a strategy consists in minimizing the magnitude of $\mathcal{N}$ at each timestep. To do so, we exploit the gauge invariance of the Schrödinger equation: if $\psi$ is a solution of (1) at a given time $t$, then $\psi' = \psi\,\mathrm{e}^{-\mathrm{i}C(t - t_n)}$ is a solution of

(10) $\mathrm{i}\,\partial_t\psi' \,=\, -\Delta\psi' \,+\, (V + C)\,\psi',$

as one can easily verify. Thus, at each timestep, adding a constant $C_n$ to $V$ in (1) modifies the solution as

(11) $\psi(t_{n+1}) \;\mapsto\; \psi(t_{n+1})\,\mathrm{e}^{-\mathrm{i}\,C_n\,\Delta t_n},$
where $\Delta t_n = t_{n+1} - t_n$ is the $n$-th timestep. Of course, at the end of the computations, the operation (11) can easily be reverted if the original phase is relevant. Using this procedure, we observed up to a fivefold speed increase (the overall computational time is divided by about five) compared to taking $C_n = 0$. Of course, the speedup varies depending on the initial condition, on the (spatial and temporal) numerical schemes and on the choice of gauge corresponding to $C_n = 0$.
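To fix ideas, a single gauged integrating-factor step for the 1D NLS can be sketched as follows (a minimal NumPy illustration assuming the dimensionless form $\mathrm{i}\psi_t = -\psi_{xx} + (g|\psi|^2 + C)\psi$; a plain first-order Euler substep stands in here for the adaptive Runge–Kutta integrator, and the function name is ours):

```python
import numpy as np

def if_euler_step(psi, dt, k2, g, C):
    """One integrating-factor (IF) step for the gauged 1D NLS
    i psi_t = -psi_xx + (g|psi|^2 + C) psi, in Fourier space.
    The linear part exp(-i k^2 dt) is integrated exactly; the
    nonlinear part is advanced with a first-order Euler substep."""
    V = g * np.abs(psi) ** 2                  # local nonlinear potential
    N_hat = -1j * np.fft.fft((V + C) * psi)   # nonlinear part at t_n
    phi_hat = np.fft.fft(psi) + dt * N_hat    # Euler step for phi
    psi_new = np.fft.ifft(np.exp(-1j * k2 * dt) * phi_hat)
    return psi_new * np.exp(1j * C * dt)      # revert the gauge phase
```

Once the gauge phase is reverted, the result is independent of $C$ up to the truncation error of the substep; only the size of the numerical error, and hence the admissible timestep, changes with the gauge.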
In this paper, we derive analytic formulas giving an optimal $C_n$ in order to maximise the timestep, i.e., to minimize the overall computational time of the numerical resolution. We emphasize that the optimal values of $C_n$ thus obtained do not affect the accuracy of the numerical solution, leaving it unchanged with respect to the $C_n = 0$ case. Two strategies are presented. In section 2, a first ‘natural’ approach to derive a suitable $C_n$ is based on the analytical structure of the equation and is independent of the numerical algorithm employed for its resolution. More precisely, $C_n$ is obtained by minimizing a norm of the right-hand side of the equation (5). This provides an easy way to obtain a formula that is, moreover, computationally ‘cheap’. This expression is however only near optimal, so a ‘better’ expression is subsequently derived. Considering both the equations and the numerical algorithms, a second, optimal expression for $C_n$ is derived in section 3. This approach consists in minimizing exactly the numerical error and thus depends explicitly on the numerical scheme. It provides a more accurate, but computationally expensive, solution. The advantages of these special choices are illustrated numerically in section 4. Finally, a summary and perspectives are drawn in section 5.
2. Near-optimal $C_n$
As mentioned above, if properly chosen, the integrating factor is able to reduce the stiffness of the equation, making the numerical integration more efficient. In addition, the magnitude of the nonlinear part of (7) also contributes to the efficiency of the numerical integration. Specifically, if $\mathcal{N}$ is zero, $\hat\phi$ is constant and the integrating factor technique is exact. Thus, the efficiency of the algorithm is expected to increase as the magnitude of $\mathcal{N}$ gets smaller, and subsequently the overall computational time should be reduced. Here, we show how to choose the arbitrary constant $C_n$ in order to reduce the magnitude of the nonlinear part $\mathcal{N}$. In the case of the Schrödinger equation, we have

(12) $\mathcal{N}(\hat\phi, t) \,=\, -\mathrm{i}\,\mathrm{e}^{\mathrm{i}k^2(t - t_n)}\,\mathcal{F}\!\left[(V + C_n)\,\psi\right],$

where $\mathcal{F}$ denotes the Fourier transform and $\psi = \mathcal{F}^{-1}\!\left[\mathrm{e}^{-\mathrm{i}k^2(t - t_n)}\,\hat\phi\right]$.
A natural strategy is to minimize the $L^2$ norm, namely

(13) $C_n \,=\, \arg\min_{C}\, \sum_{k=-[M/2]}^{[M/2]} \left|\mathcal{N}_k\right|^2,$

where $M$ is the number of spatial modes, square brackets denote the integer part and $\mathcal{N}_k$ is the $k$-th Fourier mode of $\mathcal{N}$. The explicit expression of $C_n$ can be found exploiting the definition of the discrete Fourier transform. For simplicity, we do the calculations in one dimension (1D) without loss of generality, since the final result is independent of the spatial dimension. From the Parseval theorem, one obtains

(14) $\mathcal{E}(C) \,\equiv\, \sum_{k=-[M/2]}^{[M/2]} \left|\mathcal{N}_k\right|^2 \,=\, M \sum_{j=0}^{M-1} \left(V_j + C\right)^2 \left|\psi_j\right|^2,$

where $V_j = V(x_j)$ and $\psi_j = \psi(x_j)$ at time $t_n$. Since the function $\mathcal{E}$ is a second-order polynomial in $C$, it admits a unique minimum, which is obtained from the equation $\mathrm{d}\mathcal{E}/\mathrm{d}C = 0$, yielding

(15) $C_n \,=\, -\,\frac{\sum_{j=0}^{M-1} V_j \left|\psi_j\right|^2}{\sum_{j=0}^{M-1} \left|\psi_j\right|^2}.$

Therefore, at each time step $t_n$, the value of $C_n$ minimizing the $L^2$ norm of $\mathcal{N}$ is obtained from (15). We show below that even though this approach is not unique (i.e., different norms could be considered), the provided solution is quite advantageous compared to others, being computationally cheap and independent of the order of the numerical scheme.
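In code, this near-optimal gauge constant amounts to minus the $|\psi|^2$-weighted average of the potential on the grid, computable with two reductions (a NumPy sketch; the function name is ours):

```python
import numpy as np

def near_optimal_C(V, psi):
    """Gauge constant minimizing the L2 norm of (V + C)*psi over C.
    Setting d/dC sum_j (V_j + C)^2 |psi_j|^2 = 0 gives minus the
    |psi|^2-weighted average of V; cost: two reductions over the grid."""
    w = np.abs(psi) ** 2
    return -np.sum(V * w) / np.sum(w)
```

Being a closed-form expression, it adds a negligible cost per timestep and does not depend on the order of the time integrator.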
3. Optimal $C_n$
We show here another way to choose the arbitrary constant $C_n$ in order to improve the algorithm efficiency and reduce the overall computational time. This approach is based on the principles of the adaptive timestep procedure, where at each time step $t_n$, an error between two approximated solutions of different orders is estimated. Since the smaller this quantity, the larger the timestep, minimizing the error allows one to choose a larger timestep, speeding up the numerical integration while keeping roughly the same numerical error. More specifically, the error depends on the arbitrary constant $C_n$, hence the minimization can be performed (see below) by choosing an appropriate $C_n$. In this section, we first recall the method for determining the size of the time step used in Runge–Kutta procedures; interested readers should refer to [20] for further details. Although the determination of $C_n$ can be formally presented for any embedded Runge–Kutta scheme, this results in very cumbersome calculations with little insight. Thus, for brevity and clarity, we illustrate the method with the Heun method (a second-order Runge–Kutta method with an embedded first-order explicit Euler scheme for the time stepping [23]). We then sketch out how this procedure can be implemented for generic embedded Runge–Kutta methods.
3.1. Principle of the adaptive timestep procedure
For the time stepping, embedded Runge–Kutta methods estimate the quadrature error by comparing the results of two orders of the time integrator [20]. For a solver of order $p$ with an embedded scheme of order $\hat p$ (hereafter, schemes of orders $p(\hat p)$), at the $n$-th time step, the error is [24]

(16) $\mathrm{err}_n \,=\, \left(\frac{1}{M} \sum_{k=-[M/2]}^{[M/2]} \left|\hat\psi^{(p)}_k - \tilde\psi^{(\hat p)}_k\right|^2\right)^{1/2},$

where $M$ is the number of spatial modes, square brackets denote the integer part, $\hat\psi^{(p)}_k$ is the $p$-th order solution at the $k$-th Fourier mode, the tilde notation indicating the solution at order $\hat p$, and Tol is the tolerance (a parameter defining the desired precision of the time integration). The time step $\Delta t_n$ is accepted if the error is smaller than the tolerance Tol, otherwise $\Delta t_n$ is reduced and this step is recomputed. $\Delta t_n$ being accepted, the next time step is obtained assuming the largest error equal to the tolerance. In order to avoid an excess of rejected time steps, we use the Proportional Integral (PI) Step Control [24], which chooses the optimal time step as

(17) $\Delta t_{n+1} \,=\, \Delta t_n \left(\frac{\mathrm{Tol}}{\mathrm{err}_n}\right)^{\alpha} \left(\frac{\mathrm{err}_{n-1}}{\mathrm{Tol}}\right)^{\beta},$

where $\alpha = 0.7/k$ and $\beta = 0.4/k$, $k$ being the order of the chosen integrator [25]. Interested readers should refer to [20] for details on this classical procedure.
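In code, a PI step controller of this kind might look as follows (a sketch: the gains $0.7/k$ and $0.4/k$ follow the classical PI step control of Gustafsson and of Hairer and Wanner, while the clipping bounds 0.2 and 5.0 are our own safeguard, not prescribed by the text):

```python
def pi_timestep(dt, err, err_prev, tol, k):
    """Proportional-Integral step-size control: the step grows when the
    current error is below the tolerance and is damped by the history
    term err_prev.  k is the order of the integrator."""
    alpha, beta = 0.7 / k, 0.4 / k
    factor = (tol / err) ** alpha * (err_prev / tol) ** beta
    return dt * min(5.0, max(0.2, factor))  # clip to avoid abrupt changes
```

The history term $\mathrm{err}_{n-1}$ damps oscillations of the step size, which reduces the number of rejected steps compared to the elementary (purely proportional) controller.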
3.2. Optimum time step
Since the constant $C_n$ can be chosen freely, we seek the value of $C_n$ providing the largest $\Delta t_{n+1}$, namely, maximizing the right-hand side of (17). Since $\Delta t_n$ and $\mathrm{err}_{n-1}$ are determined at the previous timestep, only $\mathrm{err}_n$ in (17) depends on $C_n$. Thus, in order to maximize $\Delta t_{n+1}$, $\mathrm{err}_n$ must be minimized, i.e., one must solve $\mathrm{d}(\mathrm{err}_n)/\mathrm{d}C_n = 0$. This derivation being characterized by cumbersome algebra for general embedded Runge–Kutta schemes, we illustrate the case of the Heun algorithm (a second-order Runge–Kutta method with an embedded first-order explicit Euler scheme for the time stepping [23]), the principle being the same for higher-order integrators. Also for simplicity, we give the calculations in one dimension (1D) without loss of generality, since the final result is independent of the spatial dimension.
3.2.1. Optimum $C_n$ for Heun’s method
Heun’s method consists, here, in solving the initial value problem (for $t_n \leqslant t \leqslant t_{n+1}$)

(18) $\partial_t\hat\phi \,=\, \mathcal{N}(\hat\phi, t),$

and

(19) $\hat\phi(t_n) \,=\, \hat\psi_n.$

Hereafter, for brevity, we denote

(20) $\mathcal{N}_n \,\equiv\, \mathcal{N}(\hat\psi_n, t_n).$

At time $t_{n+1}$, the first- and second-order (in $\Delta t_n$) approximations of $\hat\phi$, respectively $\hat\phi^{(1)}$ and $\hat\phi^{(2)}$, are

(21) $\hat\phi^{(1)} \,=\, \hat\psi_n \,+\, \Delta t_n\,\mathcal{N}_n,$

(22) $\hat\phi^{(2)} \,=\, \hat\psi_n \,+\, \tfrac{1}{2}\,\Delta t_n \left[\,\mathcal{N}_n \,+\, \mathcal{N}\!\left(\hat\psi_n + \Delta t_n\,\mathcal{N}_n,\; t_{n+1}\right)\right].$
The next timestep is chosen using equation (17). For our equation, the difference between the first- and second-order approximations is such that

(23) $\left|\hat\phi^{(2)} - \hat\phi^{(1)}\right| \,=\, \tfrac{1}{2}\,\Delta t_n \left|\,\mathcal{N}\!\left(\hat\psi_n + \Delta t_n\,\mathcal{N}_n,\; t_{n+1}\right) - \mathcal{N}_n\right|.$

We note that the absolute value on the right-hand side of (23) is of first order in $\Delta t_n$, as one can easily check with a Taylor expansion around $\Delta t_n = 0$, so $\hat\phi^{(2)} - \hat\phi^{(1)} = O(\Delta t_n^2)$. More precisely, after some elementary algebra, one finds
(24)  
which, defining
(25) 
can be rewritten as
(26) 
Introducing the mean quadratic error

(27) $\mathcal{E}_n \,=\, \frac{1}{M} \sum_{k=-[M/2]}^{[M/2]} \left|\hat\phi^{(2)}_k - \hat\phi^{(1)}_k\right|^2,$

substituting (26) into (27) and exploiting the definition of the discrete Fourier transform, one obtains (using the Parseval theorem)
(28) 
The minimum of $\mathcal{E}_n$, obtained from the equation $\mathrm{d}\mathcal{E}_n/\mathrm{d}C_n = 0$, is such that
(29) 
Therefore, the optimum $C_n$ providing the largest $\Delta t_{n+1}$, in the case of Heun’s method, is a solution of (29).
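In practice, instead of solving the stationarity condition (29) in closed form, the same optimum can be approximated numerically by evaluating the embedded Euler/Heun error of the integrating-factor scheme as a function of $C$ and minimizing it over a grid. The sketch below assumes the 1D NLS with potential $V = g|\psi|^2$; the function names are ours:

```python
import numpy as np

def heun_if_error(C, psi, dt, k2, g):
    """RMS difference between the first-order (Euler) and second-order
    (Heun) integrating-factor approximations over one step, for a gauge C."""
    def N(psi_t, tau):  # nonlinear part, transported by the integrating factor
        V = g * np.abs(psi_t) ** 2
        return np.exp(1j * k2 * tau) * (-1j) * np.fft.fft((V + C) * psi_t)
    phi0 = np.fft.fft(psi)
    N1 = N(psi, 0.0)
    phi_euler = phi0 + dt * N1                            # first-order solution
    psi_stage = np.fft.ifft(np.exp(-1j * k2 * dt) * phi_euler)
    phi_heun = phi0 + 0.5 * dt * (N1 + N(psi_stage, dt))  # second-order solution
    return np.sqrt(np.mean(np.abs(phi_heun - phi_euler) ** 2))

def optimal_C_bruteforce(psi, dt, k2, g, grid):
    """Grid-search minimizer of the embedded error; stands in for the
    root of d(err)/dC = 0 derived in the text."""
    return min(grid, key=lambda C: heun_if_error(C, psi, dt, k2, g))
```

The grid search is of course more expensive than the closed-form near-optimal value, but it makes the dependence of the embedded error on the gauge constant directly observable.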
3.2.2. Optimum $C_n$ for generic embedded Runge–Kutta schemes
The optimum $C_n$ for general embedded Runge–Kutta schemes can be obtained following the same principles illustrated above with the Heun algorithm. However, the algebraic calculations rapidly get very cumbersome, leading to expensive computations that, in most cases, exceed the time gained with a larger step. Here, we sketch out the procedure for generic embedded Runge–Kutta methods, considering solvers of order $p$ with an embedded scheme of order $\hat p$ (for other embedded or extrapolation methods, the procedure is completely analogous). For an $s$-stage method, the error can be written as [24]

(30) $\hat\psi^{(p)} - \tilde\psi^{(\hat p)} \,=\, \Delta t_n \sum_{i=1}^{s} \left(b_i - \tilde b_i\right) \mathcal{N}\!\left(\hat\phi_i,\; t_n + c_i\,\Delta t_n\right),$

where $b_i$, $\tilde b_i$ and $c_i$ are coefficients of the Butcher tableau which characterizes the integrator, the stage values $\hat\phi_i$ involving the remaining coefficients $a_{ij}$ [24]. Using Taylor expansions and unnesting the scheme, it is possible to prove that a result with a structure similar to (26) is obtained. In this case, the number of stages $s$ appears as an exponent in one of the functions defined in (25); the other function, on the other hand, becomes explicitly dependent on the Butcher coefficients, involving a number of terms growing exponentially with $s$. For this reason, even though the exact result can always be achieved, the computational time needed to minimize the error (16) is often larger than the time gained with a larger step, especially for higher-order schemes. In the next section, we show how, for practical applications, an exact solution is not necessary to improve the algorithm, and (15) represents a fast and accurate method.
4. Numerical examples
Here, we consider numerical examples where we apply this method, focusing on both the SN and NLS equations solved with the Dormand and Prince 5(4) integrator [22] in one and two spatial dimensions. In all cases, we set open boundary conditions for the potential, while the initial conditions and the values of the physical parameters are chosen to be very close to regimes of physical interest, as described in [1, 2, 3]. Details on the numerical simulations can be found in Appendix A. The gain factor provided by the method depends on the optimal value of $C_n$ compared to the $C_n = 0$ case, which changes from case to case as a function of the boundary conditions for the potential and of the profile of the solution. Specifically, since the gain factor is evaluated with respect to the $C_n = 0$ case, the farther the optimal value of $C_n$ is from zero, the larger the gain factor gets.
For the one-dimensional NLS, some analytical stationary solutions are known. We then use one of these solutions (see Appendix A) as the initial condition. For all other cases (SN, and NLS in 2D), no such stationary solutions are known, so we use Gaussian initial conditions.
In Figure 1, we show the average timestep, for an entire simulation, as a function of $C$ for the one-dimensional SN and NLS equations. These plots are generated taking $C$ constant for the entire simulation, in order to better appreciate the strong dependence of the timestep on the choice of the gauge for the potential. In Figure 2, we report the results of simulations performed choosing the near-optimal $C_n$ at each timestep. Note that, for the one-dimensional NLS, the solution being stationary, the potential and the optimum $C_n$ do not change in time, which is not the case in 2D. We show the timestep as a function of time for the one-dimensional SN and NLS equations, comparing the $C_n = 0$ case with the near-optimal choice. In both cases, the timestep chosen by the algorithm with the optimization of the gauge constant proves to be larger, compared to the $C_n = 0$ case. In Table 1, we show the number of time loops required to run each simulation and the time needed to run the simulation (in seconds) for the $C_n = 0$ and near-optimal cases. For the NLS, in the one-dimensional case we achieve a significant improvement in terms of speed gain between the two cases, while in two dimensions the speed gain is smaller since, here, the optimal value of $C_n$ is very close to zero. As far as the SN equation is concerned, (15) proved to reduce remarkably both the number of time loops and the effective time of the total simulation with respect to the $C_n = 0$ case, in both 1D and 2D.
[Table 1. Number of time loops and run time (in seconds) for each simulation, for $C_n = 0$ and for the near-optimal $C_n$ of (15).]
5. Conclusion
Exploiting a gauge condition on the potential, we optimized the integrating factor technique applied to the nonlinear Schrödinger and Schrödinger–Newton equations. Although the exact value of the piecewise-constant $C_n$ minimizing the error (16) (and therefore maximizing the timestep) can in principle always be computed (e.g., with a computer algebra system), its expression depends on the particular numerical scheme chosen and becomes complicated as the order of the method increases, resulting in a high computational cost. However, the near-optimal value obtained from the first approach we described, based on the minimization of the $L^2$ norm of the nonlinear part of the equation, proved to be an accurate and efficient solution in the tested cases. Thus, being computationally extremely cheap and independent of the particular numerical scheme employed, this is the approach one should choose for most simulations, at least when the computation of the potential $V$ is not very expensive. For Schrödinger-like equations with hard-to-compute potentials, most of the computational time is spent in the calculation of $V$. For these very demanding equations, the extra cost needed to compute the optimum $C_n$ (instead of the near-optimum) is negligible in comparison, so the optimum could be preferable.
For the cases tested here, we found a speedup in the computation time of up to a factor of five, the speedup depending on the equation and on the physical regime. These examples show that this approach provides significant speed improvements, and that with only minor modifications of the original algorithm. Though we focused on the nonlinear Schrödinger and Schrödinger–Newton equations, the principle of the method is independent of the particular potential considered, so this approach can be extended to other Schrödinger-like equations. More generally, the idea behind the method presented in this note can be, at least in principle, generalized and extended to other equations with similar gauge conditions.
Appendix A Numerical simulations
For the one-dimensional NLS, we considered the attractive case and used a known stationary (soliton) solution as the initial condition. We discretised space with $M$ points, in a computational box of length $L$. The two-dimensional NLS, which is often employed in optics to model self-focusing beams in a medium with a cubic nonlinearity [26, 27], presents a finite-time (blow-up) singularity [28]. More specifically, whenever the initial condition satisfies a suitable criterion, the norm of the solution, or of one of its derivatives, becomes unbounded in finite time. For this reason, we stop the simulation before the singularity occurs. We set a Gaussian initial condition and considered a case for which the corresponding initial energy is quite close to the singular regime; for the spatial discretization we used a square box discretised with $M \times M$ nodes. For both the one- and two-dimensional SN equations, we considered a Gaussian initial condition, the normalization factor being fixed by the norm of $\psi$. To solve the SN and the NLS equations we used the Dormand and Prince 5(4) integrator [22].
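The setups above can be sketched end-to-end as follows (a simplified illustration: an embedded Euler/Heun pair with an elementary step controller replaces the Dormand and Prince 5(4) integrator, and the grid, tolerance and coupling values are ours, not those of the simulations). With $g = -2$, the focusing NLS $\mathrm{i}\psi_t = -\psi_{xx} + g|\psi|^2\psi$ admits the stationary soliton $\psi = \operatorname{sech}(x)\,\mathrm{e}^{\mathrm{i}t}$, which provides a convenient accuracy check:

```python
import numpy as np

def solve_nls_1d(psi0, T, g, box, tol=1e-6, use_gauge=True):
    """Adaptive integrating-factor solver for i psi_t = -psi_xx + g|psi|^2 psi
    on a periodic box, using an embedded Euler/Heun pair and the near-optimal
    gauge constant at every step.  Returns the solution and the step count."""
    M = psi0.size
    k = 2.0 * np.pi * np.fft.fftfreq(M, d=box / M)
    k2 = k * k
    psi = psi0.astype(complex)
    t, dt, nsteps = 0.0, 1e-4, 0
    while t < T:
        dt = min(dt, T - t)
        w = np.abs(psi) ** 2
        # near-optimal gauge: minus the |psi|^2-weighted average of V = g|psi|^2
        C = -np.sum(g * w * w) / np.sum(w) if use_gauge else 0.0

        def N(psi_t, tau):  # nonlinear part, transported by the IF
            return np.exp(1j * k2 * tau) * (-1j) * np.fft.fft(
                (g * np.abs(psi_t) ** 2 + C) * psi_t)

        phi0 = np.fft.fft(psi)
        N1 = N(psi, 0.0)
        phi1 = phi0 + dt * N1                            # order 1 (Euler)
        stage = np.fft.ifft(np.exp(-1j * k2 * dt) * phi1)
        phi2 = phi0 + 0.5 * dt * (N1 + N(stage, dt))     # order 2 (Heun)
        err = np.sqrt(np.mean(np.abs(phi2 - phi1) ** 2))
        if err <= tol:                                   # accept the step
            psi = np.fft.ifft(np.exp(-1j * k2 * dt) * phi2)
            psi *= np.exp(1j * C * dt)                   # undo the gauge phase
            t += dt
            nsteps += 1
        dt *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-300)) ** 0.5))
    return psi, nsteps
```

Because the gauge phase is reverted at each accepted step, the computed solution is the same (to within the integration tolerance) with or without the gauge optimization; only the number of steps changes.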
References
 [1] Franco Dalfovo, Stefano Giorgini, Lev P Pitaevskii, and Sandro Stringari. Theory of Bose–Einstein condensation in trapped gases. Reviews of Modern Physics, 71(3):463, 1999.
 [2] Akira Hasegawa. Optical solitons in fibers. In Optical Solitons in Fibers, pages 1–74. Springer, 1989.
 [3] Christian Kharif, Efim Pelinovsky, and Alexey Slunyaev. Rogue waves in the ocean. Springer Science & Business Media, 2008.
 [4] James Binney and Scott Tremaine. Galactic Dynamics: Second Edition. 2008.
 [5] FW Dabby and JR Whinnery. Thermal self-focusing of laser beams in lead glasses. Applied Physics Letters, 13(8):284–286, 1968.
 [6] Carmel Rotschild, Oren Cohen, Ofer Manela, Mordechai Segev, and Tal Carmon. Solitons in nonlinear media with an infinite range of nonlocality: first observation of coherent elliptic solitons and of vortex-ring solitons. Physical Review Letters, 95(21):213904, 2005.
 [7] Carmel Rotschild, Barak Alfassi, Oren Cohen, and Mordechai Segev. Long-range interactions between optical solitons. Nature Physics, 2(11):769–774, 2006.
 [8] F Siddhartha Guzman and L Arturo Urena-Lopez. Gravitational cooling of self-gravitating Bose condensates. The Astrophysical Journal, 645(2):814, 2006.
 [9] Wayne Hu, Rennan Barkana, and Andrei Gruzinov. Fuzzy cold dark matter: the wave properties of ultralight particles. Physical Review Letters, 85(6):1158, 2000.
 [10] Angel Paredes and Humberto Michinel. Interference of dark matter solitons and galactic offsets. Physics of the Dark Universe, 12:50–55, 2016.
 [11] David JE Marsh and Jens C Niemeyer. Strong constraints on fuzzy dark matter from ultra-faint dwarf galaxy Eridanus II. Physical Review Letters, 123(5):051103, 2019.
 [12] Lajos Diosi. Gravitation and quantum-mechanical localization of macro-objects. arXiv preprint arXiv:1412.0201, 2014.
 [13] Roger Penrose. On gravity’s role in quantum state reduction. General relativity and gravitation, 28(5):581–600, 1996.
 [14] Lawrence M Widrow and Nick Kaiser. Using the Schrödinger equation to simulate collisionless matter. The Astrophysical Journal, 416:L71, 1993.
 [15] C. Canuto, M. Y. Hussaini, A. Quarteroni, and Th. A. Zang. Spectral Methods. Fundamentals in Single Domains. Scientific Computation. Springer, 2nd edition, 2008.
 [16] J Douglas Lawson. Generalized Runge–Kutta processes for stable systems with large Lipschitz constants. SIAM J. Numer. Anal., 4(3):372–380, 1967.
 [17] Sergio Blanes, Fernando Casas, and Ander Murua. Splitting and composition methods in the numerical integration of differential equations. arXiv preprint arXiv:0812.0377, 2008.
 [18] Philipp Bader, Sergio Blanes, Fernando Casas, and Mechthild Thalhammer. Efficient time integration methods for Gross–Pitaevskii equations with rotation term. arXiv preprint arXiv:1910.12097, 2019.
 [19] Christophe Besse, Geneviève Dujardin, and Ingrid LacroixViolet. High order exponential integrators for nonlinear Schrödinger equations with application to rotating Bose–Einstein condensates. SIAM J. Numer. Anal., 55(3):1387–1411, 2017.

 [20] Roger Alexander. Solving ordinary differential equations I: Nonstiff problems (E. Hairer, S. P. Nørsett, and G. Wanner). SIAM Review, 32(3):485, 1990.
 [21] John Charles Butcher. Numerical methods for ordinary differential equations. John Wiley & Sons, 2016.
 [22] John R Dormand and Peter J Prince. A family of embedded Runge–Kutta formulae. J. Comp. Appl. Math., 6(1):19–26, 1980.
 [23] Endre Süli and David F Mayers. An introduction to numerical analysis. Cambridge university press, 2003.
 [24] Gerhard Wanner and Ernst Hairer. Solving ordinary differential equations II. Springer Berlin Heidelberg, 1996.
 [25] Kjell Gustafsson. Control theoretic techniques for stepsize selection in explicit Runge–Kutta methods. ACM Transactions on Mathematical Software (TOMS), 17(4):533–554, 1991.
 [26] VE Zakharov and VS Synakh. The nature of the self-focusing singularity. Zh. Eksp. Teor. Fiz., 68:940–947, 1975.
 [27] Kimiaki Konno and Hiromitsu Suzuki. Self-focussing of laser beam in nonlinear media. Physica Scripta, 20(3–4):382, 1979.
 [28] Catherine Sulem and Pierre-Louis Sulem. The nonlinear Schrödinger equation: self-focusing and wave collapse, volume 139. Springer Science & Business Media, 2007.