    # Efficient simulation of DC-AC power converters using Multirate Partial Differential Equations

Switch-mode power converters are used in various applications to convert between different voltage (or current) levels. They use transistors to switch the input voltage on and off to generate a pulsed voltage whose arithmetic average is the desired output voltage of the converter. After smoothing by filters, the converter output is used to supply devices. The simulation of these switch-mode power converters by conventional time discretization is computationally expensive since a high number of time steps is necessary to properly resolve the unknown state variables and detect switch events of the excitation. This paper proposes a multirate method based on the concept of Multirate Partial Differential Equations (MPDEs), which splits the solution into fast varying and slowly varying parts. The method is developed to work with pulse width modulated (PWM) excitation with a constant switching cycle and varying duty cycle. The important case of varying duty cycles in the MPDE framework is addressed for the first time. Switching event detection is no longer necessary and a much smaller number of time steps for a decent resolution is required, thus leading to a highly efficient method.


## 1 Introduction

Switch-mode power converters nowadays play a vital role in various applications [1, 2]. They are used in domestic devices, e.g., in mobile phone chargers and computer power supplies, as well as in industrial applications, e.g., in high-voltage DC (HVDC) power transfer and speed control of electrical motors, to convert voltage (or current) between different levels. Switch-mode power converters use transistors to periodically switch the input voltage on and off to generate a pulsed voltage, whose arithmetic average is the desired output of the converter. The pulsed voltage is smoothed by filter circuits, which also act as an energy buffer before it is used to supply the appliance.

An exemplary circuit of such a switch-mode power converter is depicted in Fig. 1 along with its solution in DC-DC conversion mode. As can be seen, the converter output voltage, even after filtering, still contains high-frequency components generated by the pulsed excitation, which are usually referred to as ripples. Since the circuit is uncharged at the beginning, the converter needs some time to reach the steady state. As a result, the solution consists of a slowly varying envelope which is modulated with the fast, periodically varying ripples. The method used to generate the control signals for the transistors is called pulse width modulation (PWM). There are different kinds of PWM, e.g., natural sampling with different carrier signals like sawtooth and triangle, and regular sampling. Since in this paper we focus on natural sampling with a sawtooth carrier, we refer the interested reader to Vasca et al. [2] for more details on other modulations. Using natural sampling with a sawtooth carrier (trailing edge) leads to a pulsed signal as depicted in Fig. 2, where the switching (pulse) period and the duty cycle are the quantities defining the pulses. The switching frequency is constant while the duty cycle varies with time.

Figure 1: Exemplary circuit of a switch-mode power converter and its solution at a switching frequency of fs = 2 kHz and constant duty cycle d = 0.7.

Figure 2: Pulsed excitation and the quantities defining it: switching period Ts, on interval Δt_on, duty cycle d.

Large-scale simulations of power converters require simplifications, e.g., ideal switching behaviour of the transistors and an ideal diode. Nonetheless, the simulation of power converters with conventional time discretization is still computationally expensive since a high number of time steps is necessary to properly resolve the ripples in the solution. Special techniques are commonly applied to detect switching events, and up to now most software uses techniques which reduce the order of the applied time discretization to lowest order, see Tant et al. [3]. In this paper we develop a method which alleviates the need for a high number of time steps by splitting the solution into fast and slowly varying parts. The resulting method is a (high-order) multirate approach, which is based on a concept called Multirate Partial Differential Equations (MPDEs) [4, 5]. For DC-DC power converters the method has already been proposed in earlier papers by Pels et al. [6, 7]. We extend the concept to DC-AC power converters (also called inverters) in the following. In the process we restrict ourselves to power converter circuits which can be described by linear electrical elements and thus by a system of linear ordinary differential equations (ODEs). An extension to nonlinear problems can be achieved similarly as described in Pels et al. [6].

The method is verified using the example of the converter in Fig. 1, which is referred to as inverter in the following since it is operated in DC-AC mode. The main contribution is the efficient treatment of a (sinusoidally) varying duty cycle.

The paper is structured as follows. Section 2 introduces the concept of MPDEs and their relation to the original ODEs describing the application. The focus of Section 3 lies on the modeling of the DC-AC converters using MPDEs. It describes how an efficient simulation can be achieved. The methods used for solving the MPDEs, i.e., a Galerkin approach for the fast periodic parts of the solution and a conventional time discretization for the slowly varying parts, are presented. Section 4 introduces the B-spline basis functions used for the solution expansion and discusses their properties. It is shown that the basis functions are well suited for use with the proposed method since they allow exploiting smoothness almost everywhere while still leading to a cheap assembly of the arising matrices. Finally, in Section 5, the method is applied to the example of the inverter and the computational efficiency is discussed. Section 6 concludes the paper.

## 2 Introduction to Multirate Partial Differential Equations

Let the mathematical model of the converter be described by linear ordinary differential equations of the form

$$ \mathbf{A}\,\frac{\mathrm{d}\mathbf{x}(t)}{\mathrm{d}t} + \mathbf{B}\,\mathbf{x}(t) = \mathbf{c}(t), \qquad \mathbf{x}(0) = \mathbf{x}_0, \qquad (1) $$

where A and B are matrices which depend on the topology of the circuit and the electrical elements, x(t) is the unknown solution consisting of voltages and currents, x0 is the vector of initial conditions and c(t) is the excitation of the circuit.

To obtain the multirate formulation, two artificial time scales t1 and t2 are introduced and (1) is rewritten in terms of the corresponding multivariate solution x̂(t1, t2) and excitation ĉ(t1, t2), which yields the MPDEs [4, 5]

$$ \mathbf{A}\left(\frac{\partial\hat{\mathbf{x}}(t_1,t_2)}{\partial t_1} + \frac{\partial\hat{\mathbf{x}}(t_1,t_2)}{\partial t_2}\right) + \mathbf{B}\,\hat{\mathbf{x}}(t_1,t_2) = \hat{\mathbf{c}}(t_1,t_2). \qquad (2) $$

The relation between the ODEs (1) and MPDEs (2) is given by [4, 5]

$$ \hat{\mathbf{x}}(t,t) = \mathbf{x}(t), \qquad \hat{\mathbf{c}}(t,t) = \mathbf{c}(t). \qquad (3) $$

Therefore, if any right-hand side ĉ(t1, t2) can be found which fulfills the relation (3), then the solution of the original system of ODEs can be extracted from the solution of the MPDEs by using (3), i.e., by evaluating the multivariate solution along a diagonal line through the computational domain. It is important to note that the simulation efficiency depends on the choice of ĉ(t1, t2) and, of course, on the methods which are used for solving the MPDEs. In our case, we require fast changes to occur along the "fast time scale" t2 and slow changes along the "slow time scale" t1. The multivariate right-hand side then has to be chosen appropriately, which depends on the application and is detailed in Section 5.
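The diagonal relation (3) can be checked on a small toy example: pick any smooth bivariate trial function as x̂, define ĉ as the MPDE left-hand side so that (2) holds exactly, and verify that the diagonal evaluation satisfies the original ODE (1). This is an illustrative sketch with scalar stand-ins for A and B and a made-up trial function, not the paper's solver:

```python
import math

A, B = 2.0, 3.0  # scalar stand-ins for the system matrices A and B

def xhat(t1, t2):
    # arbitrary smooth bivariate trial function standing in for the MPDE solution
    return math.exp(-t1) * math.cos(7.0 * t2)

def chat(t1, t2):
    # right-hand side chosen so that the MPDE (2) holds exactly:
    # A (dxhat/dt1 + dxhat/dt2) + B xhat
    dx_dt1 = -math.exp(-t1) * math.cos(7.0 * t2)
    dx_dt2 = -7.0 * math.exp(-t1) * math.sin(7.0 * t2)
    return A * (dx_dt1 + dx_dt2) + B * xhat(t1, t2)

def x(t):  # diagonal evaluation, relation (3): x(t) = xhat(t, t)
    return xhat(t, t)

def c(t):
    return chat(t, t)

# the ODE (1), A dx/dt + B x = c, then holds along the diagonal;
# check at one sample point with a central finite difference
t, h = 0.8, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2.0 * h)
residual = A * dxdt + B * x(t) - c(t)
print(abs(residual) < 1e-6)  # True
```

By the chain rule, d/dt x̂(t, t) = ∂x̂/∂t1 + ∂x̂/∂t2 evaluated at (t, t), which is exactly why (2) collapses to (1) on the diagonal.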

## 3 Multirate Modeling of DC-AC converters

The fast varying components, i.e., the ripples in the solution of the power converters, are assumed to be representable by basis functions pk(τ(t2), d(t1)), which depend on the fast time scale t2 and on the duty cycle d, assumed to be slowly varying. The first assumption is necessary for the convergence of the method but obvious for most classical bases. The second assumption on the dynamics of the duty cycle is not crucial but will be important to obtain an efficient method. The basis functions are periodic with period Ts, which is realized using the function τ(t2) = (t2 mod Ts)/Ts, also called the relative time. The multivariate solution is expanded into the basis functions and slowly varying coefficients wj,k(t1), yielding [7, 6, 8]

$$ \hat{x}_j(t_1,t_2) \doteq \sum_{k=0}^{N_p} w_{j,k}(t_1)\, p_k(\tau(t_2), d(t_1)) = \mathbf{p}^\top(\tau(t_2), d(t_1))\,\mathbf{w}_j(t_1), \qquad (4) $$

where

$$ \mathbf{p}(\tau) = \begin{bmatrix} p_0 \\ p_1(\tau) \\ p_2(\tau) \\ \vdots \\ p_{N_p}(\tau) \end{bmatrix}, \qquad \mathbf{w}_j(t_1) = \begin{bmatrix} w_{j,0}(t_1) \\ w_{j,1}(t_1) \\ w_{j,2}(t_1) \\ \vdots \\ w_{j,N_p}(t_1) \end{bmatrix}. \qquad (5) $$

Inserting the solution expansion into the partial derivatives from (2) leads to

$$ \frac{\partial \hat{x}_j(t_1,t_2)}{\partial t_1} + \frac{\partial \hat{x}_j(t_1,t_2)}{\partial t_2} = \frac{\partial \mathbf{p}^\top(\tau(t_2),d)}{\partial d}\,\frac{\mathrm{d}d(t_1)}{\mathrm{d}t_1}\,\mathbf{w}_j(t_1) + \mathbf{p}^\top(\tau(t_2),d(t_1))\,\frac{\mathrm{d}\mathbf{w}_j(t_1)}{\mathrm{d}t_1} + \frac{\partial \mathbf{p}^\top(\tau(t_2),d(t_1))}{\partial \tau}\,\frac{\mathrm{d}\tau(t_2)}{\mathrm{d}t_2}\,\mathbf{w}_j(t_1). \qquad (6) $$

To solve the MPDEs, two methods are applied, one for each time scale. To resolve the fast changes represented by the basis functions pk, a Galerkin approach is applied. The slowly varying coefficients wj(t1) are solved afterwards with a conventional time discretization. Applying the Ritz-Galerkin approach to the MPDEs (2) with respect to t2 on the interval of periodicity [0, Ts) yields

$$ \int_0^{T_s} \left( \mathbf{A}\left(\frac{\partial\hat{\mathbf{x}}(t_1,t_2)}{\partial t_1} + \frac{\partial\hat{\mathbf{x}}(t_1,t_2)}{\partial t_2}\right) + \mathbf{B}\,\hat{\mathbf{x}}(t_1,t_2) \right) p_l(\tau(t_2), d)\, \mathrm{d}t_2 = \int_0^{T_s} \hat{\mathbf{c}}(t_1,t_2)\, p_l(\tau(t_2), d)\, \mathrm{d}t_2 \qquad \forall\, l = 0,\dots,N_p. \qquad (7) $$

Using integration by parts gives (function arguments are omitted in the following for the sake of readability)

$$ \int_0^{T_s} \left( \mathbf{A}\,\frac{\partial\hat{\mathbf{x}}(t_1,t_2)}{\partial t_1} + \mathbf{B}\,\hat{\mathbf{x}}(t_1,t_2) \right) p_l\, \mathrm{d}t_2 - \int_0^{T_s} \mathbf{A}\,\hat{\mathbf{x}}(t_1,t_2)\,\frac{\partial p_l}{\partial \tau}\frac{\mathrm{d}\tau}{\mathrm{d}t_2}\, \mathrm{d}t_2 + \Big[ \mathbf{A}\,\hat{\mathbf{x}}(t_1,t_2)\, p_l \Big]_{t_2=0}^{t_2=T_s} = \int_0^{T_s} \hat{\mathbf{c}}(t_1,t_2)\, p_l\, \mathrm{d}t_2 \qquad \forall\, l = 0,\dots,N_p. \qquad (8) $$

Since the solution is periodic with respect to t2 on the interval [0, Ts), the boundary term vanishes. Inserting (6) and substituting t2 with τ (so that dt2 = Ts dτ) leads to

$$ T_s \int_0^1 \left( \mathbf{A} \begin{bmatrix} \mathbf{p}^\top \tfrac{\mathrm{d}\mathbf{w}_1}{\mathrm{d}t_1} + \tfrac{\mathrm{d}d}{\mathrm{d}t_1}\,\tfrac{\partial\mathbf{p}^\top}{\partial d}\,\mathbf{w}_1 \\ \vdots \\ \mathbf{p}^\top \tfrac{\mathrm{d}\mathbf{w}_{N_s}}{\mathrm{d}t_1} + \tfrac{\mathrm{d}d}{\mathrm{d}t_1}\,\tfrac{\partial\mathbf{p}^\top}{\partial d}\,\mathbf{w}_{N_s} \end{bmatrix} + \mathbf{B} \begin{bmatrix} \mathbf{p}^\top \mathbf{w}_1 \\ \vdots \\ \mathbf{p}^\top \mathbf{w}_{N_s} \end{bmatrix} \right) p_l \,\mathrm{d}\tau - \int_0^1 \mathbf{A} \begin{bmatrix} \mathbf{p}^\top \mathbf{w}_1 \\ \vdots \\ \mathbf{p}^\top \mathbf{w}_{N_s} \end{bmatrix} \frac{\partial p_l}{\partial \tau} \,\mathrm{d}\tau = \int_0^{T_s} \hat{\mathbf{c}}(t_1,t_2)\, p_l \,\mathrm{d}t_2 \qquad \forall\, l = 0,\dots,N_p, \qquad (9) $$

where the factor dτ/dt2 · Ts = 1 cancels in the last term on the left-hand side.

Assembling all equations finally yields a linear system of ODEs with time-varying coefficients

$$ \mathcal{A}(t_1)\,\frac{\mathrm{d}\mathbf{w}}{\mathrm{d}t_1} + \mathcal{B}(t_1)\,\mathbf{w}(t_1) = \mathcal{C}(t_1), \qquad (10) $$

where

$$ \mathcal{A}(t_1) = \mathbf{A} \otimes \mathcal{I}(d(t_1)), \qquad \mathcal{B}(t_1) = \mathbf{B} \otimes \mathcal{I}(d(t_1)) + \mathbf{A} \otimes \mathcal{Q}(d(t_1)) + \frac{\mathrm{d}d(t_1)}{\mathrm{d}t_1}\, \mathbf{A} \otimes \mathcal{U}(d(t_1)), \qquad (11) $$

$$ \mathcal{C}(t_1) = \int_0^{T_s} \hat{\mathbf{c}}(t_1,t_2) \otimes \mathbf{p}(\tau(t_2), d(t_1))\, \mathrm{d}t_2, \qquad (12) $$

and

$$ \mathcal{I}(d) = T_s \int_0^1 \mathbf{p}(\tau,d)\,\mathbf{p}^\top(\tau,d)\, \mathrm{d}\tau, \qquad (13) $$

$$ \mathcal{Q}(d) = -\int_0^1 \frac{\partial \mathbf{p}(\tau,d)}{\partial \tau}\,\mathbf{p}^\top(\tau,d)\, \mathrm{d}\tau, \qquad (14) $$

$$ \mathcal{U}(d) = T_s \int_0^1 \mathbf{p}(\tau,d)\,\frac{\partial \mathbf{p}^\top(\tau,d)}{\partial d}\, \mathrm{d}\tau. \qquad (15) $$

The equation system (10) can be solved using conventional time discretization with much larger time steps than for the original problem, since the fast varying changes are already taken into account by the Galerkin approach and only the envelope is resolved. A disadvantage of (10) is the larger equation system, which is (Np + 1) times larger than the original one (1).
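A minimal sketch of how the blocks in (11) could be assembled, using the structure established later in Theorem 1 (I(d) = I⁰d + I¹(1−d) affine-linear, Q and U constant). All numeric entries below are made-up placeholders, not values from the paper:

```python
def kron(X, Y):
    # Kronecker product X ⊗ Y for plain nested-list matrices
    n, m, p, q = len(X), len(X[0]), len(Y), len(Y[0])
    return [[X[i][j] * Y[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

def madd(*Ms):
    # entry-wise sum of equally sized matrices
    return [[sum(M[i][j] for M in Ms) for j in range(len(Ms[0][0]))]
            for i in range(len(Ms[0]))]

def scale(a, M):
    return [[a * x for x in row] for row in M]

A  = [[1.0, 0.0], [0.0, 2.0]]   # placeholder circuit matrices A, B
B  = [[0.5, 1.0], [-1.0, 0.2]]
I0 = [[2.0, 1.0], [1.0, 2.0]]   # placeholder Galerkin blocks; per Theorem 1:
I1 = [[1.0, 0.5], [0.5, 1.0]]   # I(d) = I0*d + I1*(1-d), Q and U constant
Q  = [[0.0, -1.0], [1.0, 0.0]]
U  = [[0.1, 0.0], [0.0, 0.1]]

def calI(d):
    return madd(scale(d, I0), scale(1.0 - d, I1))

def calA(d):
    # A(t1) = A ⊗ I(d(t1)), cf. (11)
    return kron(A, calI(d))

def calB(d, d_dot):
    # B(t1) = B ⊗ I(d) + A ⊗ Q + d'(t1) A ⊗ U, cf. (11)
    return madd(kron(B, calI(d)), kron(A, Q), scale(d_dot, kron(A, U)))

print(len(calA(0.7)), len(calB(0.7, 0.0)[0]))  # 4 4
```

With Ns = 2 circuit unknowns and Np + 1 = 2 basis functions, the assembled system is 4 × 4, illustrating the (Np + 1)-fold growth of the system size.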

To solve the system (10) numerically, initial values have to be specified. To achieve an efficient simulation, the following choice has proven advantageous:

1. Calculate the steady state of the system (10), i.e., solve (10) with dw/dt1 = 0, and use the solution expansion (4) to reconstruct the solution (ripple).

2. Since the reconstructed solution in steady state does (usually) not fulfill the initial condition of the original ODEs (1), i.e., x̂(0,0) ≠ x0, it has to be shifted by a constant such that x̂(0,0) = x0. The shift is accomplished by modifying the coefficients wj,k. In the case of B-splines, due to the partition of unity, this is achieved by adding the j-th component of the required shift to each steady-state coefficient wj,k and using the result as initial values for (10). This corresponds to the zero initial conditions as we use them for the original set of ODEs (1).

Other choices may still lead to the correct solution but may require higher effort from the time discretization algorithm.

## 4 Choice of basis functions

For the solution expansion (4), B-spline basis functions are employed. They allow a high-order basis while still capturing, by construction, the reduced continuity as it appears in the current ripples. In the literature, splines have also been used in the context of MPDE methods, but for high-frequency problems. Brachtendorf et al. [9], for instance, use cubic and exponential splines, which lead to a better approximation than Fourier basis functions for steep transients while still enabling a simple extraction of the frequency spectrum, and Bittner et al. [10] use an adaptive spline-wavelet method to approximate solutions with steep transients. Both do not deal with solutions of reduced continuity by construction. In contrast, we take advantage of the a-priori knowledge of the excitation switching instant to construct appropriate basis functions for the solution representation.

### 4.1 Introduction to B-splines

In this subsection we briefly introduce the most important properties and definitions concerning B-splines for this work. The information is taken from Piegl et al. [11], to which the interested reader is referred for more details.

To define B-spline basis functions we first set up a knot vector Ξ = {ξ0, ξ1, …, ξm}, which is sorted in ascending order, i.e., ξi ≤ ξi+1, i = 0, …, m−1. The basis functions of degree p̄ are built up recursively from the piecewise constant basis functions

$$ P_{i,0}(\xi) = \begin{cases} 1 & \text{for } \xi_i \le \xi < \xi_{i+1} \\ 0 & \text{otherwise} \end{cases} \qquad (16) $$

by the Cox-DeBoor recurrence formula

$$ P_{i,\bar{p}}(\xi) = \frac{\xi - \xi_i}{\xi_{i+\bar{p}} - \xi_i}\, P_{i,\bar{p}-1}(\xi) + \frac{\xi_{i+\bar{p}+1} - \xi}{\xi_{i+\bar{p}+1} - \xi_{i+1}}\, P_{i+1,\bar{p}-1}(\xi). \qquad (17) $$
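The recurrence (16)-(17) translates directly into code; by the usual convention, fractions with zero denominator (caused by repeated knots) are taken as zero. A small sketch:

```python
def bspline(i, p, knots, xi):
    # Cox-DeBoor recursion (17); zero-denominator terms are set to zero
    # (repeated-knot convention)
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline(i, p - 1, knots, xi)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline(i + 1, p - 1, knots, xi)
    return left + right

# clamped quadratic basis on [0, 1]; inside the domain the basis functions
# sum to one (partition of unity)
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
n_basis = len(knots) - 2 - 1          # number of degree-2 basis functions: 4
total = sum(bspline(i, 2, knots, 0.3) for i in range(n_basis))
print(round(total, 12))  # 1.0
```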

For our purposes we assume the knot vector to be open (also called clamped), i.e., it is of the form

 Ξ={a,…,a¯p+1,ξ¯p+1,…,ξm−¯p−1,b,…,b¯p+1}, (18)

where the first and the last knots appear p̄+1 times. The regularity of the B-spline basis functions across the knots, i.e., their continuity across the knot ξi, can be controlled using knot repetitions. The maximum regularity for degree-p̄ B-splines is C^(p̄−1), which corresponds to a knot repetition of one for all knots except the knots at the boundary. To obtain less regularity, a knot repetition is introduced: C^(p̄−k) continuity is achieved by a knot repetition of k. For instance, to represent a C0 continuity across a knot ξi, the knot repetition is given by p̄.

### 4.2 Choice of knot vector

For use in this work we define the knot vector such that the basis functions have a C0 continuity at the point where the excitation changes its state. Defining the basis in terms of the relative time, i.e., ξ = τ, and the (fixed) duty cycle d, the simplest knot vector depending on the degree p̄ is given by

 Ξ¯p={0,…,0¯p+1,d,…,d¯p,1,…,1¯p+1}. (19)

Additional knot refinement (corresponding to h-refinement in Finite Element Methods) leads to

$$ \Xi_{\bar{p},K} = \{ \underbrace{0, \dots, 0}_{\bar{p}+1},\ \alpha_1 d, \dots, \alpha_K d,\ \underbrace{d, \dots, d}_{\bar{p}},\ \beta_1(1-d)+d, \dots, \beta_K(1-d)+d,\ \underbrace{1, \dots, 1}_{\bar{p}+1} \}, \qquad (20) $$

where K is the number of additional knots inserted before and after the C0 continuity, and αk, βk ∈ (0, 1), k = 1, …, K, in ascending order.
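Building the knot vectors (19) and (20) is mechanical; the following sketch (with hypothetical refinement positions αk, βk) illustrates it:

```python
def knot_vector(p, d, K=0, alphas=(), betas=()):
    # refined knot vector (20): clamped at 0 and 1, knot repetition p at
    # tau = d (C0 continuity there); alphas/betas in (0, 1), ascending,
    # place the K extra knots on each side of d
    assert len(alphas) == K and len(betas) == K
    return ([0.0] * (p + 1)
            + [a * d for a in alphas]
            + [d] * p
            + [b * (1.0 - d) + d for b in betas]
            + [1.0] * (p + 1))

Xi = knot_vector(2, 0.7, K=1, alphas=(0.5,), betas=(0.5,))
print([round(k, 10) for k in Xi])
# [0.0, 0.0, 0.0, 0.35, 0.7, 0.7, 0.85, 1.0, 1.0, 1.0]

# number of basis functions: len(Xi) - p - 1 = 2*p + 2*K + 1, matching (21)
print(len(Xi) - 2 - 1)  # 7
```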

Using the refined knot vector (20) leads to the set of basis functions

$$ \{ P_{0,\bar{p}}(\xi, d), \dots, P_{2\bar{p}+2K,\bar{p}}(\xi, d) \} \qquad (21) $$

as depicted exemplarily for p̄ = 2 and K = 1 in Fig. 3, i.e., we have in total Np + 1 = 2p̄ + 2K + 1 basis functions. The basis functions for the solution expansion (4) are finally given by

$$ p_k(\tau, d) = P_{k,\bar{p}}(\tau, d). \qquad (22) $$

The periodicity of the set of basis functions is ensured using periodic boundary conditions in the implementation.

Using the set of B-spline basis functions as introduced above leads to a cheap calculation of the matrices arising in the MPDE approach since their elements depend at most linearly on the duty cycle. For a low-order basis like classical Finite Element hat functions (i.e., p̄ = 1) this is rather obvious, whilst for higher-order B-splines it is more involved due to their larger support. Hence this property is analyzed in the following theorem.

Figure 3: B-spline basis functions with degree p̄ = 2 and refinement factor K = 1.
###### Theorem 1 (Dependency of the matrices $\mathcal{I}$, $\mathcal{Q}$, $\mathcal{U}$ on the duty cycle).

Using the B-spline basis functions (21), only the matrix $\mathcal{I}$ from (13) depends linearly on the duty cycle, i.e., $\mathcal{I}(d) = \mathcal{I}^0 d + \mathcal{I}^1 (1-d)$ with constant matrices $\mathcal{I}^0$ and $\mathcal{I}^1$, and the matrices $\mathcal{Q}$ and $\mathcal{U}$ from (14), (15), respectively, are independent of the duty cycle.

###### Proof.

The set of basis functions (21) can be split into three parts: the basis functions which are supported only on knots in the interval [0, d], i.e., the basis functions P_{0,p̄}, …, P_{p̄+K−1,p̄}; the basis functions supported only on the interval [d, 1], i.e., P_{p̄+K+1,p̄}, …, P_{2p̄+2K,p̄}; and the remaining basis function which depends on knots from the entire interval [0, 1], i.e., P_{p̄+K,p̄}.

1. Calculating the basis functions P_{0,p̄}, …, P_{p̄+K−1,p̄} using the Cox-DeBoor formula leads to a recursion of the form

$$ P_{i,\bar{p}}(\xi, d) = \frac{\xi - \gamma_{1,i}\,d}{\gamma_{2,i}\,d}\, P_{i,\bar{p}-1}(\xi, d) + \frac{\gamma_{3,i}\,d - \xi}{\gamma_{4,i}\,d}\, P_{i+1,\bar{p}-1}(\xi, d), \qquad (23) $$

where γ_{1,i}, …, γ_{4,i} are constants depending on α_1, …, α_K but independent of the duty cycle d. Therefore the resulting polynomials can be written as

$$ P_{i,\bar{p}}(\xi, d) = \bar{\gamma}_{\bar{p},i}\, \frac{\xi^{\bar{p}}}{d^{\bar{p}}} + \bar{\gamma}_{\bar{p}-1,i}\, \frac{\xi^{\bar{p}-1}}{d^{\bar{p}-1}} + \dots + \bar{\gamma}_{0,i}\, \frac{\xi^{0}}{d^{0}}. \qquad (24) $$
2. To calculate the B-splines P_{p̄+K+1,p̄}, …, P_{2p̄+2K,p̄}, we first redefine the knot vector for ease of notation by shifting it,

$$ \hat{\Xi}_{\bar{p},K} = \Xi_{\bar{p},K} - d = \{ \underbrace{-d, \dots, -d}_{\bar{p}+1},\ \alpha_1 d - d, \dots, \alpha_K d - d,\ \underbrace{0, \dots, 0}_{\bar{p}},\ \beta_1(1-d), \dots, \beta_K(1-d),\ \underbrace{1-d, \dots, 1-d}_{\bar{p}+1} \}. \qquad (25) $$

The resulting B-splines P̂_{i,p̄} are the same as with the original knot vector but shifted by −d. Using the Cox-DeBoor formula leads to

$$ \hat{P}_{i,\bar{p}}(\xi, d) = \frac{\xi - \chi_{1,i}(1-d)}{\chi_{2,i}(1-d)}\, \hat{P}_{i,\bar{p}-1}(\xi, d) + \frac{\chi_{3,i}(1-d) - \xi}{\chi_{4,i}(1-d)}\, \hat{P}_{i+1,\bar{p}-1}(\xi, d), \qquad (26) $$

where χ_{1,i}, …, χ_{4,i} are constants independent of the duty cycle d. The final polynomials can be written as

$$ \hat{P}_{i,\bar{p}}(\xi, d) = \bar{\chi}_{\bar{p},i}\, \frac{\xi^{\bar{p}}}{(1-d)^{\bar{p}}} + \bar{\chi}_{\bar{p}-1,i}\, \frac{\xi^{\bar{p}-1}}{(1-d)^{\bar{p}-1}} + \dots + \bar{\chi}_{0,i}\, \frac{\xi^{0}}{(1-d)^{0}}. \qquad (27) $$
3. The single basis function P_{p̄+K,p̄} consists of two terms due to the knot repetition. In the Cox-DeBoor formula, the first term in the sum stems from basis functions in the interval [0, d], the second term stems from basis functions in the interval [d, 1]. Therefore, as for the other basis functions, the left term can be written as a polynomial of the form (24) and the right term as a polynomial of the form (27).

Using the above knowledge, the matrix (13) is of the form

$$ \mathcal{I}_{i,j}(d) = \underbrace{T_s \int_0^{d} P_{i,\bar{p}}(\xi,d)\, P_{j,\bar{p}}(\xi,d)\, \mathrm{d}\xi}_{\mathcal{I}^0_{i,j}\, d} + \underbrace{T_s \int_0^{1-d} \hat{P}_{i,\bar{p}}(\xi,d)\, \hat{P}_{j,\bar{p}}(\xi,d)\, \mathrm{d}\xi}_{\mathcal{I}^1_{i,j}\,(1-d)}, \qquad (28) $$

where $\mathcal{I}^0_{i,j}$ and $\mathcal{I}^1_{i,j}$ are constants. Therefore the matrix depends only linearly on the duty cycle.

The matrices (14) and (15) are given by

$$ \mathcal{Q}_{i,j} = \underbrace{-\int_0^{d} \frac{\partial P_{i,\bar{p}}(\xi,d)}{\partial \xi}\, P_{j,\bar{p}}(\xi,d)\, \mathrm{d}\xi}_{\mathcal{Q}^0_{i,j}} \; \underbrace{-\int_0^{1-d} \frac{\partial \hat{P}_{i,\bar{p}}(\xi,d)}{\partial \xi}\, \hat{P}_{j,\bar{p}}(\xi,d)\, \mathrm{d}\xi}_{\mathcal{Q}^1_{i,j}} \qquad (29) $$

and

$$ \mathcal{U}_{i,j} = \underbrace{T_s \int_0^{d} P_{i,\bar{p}}(\xi,d)\, \frac{\partial P_{j,\bar{p}}(\xi,d)}{\partial d}\, \mathrm{d}\xi}_{\mathcal{U}^0_{i,j}} + \underbrace{T_s \int_0^{1-d} \hat{P}_{i,\bar{p}}(\xi,d)\, \frac{\partial \hat{P}_{j,\bar{p}}(\xi,d)}{\partial d}\, \mathrm{d}\xi}_{\mathcal{U}^1_{i,j}}, \qquad (30) $$

where $\mathcal{Q}^0_{i,j}$, $\mathcal{Q}^1_{i,j}$, $\mathcal{U}^0_{i,j}$ and $\mathcal{U}^1_{i,j}$ are constants. The matrices are thus independent of the duty cycle d. ∎

This is a helpful result since it allows a cheap calculation of the matrices $\mathcal{I}(d)$, $\mathcal{Q}$ and $\mathcal{U}$ for different duty cycles, i.e., at different time steps.
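Theorem 1 can be checked numerically for the simplest case p̄ = 1, K = 0 (three hat functions on the knot vector {0, 0, d, 1, 1}): since I(d) is affine-linear in d, its value at the midpoint of two duty cycles must equal the average of the endpoint values. A small sketch (the constant factor Ts is dropped):

```python
def hat_basis(d):
    # degree-1 B-splines on the knot vector {0, 0, d, 1, 1}: three hat functions
    def P0(x): return (d - x) / d if x <= d else 0.0
    def P1(x): return x / d if x <= d else (1.0 - x) / (1.0 - d)
    def P2(x): return 0.0 if x <= d else (x - d) / (1.0 - d)
    return [P0, P1, P2]

def gram(d, n=4000):
    # midpoint-rule approximation of I(d)/Ts = integral_0^1 p p^T dtau
    h = 1.0 / n
    P = hat_basis(d)
    return [[sum(Pi((k + 0.5) * h) * Pj((k + 0.5) * h) for k in range(n)) * h
             for Pj in P] for Pi in P]

# affine-linearity check: I(0.45) must equal (I(0.3) + I(0.6)) / 2
Ga, Gb, Gm = gram(0.3), gram(0.6), gram(0.45)
max_dev = max(abs(Gm[i][j] - 0.5 * (Ga[i][j] + Gb[i][j]))
              for i in range(3) for j in range(3))
print(max_dev < 1e-4)  # True
```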

## 5 Numerical results

To verify the proposed method and measure its efficiency, we test the method on the inverter example depicted in Fig. 1. To measure the accuracy of solutions, they are compared to a reference solution calculated in Simulink using PLECS. In the following we use three simulation setups: 1.) the MPDE approach; 2.) a conventional time discretization with switch detection implemented in MATLAB; 3.) a simulation in Simulink using PLECS. The conventional approaches 2.) and 3.) exploit a-priori knowledge of discontinuities, e.g., by a pre-defined event function.

Figure 4: Excerpt of the excitation (top) and the solution of the inverter (bottom) using a sinusoidally varying duty cycle.

The buck converter is described by the ordinary differential equation

$$ \begin{bmatrix} L & 0 \\ 0 & C \end{bmatrix} \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} i_L \\ v_C \end{bmatrix} + \begin{bmatrix} R_L & 1 \\ -1 & 1/R \end{bmatrix} \begin{bmatrix} i_L \\ v_C \end{bmatrix} = \begin{bmatrix} v_i(t, d(t)) \\ 0 \end{bmatrix}, \qquad (31) $$

where L (inductance of the coil), C (capacitance of the capacitor), R_L (coil resistance) and R (load resistance) are fixed parameters, and v_C, i_L and v_i(t, d(t)) are the voltage at the capacitor, the current through the coil and the PWM excitation, respectively. The excitation for the inverter is generated using natural sampling PWM with a sawtooth carrier. It can be written as

$$ v_i(t, d(t)) = \bar{v}_i\, \operatorname{sgn}\big(d(t) - s(t)\big), \qquad (32) $$

where s(t) is the sawtooth carrier, v̄i is the peak excitation voltage, and sgn is the sign function. The switching frequency fs is fixed. An excerpt of the excitation is shown in Fig. 4.
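For orientation, a conventional brute-force time discretization of (31) with backward Euler, here with a constant duty cycle for simplicity, can be sketched as follows; all parameter values are illustrative placeholders, not the paper's:

```python
# placeholder parameters (the paper's numeric values are not reproduced here)
L, C, RL, R = 1e-2, 1e-4, 0.01, 10.0   # inductance, capacitance, coil/load resistance
fs, d, vbar = 2e3, 0.7, 10.0           # switching frequency, duty cycle, peak voltage
Ts = 1.0 / fs

def vi(t):
    # trailing-edge sawtooth PWM, cf. (32), with a constant duty cycle
    return vbar if (t % Ts) / Ts < d else -vbar

def step(iL, vC, t, h):
    # backward Euler for  [L 0; 0 C] x' + [RL 1; -1 1/R] x = [vi; 0]:
    # solve the 2x2 system (M/h + B) x_new = M/h x_old + c by Cramer's rule
    a11, a12 = L / h + RL, 1.0
    a21, a22 = -1.0, C / h + 1.0 / R
    b1 = L / h * iL + vi(t + h)
    b2 = C / h * vC
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det

iL = vC = t = 0.0
h = Ts / 200.0                  # many steps per switching cycle -> expensive
for _ in range(200 * 50):       # 50 switching cycles
    iL, vC = step(iL, vC, t, h)
    t += h

# after the start-up transient, vC ripples around vbar*(2*d - 1) = 4 V
print(3.5 < vC < 4.5)  # True
```

The fine step size needed to resolve every ripple is exactly the cost that the MPDE approach avoids.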

For the MPDE formulation we choose the multivariate excitation as ĉ(t1, t2) = [v_i(t2, d(t1)), 0]^⊤, i.e., we force the duty cycle, which is slowly varying compared to the switching of the excitation, to evolve along the time scale t1 and the switching to occur along the time scale t2. As it turns out, the right-hand side of the ODEs after the MPDE approach, i.e., (12), also depends linearly on the duty cycle and thus allows a cheap evaluation. The proof is analogous to the one detailed in Section 4. The duty cycle is assumed to be sinusoidal and given by

$$ d(t_1) = 0.5 \left( \frac{\bar{v}_{C,\text{desired}}}{\bar{v}_i}\, \sin(2\pi f_{\text{ac}}\, t_1) + 1 \right), \qquad (33) $$

where v̄_{C,desired} is the desired peak output voltage of the converter and f_ac is the desired frequency of the AC output voltage. The constants v̄_i, v̄_{C,desired} and f_ac are fixed. The resulting inverter output, i.e., the voltage at the capacitor and the current through the coil, is depicted in Fig. 4. The multivariate solution of the MPDEs is depicted in Fig. 5; the solution of the original equations (1) is marked as a black line.

Figure 5: Excerpt of the multivariate solution of the inverter. The solution of the original ODEs is marked as a black line.
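The excitation (32) with the sinusoidal duty cycle (33) can be sketched as follows; the numeric values (fs, fac, voltages) are placeholders, not the paper's. Since the duty cycle varies slowly compared to the switching, the on-fraction of each switching cycle tracks d:

```python
import math

fs, fac = 2000.0, 50.0           # placeholder switching / AC output frequency
vbar, vC_des = 10.0, 5.0         # placeholder peak input / desired output voltage
Ts = 1.0 / fs

def d(t):
    # sinusoidal duty cycle, cf. (33)
    return 0.5 * (vC_des / vbar * math.sin(2.0 * math.pi * fac * t) + 1.0)

def s(t):
    # trailing-edge sawtooth carrier, period Ts, range [0, 1)
    return (t % Ts) / Ts

def vi(t):
    # natural-sampling PWM excitation, cf. (32)
    return vbar * math.copysign(1.0, d(t) - s(t))

# the on-fraction of one switching cycle approximates the slowly varying duty cycle
t0 = 10 * Ts
samples = [vi(t0 + k * Ts / 1000.0) for k in range(1000)]
on_fraction = sum(1 for v in samples if v > 0) / 1000.0
print(abs(on_fraction - d(t0)) < 0.02)  # True
```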

The MPDE approach is implemented in MATLAB, and the solver ode15s is used for the Simulink simulation, the conventional time discretization and the MPDE approach. For the MPDE approach, the maximum order MaxOrder of the solver is set to 2, whilst for the other two methods the default setting of 5 is used. To compare the solutions, the relative error of the capacitor voltage on the simulation interval Ω,

$$ \epsilon_v = \frac{\| v_{C,\text{ref}}(t) - v_C^h(t) \|_{L^2(\Omega)}}{\| v_{C,\text{ref}}(t) \|_{L^2(\Omega)}}, \qquad (34) $$

is approximated using the mid-point quadrature rule. The error of the current through the coil is defined analogously. The reference solution is calculated in Simulink using a very fine solver tolerance and a small maximum step size.
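The relative L2 error (34) with mid-point quadrature can be sketched as:

```python
import math

def rel_l2_error(f_ref, f_num, t_start, t_end, n=1000):
    # relative L2 error (34), approximated with the mid-point quadrature rule
    h = (t_end - t_start) / n
    num = err = 0.0
    for k in range(n):
        t = t_start + (k + 0.5) * h
        err += (f_ref(t) - f_num(t)) ** 2 * h
        num += f_ref(t) ** 2 * h
    return math.sqrt(err) / math.sqrt(num)

# toy check: a 1 % amplitude error on a sine gives a 1 % relative L2 error
eps = rel_l2_error(lambda t: math.sin(t), lambda t: 1.01 * math.sin(t),
                   0.0, 2.0 * math.pi)
print(abs(eps - 0.01) < 1e-9)  # True
```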

MPDE approach simulation results are obtained for three different settings:

1. Lowest order approximation: degree p̄ = 1 (corresponds to Finite Element hat functions),

2. Medium order approximation: increased B-spline degree p̄ and knot refinement K,

3. High order approximation: further increased B-spline degree p̄ and knot refinement K.

The error of the MPDE approach using these settings is calculated for different tolerances of the time discretization and is depicted in Fig. 6 for both the current through the coil and the voltage at the capacitor. The error is evaluated using 100 points per cycle. Reference solution values which are not available at the corresponding points are interpolated linearly.

Fig. 6 shows that the error decreases with smaller tolerances for the solver until it stagnates, which is due to the fact that the approximation of the Galerkin approach bounds the accuracy. It is also visible that with better discretization settings for the Galerkin approach, the stagnation is shifted to smaller tolerances. For the three settings introduced above we now fix the tolerances for the solver such that we achieve the highest accuracy without wasting computational effort in the stagnation region. The corresponding tolerances are marked as dots in Fig. 6 and summarized in Table 1. To be able to compare statistical data like the number of time steps, the number of LU decompositions and the number of function evaluations used by the solvers, the accuracy of the conventional time discretization in MATLAB and PLECS is controlled by the solver tolerance. The error of both with respect to the tolerance is shown in Fig. 7 for both voltage and current; the errors corresponding approximately to the errors of the MPDE approach are listed in Table 1 with the associated solver tolerance. Statistical data of all approaches, i.e., the number of time steps, number of failed steps, number of LU decompositions, number of function evaluations and number of forward/backward substitutions (solutions of linear systems), are compared in Table 2 for the settings in Table 1. For the PLECS simulation only the number of time steps is supplied; the other figures of merit can, to our knowledge, not be extracted from the Simulink solvers. For a high solver tolerance, the conventional time discretization in MATLAB is slightly more accurate than the PLECS results (see Fig. 7) since the MaxStep option is used to guarantee that no switching events are missed. This explains the considerably higher number of time steps compared to PLECS in Table 2, setting 1 (lowest order). For the other settings, the number of time steps in MATLAB and PLECS is almost equal.
Figure 6: Error of the MPDE approach versus the solver tolerance for the time discretization. The dots mark the tolerances chosen for the comparison to the conventional time discretization methods. (top) Error of the voltage at the capacitor. (bottom) Error of the current through the coil.

Figure 7: Error of the time discretization in MATLAB and PLECS versus the solver tolerance.

As can be seen in Table 2, the MPDE approach is several times faster than the conventional methods with respect to the considered quantities. Especially when only low or medium accuracy (settings 1 and 2) is necessary, the speedup is very high. Furthermore, the efficiency of the method increases with higher switching frequency. If the switching frequency is assumed to be twice as high, for instance, the conventional time discretization will likely take twice as much time, since twice as many ripples have to be resolved. The MPDE approach, on the other hand, will need approximately the same number of time steps since the envelope does not change. As a result, the speedup compared with the conventional methods would be twice as high.

It should be noted that the actual efficiency in terms of computing time depends on the efficiency of the solver used for the solution of the equation systems. Since the equation systems in the MPDE approach are larger than those of the original problem, the solver will take more time per step. This has to be taken into account when choosing the number of basis functions for the solution expansion. The MPDE approach will be especially efficient if the computational effort for function evaluations is higher than the effort for solving the equation systems.

## 6 Conclusions

A multirate method for the efficient simulation of DC-AC switch-mode inverters has been presented. The system of equations describing the converter circuit is first formulated as a system of Multirate Partial Differential Equations (MPDEs), which allow splitting the solution into components of different, explicitly stated time scales. The MPDEs are efficiently solved using a combination of a Galerkin approach with B-spline basis functions for the solution expansion, and a conventional time discretization. The functionality of the proposed approach is verified on a simple inverter circuit with sinusoidal AC output. The computational efficiency is analyzed, among others, in terms of the number of time steps, number of LU decompositions and number of function evaluations, compared to a conventional time discretization of the problem. The results show the high potential efficiency of the method. The method is particularly efficient in applications in which the function evaluations, i.e., the evaluations of the matrices/functions in the differential equation describing the application, are computationally expensive.

## Acknowledgments

This work is supported by the “Excellence Initiative” of German Federal and State Governments and the Graduate School CE at TU Darmstadt.

## References

1. N. Mohan, T. M. Undeland, and W. P. Robbins, Power electronics: converters, applications and design. Wiley, 3rd ed., 2003.
2. F. Vasca and L. Iannelli, eds., Dynamics and Control of Switched Electronic Systems. Advances in Industrial Control, Berlin: Springer, 2012.
3. J. Tant and J. Driesen, "On the numerical accuracy of electromagnetic transient simulation with power electronics," IEEE Trans. Power Deliv., vol. PP, no. 99, pp. 1–1, 2018.
4. H. G. Brachtendorf, G. Welsch, R. Laur, and A. Bunse-Gerstner, "Numerical steady state analysis of electronic circuits driven by multi-tone signals," Electr. Eng., vol. 79, no. 2, pp. 103–112, 1996.
5. J. Roychowdhury, "Analyzing circuits with widely separated time scales using numerical PDE methods," IEEE Trans. Circ. Syst. Fund. Theor. Appl., vol. 48, pp. 578–594, May 2001.
6. A. Pels, J. Gyselinck, R. V. Sabariego, and S. Schöps, "Solving nonlinear circuits with pulsed excitation by multirate partial differential equations," IEEE Trans. Magn., vol. 54, pp. 1–4, Mar. 2018.
7. A. Pels, J. Gyselinck, R. V. Sabariego, and S. Schöps, "Efficient simulation of DC-DC switch-mode power converters by multirate partial differential equations," arXiv:1707.01947, 2017.
8. J. Gyselinck, C. Martis, and R. V. Sabariego, "Using dedicated time-domain basis functions for the simulation of pulse-width-modulation controlled devices – application to the steady-state regime of a buck converter," in Electromotion 2013, Cluj-Napoca, Romania, Oct. 2013.
9. H. G. Brachtendorf, A. Bunse-Gerstner, B. Lang, and S. Lampe, "Steady state analysis of electronic circuits by cubic and exponential splines," Electr. Eng., vol. 91, no. 4, pp. 287–299, 2009.
10. K. Bittner and H. G. Brachtendorf, "Adaptive multi-rate wavelet method for circuit simulation," Radioengineering, vol. 23, Apr. 2014.
11. L. Piegl and W. Tiller, The NURBS Book. Springer, 2nd ed., 1997.