
Quantile Fourier Transform, Quantile Series, and Nonparametric Estimation of Quantile Spectra

11/10/2022
by Ta-Hsin Li, et al. (IBM)

A nonparametric method is proposed for estimating the quantile spectra and cross-spectra introduced in Li (2012, 2014) as bivariate functions of frequency and quantile level. The method is based on the quantile discrete Fourier transform (QDFT) defined by trigonometric quantile regression and the quantile series (QSER) defined by the inverse Fourier transform of the QDFT. A nonparametric spectral estimator is constructed from the autocovariance and cross-covariance functions of the QSER using the lag-window (LW) approach. Various quantile smoothing techniques are further employed to reduce the statistical variability of the estimator across quantiles, among them a new technique called spline quantile regression (SQR). The performance of the proposed estimation method is evaluated through a simulation study.



1 Introduction

The concept of quantile spectra and cross-spectra was introduced in Li (2012, 2014) as a result of an asymptotic analysis of the quantile periodograms and cross-periodograms constructed from trigonometric quantile regression. Given stationary time series with continuous marginal distribution functions , let be the bivariate distribution functions of these series and be their bivariate level-crossing rates. Then, the quantile spectra and cross-spectra of these series at a quantile level take the form

(1)

where

These quantile spectra and cross-spectra are analogous to the ordinary power spectra and cross-spectra in the sense that the take the place of the standard deviations and the take the place of the ordinary autocorrelation and cross-autocorrelation functions (Brockwell and Davis 1991). Because the coincide with the ordinary autocorrelation and cross-autocorrelation functions of the indicator processes , the quantile spectra and cross-spectra in (1) are closely related to the spectral analysis methods for indicator processes (Davis and Mikosch 2009; Dette et al. 2015; Baruník and Kley 2019). Quantile-frequency analysis (QFA) is designed to explore the properties of quantile spectra and cross-spectra as bivariate functions of frequency and quantile level (Li 2020).

Estimating the quantile spectra and cross-spectra defined by (1) is not as straightforward as estimating the ordinary spectra and cross-spectra of indicator processes. In this paper, we propose a method that takes advantage of the concept of quantile discrete Fourier transform (QDFT) introduced in Li (2014).

The gist of the proposed method is as follows: First, we use the solutions of trigonometric quantile regression to construct the QDFT for each observed series on a set of quantile levels; then, we compute the inverse Fourier transform of the QDFT to produce sequences in the time domain, called quantile series (QSER), for each quantile level; and finally, we use the sample autocovariance and cross-covariance functions of the QSER, called quantile autocovariance and cross-covariance functions (QACF), to construct a nonparametric estimator of the quantile spectra and cross-spectra in (1) by following the conventional lag-window (LW) approach. Furthermore, we explore several smoothing techniques to control the statistical variability of the spectral estimator across quantiles and thereby achieve additional reduction of estimation error. These techniques include conventional smoothing splines and a new technique called spline quantile regression (SQR). The latter employs splines to represent the coefficients in the trigonometric quantile regression and imposes penalties for their roughness as functions of .

The remainder of this paper is organized as follows. In Section 2, we describe the QDFT, QSER, and QACF. In Section 3, we introduce the LW spectral estimator and discuss the techniques of quantile smoothing. In Section 4, we provide the results of a simulation study on the performance of the proposed estimation method. Concluding remarks are given in Section 5. In addition, we provide the details of the SQR technique in Appendix I, a summary of R functions for the proposed method in Appendix II, and additional results of the simulation study in Appendix III.

2 Quantile Fourier Transform and Quantile Series

Let () be time series of length , and let be the Fourier frequencies. For each , consider the following trigonometric quantile regression solution at quantile level :

(2)

where is the objective function of quantile regression (Koenker 2005). In addition, for (i.e., when is even), let

(3)

and for (i.e., ), let

Based on these trigonometric quantile regression solutions, we define the quantile discrete Fourier transform (QDFT) of the th series at quantile level as

(4)

where . This definition of QDFT is motivated by the fact that the ordinary DFT can be constructed in the same way by replacing with the objective function of least-squares regression.
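To illustrate this least-squares analogy, the following R snippet (a minimal check, not code from the paper) verifies that ordinary harmonic regression at a Fourier frequency reproduces the ordinary DFT under the convention z = (n/2)(a - ib) assumed here for the coefficient pair:

```r
## Least-squares analogy: at a Fourier frequency, OLS harmonic regression
## reproduces the ordinary DFT under the convention z = (n/2)(a - i b)
set.seed(1)
n <- 128; tt <- 1:n
j <- 10;  f <- j / n                  # a Fourier frequency strictly between 0 and 1/2
y <- rnorm(n)

fit <- lm(y ~ cos(2 * pi * f * tt) + sin(2 * pi * f * tt))
b   <- unname(coef(fit))
dft_ls <- n * complex(real = b[2], imaginary = -b[3]) / 2
dft    <- sum(y * exp(-2i * pi * f * tt))
all.equal(dft_ls, dft)                # TRUE (up to numerical error)
```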

It is easy to see that the sequence is conjugate symmetric:

(5)

Therefore, in order to compute the QDFT, one only needs to solve the quantile regression problems (2)–(3) for (i.e., for when is odd and when is even); the conjugate symmetry provides the values of the QDFT at the remaining frequencies. Linear programming algorithms, such as those implemented by the rq function in the R package 'quantreg' (Koenker 2005), can be employed to compute the quantile regression solutions efficiently, and the computation can be parallelized across frequencies.
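As an illustration, the following R sketch computes the QDFT of a single series on the Fourier grid using rq from 'quantreg'. The function name qdft_single and the normalization (n times the tau-quantile at frequency 0, (n/2)(a - ib) at interior Fourier frequencies, and n times the cosine coefficient at frequency 1/2) are assumptions chosen to mirror the least-squares analogy, not the paper's exact definition.

```r
## A sketch of the QDFT of a single series via trigonometric quantile regression
library(quantreg)

qdft_single <- function(y, tau) {
  n  <- length(y)
  tt <- 1:n
  qdft <- complex(n)
  qdft[1] <- rq(y ~ 1, tau = tau)$coef[1] * n          # frequency 0: n times the tau-quantile
  for (j in 1:floor(n / 2)) {
    f <- j / n
    if (abs(f - 0.5) < 1e-10) {
      ## frequency 1/2 (n even): a single cosine regressor, cos(pi * t) = (-1)^t
      a <- rq(y ~ cos(2 * pi * f * tt), tau = tau)$coef[2]
      qdft[j + 1] <- n * a
    } else {
      ## trigonometric quantile regression with cosine and sine regressors
      fit <- rq(y ~ cos(2 * pi * f * tt) + sin(2 * pi * f * tt), tau = tau)
      qdft[j + 1] <- n * complex(real = fit$coef[2], imaginary = -fit$coef[3]) / 2
    }
  }
  ## remaining frequencies follow from the conjugate symmetry (5)
  if (n %% 2 == 0) {
    qdft[(n / 2 + 2):n] <- Conj(qdft[(n / 2):2])
  } else {
    qdft[(n %/% 2 + 2):n] <- Conj(qdft[(n %/% 2 + 1):2])
  }
  qdft
}
```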

Now, consider the inverse Fourier transform of the QDFT for each :

(6)

We call this sequence the quantile series (QSER) of at quantile level . Note that the QSER is a real-valued time series due to the conjugate symmetry in (5). Also note that the sample mean of the QSER, , coincides with , which is the -quantile of , because by definition.
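Given the complex QDFT vector from the sketch above, the QSER can be obtained with the inverse FFT; the Re() call below only discards numerically negligible imaginary parts that remain after the conjugate-symmetric transform.

```r
## QSER as the inverse Fourier transform of the QDFT (sketch)
qdft <- qdft_single(rnorm(128), tau = 0.25)          # example input
qser <- Re(fft(qdft, inverse = TRUE)) / length(qdft) # real-valued quantile series
mean(qser)                                           # approximately the 0.25-quantile
```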

Based on the QDFT in (4), the (first kind) quantile periodogram of at level (Li 2012) can be written as

(7)

and the quantile cross-periodogram of and () at level (Li 2014) can be written as

(8)

This way of expressing the quantile periodogram and cross-periodogram (QPER) in terms of the QDFT is consistent with the conventional definition of periodogram and cross-periodogram in terms of the ordinary DFT (Brockwell and Davis 1991).

From the QSER in (6) we obtain the sample autocovariance and cross-covariance functions

(9)

We call these functions the quantile autocovariance and cross-covariance functions (QACF) at level . It is easy to show that the usual relationship between the ordinary autocovariance functions and the ordinary periodograms holds true for the QACF and the QPER:

(10)

where and . This relationship provides the basis for the construction of a nonparametric estimator of quantile spectra and cross-spectra.
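A direct computation of the sample QACF from the quantile series of several series at the same level might look as follows. This is a sketch; the lag convention gam[u + 1, k, l] estimating the covariance between series k at time t+u and series l at time t is an assumption made here for definiteness.

```r
## Sample QACF of quantile series at a fixed level; qser_mat holds one quantile
## series per column
qacf <- function(qser_mat, maxlag = nrow(qser_mat) - 1) {
  n <- nrow(qser_mat)
  m <- ncol(qser_mat)
  x <- scale(qser_mat, center = TRUE, scale = FALSE)   # remove the sample means
  gam <- array(0, c(maxlag + 1, m, m))
  for (u in 0:maxlag) {
    gam[u + 1, , ] <- t(x[(u + 1):n, , drop = FALSE]) %*%
                        x[1:(n - u), , drop = FALSE] / n
  }
  gam
}
```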

Note that the Fourier transform of the QPER was employed in Chen, Sun, and Li (2021) to construct spectral estimators for univariate time series; this produces an aliased version of the QACF rather than the QACF itself. The aliasing may be negligible in applications where the QACF decays quickly in the lag and only the first few autocovariances at small lags are used in subsequent modeling and analysis. The alias-free QACF is produced by (9) through the QDFT and QSER.

3 Lag-Window Spectral Estimator

Let be a nonnegative even function with bandwidth parameter . An example of such a function is the Tukey-Hanning window (Priestley 1981)

(11)

Inspired by the conventional lag-window (LW) approach (Priestley 1981) in light of the relationship (10), we propose the following estimator,

(12)

for estimating the quantile spectral matrix .
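A sketch of the LW estimator built from the QACF and the Tukey-Hanning lag window is given below. The frequency convention (cycles per unit time, no 2*pi normalization) and the function names are illustrative assumptions.

```r
## Lag-window (LW) estimate of the quantile spectral matrix at one quantile level
tukey_hanning <- function(u) ifelse(abs(u) <= 1, 0.5 * (1 + cos(pi * u)), 0)

lw_estimate <- function(gam, M, freq) {
  ## gam:  QACF array from qacf(), dimensions (maxlag + 1) x m x m
  ## M:    bandwidth parameter of the lag window
  ## freq: frequencies at which the estimate is evaluated
  m    <- dim(gam)[2]
  lags <- 0:(dim(gam)[1] - 1)
  lam  <- tukey_hanning(lags / M)
  est  <- array(0i, c(length(freq), m, m))
  for (i in seq_along(freq)) {
    e <- exp(-2i * pi * freq[i] * lags)
    for (k in 1:m) for (l in 1:m) {
      ## nonnegative lags use gam[, k, l]; negative lags enter via gam[, l, k]
      est[i, k, l] <- sum(lam * gam[, k, l] * e) +
                      sum((lam * gam[, l, k] * Conj(e))[-1])
    }
  }
  est
}

## e.g., for two quantile series of length n (names qser1, qser2 are hypothetical):
## S.hat <- lw_estimate(qacf(cbind(qser1, qser2)), M = 10, freq = (1:(n/2)) / n)
```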

In this LW estimator, the bandwidth parameter controls the statistical variability with respect to frequency for a fixed quantile level. Further control of the statistical variability across quantile levels can be accomplished by what we call quantile smoothing (QS). We consider three approaches to quantile smoothing, distinguished by where the smoothing takes place in the estimation procedure.

In the first approach, we first compute the LW estimate on the quantile-frequency grid , and then apply a smoother to the resulting sequence across quantile levels for each fixed frequency. This constitutes what we call the LWQS estimator. In the second approach, we first apply the smoother to the QDFT sequence across quantile levels for each fixed frequency, and then use the QSER and QACF derived from the quantile-smoothed QDFT to construct what we call the QSLW estimator according to (12). Although any smoother can be used in these estimators, we focus on two spline-based smoothers (Wahba 1975) in our experiments: the smooth.spline function in the R package 'stats', for its simplicity of computation (R Core Team 2022), and the gamm function in the R package 'mgcv', for its capability of handling correlated data (Wood 2022).
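For the first (LWQS) approach, smoothing across quantiles with smooth.spline might be organized as in the following sketch, applied separately to each real-valued component (real and imaginary parts of cross-spectra would be smoothed separately); the matrix layout and function name are illustrative.

```r
## LWQS: smooth a real-valued component of the LW estimates across quantile
## levels at each fixed frequency; lw is an n.freq x n.tau matrix
smooth_across_tau <- function(lw, taus, spar = NULL) {
  t(apply(lw, 1, function(z) {
    fit <- if (is.null(spar)) smooth.spline(taus, z)            # GCV
           else                smooth.spline(taus, z, spar = spar)
    predict(fit, taus)$y
  }))
}
```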

The third approach tackles the problem at the root by employing the spline quantile regression (SQR) method described in Appendix I to produce a smoothed version of the quantile regression solution. SQR is a penalized quantile regression method in which the coefficients of the regressors are represented as spline functions of the quantile level and penalized for their roughness in terms of the L1-norm of their second derivatives. The SQR problem can be reformulated as a linear program (LP) and solved efficiently by a modified version of the rq.fit.fnb function in the R package 'quantreg' (Koenker 2005). Given the solutions from SQR, we first compute what we call the spline QDFT (SQDFT) according to (4) and then use the QSER and QACF derived from the SQDFT to construct what we call the SQRLW estimator according to (12). The smoothness of this estimator is controlled by a smoothing parameter . See Appendix I for details.

To measure the accuracy of spectral estimation, we employ the Kullback-Leibler divergence

This spectral measure is closely related to Whittle's likelihood (Priestley 1981) and has been used as a similarity metric for time series clustering and classification (Kakizawa, Shumway, and Taniguchi 1998).
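Although the display defining the divergence is not reproduced above, a commonly used form of the spectral Kullback-Leibler divergence in the sense of Kakizawa, Shumway, and Taniguchi (1998) can be sketched as follows; whether this matches the paper's exact normalization is an assumption.

```r
## Spectral Kullback-Leibler divergence between an estimated spectral matrix
## array S.hat and a reference array S (both of dimension n.freq x m x m)
kld_spec <- function(S.hat, S) {
  nf <- dim(S)[1]
  m  <- dim(S)[2]
  d  <- 0
  for (j in 1:nf) {
    R   <- S.hat[j, , ] %*% solve(S[j, , ])
    lam <- eigen(R, only.values = TRUE)$values     # real and positive in theory
    d   <- d + Re(sum(diag(R))) - sum(log(Re(lam))) - m
  }
  d / nf
}
```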

4 A Simulation Study

To investigate the performance of the estimation method outlined in the previous section, we use a set of simulated data with and . The first series, , is a nonlinear mixture of three components , , and :

(13)

where and . The second series, , is a delayed copy of :

(14)

The three components , , and are zero-mean unit-variance autoregressive (AR) processes, satisfying where , , and with , , and where , , and are mutually independent Gaussian white noise. In other words, is a low-pass series with spectral peak at frequency , is a high-pass series with spectral peak at frequency , and is a band-pass series with spectral peak at frequency .
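The exact AR coefficients and the mixing function are not recoverable from the text above, but a component of the kind described (a zero-mean, roughly unit-variance AR(2) series with a spectral peak near a chosen frequency) can be generated as in the following sketch with illustrative placeholder parameters.

```r
## An AR(2) series with a spectral peak near frequency f0 (cycles per unit time);
## the coefficients are illustrative, not the paper's actual simulation parameters
ar2_peak <- function(n, f0, r = 0.9) {
  phi <- c(2 * r * cos(2 * pi * f0), -r^2)   # complex AR roots at radius r, angle 2*pi*f0
  x <- arima.sim(model = list(ar = phi), n = n)
  as.numeric(x / sd(x))                       # rescale to (approximately) unit variance
}

## e.g., a band-pass component peaking near frequency 0.25 for a series of length 512
s3 <- ar2_peak(512, f0 = 0.25)
```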

Figure 1 shows the quantile spectrum and cross-spectrum of the series in (13)-(14) evaluated at and . These spectra are the ensemble mean of quantile periodograms and cross-periodograms from 5000 Monte Carlo runs.

Figure 1: Simulated quantile spectrum and cross-spectrum according to (13) and (14). Real and imaginary parts of the cross-spectrum (second row) are known as co-spectrum and quadrature-spectrum, respectively. All spectra are shown as functions of linear frequency .
Figure 2: An example of simulated time series according to (13) and (14).
Figure 3: Quantile periodogram and cross-periodogram for the series shown in Figure 2.
Figure 4: Quantile series defined by (6) for the series shown in Figure 2.
Figure 5: Selected quantile series at quantile levels (first row), (second row), and (third row). Dashed horizontal lines depict the sample mean.

Figure 2 shows the series from one of the simulation runs. The corresponding quantile periodogram and cross-periodogram are shown in Figure 3. Figures 4 and 5 contain the image plot of the QSER of these series at all quantile levels and the time series plot of selected QSER at three quantile levels , , and (see Appendix III for the QACF plots at these quantile levels). It is interesting to observe the stretched values of the QSER, upward for and downward for .

Figure 6 shows the LW spectral estimates obtained from the series in Figure 2. These estimates are constructed according to (12) using the Tukey-Hanning window (11) with . They can be viewed as a smoothed version of the raw quantile periodogram and cross-periodogram in Figure 3 with respect to the frequency variable. The KLD of these estimates equals 0.198.

Figure 7 shows the LWQS estimates obtained by applying quantile smoothing to the LW estimates in Figure 6 using the smooth.spline function with the smoothing parameter chosen by the generalized cross-validation (GCV) criterion. The resulting KLD equals 0.194. In this case, the KLD is reduced slightly, but the effect of quantile smoothing is barely noticeable when compared to Figure 6.

A better result is shown in Figure 8. These estimates are also obtained by applying smooth.spline to the LW estimates in Figure 6, but the smoothing parameter spar is set to 0.9 instead of being determined by GCV. This results in smoother appearances when compared to the estimates in Figures 6 and 7. The KLD is reduced significantly from 0.198 and 0.194 to 0.109.

A closer examination of the LW estimates reveals strong positive correlations across quantiles. These correlations are not handled effectively by smooth.spline with GCV. To take the correlations into account, we use the gamm function in the ‘mgcv’ package (Wood 2022). This function jointly estimates the smoothing splines and the parameters of a user-specified correlation model while retaining GCV for smoothing parameter selection.
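A sketch of this gamm-based quantile smoothing at a single frequency is shown below; the toy data, the basis dimension k, and the AR(1) correlation structure across the quantile grid are illustrative assumptions.

```r
## Quantile smoothing of the LW estimate at one frequency with gamm, modeling
## AR(1) correlation across quantile levels
library(mgcv)   # attaches nlme, which provides corAR1

taus   <- seq(0.05, 0.95, by = 0.05)
lw_row <- exp(-8 * (taus - 0.5)^2) + rnorm(length(taus), sd = 0.05)  # toy LW values

dat <- data.frame(z = lw_row, tau = taus)
fit <- gamm(z ~ s(tau, k = 8), data = dat, correlation = corAR1(form = ~ 1))
smoothed <- predict(fit$gam, newdata = data.frame(tau = taus))
```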

Figure 9 shows the result of applying gamm to the LW estimates in Figure 6 assuming the correlation structure of an AR(1) process. The KLD of these estimates equals 0.130, which is a significant improvement over smooth.spline with GCV. This improvement is achieved at a higher computational cost: a 100-fold increase in computing time when compared to smooth.spline. Computation can be accelerated by parallelization for different frequencies.

Figure 6: LW estimates of the quantile spectrum and cross-spectrum shown in Figure 1 from the series shown in Figure 2. KLD = 0.198.
Figure 7: LWQS estimates of the quantile spectrum and cross-spectrum shown in Figure 1 from the series shown in Figure 2 using smooth.spline with GCV. KLD = 0.194.
Figure 8: LWQS estimates of the quantile spectrum and cross-spectrum shown in Figure 1 from the series shown in Figure 2 using smooth.spline with spar = 0.9. KLD = 0.109.
Figure 9: LWQS estimates of the quantile spectrum and cross-spectrum shown in Figure 1 from the series shown in Figure 2 using gamm with correlated residuals. KLD = 0.130.
Figure 10: Kullback-Leibler divergence (KLD) of (a) the LWQS estimator and (b) the QSLW estimator where quantile smoothing is performed by smooth.spline with different smoothing parameter spar (NA for no quantile smoothing, NULL for GCV). Results are based on 1000 Monte Carlo runs.

Figure 10(a) and Table 1 provide a more comprehensive assessment of the LWQS estimator using smooth.spline and gamm. The results are based on 1000 Monte Carlo runs. As shown in Figure 10(a), smooth.spline with GCV offers a slight improvement over no quantile smoothing; a significant improvement can be made by setting spar manually for a range of choices. Table 1 confirms the superiority of gamm over smooth.spline for the LWQS estimator when the smoothing parameter is selected by GCV.

Table 1: Kullback-Leibler Divergence of the LWQS Estimators
Quantile Smoothing Method
no quantile smoothing 0.237 (0.031) 0.210 (0.026) 0.221 (0.025) 0.244 (0.025)
smooth.spline with GCV 0.234 (0.031) 0.205 (0.026) 0.216 (0.025) 0.238 (0.025)
smooth.spline with spar = 0.9 0.148 (0.020) 0.111 (0.015) 0.113 (0.015) 0.125 (0.016)
gamm with correlated residuals 0.168 (0.028) 0.135 (0.023) 0.141 (0.024) 0.157 (0.025)

Results are based on 1000 Monte Carlo runs. Numbers in parentheses are standard deviations.

Table 2: Kullback-Leibler Divergence of the QSLW Estimators
Quantile Smoothing Method
no quantile smoothing 0.237 (0.031) 0.210 (0.026) 0.221 (0.025) 0.244 (0.025)
smooth.spline with GCV 0.229 (0.029) 0.203 (0.025) 0.216 (0.024) 0.240 (0.024)
smooth.spline with spar = 0.8 0.177 (0.021) 0.163 (0.024) 0.183 (0.027) 0.209 (0.028)
gamm with correlated residuals 0.218 (0.031) 0.214 (0.035) 0.239 (0.037) 0.270 (0.039)

Results are based on 1000 Monte Carlo runs. Numbers in parentheses are standard deviations.

Table 3: Kullback-Leibler Divergence of the SQRLW Estimators
Quantile Smoothing Method
no quantile smoothing 0.237 (0.031) 0.210 (0.026) 0.221 (0.025) 0.244 (0.025)
SQR with 0.185 (0.022) 0.168 (0.022) 0.186 (0.023) 0.212 (0.025)
SQR with (weighted) 0.191 (0.024) 0.171 (0.022) 0.187 (0.023) 0.212 (0.024)

Results are based on 1000 Monte Carlo runs. Numbers in parentheses are standard deviations.


A similar assessment is given by Figure 10(b) and Table 2 for the QSLW estimator, where quantile smoothing is performed on the QDFT sequences instead of the LW estimates. Figure 10(b) shows that GCV is not particularly effective for the QSLW estimator using smooth.spline, but significantly smaller estimation errors can be produced for a range of choices of spar. A comparison with Figure 10(a) reveals that the best result of QSLW is inferior to the best result of LWQS. Table 2 shows that gamm is not as effective for the QSLW estimator as it is for the LWQS estimator, which may be partly attributed to increased variability. See Appendix III for the plots of QSLW estimates obtained from the series in Figure 2 using smooth.spline and gamm.

Finally, consider the SQR approach to quantile smoothing. Figure 11 shows an example of trigonometric parameters obtained from SQR (see Appendix I for details). Apparently, with a suitable choice of the smoothing parameter , the SQR method is able to achieve the effect of smoothing across quantiles. Due to joint estimation with additional parameters, the computing time increases about 25-fold when compared to the one-quantile-at-a-time solutions using rq.

Figure 12 shows the SQRLW estimates produced by SQR with for the series in Figure 2. The KLD of these estimates equals 0.155. This is superior to the results of the LW estimates in Figure 6 and the LWQS estimates in Figure 7, but inferior to the results of the LWQS estimates in Figures 8 and 9.

Figure 11: Trigonometric SQR solutions (solid line) and (dashed line) with (a) and (b) at for a series of length given by (13). Circles and triangles represent the corresponding solutions of one-quantile-at-a-time quantile regression.
Figure 12: SQRLW estimates of the quantile spectrum and cross-spectrum shown in Figure 1 from the series shown in Figure 2 using SQR with . KLD = 0.155.

Figure 13 and Table 3 compare the KLD of the SQRLW estimator with unweighted and weighted penalty in SQR for different choices of based on 1000 Monte Carlo runs. Similar to the QSLW estimator using smooth.spline, the SQRLW estimator is able to reduce the estimation error of the unsmoothed LW estimator for a range of choices of ; however, it is unable to achieve the best results produced by the LWQS estimator. With weighted penalty, the performance does not deteriorate as rapidly as it does with unweighted penalty when becomes too large. The unweighted SQR is more effective for other choices of .

Figure 13: Kullback-Leibler divergence (KLD) of the SQRLW estimator with different choices of the bandwidth parameter and smoothing parameter (NA for no quantile smoothing) by (a) unweighted SQR and (b) weighted SQR. Results are based on 1000 Monte Carlo runs.

5 Concluding Remarks

In this paper, we propose a nonparametric method for estimating the quantile spectra and cross-spectra introduced through trigonometric quantile regression in Li (2012, 2014). This method is based on the quantile discrete Fourier transform (QDFT) defined by the trigonometric quantile regression and the quantile series (QSER) defined by the inverse Fourier transform of the QDFT. The autocovariance and cross-covariance functions of the QSER facilitate the construction of a lag-window (LW) spectral estimator for the quantile spectra and cross-spectra. While the window function controls the statistical variability of this estimator across frequencies, we consider three approaches to further reducing the statistical variability across quantiles. These approaches include the application of smoothing-spline techniques to the LW estimates or to the QDFT sequences, and the use of a new technique called spline quantile regression (SQR), which produces smoothed quantile regression solutions. All of these approaches lead to improved spectral estimates. Among them, applying a smoother directly to the LW estimates turns out to be the most effective, provided the strong positive correlations across quantiles are taken into account. It remains a challenge to automate the smoothing process with greater effectiveness and computational efficiency.

References

Andriyana, Y., Gijbels, I., and Verhasselt, A. (2014) P-splines quantile regression estimation in varying coefficient models. Test, 23, 153–194. DOI 10.1007/s11749-013-0346-2.

Brockwell, P., and Davis, R. (1991) Time Series: Theory and Methods, 2nd edn, section 11.6. New York: Springer.

Baruník, J., and Kley, T. (2019) Quantile coherency: A general measure for dependence between cyclical economic variables. Econometrics Journal, 22, 131–152.

Berkelaar, M. (2022) Package ‘lpSolve’. https://cran.r-project.org/web/packages/lpSolve/lpSolve.pdf.

Chen, T., Sun, Y., and Li, T.-H. (2021) A semiparametric estimation algorithm for the quantile spectrum with an application to earthquake classification using convolutional neural network. Computational Statistics & Data Analysis, 154, 107069.

Davis, R., and Mikosch, T. (2009) The extremogram: A correlogram for extreme events. Bernoulli, 15, 977–1009.

Dette, H., Hallin, M., Kley, T., and Volgushev, S. (2015) Of copulas, quantiles, ranks and spectra: an L1-approach to spectral analysis. Bernoulli, 21, 781–831.

Kakizawa, Y., Shumway, R., and Taniguchi, M. (1998) Discrimination and clustering for multivariate time series. Journal of the American Statistical Association, 93, 328–340.

Koenker, R. (2005) Quantile Regression. Cambridge, UK: Cambridge University Press.

Koenker, R. and Ng, P. (2005) A Frisch-Newton algorithm for sparse quantile regression. Acta Mathematicae Applicatae Sinica, 21, 225–236.

Koenker, R., Ng, P., and Portnoy, S. (1994) Quantile smoothing splines. Biometrika, 81, 673–680.

Li, T.-H. (2012) Quantile periodograms. Journal of the American Statistical Association, 107, 765–776.

Li, T.-H. (2014) Time Series with Mixed Spectra. Boca Raton, FL: CRC Press.

Li, T.-H. (2020) From zero crossings to quantile-frequency analysis of time series with an application to nondestructive evaluation. Applied Stochastic Models in Business and Industry, 36, 1111–1130.

Portnoy, S., and Koenker, R. (1997) The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Statistical Science, 12, 279–300.

Priestley, M. (1981) Spectral Analysis and Time Series, p. 443. New York: Academic Press.

R Core Team (2022) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.

Wahba, G. (1975) Smoothing noisy data with spline functions. Numerische Mathematik, 24, 383–393.

Wood, S. (2022) Package ‘mgcv’. https://cran.r-project.org/web/packages/mgcv/mgcv.pdf.

Appendix I: Spline Quantile Regression

Let be a sequence of observations and be the corresponding values of a -dimensional regressor. Given an increasing sequence of quantile levels, the spline quantile regression (SQR) problem can be stated as

(15)

where is the functional space spanned by cubic -spline basis functions for some and the are penalty parameters.
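For concreteness, a cubic B-spline basis on a quantile grid, together with the second derivatives needed for the roughness penalty, can be evaluated as in the following sketch; the basis dimension K and the knot placement are illustrative choices, not necessarily those used in the paper.

```r
## Cubic B-spline basis on a quantile grid and its second derivatives, the
## ingredients of the SQR roughness penalty (illustrative construction)
library(splines)

taus <- seq(0.05, 0.95, by = 0.05)
K <- 10
knots <- c(rep(0, 4), seq(0, 1, length.out = K - 2)[-c(1, K - 2)], rep(1, 4))
B  <- splineDesign(knots, taus, ord = 4)              # length(taus) x K basis values
B2 <- splineDesign(knots, taus, ord = 4, derivs = 2)  # second derivatives of the basis
```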

The SQR problem in (15) is different from the problem of quantile smoothing splines considered by Koenker, Ng, and Portnoy (1994), where splines are used to represent nonparametric functions of the independent variable. It is also different from the problem considered by Andriyana, Gijbels, and Verhasselt (2014), where splines are used to represent regression coefficients as functions of time.

In (15), the L1-norm of second derivatives is employed as the roughness measure of the coefficient functions in order to retain the LP characteristics of the original quantile regression problem (Koenker 2005). The L2-norm, popular for spline smoothing (Wahba 1975), could also be employed in (15), but this would make the problem a quadratic program, and we do not consider it here. Furthermore, the quantile-dependent penalty parameters in (15) are able to accommodate the simple case or the more sophisticated case , where is a suitable weight sequence controlling the excessive variability of quantile regression at very high or very low quantiles, e.g., .

Let denote the set of spline basis functions. Then, any function

can be expressed as

where

With this notation, the problem (15) can be restated as

(16)

and

(17)

Like the ordinary QR problem (Koenker 2005), the SQR problem (16) can be reformulated as LP:

(18)

where and are the -dimensional and -dimensional vectors of 1's, is the ordinary regression design matrix, and is the total number of decision variables to be optimized. Among the decision variables in (18), and are primary variables which determine the desired solution (17) with

(19)

The remaining variables , , , and are auxiliary variables which are introduced just for the purpose of linearizing the objective function in (16).

In the canonical form, the LP problem (18) can be expressed as

(20)

where

and

This problem can be solved numerically by standard LP solvers available in many software packages such as the lp function in the R package ‘lpSolve’ (Berkelaar 2022).

Associated with the primal LP problem (20) is a dual LP problem of the form

(21)

where may be interpreted as the Lagrange multiplier for the equality constraints in (20). By partitioning according to the structure of such that , the inequalities can be written more explicitly as

These inequalities are equivalent to

By a change of variables,

we obtain

Therefore, the dual LP problem (21) is equivalent to

(22)

where

This dual formulation facilitates the use of a modified version of the rq.fit.fnb function in the ‘quantreg’ package to solve the SQR problem. The rq.fit.fnb function was developed by Portnoy and Koenker (1997) for the ordinary QR problem , which has a dual formulation of the form . To solve the SQR problem, we modified this function along the lines of rq.fit.lasso (Koenker, Ng, and Portnoy 1994). In this modified version, we replaced the response vector by , the design matrix by , and the right-hand-side vector in the equality constraints by ; we also set the initial value of to .

To justify this method of solving the SQR problem, it suffices to show that the SQR problem has a primal-dual formulation that conforms to the requirement of the underlying interior point algorithm of Portnoy and Koenker (1997). Toward that end, let

With this change of variables, we can rewrite the equality constraints in (20) as

(23)

Under these constraints, we have and . Substituting these expressions in (20) yields