Kernel Density Estimation by Stagewise Algorithm with a Simple Dictionary

07/27/2021
by Kiheiji Nishida, et al.

This study proposes multivariate kernel density estimation by a stagewise minimization algorithm based on U-divergence and a simple dictionary. The dictionary consists of an appropriate scalar bandwidth matrix and a part of the original data. The resulting estimator provides data-adaptive weighting parameters and bandwidth matrices, and realizes a sparse representation of kernel density estimation. We develop the non-asymptotic error bound of the estimator obtained via the proposed stagewise minimization algorithm. Simulation studies confirm that the proposed estimator performs competitively with, or sometimes better than, other well-known density estimators.


Acknowledgments

The second author gratefully acknowledges the financial support from KAKENHI 19K11851.

1 Introduction

Let $X_1, \ldots, X_n$ be a $d$-dimensional i.i.d. sample generated from the true density function $f$ on $\mathbb{R}^d$. A general representation of the multivariate Kernel Density Estimator (KDE) is written as

$$\hat{f}(x) = \sum_{i=1}^{n} \alpha_i \, |H_i|^{-1/2} K\!\left(H_i^{-1/2}(x - X_i)\right), \qquad (1)$$

where $H_i$, $i = 1, \ldots, n$, is a symmetric and positive definite $d \times d$ bandwidth matrix used for the datum $X_i$, $K$ is a non-negative real-valued bounded kernel function, and $\alpha_i \geq 0$, $i = 1, \ldots, n$, with $\sum_{i=1}^{n} \alpha_i = 1$, are the weighting parameters assigned to the data.
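To fix ideas, the following is a minimal sketch of evaluating an estimator of the form (1) at a single point, assuming a Gaussian kernel and scalar bandwidth matrices $H_i = h_i^2 I_d$; the function name and arguments are ours, not the paper's.

```r
# Evaluate (1) at point x: a weighted sum of Gaussian kernels with
# per-datum scalar bandwidths h_i and weights alpha_i (sum(alpha) = 1)
kde_general <- function(x, data, h, alpha) {
  d <- ncol(data)
  contrib <- vapply(seq_len(nrow(data)), function(i) {
    z <- (x - data[i, ]) / h[i]
    alpha[i] * exp(-0.5 * sum(z^2)) / ((2 * pi)^(d / 2) * h[i]^d)
  }, numeric(1))
  sum(contrib)
}
```

Setting h to a common constant and alpha to rep(1/n, n) recovers the classical fixed-bandwidth KDE.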

One approach to implementing (1) is to set $\alpha_i = 1/n$ and $H_i = H$ for all $i$, and to estimate $H$ efficiently. This approach emphasizes finding an efficient full bandwidth matrix instead of putting effort into the weighting parameters. Duong and Hazelton (2003) propose the Direct Plug-In (DPI) bandwidth matrix in the setting of a bivariate full bandwidth matrix. We denote by $\hat{f}_{\mathrm{DPI}}$ the estimator using the DPI bandwidth matrix.
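The DPI selectors of Duong and Hazelton (2003) are implemented in the R package 'ks' (the same package whose Hpi.diag function is used in Section 5.1); a hedged usage sketch with simulated data of our own:

```r
library(ks)
set.seed(1)
x <- matrix(rnorm(400), ncol = 2)       # n = 200 bivariate points
fhat_full <- kde(x, H = Hpi(x))         # KDE with DPI full bandwidth matrix
fhat_diag <- kde(x, H = Hpi.diag(x))    # KDE with DPI diagonal bandwidth matrix
```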

Another approach is the Reduced Set Density Estimator (RSDE) of Girolami and He (2003), which first employs the scalar bandwidth matrix $H_i = h^2 I_d$, where $I_d$ is the $d$-dimensional identity matrix and the constant $h > 0$ is determined by cross-validation. Second, the parameters $\alpha_i$, $i = 1, \ldots, n$, are estimated to minimize the Integrated Squared Error (ISE) under the constraints $\sum_{i=1}^{n} \alpha_i = 1$ and $\alpha_i \geq 0$, $i = 1, \ldots, n$. RSDE imposes simple assumptions on the bandwidth matrix, but requires more effort in calculating the weighting parameters. RSDE also allows $\alpha_i = 0$ for some $i$'s, realizing a sparse representation of kernel density estimation because those data points are not used in the estimation. We denote by $\hat{f}_{\mathrm{RSDE}}$ the estimator using RSDE.

Other than these approaches, algorithm-based methods have also been developed, such as projection pursuit density estimation (Friedman et al. 1984) and boosting (Ridgeway 2002). In relation to boosting, Klemelä (2007) developed density estimation using a stagewise algorithm and its non-asymptotic error bounds. Naito and Eguchi (2013) developed the stagewise algorithm under the setting of U-divergence. The stagewise algorithm requires a dictionary beforehand whose words are density functions; it starts by choosing from the dictionary the density function that minimizes the empirical loss, and proceeds in a stagewise manner, adding new simple functions to the convex combination.

In this paper, we consider applying the stagewise algorithm of Klemelä (2007) and Naito and Eguchi (2013) to the kernel density estimator in (1). We randomly split an i.i.d. sample into two disjoint sets, one used for the means of the kernel functions in the dictionary and the other for calculating the criterion function, and implement the stagewise algorithm. The outcome is expressed in the form of (1) and provides data-adaptive weighting parameters, while virtually realizing a data-adaptive bandwidth matrix through variation in the bandwidths in the dictionary. It also identifies the data points of no use for the estimation, yielding a sparse representation of kernel density estimation just like RSDE. We are especially interested in ascertaining whether or not our estimator can outperform its competitors, KDE and RSDE, in terms of estimation error and the degree of data condensation, while keeping the dictionary as simple as possible in terms of its bandwidth matrix structure.

The remainder of this paper is organized as follows. In Section 2, we introduce the evaluation criterion for our proposed method, the U-divergence. Section 3 describes our proposed method. Section 4 presents the theoretical results for our estimator, namely its non-asymptotic error bound. We show the simulation results and a real data example in Section 5. The discussion and conclusions are presented in Section 6. In Appendices A and B, we provide the proofs of the theorems on the non-asymptotic error bounds of the proposed estimator and its normalized version in Section 4, respectively. In Appendices C and D, we show details of the related results in Section 4.

2 U-divergence

To compose the algorithm, we employ the U-divergence, defined as the distance between the fixed density $f$ and any density function $g$, written as

$$D_U(f, g) = \int \left\{ U(\xi(g(x))) - U(\xi(f(x))) \right\} dx - \int f(x) \left\{ \xi(g(x)) - \xi(f(x)) \right\} dx \geq 0, \qquad (2)$$

where $U$ is a strictly convex function on $\mathbb{R}$ and $\xi = (U')^{-1}$. The equality in (2) holds if and only if $f = g$ except on a set of measure zero. The non-negativity of $D_U$ is explained by the convexity of $U$. The functional form of the U-divergence is similar to that of the Bregman divergence (see Bregman 1967; Zhang et al. 2009, 2010).

Extracting the part relating to $g$ from (2), we obtain

$$L_U(g) = -\int f(x)\, \xi(g(x))\, dx + \int U(\xi(g(x)))\, dx. \qquad (3)$$

Replacing the first term on the right-hand side of (3) with its empirical form, we obtain the empirical U-loss function

$$\hat{L}_U(g) = -\frac{1}{n} \sum_{i=1}^{n} \xi(g(X_i)) + \int U(\xi(g(x)))\, dx. \qquad (4)$$

Minimizing (4) with respect to $g$ is equivalent to minimizing the empirical form of (2) for a fixed $f$.

If we specify the convex function $U$ to be the following $\beta$-power function with a tuning parameter $\beta > 0$:

$$U_\beta(t) = \frac{1}{\beta + 1} (1 + \beta t)^{(\beta + 1)/\beta},$$

we obtain the resulting divergence function

$$D_\beta(f, g) = \int \left\{ \frac{g(x)^{\beta+1} - f(x)^{\beta+1}}{\beta + 1} - f(x)\, \frac{g(x)^\beta - f(x)^\beta}{\beta} \right\} dx, \qquad (5)$$

which is called the $\beta$-power divergence (see Basu et al. 1998; Minami and Eguchi 2002). We notice that $D_\beta$ tends to the Kullback-Leibler (KL) divergence as $\beta$ goes to zero, because $\xi_\beta(t) = (t^\beta - 1)/\beta \to \log t$. Alternatively, when $\beta = 1$, $D_\beta$ is equivalent to the squared $L_2$ norm. We also notice that the $\beta$-power divergence with $\beta > 0$ exhibits a robustness property, judging from the functional form of (5); employing the U-divergence enables us to consider a variety of density estimators within one framework.
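For concreteness, the pair $(U_\beta, \xi_\beta)$ under the parameterization above can be coded directly; this is a sketch, with function names of our own choosing.

```r
# beta-power convex function and its link xi = (U')^{-1} (beta > 0)
U_beta  <- function(t, beta) (1 + beta * t)^((beta + 1) / beta) / (beta + 1)
xi_beta <- function(t, beta) (t^beta - 1) / beta
# sanity check: as beta -> 0, xi_beta(t) approaches log(t) (the KL case)
xi_beta(2, 1e-8)   # approximately log(2) = 0.6931...
```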

3 The method

Suppose that we have an i.i.d. sample $X_1, \ldots, X_n$ generated from $f$. From this sample, we take the dictionary data $X^D_i$, $i = 1, \ldots, n_D$, and use it for the dictionary. For the rest of the sample, we define the algorithm data $X^A_j$, $j = 1, \ldots, n_A$, $n = n_D + n_A$, and use it in the algorithm to calculate the empirical loss. Let $\mathcal{H}$ be a set of $d$-dimensional scalar bandwidth matrices $H_k = h_k^2 I_d$, $k = 1, \ldots, K$. Each element of $\mathcal{H}$ is predetermined by users before starting the algorithm. Then, we define the dictionary

$$\mathcal{D} = \left\{ \phi_{H_k}(\cdot - X^D_i) : i = 1, \ldots, n_D,\; k = 1, \ldots, K \right\}, \qquad (6)$$

where $\phi_{H}(\cdot - \mu)$ is a density function with mean $\mu$ and variance-covariance matrix $H$, respectively. Each word in $\mathcal{D}$ is denoted by $f_m$, $m = 1, \ldots, M$ with $M = n_D K$, where each index number $m$ corresponds one-to-one to a combination $(i, k)$, $i = 1, \ldots, n_D$, $k = 1, \ldots, K$.
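A minimal sketch of constructing the dictionary (6) with Gaussian words, one per combination of dictionary data point and bandwidth; the names make_dictionary and h_grid are our own.

```r
# Build the dictionary: a list of M = n_D * K density functions
make_dictionary <- function(data_dict, h_grid) {
  pairs <- expand.grid(i = seq_len(nrow(data_dict)), k = seq_along(h_grid))
  lapply(seq_len(nrow(pairs)), function(m) {
    mu <- as.numeric(data_dict[pairs$i[m], ])  # mean: a dictionary point
    h  <- h_grid[pairs$k[m]]                   # scalar bandwidth h_k
    d  <- length(mu)
    # Gaussian word phi_{H_k}(. - X_i^D) with H_k = h_k^2 I_d
    function(x) exp(-0.5 * sum(((x - mu) / h)^2)) / ((2 * pi)^(d / 2) * h^d)
  })
}
```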

Stagewise minimization algorithm

Let $T$ ($\geq 2$) be the number of iterations of the algorithm, and let $\varepsilon > 0$ be the approximation bound. We employ the mixing coefficients

$$\pi_t \in (0, 1], \qquad t = 1, \ldots, T. \qquad (7)$$

From (4), the empirical loss is calculated by

$$\hat{L}_U(g) = -\frac{1}{n_A} \sum_{j=1}^{n_A} \xi(g(X^A_j)) + \int U(\xi(g(x)))\, dx,$$

where $g$ is a function on $\mathbb{R}^d$ given the dictionary data $X^D_1, \ldots, X^D_{n_D}$. Then, the algorithm for the stagewise minimization estimator consists of the following steps:

Step 1. At the initial stage $t = 1$, choose $\hat{f}_1 \in \mathcal{D}$ so that

$$\hat{L}_U(\hat{f}_1) \leq \inf_{g \in \mathcal{D}} \hat{L}_U(g) + \varepsilon.$$

Step 2. For $t = 2, \ldots, T$, let

$$\hat{f}_t = \xi^{-1}\big( (1 - \pi_t)\, \xi(\hat{f}_{t-1}) + \pi_t\, \xi(g_t) \big),$$

where $g_t \in \mathcal{D}$ is chosen so that

$$\hat{L}_U(\hat{f}_t) \leq \inf_{g \in \mathcal{D}} \hat{L}_U\big( \xi^{-1}\big( (1 - \pi_t)\, \xi(\hat{f}_{t-1}) + \pi_t\, \xi(g) \big) \big) + \varepsilon.$$

Step 3. Let $\hat{f}_T$ be the output of the algorithm.

At the final step $T$, we obtain the sequence of words chosen at each stage, $g_1, g_2, \ldots, g_T$ (with $g_1 = \hat{f}_1$), and the density estimator produced by the algorithm has the form

$$\hat{f}_T = \xi^{-1}\Big( \sum_{t=1}^{T} w_t\, \xi(g_t) \Big), \qquad w_t = \pi_t \prod_{s=t+1}^{T} (1 - \pi_s), \quad \pi_1 = 1. \qquad (8)$$

We can verify that $\sum_{t=1}^{T} w_t = 1$. When we employ the $\beta$-power divergence function with $\beta = 1$, the link $\xi$ is affine and the estimator (8) is rewritten as

$$\hat{f}_T = \sum_{t=1}^{T} w_t\, g_t. \qquad (9)$$

Since (9) is a convex combination of the words $g_t$, the classical KDE is a special case of an estimator of the form (8).
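To make the loop concrete, here is a minimal sketch for the $L_2$ case ($\beta = 1$), where $\xi$ is affine and the update is the literal convex combination $\hat{f}_t = (1 - \pi_t)\hat{f}_{t-1} + \pi_t g_t$. The grid-based integral, the schedule $\pi_t = 2/(t+1)$, and the exact (rather than $\varepsilon$-approximate) minimization are our assumptions standing in for (7) and the tolerance $\varepsilon$, not the authors' exact choices.

```r
# Stagewise minimization of the empirical L2 loss
#   L(g) = 0.5 * int g^2 dx - mean_j g(X_j^A)   (additive constant dropped)
stagewise_l2 <- function(words, data_algo, T_iter, grid, cellsize) {
  eval_word <- function(g, pts) apply(pts, 1, g)
  W <- sapply(words, eval_word, pts = grid)       # words on the grid
  A <- sapply(words, eval_word, pts = data_algo)  # words on the algorithm data
  emp_loss <- function(fg, fa) 0.5 * sum(fg^2) * cellsize - mean(fa)
  # Step 1: the best single word
  m1 <- which.min(sapply(seq_along(words), function(m) emp_loss(W[, m], A[, m])))
  chosen <- m1; fg <- W[, m1]; fa <- A[, m1]
  # Step 2: stagewise convex updates
  for (t in 2:T_iter) {
    pi_t <- 2 / (t + 1)  # example mixing schedule standing in for (7)
    cand <- sapply(seq_along(words), function(m)
      emp_loss((1 - pi_t) * fg + pi_t * W[, m], (1 - pi_t) * fa + pi_t * A[, m]))
    m_t <- which.min(cand); chosen <- c(chosen, m_t)
    fg <- (1 - pi_t) * fg + pi_t * W[, m_t]
    fa <- (1 - pi_t) * fa + pi_t * A[, m_t]
  }
  # Step 3: return the chosen word indices and the estimate on the grid
  list(chosen = chosen, density_on_grid = fg)
}
```

The multiplicity of each index in chosen determines the weight of the corresponding word, so dictionary points never chosen receive zero weight, which is the sparsity mechanism described above.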

Remark 1. The integral of the estimator $\hat{f}_T$ is not always 1. Hence, we may consider its normalized form $\hat{f}_T / \int \hat{f}_T(x)\, dx$.
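On an evaluation grid, the normalization in Remark 1 amounts to dividing by a quadrature approximation of the integral (a sketch under our grid-based convention):

```r
# Normalize a density estimate tabulated on a regular grid with
# cells of area 'cellsize'
normalize_on_grid <- function(f_vals, cellsize) f_vals / (sum(f_vals) * cellsize)
```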

Remark 2. The proportion $\kappa = n_D / n$ of the dictionary data points in the total sample size influences the performance of the density estimation, and the parameter $\kappa$ serves as a kind of smoothing parameter. Setting aside the problem of optimizing $\kappa$, we assume that $\kappa$ is given before starting the algorithm.

4 Theoretical results

We show the theoretical results for our proposed estimator. The main result is the non-asymptotic error bound of the estimator in Theorem 1. We also show the non-asymptotic error bound of the normalized version of the proposed estimator in Theorem 2.

In the theorems, we use the following notation. Let $\overline{\mathrm{co}}(\mathcal{D})$ be the convex hull composed of the words in $\mathcal{D}$. We consider a triplet of the quantities entering the bound; the set of these triplets is denoted accordingly. For $\lambda \in [0, 1]$ and each such triplet, we define the derived quantities used in Theorems 1 and 2.

4.1 The non-asymptotic error bound of the estimator

To obtain the non-asymptotic error bound of the estimator, we use Assumption 1 as follows.

Assumption 1.
(i) The convex function $U$ is twice differentiable.
(ii) There exists a constant that uniformly bounds the quantity appearing in the error analysis.
Example 1 (The case of KL divergence). If we introduce a constant defined for the case where the KL divergence is employed to evaluate the algorithm, we see that the condition of Assumption 1 holds. The proof is provided in Appendix C. We also evaluate the constant and derive its upper bound (18) in Appendix C, employing Gaussian densities with scalar bandwidth matrices in the dictionary. Considering the upper bound (18) in the case of the KL divergence, Assumption 1 is justified.

Then, we obtain Theorem 1. The proof is given in Appendix A.

Theorem 1. For the density estimator $\hat{f}_T$ in (8), it holds under Assumption 1 that

(10)

where the right-hand side involves the centered operator applied to the words. One expectation in (10) is taken with respect to the sample used for the dictionary, $X^D_1, \ldots, X^D_{n_D}$, and the other with respect to the sample used for the algorithm, $X^A_1, \ldots, X^A_{n_A}$. The error bound in (10) diminishes as $T$ increases.

Remark 3. An expected value appears on the right-hand side of (10). In the case of the KL divergence (Example 1), the existence of the fourth moment of the relevant quantity suffices to ensure the finiteness of this expected value. See Appendix D for details.

4.2 The error bound of the normalized form of the estimator

To obtain the non-asymptotic error bound of the normalized form of the estimator, we use Assumption 2.

Assumption 2. There exist two constants such that the inequalities stated in the assumption hold for all relevant arguments, where $\tilde{g} = g / \int g(x)\, dx$ denotes the normalized form of $g$.


Then, we obtain Theorem 2. The proof is given in Appendix B.

Theorem 2. For the normalized form $\tilde{f}_T = \hat{f}_T / \int \hat{f}_T(x)\, dx$ of $\hat{f}_T$, it follows from Assumptions 1 and 2 that a corresponding non-asymptotic error bound holds.

Remark 4. Theorem 2 reveals that the bound for the normalized estimator $\tilde{f}_T$ corresponds to the bound for $\hat{f}_T$ given in Theorem 1, together with an extra term arising from the normalization.

Remark 5. We obtain $\int \hat{f}_T(x)\, dx = 1$ when the power divergence with $\beta = 1$ is employed. In that situation, the result of Theorem 2 coincides with that of Theorem 1.

5 Applications

5.1 Practical setting

For the sake of practical use, we consider dictionaries 1 and 2, denoted by $\mathcal{D}_1$ and $\mathcal{D}_2$, respectively. In dictionary 1, we use the following set of scalar bandwidth matrices:

(11)

where $\hat{h}_1$ and $\hat{h}_2$ are the DPI estimators of Duong and Hazelton (2003) for the bivariate diagonal bandwidth matrix $\mathrm{diag}(h_1^2, h_2^2)$, calculated from the dictionary data $X^D_i$, $i = 1, \ldots, n_D$. To obtain $\hat{h}_1$ and $\hat{h}_2$, we employ the Hpi.diag function in the 'ks' library in R. The bandwidths used for the dictionary should be larger than the DPI bandwidths, which are calibrated to the full number of data points, because the resulting estimator is a convex combination of not more than $T$ kernel functions. In this sense, each word in $\mathcal{D}_1$ is augmented by multiplying its bandwidth by an inflation factor.
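A sketch of the Hpi.diag step just described; the stand-in dictionary data and the grid of inflation factors are placeholders for the unspecified constants in (11).

```r
library(ks)
set.seed(1)
x_dict <- matrix(rnorm(200), ncol = 2)  # stand-in dictionary data (n_D = 100)
H_dpi  <- Hpi.diag(x_dict)              # DPI diagonal bandwidth matrix diag(h1^2, h2^2)
h_dpi  <- sqrt(diag(H_dpi))             # the marginal bandwidths (h1, h2)
# Inflate the DPI bandwidths: the estimator combines at most T kernels,
# far fewer than n_D, so larger bandwidths are appropriate
h_grid <- as.vector(outer(c(1, 1.5, 2), h_dpi))  # placeholder factors
```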

In dictionary 2, we consider the following set of scalar bandwidth matrices:

(12)

where $s_\ell$ is the standard deviation of the $\ell$-th coordinate of the dictionary data, $\ell = 1, \ldots, d$. The parameter $\delta$ is a tuning parameter, determined according to the sample size and/or the curvature of the true function. We normally use a common default value of $\delta$, but use a different value in estimating Type J. If we regard the parameter as an increasing function of the sample size, then the bandwidth in (12) is similar to the geometric mean of the coordinate-wise bandwidths given by Scott's rule (Scott 2017, p. 164).
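Our reading of a Scott-type scalar bandwidth for dictionary 2, as a hedged sketch: the geometric mean of the coordinate-wise standard deviations, scaled by $n^{-1/(d+4)}$ and a tuning multiplier (names ours).

```r
# Scott-type scalar bandwidth: h = delta * s * n^{-1/(d+4)}, where s is
# the geometric mean of the coordinate-wise standard deviations
scott_scalar <- function(data, delta = 1) {
  n <- nrow(data); d <- ncol(data)
  s <- exp(mean(log(apply(data, 2, sd))))
  delta * s * n^(-1 / (d + 4))
}
```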

5.2 Simulation

We consider simulations 1 and 2 for the dictionaries $\mathcal{D}_1$ and $\mathcal{D}_2$, respectively. In each simulation case, we examine the behavior of the proposed density estimator in terms of Mean Integrated Squared Error (MISE) when the proportion of the dictionary data points in the total sample size changes. We design the following five simulation cases for that purpose:

  (a) The sample is split into dictionary data and algorithm data, with the smallest proportion of dictionary data points among (a)-(c).

  (b) The sample is split as in (a), but with an intermediate proportion of dictionary data points.

  (c) The sample is split as in (a), but with the largest proportion of dictionary data points among (a)-(c).

  (d) Half of the original i.i.d. sample is discarded, and the remainder is used for both the dictionary data and the algorithm data.

  (e) The original i.i.d. sample is used in common for the dictionary and the algorithm.

Cases (a), (b), and (c) examine the impact of the dictionary proportion on the behavior of the proposed density estimators. Cases (d) and (e) are designed for comparison.

In each simulation case, we use the bivariate simulation settings of Wand and Jones (1993), Types C, J, and L, whose contour plots are shown in Figure 1. For each setting, we generate a sample; we retain one part of it for the dictionary, use the remainder for calculating the empirical loss, and run the algorithm. We repeat this process 10 times and obtain the MISE by averaging the ISEs calculated in each run. We consider three alternatives to our estimator: KDE1 and KDE2 with Duong and Hazelton's (2003) DPI full bandwidth matrix and DPI diagonal bandwidth matrix, respectively, as well as RSDE. For the divergence function, we employ the power divergence function in (5) at two values of the tuning parameter $\beta$, and denote the corresponding estimators accordingly. For the parameter in the mixing coefficient in (7), we follow Klemelä (2007). The total number of iterations of the algorithm is $T = 100$.
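The ISE underlying the reported MISE can be approximated on the same evaluation grid; a sketch under our grid-quadrature convention:

```r
# ISE between an estimate and the true density, both tabulated on a
# regular grid; MISE is the average of this over the 10 replications
ise_grid <- function(f_hat_vals, f_true_vals, cellsize) {
  sum((f_hat_vals - f_true_vals)^2) * cellsize
}
```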

Figure 1: True density functions: Left=Type C. Center=Type J. Right=Type L.

5.2.1 Simulation 1

We present the numerical results for the two proposed estimators in Tables 1 and 2, respectively, in terms of MISE. A visual presentation of the results (for Type L in one case and for Types C, J, and L in the other) is given in Figure 2. We visually present the results of case (b) in simulation 1 for Type C in Figure 3. In that figure, the two upper panels plot the MISE at every stage of the algorithm. The middle and bottom panels are contour plots of the estimators. The red points in the contour plots are the data points used for the dictionary, while the blue ones are those chosen for estimation by the algorithm. The number of blue points is less than $T$ because the algorithm may choose the same data point more than once.

We see the following findings in terms of MISE. In the case of Type C, cases (a) and (e) can outperform KDE1, KDE2, and RSDE, and cases (b) and (d) can do so as well (see Table 2 and the two panels in the second column of Figure 2). This result is important in that our estimator can be superior to the three alternatives with the help of the DPI bandwidth matrix estimator. In the case of Type J, case (e) can outperform RSDE, and can also outperform KDE2 (see Table 2 and the two panels in the third column of Figure 2). In the case of Type L, case (e) and cases (b) and (d) can outperform RSDE, and case (e) can also outperform KDE2 (see Table 2 and the rightmost two panels in Figure 2). The reason why Type C performs better than Types J and L is that the true density of Type C is symmetric and is compatible with a scalar bandwidth matrix. We also observe a general trend that case (e) performs better than cases (a)-(d) in terms of MISE, except in the case of Type L (see Table 1 and the leftmost two panels in Figure 2).

We compare our estimators with RSDE by the degree of data condensation. In Table 3, we show the data condensation ratios of our estimators and RSDE. In the columns labeled RSDE and (I), we show the ratio of the number of data points actually used for estimating the density function to the total number of data points. In columns (II), we show the ratio of the number of words in the dictionary actually used for the estimation to the total number of words. We observe four results. First, our method yields lower data condensation ratios in terms of (I) and (II) than RSDE in all situations. Second, case (a) yields the smallest data condensation ratios (I) and (II) in all situations. Third, the ratio (II) decreases as the proportion of dictionary data points increases. Fourth, the ratios (I) and (II) for one of the two estimators are greater than those for the other in each simulation setting; the latter uses fewer data points and fewer words for estimation.

        T=1         T=25       T=50       T=75       T=100
Type C
  KDE1 84(26)   KDE2 81(28)   RSDE 106(31)
  (a)  267(333)    89(61)     85(59)     84(57)     85(56)
  (b)  318(598)    106(122)   106(132)   101(118)   105(129)
  (c)  278(387)    132(119)   141(145)   147(146)   146(153)
  (d)  274(550)    100(134)   111(160)   108(155)   112(150)
  (e)  115(78)     57(35)     57(35)     57(37)     58(39)
  KDE1 53(14)   KDE2 54(11)   RSDE 84(18)
  (a)  272(501)    72(76)     73(85)     73(82)     75(88)
  (b)  177(203)    76(73)     71(61)     76(70)     76(70)
  (c)  247(322)    145(159)   142(151)   142(148)   140(153)
  (d)  176(207)    69(48)     72(56)     75(60)     75(60)
  (e)  151(129)    60(47)     61(42)     60(41)     58(40)
Type J
  KDE1 108(17)  KDE2 118(31)  RSDE 138(33)
  (a)  1900(3402)  404(327)   415(388)   409(343)   399(340)
  (b)  1096(594)   301(73)    302(90)    298(86)    291(75)
  (c)  1146(530)   377(132)   376(142)   356(127)   354(126)
  (d)  1093(587)   335(49)    350(50)    336(44)    342(54)
  (e)  1150(327)   293(43)    274(46)    272(37)    271(34)
  KDE1 74(10)   KDE2 80(19)   RSDE 111(19)
  (a)  983(611)    273(32)    271(31)    269(27)    268(28)
  (b)  1213(520)   311(52)    297(60)    302(54)    296(54)
  (c)  1266(481)   322(80)    308(78)    314(71)    314(75)
  (d)  1231(555)   284(56)    278(54)    277(53)    277(51)
  (e)  1309(342)   267(34)    265(29)    264(32)    260(28)
Type L
  KDE1 67(14)   KDE2 77(14)   RSDE 131(87)
  (a)  574(223)    185(44)    183(42)    180(41)    177(40)
  (b)  748(324)    199(92)    183(76)    190(85)    183(71)
  (c)  1157(876)   314(248)   319(232)   332(254)   329(250)
  (d)  777(352)    226(56)    235(78)    225(72)    225(69)
  (e)  889(321)    191(67)    177(51)    180(56)    180(61)
  KDE1 45(6)    KDE2 54(18)   RSDE 98(26)
  (a)  630(237)    171(23)    160(27)    158(26)    158(25)
  (b)  968(349)    185(33)    173(21)    180(22)    178(22)
  (c)  1022(420)   232(92)    214(74)    213(66)    214(76)
  (d)  933(335)    170(56)    179(80)    181(67)    182(79)
  (e)  1062(321)   173(39)    164(37)    165(32)    165(33)
Table 1: Simulation 1: MISE (standard deviation in parentheses) at stages T = 1, 25, 50, 75, 100. Within each type, the two blocks correspond to two settings; the first line of each block gives the benchmark MISEs of KDE1, KDE2, and RSDE, and rows (a)-(e) correspond to the five simulation cases of Section 5.2.
        T=1         T=25       T=50       T=75       T=100
Type C
  KDE1 84(26)   KDE2 81(28)   RSDE 106(31)
  (a)  270(337)    56(25)     55(24)     56(27)     59(27)
  (b)  341(590)    79(40)     84(41)     85(44)     86(47)
  (c)  291(436)    130(94)    125(84)    120(81)    118(76)
  (d)  297(603)    79(91)     79(83)     81(88)     85(91)
  (e)  119(85)     42(22)     43(25)     46(29)     46(29)
  KDE1 53(14)   KDE2 54(11)   RSDE 84(18)
  (a)  297(552)    38(21)     34(19)     34(15)     33(14)
  (b)  173(204)    48(28)     50(30)     49(30)     50(32)
  (c)  265(321)    84(47)     84(48)     87(46)     89(49)
  (d)  174(209)    48(27)     46(30)     47(26)     50(31)
  (e)  156(139)    24(9)      24(9)      26(12)     26(10)
Type J
  KDE1 108(17)  KDE2 118(30)  RSDE 138(33)
  (a)  1932(3390)  209(73)    179(43)    178(43)    180(44)
  (b)  1146(567)   196(35)    190(41)    187(39)    179(40)
  (c)  1210(546)   249(68)    241(70)    241(72)    242(66)
  (d)  1130(596)   219(56)    205(52)    206(49)    202(50)
  (e)  1160(357)   139(24)    122(30)    126(30)    124(29)
  KDE1 74(10)   KDE2 80(19)   RSDE 111(19)
  (a)  1004(606)   163(44)    145(50)    140(50)    140(53)
  (b)  1248(514)   146(36)    135(30)    131(33)    130(32)
  (c)  1297(455)   158(28)    152(42)    146(35)    152(36)
  (d)  1219(548)   136(29)    119(31)    124(33)    117(32)
  (e)  1302(324)   100(13)    84(16)     83(13)     79(15)
Type L
  KDE1 67(14)   KDE2 77(14)   RSDE 131(87)
  (a)  574(190)    111(24)    108(28)    106(31)    104(30)
  (b)  740(279)    118(25)    113(23)    113(24)    111(25)
  (c)  1177(861)   193(72)    186(64)    180(53)    179(57)
  (d)  760(273)    138(25)    130(33)    126(31)    129(30)
  (e)  851(281)    92(17)     86(19)     83(17)     83(17)
  KDE1 45(6)    KDE2 54(18)   RSDE 98(26)
  (a)  610(208)    92(17)     84(20)     82(21)     80(20)
  (b)  911(340)    81(17)     81(26)     78(24)     75(21)
  (c)  971(424)    119(26)    107(24)    107(22)    105(20)
  (d)  919(336)    84(23)     77(21)     74(20)     75(19)
  (e)  988(315)    66(10)     51(8)      47(7)      48(8)
Table 2: Simulation 1: MISE (standard deviation in parentheses) at stages T = 1, 25, 50, 75, 100, laid out as in Table 1.
Figure 2: Simulation 1: Plots of MISE at each stage of the algorithm for the different simulation cases.
Figure 3: Simulation 1: Upper: Plots of MISE versus the stage T. Middle and Bottom: Contour plots of the two proposed estimators.
            (I)        (II)         (I)        (II)
Type C
  RSDE: 1920 (219)
  (a)   495 (155)  484 (135)   830 (368)  984 (429)
  (b)   565 (180)  284 ( 64)   905 (370)  500 (213)
  (c)   660 (185)  190 ( 49)   890 (321)  291 ( 98)
  RSDE: 1640 (104)
  (a)   268 ( 81)  254 ( 65)   503 (348)  522 (342)
  (b)   350 ( 68)  164 ( 33)   720 (328)  358 (135)
  (c)   558 (338)  161 ( 85)   733 (329)  242 (113)
Type J
  RSDE: 1695 (206)
  (a)   430 ( 79)  460 (115)   760 (284)  828 (302)
  (b)   545 (172)  272 (104)   860 (360)  458 (211)
  (c)   535 (155)  171 ( 43)   780 (307)  264 (110)
  RSDE: 1470 (220)
  (a)   215 ( 39)  230 ( 58)   455 (118)  516 (194)
  (b)   305 (107)  146 ( 42)   628 (163)  317 ( 86)
  (c)   308 ( 55)  141 ( 26)   738 (202)  232 ( 73)
Type L
  RSDE: 1925 (134)
  (a)   405 ( 64)  384 ( 54)   600 (139)  776 (280)
  (b)   475 (125)  202 ( 50)   795 (215)  476 (173)
  (c)   580 (149)  168 ( 41)   880 (261)  329 (109)
  RSDE: 1730 (250)
  (a)   270 ( 55)  230 ( 45)   445 (158)  518 (230)
  (b)   350 (131)  144 ( 51)   685 (230)  364 (126)
  (c)   350 ( 91)   97 ( 25)   693 (203)  219 ( 70)
Table 3: Simulation 1: Data condensation ratios (standard deviation in parentheses). The RSDE lines contain the data condensation ratios for RSDE. Column (I) is the actual number of data points chosen by the algorithm divided by the total number of data points; column (II) is the actual number of words in the dictionary chosen by the algorithm divided by the total number of words. The two (I)/(II) column pairs correspond to the two proposed estimators, and the two blocks within each type to the two settings of Tables 1 and 2.

5.2.2 Simulation 2

We present the numerical results for the two proposed estimators in Tables 4 and 5, respectively. A visual presentation of the tables is given in Figure 4 for Type C. We observe two general features from the results in Tables 4 and 5. One is that (c) > (d) > (a) in terms of MISE (compare each number in Table 4 with its counterpart in Table 5; for Type C, observe the two line graphs of (a) and (d) in the upper right panel of Figure 4). This indicates that (d) lies between the best and the worst cases of the proposed estimators in terms of MISE. The second is that (c) > (b) > (a) in terms of MISE (compare each number in Table 4 with its counterpart in Table 5; see also the line graphs of (a), (b), and (c) in each panel of Figure 4, taking Type C as an example). This indicates that MISE improves as the proportion of the dictionary data points in the total sample size decreases. However, the improvement holds only as long as the number of dictionary data points is not too small, because with too few dictionary points the algorithm is no longer executable. Our experiments suggest that MISE can deteriorate when the number of dictionary data points falls below a certain level. The reason why the two orderings (c) > (d) > (a) and (c) > (b) > (a) are not observed in simulation 1 is that each word in (11) has a larger inter-sample variance than its counterpart in (12) (compare each SD of MISE in Tables 1 and 2 with its counterpart in Tables 4 and 5, respectively).

In the same manner as Figure 3, we visually present the results of case (b) in simulation 2 for Types C, J, and L in Figures 5, 6, and 7, respectively. We find that the proposed estimators outperform KDE1, KDE2, and RSDE in terms of MISE as the stage increases in Type C (see the upper two panels of Figure 5). In comparison with simulation 1, simulation 2 yields smaller MISE for Types C and L (compare each number in Tables 1 and 2 with its counterpart in Tables 4 and 5, respectively). Observing the contour plots in the same figures, we find for Types J and L that one of the estimators captures the shape of the true contour plot rather better than the other (see the middle and bottom panels of Figures 6 and 7). We consider that this difference could be the result of the robustness property of the $\beta$-power divergence function for $\beta > 0$.

We describe the data points our method chooses for estimation from the dictionary. From the contour plots in the lower two panels of Figure 6, we find that the algorithm generally chooses data points along the ridges of the contour plot. This tendency is also observed for RSDE (see Girolami and He 2003, p. 1256).

        T=1         T=25       T=50       T=75       T=100
Type C
  KDE1 84(26)   KDE2 81(28)   RSDE 106(31)
  (a)  89 (37)     47 (19)    45 (18)    44 (18)    42 (17)
  (b)  114 (56)    58 (22)    54 (24)    52 (23)    52 (24)
  (c)  108 (55)    70 (45)    63 (39)    67 (41)    64 (37)
  (d)  90 (33)     57 (26)    53 (31)    52 (29)    53 (30)
  (e)  80 (19)     43 (14)    39 (15)    38 (15)    37 (16)
  KDE1 53(14)   KDE2 54(11)   RSDE 84(18)
  (a)  82 (26)     40 (13)    38 (11)    39 (13)    38 (12)
  (b)  79 (23)     46 (19)    43 (19)    42 (20)    41 (19)
  (c)  95 (41)     63 (29)    62 (33)    62 (35)    62 (34)
  (d)  80 (19)     43 (14)    39 (15)    38 (15)    37 (16)
  (e)  65 (9)      38 (12)    35 (12)    34 (12)    34 (12)
Type J
  KDE1 108(17)  KDE2 118(30)  RSDE 138(33)
  (a)  322 (14)    299 (12)   300 (16)   299 (17)   299 (17)
  (b)  333 (35)    310 (25)   311 (34)   311 (34)   310 (33)
  (c)  353 (74)    343 (69)   339 (62)   342 (65)   341 (61)
  (d)  341 (38)    318 (26)   319 (26)   321 (28)   320 (28)
  (e)  308 (11)    301 (15)   297 (16)   297 (18)   296 (17)
  KDE1 74(10)   KDE2 80(19)   RSDE 111(19)
  (a)  311 (14)    298 (14)   295 (17)   294 (17)   293 (17)
  (b)  312 (12)    299 (16)   296 (15)   295 (17)   294 (16)
  (c)  315 (25)    310 (31)   306 (27)   307 (27)   307 (27)
  (d)  308 (11)    301 (15)   297 (16)   297 (18)   296 (17)
  (e)  304 (11)    294 (12)   291 (12)   289 (12)   288 (13)
Type L
  KDE1 67(14)   KDE2 77(14)   RSDE 131(87)
  (a)  968 (111)   165 (53)   175 (47)   174 (50)   174 (52)
  (b)  1247 (115)  237 (89)   225 (59)   229 (70)   225 (66)
  (c)  1452 (99)   397 (211)  398 (183)  400 (193)  412 (198)
  (d)  1218 (98)   233 (70)   242 (67)   259 (82)   245 (68)
  (e)  1567 (112)  244 (83)   243 (79)   236 (82)   237 (86)
  KDE1 45(6)    KDE2 54(18)   RSDE 98(26)
  (a)  1212 (106)  180 (34)   188 (45)   178 (32)   182 (29)
  (b)  1565 (111)  232 (53)   247 (71)   240 (75)   236 (75)
  (c)  1857 (139)  350 (84)   345 (91)   342 (92)   343 (89)
  (d)  1567 (112)  244 (83)   243 (79)   236 (82)   237 (86)
  (e)  2067 (85)   256 (40)   270 (55)   266 (57)   260 (60)
Table 4: Simulation 2: MISE (standard deviation in parentheses) at stages T = 1, 25, 50, 75, 100, laid out as in Table 1.
        T=1         T=25       T=50       T=75       T=100
Type C
  KDE1 84(26)   KDE2 81(28)   RSDE 106(31)
  (a)  92 (31)     43 (12)    42 (13)    42 (13)    42 (13)
  (b)  125 (58)    66 (26)    66 (22)    64 (21)    64 (22)
  (c)  120 (53)    68 (40)    68 (35)    72 (34)    71 (37)
  (d)  101 (33)    55 (26)    56 (29)    55 (29)    55 (28)
  (e)  73 (12)     35 (15)    33 (15)    34 (15)    34 (14)
  KDE1 53(14)   KDE2 54(11)   RSDE 84(18)
  (a)  88 (27)     32 (13)    27 (10)    28 (9)     29 (10)
  (b)  78 (27)     35 (18)    30 (12)    30 (14)    30 (13)
  (c)  104 (40)    56 (32)    58 (31)    57 (31)    55 (30)
  (d)  73 (12)     35 (15)    33 (15)    34 (15)    34 (14)
  (e)  66 (18)     25 (10)    23 (9)     23 (8)     23 (8)
Type J
  KDE1 108(17)  KDE2 118(31)  RSDE 138(33)
  (a)  331 (39)    259 (22)   255 (20)   254 (20)   254 (22)
  (b)  331 (37)    266 (18)   266 (28)   262 (19)   264 (22)
  (c)  381 (143)   302 (22)   305 (30)   304 (33)   305 (33)
  (d)  349 (60)    280 (27)   270 (28)   265 (27)   267 (29)
  (e)  307 (12)    250 (23)   244 (22)   242 (23)   242 (25)
  KDE1 74(10)   KDE2 80(19)   RSDE 111(19)
  (a)  322 (21)    242 (15)   233 (14)   233 (17)   231 (17)
  (b)  317 (18)    250 (20)   242 (19)   241 (21)   241 (20)
  (c)  328 (49)    260 (22)   252 (21)   254 (21)   252 (20)
  (d)  309 (12)    251 (23)   244 (22)   242 (24)   242 (25)
  (e)  303 (11)    231 (8)    222 (9)    221 (10)   220 (10)
Type L
  KDE1 67(14)   KDE2 77(14)   RSDE 131(87)
  (a)  925 (134)   91 (18)    82 (28)    82 (24)    81 (22)
  (b)  1210 (105)  129 (29)   118 (30)   118 (30)   116 (30)
  (c)  1508 (194)  208 (86)   220 (91)   216 (87)   212 (82)
  (d)  1172 (80)   139 (39)   136 (42)   124 (35)   127 (36)
  (e)  1558 (115)  94 (23)    90 (25)    93 (25)    94 (23)
  KDE1 45(6)    KDE2 54(18)   RSDE 98(26)
  (a)  1144 (51)   77 (20)    61 (16)    54 (14)    54 (12)
  (b)  1536 (126)  91 (32)    89 (26)    83 (22)    85 (26)
  (c)  1823 (109)  161 (55)   150 (30)   157 (32)   158 (29)
  (d)  1558 (115)  94 (23)    90 (25)    93 (25)    94 (23)
  (e)  2047 (72)   59 (13)    49 (12)    56 (12)    57 (15)
Table 5: Simulation 2: MISE (standard deviation in parentheses) at stages T = 1, 25, 50, 75, 100, laid out as in Table 1.
Figure 4: Simulation 2: Plots of MISE at each stage of the algorithm in the case of Type C for the different simulation cases.
Figure 5: Simulation 2: Upper: Plots of MISE versus the stage T. Middle and Bottom: Contour plots of the two proposed estimators.
Figure 6: Simulation 2: Upper: Plots of MISE versus the stage T. Middle and Bottom: Contour plots of the two proposed estimators.
Figure 7: Simulation 2: Upper: Plots of MISE versus the stage T. Middle and Bottom: Contour plots of the two proposed estimators.