A fast and recursive algorithm for clustering large datasets with k-medians

01/21/2011 ∙ by Hervé Cardot, et al.

Clustering large samples of high dimensional data with fast algorithms is an important challenge in computational statistics. Borrowing ideas from MacQueen (1967), who introduced a sequential version of the k-means algorithm, a new class of recursive stochastic gradient algorithms designed for the k-medians loss criterion is proposed. By their recursive nature, these algorithms are very fast and are well adapted to deal with large samples of data that are allowed to arrive sequentially. It is proved that the stochastic gradient algorithm converges almost surely to the set of stationary points of the underlying loss criterion. Particular attention is paid to the averaged versions, which are known to have better performance, and a data-driven procedure that allows automatic selection of the value of the descent step is proposed. The performance of the averaged sequential estimator is compared with more classical partitioning techniques such as k-means, trimmed k-means and PAM (partitioning around medoids) in a simulation study, both in terms of computation speed and accuracy of the estimations. Finally, this new online clustering technique is illustrated by determining television audience profiles with a sample of more than 5000 individual television audiences measured every minute over a period of 24 hours.

1 Introduction

Clustering large samples of high dimensional data with fast algorithms is an important challenge in computational statistics and machine learning, with applications in various domains such as image analysis, biology or computer vision. There is a vast literature on clustering techniques, and recent discussions and reviews may be found in Jain et al. (1999) or Gan et al. (2007). Moreover, as argued in Bottou (2010), the development of fast algorithms is even more crucial when the computation time is limited and the sample is potentially very large, since fast procedures will be able to deal with a larger number of observations and will finally provide better estimates than slower ones.

We focus here on partitioning techniques which are able to deal with large samples of data, assuming the number k of clusters is fixed in advance. The most popular clustering methods are probably the non-sequential (Forgy (1965)) and the sequential (MacQueen (1967)) versions of the k-means algorithms. They are very fast and only require O(n) operations, where n is the sample size. They aim at finding local minima of a quadratic criterion and the cluster centers are given by the barycenters of the elements belonging to each cluster. A major drawback of the k-means algorithms is that they are based on mean values and, consequently, are very sensitive to outliers. Such atypical values, which may not be uncommon in large samples, can significantly deteriorate the performances of these algorithms, even if they only represent a small fraction of the data, as explained in García-Escudero et al. (2010) or Croux et al. (2007).

The k-medians approach is a first attempt to get more robust clustering algorithms; it was suggested by MacQueen (1967) and developed by Kaufman and Rousseeuw (1990). It consists in considering criteria based on norms instead of squared norms, so that the cluster centers are the spatial medians, also called geometric or L1 medians (see Small (1990)), of the elements belonging to each cluster. Note that it has been proved in Laloë (2010) that, under general assumptions, the minimum of the objective function is unique. Many algorithms have been proposed in the literature to find this minimum. The most popular one is certainly the PAM (partitioning around medoids) algorithm, which has been developed by Kaufman and Rousseeuw (1990) in order to search for local minima among the elements of the sample. Its computation time grows at least quadratically with the sample size and, as a consequence, it is not very well adapted to large sample sizes. Many strategies have been suggested in the literature to reduce the computation time of this algorithm. For example, subsampling (see e.g. the algorithm CLARA in Kaufman and Rousseeuw (1990) and the algorithm CLARANS in Ng and Han (2002)), local distance computations (Zhang and Couloigner (2005)) or the use of weighted distances during the iteration steps (Park and Jun (2008)) allow one to reduce significantly the computation time without deteriorating the accuracy of the estimated partition.

Trimmed k-means (see García-Escudero et al. (2008, 2010) and references therein) is also a popular modification of the k-means algorithm that is more robust (see García-Escudero and Gordaliza (1999)), in the sense that it has a strictly positive breakdown point, which is not the case for the k-medians. Note however that the breakdown point is a pessimistic indicator of robustness, since it is based on the worst possible scenario. For a small fraction of outliers whose distance to the cluster centers is moderate, the k-medians remains competitive compared to the trimmed k-means, as seen in the simulation study. Furthermore, from a computational point of view, performing the trimmed k-means requires sorting the data, and this step requires O(n log n) operations in the worst case at each iteration, so that its execution time can get large when one has to deal with large samples.

Borrowing ideas from MacQueen (1967) and Hartigan (1975), who first proposed sequential clustering algorithms, and from Cardot et al. (2011), who studied the properties of stochastic gradient algorithms that can give efficient recursive estimators of the geometric median in high dimensional spaces, we propose in this paper a recursive strategy that is able to estimate the cluster centers by minimizing a k-medians type criterion. One of the main advantages of our approach, compared to previous ones, is that it can be computed in only O(n) operations, so that it can deal with very large datasets, and it is more robust than the k-means. Note also that, by its recursive nature, another important feature is that it allows automatic updating and does not require storing all the data. A key tuning parameter in our algorithm is the descent step value. We found empirically that reasonable values are of the same order as the empirical loss. We thus also consider an automatic two-step procedure in which one first runs the sequential version of the k-means in order to approximate the value of the loss function and then runs our stochastic k-medians with an appropriate descent step.

The paper is organized as follows. We first fix notations and present our algorithm. In the third section, we state the almost sure consistency of the stochastic gradient k-medians to a stationary point of the underlying objective function. The proof heavily relies on Monnez (2006). In Section 4, we compare on simulations the performance of our technique with the sequential k-means, the PAM algorithm and the trimmed k-means when the data are contaminated by a small fraction of outliers. We note that applying averaging techniques (see Polyak and Juditsky (1992)) to our estimator, with a small number of different initialization points, is a very competitive approach, even for moderate sample sizes, with computation times that are much smaller. In Section 5, we illustrate our new clustering algorithm on a large sample of about 5000 individuals, in order to determine profiles of television audience. A major difference with PAM is that our algorithm searches for a solution in the whole space, whereas PAM and its refinements CLARA and CLARANS only look for a solution among the elements of the sample. Consequently, approaches such as PAM are not adapted to deal with the temporal data presented in Section 5, since these data mainly consist of 0s and 1s indicating whether the television is switched on or off during each minute of the day. Proofs are gathered in the Appendix.

2 The stochastic gradient k-medians algorithm

2.1 Context and definitions

Let (Ω, 𝒜, P) be a probability space. Suppose we have a sequence (X_n)_{n ≥ 1} of independent copies of a random vector X taking values in R^d. The aim is to partition R^d into a finite number k of clusters. Each cluster, indexed by r ∈ {1, …, k}, is represented by its center, which is an element of R^d denoted by x_r. From a population point of view, the k-means and k-medians algorithms aim at finding local minima of the function g mapping (R^d)^k to R, defined as follows, for x = (x_1, …, x_k) with x_r ∈ R^d for all r = 1, …, k,

$$g(x) = E\left[\Phi\left(\min_{1 \le r \le k} \|X - x_r\|\right)\right], \qquad (1)$$

where Φ is a real, positive, continuous and non-decreasing function and the norm in R^d takes account of the dimension d of the data: for u ∈ R^d, $\|u\|^2 = \frac{1}{d}\sum_{j=1}^{d} u_j^2$. The particular case Φ(u) = u² leads to the classical k-means algorithm, whereas Φ(u) = u leads to the k-medians.

Before presenting our new recursive algorithm, let us introduce some notation and recall the recursive k-means algorithm developed by MacQueen (1967). Let us denote by $\mathbb{1}_r(X_{n+1}, x_1, \dots, x_k)$ the indicator function which is equal to one when x_r is the nearest point to X_{n+1} among the set of points {x_1, …, x_k}, and to zero otherwise. The k-means recursive algorithm proposed by MacQueen (1967) starts with k arbitrary groups, each containing only one point, x_{1,1}, …, x_{k,1}. Then, at each iteration, the cluster centers are updated as follows,

$$x_{r,n+1} = x_{r,n} + \frac{1}{n_{r,n}+1}\left(X_{n+1} - x_{r,n}\right)\mathbb{1}_r\!\left(X_{n+1}, x_{1,n}, \dots, x_{k,n}\right), \qquad (2)$$

where, for r = 1, …, k, n_{r,n} is just the number of elements allocated to cluster r until iteration n. For n ≥ 1, let $n_{r,n+1} = n_{r,n} + \mathbb{1}_r(X_{n+1}, x_{1,n}, \dots, x_{k,n})$. This also means that x_{r,n+1} is simply the barycenter of the elements allocated to cluster r until iteration n+1.

The interesting point is that this recursive algorithm is very fast and can be seen as a Robbins-Monro procedure.
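To fix ideas, here is a minimal sketch in R (not the authors' code; all function and variable names are ours) of one MacQueen update: the new observation is assigned to its nearest center, the corresponding counter is incremented, and that center is moved towards the observation with weight 1/(n_{r,n}+1), so that it remains the barycenter of its cluster.

    # One MacQueen k-means update. 'centers' is a k x d matrix of current
    # cluster centers, 'counts' the numbers of points already allocated to
    # each cluster, 'x_new' the new observation (a numeric vector of length d).
    macqueen_update <- function(centers, counts, x_new) {
      d2 <- rowSums(sweep(centers, 2, x_new)^2)  # squared distances to the centers
      r  <- which.min(d2)                        # index of the nearest center
      counts[r]    <- counts[r] + 1
      centers[r, ] <- centers[r, ] + (x_new - centers[r, ]) / counts[r]
      list(centers = centers, counts = counts)
    }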

2.2 Stochastic gradient k-medians algorithms

Assuming X has an absolutely continuous distribution, we have, for r ≠ r′,

$$P\left(\|X - x_r\| = \|X - x_{r'}\|\right) = 0.$$

Then, the k-medians approach relies on looking for minima, that may be local, of the function g, which can also be written as follows, for any x = (x_1, …, x_k) such that x_r ≠ x_{r′} when r ≠ r′,

$$g(x) = \sum_{r=1}^{k} E\left[\|X - x_r\|\, \mathbb{1}_r(X, x_1, \dots, x_k)\right]. \qquad (3)$$

In order to get an explicit Robbins-Monro algorithm representation, it remains to exhibit the gradient of g. Let us write g in integral form. Denoting by f the density of the random variable X, we have

$$g(x) = \sum_{r=1}^{k} \int_{\mathbb{R}^d} \|z - x_r\|\, \mathbb{1}_r(z, x_1, \dots, x_k)\, f(z)\, dz.$$

For z ≠ x_r, it can be checked easily that

$$\frac{\partial}{\partial x_r} \|z - x_r\| = -\,\frac{z - x_r}{\|z - x_r\|},$$

and since the boundaries of the clusters have null Lebesgue measure, the partial derivatives of g satisfy

$$\nabla_{x_r} g(x) = -\, E\left[\frac{X - x_r}{\|X - x_r\|}\, \mathbb{1}_r(X, x_1, \dots, x_k)\right].$$

We define, for r = 1, …, k,

$$h_r(x, X) = -\,\frac{X - x_r}{\|X - x_r\|}\, \mathbb{1}_r(X, x_1, \dots, x_k), \qquad (4)$$

so that E[h_r(x, X)] = ∇_{x_r} g(x). We can now present our stochastic gradient k-medians algorithm. Given a set of k distinct initialization points in R^d, the set of cluster centers is updated at each iteration as follows. For r = 1, …, k and n ≥ 1,

$$x_{r,n+1} = x_{r,n} + \gamma_{r,n}\, \frac{X_{n+1} - x_{r,n}}{\|X_{n+1} - x_{r,n}\|}\, \mathbb{1}_r\!\left(X_{n+1}, x_{1,n}, \dots, x_{k,n}\right), \qquad (5)$$

with x_1 = (x_{1,1}, …, x_{k,1}) the vector of initialization points and (γ_{r,n}) a sequence of positive steps.

The steps γ_{r,n}, also called gains, are supposed to be F_n-measurable, where F_n is the σ-algebra generated by the initialization points and X_1, …, X_n. We denote by ∇g the gradient of g and define h(x, X) = (h_1(x, X), …, h_k(x, X)). Let Γ_n be the diagonal matrix of size kd × kd whose diagonal elements are γ_{1,n}, …, γ_{k,n}, each γ_{r,n} being repeated d times. Then, the k-medians algorithm can be written in a matrix way,

$$x_{n+1} = x_n - \Gamma_n\, h(x_n, X_{n+1}), \qquad (6)$$

which is a classical stochastic gradient descent.
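The following short sketch in R (ours, not the authors' code) illustrates one such iteration: only the center nearest to the new observation moves, by a step gamma along the unit vector pointing towards the observation. The step schedule c_gamma / (n_r + 1)^alpha anticipates the form (8) discussed in Section 2.3; alpha = 0.75 is only an assumed illustrative value.

    # One stochastic gradient k-medians update (sketch). Plain Euclidean
    # distances are used; a dimension-normalized norm would only rescale them.
    kmedians_update <- function(centers, counts, x_new, c_gamma, alpha = 0.75) {
      dists <- sqrt(rowSums(sweep(centers, 2, x_new)^2))
      r <- which.min(dists)                 # only the nearest center is updated
      counts[r] <- counts[r] + 1            # counts[r] is now n_{r,n} + 1
      gamma <- c_gamma / counts[r]^alpha    # descent step
      if (dists[r] > 0) {
        centers[r, ] <- centers[r, ] + gamma * (x_new - centers[r, ]) / dists[r]
      }
      list(centers = centers, counts = counts)
    }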

2.3 Tuning the stochastic gradient k-medians and its averaged version

The behavior of algorithm (2.2) depends on the sequence of steps (γ_{r,n}) and on the vector of initialization points. These two sets of tuning parameters play distinct roles and we mainly focus on the choice of the step values, noting that, as for the k-means, the estimation results must be compared for different sets of initialization points in order to get a better estimation of the cluster centers. Assume we have a sample X_1, …, X_n of realizations of X and a set of initialization points of the algorithm; the selected estimate of the cluster centers is the one minimizing the following empirical risk,

$$\widehat{g}_n(x) = \frac{1}{n} \sum_{i=1}^{n} \min_{1 \le r \le k} \|X_i - x_r\|. \qquad (7)$$
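As an illustration, the empirical risk (7) can be computed with a few lines of R (a sketch with our own function name, not the authors' code):

    # Empirical L1 risk (7): average distance of each observation to its
    # nearest estimated center. X is an n x d data matrix, centers a k x d matrix.
    empirical_error <- function(X, centers) {
      mean(apply(X, 1, function(x) min(sqrt(rowSums(sweep(centers, 2, x)^2)))))
    }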

Let us denote by n_{r,n} the number of updating steps for cluster r until iteration n, for r = 1, …, k. A classical form of the descent steps is given by

$$\gamma_{r,n} = \frac{c_\gamma}{\left(n_{r,n} + 1\right)^{\alpha}}, \qquad (8)$$

where c_γ and α control the gain.

Adopting an asymptotic point of view, one could believe that the steps should be set to $c_\gamma\, n_{r,n}^{-\alpha}$ with suitable constants c_γ and α, which are unknown in practice, in order to attain the optimal parametric rates of convergence of Robbins-Monro algorithms (see e.g. Duflo (1997), Th. 2.2.12). Our experimental results on simulated data have shown that the convergence of algorithm (2.2) with descent steps defined in (8) is then very sensitive to the values of the parameters c_γ and α, which have to be chosen very carefully. A simulation study performed in the particular case k = 1 by Cardot et al. (2010) showed that this direct approach could lead to inaccurate results and is nearly always less effective than the averaged algorithm presented below, even for well chosen descent step values. From an asymptotic point of view, it has been proved in Cardot et al. (2011) that the averaged stochastic gradient estimator of the geometric median, corresponding to k = 1, is asymptotically efficient under classical assumptions. Intuitively, when the algorithm is not too far from the solution, averaging substantially decreases the variability of the initial algorithm, which oscillates around the true solution, and thus greatly improves its performances.

Consequently, we prefer to introduce an averaging step (see for instance Polyak and Juditsky (1992) or Pelletier (2000)), which does not slow down the algorithm and provides an estimator which is much more effective. Our averaged estimator of the cluster centers, which remains recursive, is defined as follows, for r = 1, …, k, n ≥ 1, and for x_{r,n+1} the value obtained by combining (2.2) and (8),

$$\overline{x}_{r,n+1} = \overline{x}_{r,n} + \frac{1}{n+1}\left(x_{r,n+1} - \overline{x}_{r,n}\right), \qquad (9)$$

with starting points $\overline{x}_{r,1} = x_{r,1}$. Standard choices can then be made for α (see e.g. Bottou (2010) and references therein), so that one only needs to select a value for c_γ.
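Since the averaged estimator is a running mean of the iterates, it can be updated with one extra line per observation. A minimal sketch in R (ours), to be combined with the k-medians update given above:

    # Recursive averaging step (9): centers_bar is the current average of the
    # stochastic gradient iterates, centers_new the new iterate, n the number
    # of observations processed so far.
    average_update <- function(centers_bar, centers_new, n) {
      centers_bar + (centers_new - centers_bar) / (n + 1)
    }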

3 Almost sure convergence of the algorithm

3.1 A convergence theorem

The following theorem is the main theoretical result of this paper. It states that the recursive algorithm defined in (6) converges almost surely to the set of stationary points of the objective function defined in (3), under the following assumptions.

  • (H1) a) The random vector X is absolutely continuous with respect to the Lebesgue measure.
    b) X is bounded: there exists a constant M such that ‖X‖ ≤ M a.s.
    c) : such that , .

  • (H2) a) , .
    b) a.s.
    c) a.s.
    d)

Theorem 1.

Assume that X is absolutely continuous and that x_{r,1} ≠ x_{r′,1}, for r ≠ r′. Then, under Assumptions (H1a,c), (H2a,b), and (H3) or (H3'), the sequences $\left(g(x_n)\right)_{n\ge1}$ and $\left(\sum_{j=1}^{n}\sum_{r=1}^{k}\gamma_{r,j}\,\|\nabla_{x_r} g(x_j)\|^2\right)_{n\ge1}$ converge almost surely.
Moreover, if the hypotheses (H1b) and (H2c,d) are also fulfilled, then $\nabla g(x_n)$ and the distance between x_n and the set of stationary points of g converge almost surely to zero.

A direct consequence of Theorem 1 is that if the set of stationary points of g is finite, then the sequence (x_n) necessarily converges almost surely towards one of these stationary points, because the increment x_{n+1} − x_n converges almost surely towards zero. By Cesàro mean arguments, the averaged sequence $(\overline{x}_n)$ also converges almost surely towards the same stationary point.

3.2 Comments on the hypotheses

Note first that if the data do not arrive online and is chosen randomly among the sample units then is absolutely continuous and , for under (H1a,b). Moreover, the absolute continuity of X is a technical assumption that is required to get decomposition (3) of the error. Proving the convergence in the presence of atoms would require decomposing this error in order to take into account the points which could have a non-null probability of being at the same distance from two distinct centers. Such a study is clearly beyond the scope of the paper. Note however that in the simple case k = 1, it has been established in Cardot et al. (2011) that the stochastic algorithm for the functional median is convergent provided that the distribution, which can be a mixture of a continuous and a discrete distribution, does not charge the median.

Hypothesis (H1c) is a stronger version of a more classical hypothesis needed to get consistent estimators of the spatial median (see Chaudhuri (1996)). As noted in Cardot et al. (2011), it is closely related to small ball properties of and is fulfilled when

for a constant that does not depend on and small enough. This implies in particular that hypothesis (H1c) can be satisfied only when the dimension of the data satisfies .

Hypotheses (H2) and (H3) or (H3') deal with the stepsizes. Considering the general form of gains given in (8), they are fulfilled when the sizes of all the clusters grow to infinity at the same rate and 1/2 < α ≤ 1.

4 A simulation study

We first perform a simulation study to compare our recursive k-medians algorithm with the following well-known clustering algorithms: the recursive version of the k-means (function kmeans in R), the trimmed k-means (function tkmeans in the R package tclust, with the trimming coefficient set to its default value, 0.05) and PAM (function pam in the R package cluster). Our R codes are available on request.

Comparisons are first made according to the value of the empirical error (7), which must be as small as possible. We note that the results of our averaged recursive procedure defined by (2.2), (8) and (9) are very stable when the value of the tuning parameter c_γ is not too far from the minimum value of the error, the other tuning parameters being set to standard values. This leads us to propose, in Section 4.2, an automatic clustering algorithm which consists in first approximating the error with a recursive k-means and then performing our recursive k-medians with this selected value of c_γ. We have no mathematical justification for such an automatic choice of the tuning parameter but it always worked well in all our simulation experiments. This important point of our algorithm deserves further investigation that is beyond the scope of the paper. Note however that this intuitive approach will certainly be ineffective when the dispersion is very different from one group to another. It would then be possible to consider refinements of the previous algorithm which would consist in considering different values of the tuning parameter c_γ for the different clusters. We only present here a few simulation experiments which highlight both the strengths and the drawbacks of our recursive k-medians algorithm.

4.1 Simulation protocol

Simulation 1: a simple experiment in R^2

We first consider a very simple case and draw independent realizations of a bivariate variable

(10)

which is a mixture of three bivariate Gaussian random vectors with distinct mean vectors and covariance matrices, together with an outlying point located at (-14, 14) whose mixture weight, the contamination level, controls the proportion of outliers. A sample of realizations of this variable is drawn in Figure 1.

Figure 1: Simulation 1. A sample of realizations. An outlier is located at position (-14, 14).

Simulation 2: larger dimension with different correlation levels

Figure 2: Simulation 2. A sample of realizations. The mean values of the three natural clusters are drawn in bold lines.

We also performed a simulation experiment, with a mixture of three Gaussian random variables as in (10), but in higher dimensional spaces and with correlation levels that vary from one cluster to another. The three components are independent multivariate normal distributions in R^d with different mean vectors, and their covariance matrices are controlled by a correlation parameter that differs from one cluster to another. Note that this covariance structure corresponds to autoregressive processes of order one, the correlation parameter playing the role of the autocorrelation. As before, a fixed point plays the role of an outlying point. A sample of independent realizations, without outliers, is drawn in Figure 2.
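For completeness, such an AR(1) covariance structure can be simulated as follows; this is only an illustrative sketch in R, and the arguments mu, sigma and rho are placeholders, not the values used in the paper.

    # Draw n Gaussian vectors in R^d with mean mu and AR(1) covariance
    # Sigma[i, j] = sigma^2 * rho^|i - j|.
    ar1_gaussian <- function(n, mu, rho, sigma = 1) {
      d <- length(mu)
      Sigma <- sigma^2 * rho^abs(outer(1:d, 1:d, "-"))
      R <- chol(Sigma)                  # upper triangular factor, t(R) %*% R = Sigma
      matrix(rnorm(n * d), n, d) %*% R + matrix(mu, n, d, byrow = TRUE)
    }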

4.2 L1 error and sensitivity to the tuning parameter c_γ

As noted in Bryant and Williamson (1978), comparing directly the distances of the estimates to the true cluster centers may not be appropriate to evaluate a clustering method. Our comparison is thus first made in terms of the value of the empirical error (7), which should be as small as possible. For all methods, we considered that there were k = 3 clusters.

Figure 3: Simulation 1. Mean empirical error (over 50 replications) for the PAM algorithm (dashed line), the k-means and the stochastic k-medians (bold line), as a function of the descent parameter c_γ.

We first study the simple case of Simulation 1. The empirical mean error of the PAM algorithm, the k-means and the averaged k-medians, for 50 replications of samples of moderate size and with a small contamination level, is presented in Figure 3. The number of initialization points equals 10 for both the k-means and the k-medians. When the descent parameter c_γ equals 0, the initialization point is given by the centers estimated by the k-means, so that the empirical error corresponds in that case to the k-means error, which is slightly above 2.31. We first note that this error is always larger, even if the contamination level is small, than the PAM and the k-medians errors for the values of c_γ considered in Figure 3. Secondly, it appears that over a wide range of values of c_γ the k-medians error is nearly constant and is clearly smaller than the error of the PAM algorithm. This means that, even if the sample size is relatively small, the recursive k-medians can perform well for values of c_γ which are of the same order as the error.

Figure 4: Simulation 2. The mean empirical error (over 50 replications) is represented for the PAM algorithm (dashed line), the MacQueen version of the k-means and the recursive k-medians estimator (bold line), as a function of c_γ.

We now consider 50 replications of samples drawn from the distribution described in Simulation 2. The number of initialization points for the k-means and the k-medians is now equal to 25 and the empirical mean error is presented in Figure 4. We first note that the performances of the PAM algorithm clearly decrease with the dimension. The k-means performs better even if there are 5% of outliers and even though it is not designed to minimize an L1 error criterion. This can be explained by the fact that PAM, as well as CLARA and CLARANS, looks for a solution among the elements of the sample. Thus these approaches can hardly explore all the dimensions of the data when the dimension is large and the sample size is not large enough. On the other hand, the k-medians and the k-means look for a solution in the whole space and are not restricted to the observed data, and thus provide better results in terms of error. As before, we can also remark that the minimum error, which is around 1.36, is attained for values of c_γ in a whole interval.

Figure 5: Simulation 2, with the standard deviations multiplied by a factor 10. The mean loss function (over 50 replications) is represented for the PAM algorithm (dashed line), the MacQueen version of the k-means and our recursive k-medians estimator (bold line), as a function of c_γ.

We finally present results from Simulation 2 in which we consider samples of the same variable with its standard deviations multiplied by a factor 10. The contamination level is small and 50 initialization points were considered for the k-means and k-medians algorithms. Since the scale has been multiplied by a factor 10, the minimum of the error is now around 13.6. We remark, as before, that because of the dimension of the data, PAM is outperformed by the k-means and the k-medians even in the presence of a small fraction of outliers. The error of the k-medians estimator is again very stable over a whole interval of values of c_γ, with smaller values than the error of the k-means clustering.

As a first conclusion, it appears that for large dimensions the k-medians can give results which are much better than PAM in terms of empirical error. We can also note that the averaged recursive k-medians is not very sensitive to the choice of the parameter c_γ, provided its value is not too far from the minimum value of the error. Thus we only consider, in the following subsection, the data-driven version of our averaged algorithm described in Section 2.3, in which the value of c_γ is chosen automatically, its value being the empirical error of the recursive k-means. This data-driven k-medians algorithm can be summarized as follows (a short illustrative sketch is also given after the list):

  1. Run the recursive k-means algorithm and get the estimated centers.

  2. Set c_γ to the value of the empirical error of the k-means, evaluated with formula (7).

  3. Run the averaged k-medians defined by (2.2), (8) and (9), with c_γ computed in Step 2 and the other tuning parameters set to their standard values.
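A minimal end-to-end sketch of this data-driven procedure in R is given below. It relies on the helper functions sketched earlier (macqueen_update, kmedians_update, average_update and empirical_error), which are our illustrative implementations, not the authors' code; alpha = 0.75 and the initialization choices are only assumptions.

    data_driven_kmedians <- function(X, k, alpha = 0.75) {
      n <- nrow(X)
      init <- X[sample(n, k), , drop = FALSE]        # k starting centers
      # Step 1: one pass of the recursive (MacQueen) k-means
      km <- list(centers = init, counts = rep(1, k))
      for (i in seq_len(n)) km <- macqueen_update(km$centers, km$counts, X[i, ])
      # Step 2: set c_gamma to the empirical error (7) of the k-means solution
      c_gamma <- empirical_error(X, km$centers)
      # Step 3: averaged stochastic gradient k-medians, started here from the
      # k-means centers (an initialization choice of ours)
      sg  <- list(centers = km$centers, counts = rep(1, k))
      bar <- km$centers
      for (i in seq_len(n)) {
        sg  <- kmedians_update(sg$centers, sg$counts, X[i, ], c_gamma, alpha)
        bar <- average_update(bar, sg$centers, i)
      }
      bar                                            # averaged estimates of the k centers
    }

In practice, as described above, this would be repeated over several sets of initialization points, keeping the solution with the smallest empirical error (7).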

4.3 Classification Error Rate

We now make comparisons in terms of classification error, measured by the Classification Error Rate (CER) introduced by Chipman and Tibshirani (2005) and defined as follows. For a given partition P of the sample, let $\mathbb{1}_P(i, i')$ be an indicator for whether partition P places observations i and i′ in the same group. Consider a partition Q built with the true class labels; the CER for a partition P is defined as

$$\mathrm{CER}(P, Q) = \binom{n}{2}^{-1} \sum_{i < i'} \left|\, \mathbb{1}_P(i, i') - \mathbb{1}_Q(i, i') \,\right|. \qquad (11)$$

The CER equals 0 if the partitions P and Q agree perfectly, whereas a high value indicates disagreement. Since PAM, the k-means and our algorithm are not designed to detect outliers automatically, we only evaluate the CER on the non-outlying pairs of elements (i, i′) of the sample.
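The CER can be computed directly from the pairwise agreement indicators; a small sketch in R (ours, with hypothetical argument names):

    # Classification Error Rate (11): proportion of pairs (i, i') on which the
    # estimated partition and the true partition disagree.
    cer <- function(labels_hat, labels_true) {
      same_hat  <- outer(labels_hat,  labels_hat,  "==")
      same_true <- outer(labels_true, labels_true, "==")
      ut <- upper.tri(same_hat)                 # each pair counted once
      mean(same_hat[ut] != same_true[ut])
    }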

Figure 6: Simulation 1 with no outliers. On the left, the empirical error. On the right, the CER for the k-means, PAM, the data-driven recursive k-medians algorithm (kmed) and the trimmed k-means (tkm).

We present in Figure 6 results for 500 replications of Simulation 1 with no outliers. We note, in this small dimension context with no contamination, that the errors are comparable. In terms of CER, PAM, the k-means and the data-driven k-medians algorithms also have approximately the same performances. For the trimmed k-means, the results are not as good, since this algorithm automatically classifies 5% of the elements of the sample as outliers.

Figure 7: Simulation 1 with a small fraction of outliers. On the left, the empirical error. On the right, the CER for the k-means, PAM, the data-driven recursive k-medians algorithm (kmed) and the trimmed k-means (tkm).

We then consider the same experiment as before, the only difference being that there is now a small fraction of outliers. The results are presented in Figure 7. The k-means algorithm is clearly affected by the presence of outliers and both its error and its CER are now much larger than for the other algorithms. PAM and the recursive k-medians have similar performances, even if PAM is slightly better. The trimmed k-means now detects the outliers and also has good performances. If the contamination level increases further, as presented in Figure 8, then PAM and the trimmed k-means (whose trimming coefficient remains equal to 5%) are strongly affected in terms of CER and do not recover the true groups. The k-medians algorithm is less affected by this larger level of contamination. Its median CER is nearly unchanged, meaning that for at least 50% of the samples it gives a correct partition.

Figure 8: Simulation 1 with a larger contamination level. On the left, the empirical error. On the right, the CER for the k-means, PAM, the data-driven recursive k-medians algorithm (kmed) and the trimmed k-means (tkm).

We now consider Simulation 2, with a higher dimension and a fraction of outliers. The empirical errors and the CER are given in Figure 9. It clearly appears that PAM has the largest errors, while the trimmed k-means and the data-driven k-medians have the smallest ones. Intermediate errors are obtained for the k-means. In terms of CER, the partitions obtained by the k-means and PAM are not effective and do not recover the true partition well in the majority of the samples. The trimmed k-means and our algorithm always perform well and have similarly low values of the CER.

Figure 9: Simulation 2. On the left, the empirical error. On the right, the CER for the k-means, PAM, the data-driven recursive k-medians algorithm (kmed) and the trimmed k-means (tkm).

4.4 Computation time

The R codes of all the considered estimation procedures call C routines and provide the same output. Mean computation times, for 100 runs and various sample sizes and numbers of clusters, are reported in Table 1. They are based on one initialization point. From a computational point of view, the recursive k-means based on the MacQueen algorithm as well as the averaged stochastic k-medians algorithm are always faster than the other algorithms, and the gain increases as the sample size gets larger. For example, when n = 2000 and k = 5, the stochastic k-medians is approximately 30 times faster than the trimmed k-means and 350 times faster than the PAM algorithm. The computation time of the data-driven recursive k-medians is approximately the sum of the computation times of the recursive k-means and of the stochastic k-medians. This also means that when the allocated computation time is fixed and the dataset is very large, the data-driven recursive k-medians can deal with sample sizes that are 15 times larger than the trimmed k-means and 175 times larger than the PAM algorithm.

                        n = 250                 n = 500                 n = 2000
Estimator          k=2    k=4    k=5       k=2    k=4    k=5       k=2     k=4     k=5
k-medians          0.33   0.35   0.36      0.45   0.47   0.48      1.14    1.25    1.68
PAM                1.38   3.34   4.21      5.46   15.12  20.90     111.00  302.00  596.00
Trimmed k-means    2.20   5.45   6.76      5.32   11.19  13.48     22.97   42.72   51.00
MacQueen           0.21   0.29   0.31      0.25   0.43   0.50      0.60    1.38    1.76
Table 1: Comparison of the mean computation time in seconds, for 100 runs, of the different estimators for various sample sizes n and numbers of clusters k. The computation times are given for one initialization point.

When the sample size or the dimension increases, the computation time is even more critical. For instance, when the sample size is about 5400 and the dimension is 1440, as in the example of Section 5, our data-driven recursive k-medians estimation procedure is at least 500 times faster than the trimmed k-means. It takes 5.5 seconds for the sequential k-means to converge and then about 3.0 seconds for the averaged k-medians, whereas it takes more than 5700 seconds for the trimmed k-means.

5 Determining television audience profiles with k-medians

The Médiamétrie company provides every day the official estimates of television audience in France. Television consumption can be measured both in terms of how long people watch each channel and of when they watch television. Médiamétrie has a panel of about 9000 individuals equipped at home with sensors that are able to record and send the audience of the different television channels. Among this panel, a sample of around 7000 people is drawn every day and the television consumption of the people belonging to this sample is sent to Médiamétrie at night, between 3 and 5 am. Online clustering techniques are then interesting to determine automatically, the number of clusters being fixed in advance, the main profiles of viewers, and then to relate these profiles to socio-economic variables. In these samples, Médiamétrie has noted the presence of some atypical behaviors, so that robust techniques may be helpful.

Figure 10: A sample of 5 observations of individual audience profiles measured every minute over a period of 24 hours.

We are interested in building profiles of the evolution along time of the total audience for people who watched television at least one minute on 6 September 2010. About 1600 people, among the initial sample whose size is around 7000, did not watch television at all on that day, so that we finally get a sample of about 5400 individual audiences, aggregated over all television channels and measured every minute over a period of 24 hours. An observation is thus a vector of dimension 1440, with each component giving the fraction of time spent watching television during the corresponding minute of the day. A sample of 5 individual temporal profiles is drawn in Figure 10. Clustering techniques based on medoids and representative elements of the sample (e.g. PAM, CLARA and CLARANS) are not really interesting in this context since they will return centers of the form of the profiles drawn in Figure 10, which are, in great majority, constituted of 0s and 1s and are consequently difficult to interpret. Furthermore, the dimension being very large, these algorithms, which do not consider all the dimensions of the data, as seen in the simulation study, will lead to a minimum value of the empirical error (7) that is substantially larger than for the k-means and our recursive k-medians. Indeed, at the optimum, the value of the empirical error is 0.2455 for the k-medians, 0.2471 for the k-means and 0.2692 for PAM.

The cluster centers, estimated with our averaged algorithm for k = 5 clusters, with the parameter c_γ selected automatically and 100 different starting points, are drawn in Figure 11. They have been ordered in decreasing order of the sizes of the corresponding clusters and labelled Cl.1 to Cl.5. Cluster 1 (Cl.1) is thus the largest cluster and it contains about 35% of the elements of the sample. It corresponds to individuals who do not watch television much during the day, with a cumulative audience of about 42 minutes. At the opposite end, cluster 5, which represents about 12% of the sample, is associated with high audience rates during nearly the whole day, with a cumulative audience of about 592 minutes. Clusters 2, 3 and 4 correspond to intermediate consumption levels and can be distinguished according to whether the audience occurs during the evening or at night. For example Cluster 4, which represents 16% of the sample, is related to people watching television late at night, with a cumulative audience of about 310 minutes.

Figure 11: Cluster centers for temporal television audience profiles measured every minute over a period of 24 hours.

Appendix : Proof of Theorem 1

The proof of Theorem 1 relies on the following light version of the main theorem in Monnez (2006), Section 2.1, and consists in checking that all its conditions are satisfied.

Theorem 2 (Monnez (2006)).

Assuming

  • is a non negative function;

  • There exists a constant such that, for all ,

  • The sequence is almost surely bounded and is continuous almost everywhere on the compact set containing ;

  • There exist four sequences of random variables , , and in adapted to the sequence such that a.s.:

  • and ;

  • and ;

  • a.s., a.s. and

    then the distance of to the set of stationary points of converges almost surely to zero.

Proof of Theorem 1.

Let us now check that all the conditions in Theorem 2 are fulfilled in our context.

Step 1: proof of

Let and . Since is absolutely continuous with respect to Lebesgue measure, a.s. and one gets

and it comes

which yields

The application is a continuous function whose gradient

is also continuous for . Then almost surely for , there exists , , such that

Consequently for all ,

so that

On the one hand

and on the other hand

hence since

one gets, with (H1c)

Consequently, we have

Step 2: Proof of the assertion: , for all ,

Let us prove by induction on that for all , for all , . This inequality is trivial for the case : . Let such that , . Let . First we assume that . Then it comes

Now in the case when , one gets

and then

Since for , , it remains to deal with the unique index such that . In that case,

By (H1b) and from the inequalities and , we have,

which leads to and concludes the proof by induction.

Step 3: Proof of

From the integral form

it is easy to see that is a continuous function of .

Step 4: Proof of

The definition of implies that and hence .

Step 5: Proof of

Hence assuming (H3), one gets

In the case when (H3’) holds instead of (H3), one has

Consequently,

which concludes the proof.∎

Acknowledgements. We thank the anonymous referees for their valuable suggestions. We also thank the Médiamétrie company for allowing us to illustrate our sequential clustering technique with their data.

References

  • Bottou (2010) Bottou, L., 2010. Large-scale machine learning with stochastic gradient descent. In: Lechevallier, Y., Saporta, G. (Eds.), Compstat 2010. Physica Verlag, Springer., pp. 177–186.
  • Bryant and Williamson (1978) Bryant, P., Williamson, J. A., 1978. Asymptotic behaviour of classification maximum likelihood estimates. Biometrika 65, 273–281.
  • Cardot et al. (2010) Cardot, H., Cénac, P., Chaouch, M., 2010. Stochastic approximation to the multivariate and the functional median. In: Lechevallier, Y., Saporta, G. (Eds.), Compstat 2010. Physica Verlag, Springer., pp. 421–428.
  • Cardot et al. (2011) Cardot, H., Cénac, P., Zitt, P.-A., 2011. Efficient and fast estimation of the geometric median in Hilbert spaces with an averaged stochastic gradient approach. Bernoulli, to appear.
  • Chaudhuri (1996) Chaudhuri, P., 1996. On a geometric notion of quantiles for multivariate data. J. Amer. Statist. Assoc. 91 (434), 862–872.
  • Chipman and Tibshirani (2005) Chipman, H., Tibshirani, R., 2005. Hybrid hierarchical clustering with applications to microarray data. Biostatistics 7, 286–301.

  • Croux et al. (2007) Croux, C., Gallopoulos, E., Van Aelst, S., Zha, H., 2007. Machine learning and robust data mining. Computational Statistics and Data Analysis 52, 151–154.
  • Duflo (1997) Duflo, M., 1997. Random iterative models. Vol. 34 of Applications of Mathematics (New York). Springer-Verlag, Berlin.
  • Forgy (1965) Forgy, E., 1965. Cluster analysis of multivariate data: efficiency vs. interpretability of classifications. Biometrics 21, 768–769.

  • Gan et al. (2007) Gan, G., Ma, C., Wu, J., 2007. Data Clustering: Theory, Algorithms, and Applications. SIAM, Philadelphia.
  • García-Escudero and Gordaliza (1999) García-Escudero, L., Gordaliza, A., 1999. Robustness properties of k-means and trimmed k-means. Journal of the American Statistical Association 94, 956–969.
  • García-Escudero et al. (2008) García-Escudero, L., Gordaliza, A., Matrán, C., Mayo-Iscar, A., 2008. A general trimming approach to cluster analysis. Annals of Statistics 36, 1324–1345.
  • García-Escudero et al. (2010) García-Escudero, L., Gordaliza, A., Matrán, C., Mayo-Iscar, A., 2010. A review of robust clustering methods. Adv. Data Anal. Classif. 4, 89–109.
  • Hartigan (1975) Hartigan, J., 1975. Clustering algorithms. John Wiley & Sons, New York.
  • Jain et al. (1999) Jain, A., Murty, M., Flynn, P., 1999. Data clustering: a review. ACM Computing Surveys 31, 264–323.
  • Kaufman and Rousseeuw (1990) Kaufman, L., Rousseeuw, P., 1990. Finding groups in data: an introduction to cluster analysis. John Wiley & Sons, New York.
  • Laloë (2010) Laloë, T., 2010. L1-quantization and clustering in Banach spaces. Mathematical Methods of Statistics 19, 136–150.
  • MacQueen (1967) MacQueen, J., 1967. Some methods for classification and analysis of multivariate observations. In: Proc. Fifth Berkeley Sympos. Math. Statist. and Probability (Berkeley, Calif., 1965/66), Vol. I: Statistics. Univ. California Press, Berkeley, Calif., pp. 281–297.
  • Monnez (2006) Monnez, J.-M., 2006. Almost sure convergence of stochastic gradient processes with matrix step sizes. Statist. Probab. Lett. 76 (5), 531–536.
  • Ng and Han (2002) Ng, R. T., Han, J., 2002. Clarans: a method for clustering objects for spatial data mining. IEEE Transactions on Knowledge and Data Engineering 14, 1003–1016.
  • Park and Jun (2008) Park, H.-S., Jun, C.-H., 2008. A simple and fast algorithm for k-medoids clustering. Expert Systems with Applications 36, 3336–3341.
  • Pelletier (2000) Pelletier, M., 2000. Asymptotic almost sure efficiency of averaged stochastic algorithms. SIAM J. Control Optim. 39 (1), 49–72 (electronic).
  • Polyak and Juditsky (1992) Polyak, B., Juditsky, A., 1992. Acceleration of stochastic approximation. SIAM J. Control and Optimization 30, 838–855.
  • Small (1990) Small, C. G., 1990. A survey of multidimensional medians. International Statistical Review / Revue Internationale de Statistique 58 (3), 263–277.
  • Zhang and Couloigner (2005) Zhang, Q., Couloigner, A., 2005. A new and efficient k-medoid algorithm for spatial clustering. Lecture Notes in Computer Science 3482, 181–189.