of the pair $(X, Y)$ is known. Then the classification rule which chooses the class having maximum a posteriori probability given the observed pattern has minimal risk of misclassification. This optimal rule is given by
$$g^*(x) = \arg\max_{y \in \{1, \dots, M\}} P(Y = y \mid X = x),$$
where $g^*$ takes, in a measurable way, some value $y$ with maximal a posteriori probability. The minimal risk of misclassification, often termed the Bayes risk, is given by
$$R^* = P(g^*(X) \neq Y).$$
Even though in many problems of pattern recognition the distribution of $(X, Y)$ will not be known, the Bayes risk is a quantity of major importance as it provides the benchmark behaviour against which any other procedure is judged.
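As a concrete illustration, the optimal rule and its risk can be computed directly once the joint distribution is known. The following Python sketch uses a small made-up discrete joint distribution (all numbers are illustrative, not from the paper):

```python
# Sketch: the Bayes rule for a known discrete joint distribution.
# Joint probabilities P(X = x, Y = y) for two classes and three pattern
# values; these numbers are purely illustrative.
joint = {
    (0, 1): 0.30, (0, 2): 0.10,
    (1, 1): 0.05, (1, 2): 0.25,
    (2, 1): 0.10, (2, 2): 0.20,
}
classes = [1, 2]

def bayes_rule(x):
    """Choose the class with maximum a posteriori probability given x."""
    return max(classes, key=lambda y: joint[(x, y)])

def bayes_risk():
    """P(g*(X) != Y): sum the joint mass of all misclassified pairs."""
    return sum(p for (x, y), p in joint.items() if bayes_rule(x) != y)
```

Here the rule picks class 1 for $x = 0$ and class 2 otherwise, and the Bayes risk evaluates to $0.10 + 0.05 + 0.10 = 0.25$.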
Let us briefly recall the i.i.d. model of supervised learning which has provided a main direction of research, see, e.g., the monograph. There, in addition to $(X, Y)$, we have a learning sequence $(X_1, Y_1), \dots, (X_n, Y_n)$ of independent copies of $(X, Y)$, i.e. having the same distribution. This sequence is sampled independently of $(X, Y)$ and is used for learning purposes, in a statistical sense for the estimation of unknown distributions, to construct the classification procedure.
In this paper we take a different approach which is motivated by an on-line classification setting which we model in the following way: There is given a stochastic process $(X_t, Y_t)_t$, the present time-point being denoted by $0$, with observables in temporal order
$$\dots, X_{-2}, X_{-1}, X_0,$$
from which the pattern $Y_0$ is to be inferred. The time parameter $t$ belongs to some set of the form $\{-m, \dots, -1, 0\}$ or, for mathematical purposes, to $\{\dots, -1, 0\}$. So in this classification setting, previous observations may be used to classify the present observation. If $(X_0, Y_0)$ is independent of the past then clearly previous observations carry no information on $Y_0$ and our optimal classification would be given by $g^*(X_0)$.
But in a variety of classification problems we encounter dependence. Looking e.g. at the on-line classification of handwritten characters, the dependence structure in natural language could be taken into account. In this situation, $X_0$ would be the current handwritten character to be classified, $Y_0$ the unknown true character, the foregoing handwritten character would be $X_{-1}$ with unknown true character $Y_{-1}$, and in general $X_{-t}$ would be the $t$-th one preceding with unknown $Y_{-t}$. So, there is a well-known dependence between the $Y_t$'s, described by linguists using Markov models (see, e.g.,  for early discussions), and this dependence is of course inherited by the $X_t$'s. A popular model for this situation is given by a hidden Markov model, which we shall also use in this paper.
Coming back to the general model, we prescribe $k \geq 0$ and use the present and in addition the last $k$ preceding observables. Then a classification rule with memory $k$ takes the form
$$g(X_0, X_{-1}, \dots, X_{-k})$$
for some measurable $g$. The optimal rule is given by
$$g_k^*(x_0, x_{-1}, \dots, x_{-k}) = \arg\max_{y} P(Y_0 = y \mid X_0 = x_0, \dots, X_{-k} = x_{-k})$$
with Bayes risk
$$R_k = P\big(g_k^*(X_0, X_{-1}, \dots, X_{-k}) \neq Y_0\big).$$
Assume that we have such a process with full past $\{\dots, -1, 0\}$ for our mathematical model. By martingale convergence it follows that for $k \to \infty$
$$R_k \to R_\infty$$
for some limit $R_\infty$.
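In a hidden Markov model (introduced formally below), the a posteriori probabilities with memory $k$ can be computed by the standard forward recursion. A minimal Python sketch, assuming a two-state chain with illustrative Gaussian emissions; the matrix, initial distribution, and means are all made up for illustration:

```python
# Sketch: P(Y_0 = y | X_0, X_{-1}, ..., X_{-k}) in a hidden Markov model,
# computed by the standard forward recursion.  All parameters illustrative.
import math

Q = [[0.9, 0.1],
     [0.2, 0.8]]           # transition matrix of the unobservable chain
init = [2/3, 1/3]          # distribution of the earliest state (here: stationary)
means = [0.0, 3.0]         # emissions: X_t | Y_t = i  ~  N(means[i], 1)

def density(i, x):
    """Gaussian emission density of class i at observation x."""
    return math.exp(-0.5 * (x - means[i]) ** 2) / math.sqrt(2 * math.pi)

def posterior(xs):
    """xs = [x_{-k}, ..., x_{-1}, x_0] in temporal order; returns the
    a posteriori distribution of Y_0 given these observations."""
    alpha = [init[i] * density(i, xs[0]) for i in range(len(init))]
    for x in xs[1:]:
        alpha = [sum(alpha[i] * Q[i][j] for i in range(len(init))) * density(j, x)
                 for j in range(len(init))]
    total = sum(alpha)
    return [a / total for a in alpha]

def optimal_rule(xs):
    """Maximum a posteriori classification of Y_0 with memory len(xs) - 1."""
    p = posterior(xs)
    return max(range(len(p)), key=p.__getitem__)
```

With these illustrative numbers, an ambiguous observation near $1.6$ is assigned to class $0$ without memory, but to class $1$ once a preceding observation near $2.9$ is taken into account, showing how memory can change the optimal decision.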
Here it is important to point out that this paper centers around the behaviour of the optimal classification procedure in dependence on $k$, the number of past observations used. This differs markedly from one of the main lines of research in the i.i.d. model of supervised learning where the focus is on the behaviour of classification procedures in dependence on $n$, the size of the training sequence.
To investigate the behaviour of procedures which incorporate preceding information into classification rules we will use the setting of hidden Markov models. This class of models has been of considerable interest in the theory and applications of pattern recognition, see the monographs by  and by  from a more practical viewpoint. It provides a class which allows good modelling for various problems with dependence and still may be handled well from the analytical, the algorithmic, and the statistical point of view, see the monograph by . The applications range from biology to speech recognition to finance; the above monographs contain a wealth of such examples.
A theoretical contribution to pattern recognition for such models was given by  where the asymptotic risk of misclassification for nearest neighbor rules in dependent models including hidden Markov models was derived. Similar models were treated in  to obtain consistency for certain classes of procedures, i.e. convergence of the risk of misclassification to the Bayes risk. As consistency for classification follows from consistency in the corresponding regression problem, see e.g. [2, 6.7], any result on regression consistency yields a result on classification consistency, and a wealth of such results is available, e.g. under mixing conditions. All these results invoke the convergence of the size of a training sequence to infinity and do not cover the topic of this paper. Closer to our paper is the problem of predicting $Y_0$ from past observations for stationary and ergodic time series, see e.g. [5, Chapter 27]. Our treatment differs as in on-line pattern recognition we do not have knowledge of the past classes $Y_{-1}, Y_{-2}, \dots$ (just guesses of them), only that of the observables $X_{-1}, X_{-2}, \dots$.
The hidden Markov model as it will be used in this paper takes the following form. We assume that for each $m$ we have, written in their temporal order, observables and unobservables
$$X_{-m}, X_{-m+1}, \dots, X_0 \quad \text{and} \quad Y_{-m}, Y_{-m+1}, \dots, Y_0.$$
The unobservables form a Markov chain. The observables are conditionally independent given the unobservables in the form of
$$P(X_{-m} \in A_{-m}, \dots, X_0 \in A_0 \mid Y_{-m}, \dots, Y_0) = \prod_{t=-m}^{0} \mathcal{K}(Y_t, A_t)$$
for some stochastic kernel $\mathcal{K}$ and are not Markovian in general. This stochastic kernel and the transition matrix $Q$ of the chain are assumed to be the same for each $m$. But we allow for the flexibility that, for each $m$, a different initial distribution, i.e. distribution of $Y_{-m}$, may occur. Note that $-m$ stands for the time point in the past where our model would be started and the distribution of $Y_{-m}$ would not be known.
For being completely precise we would have to use the notation $X_t^{(m)}, Y_t^{(m)}$ since, due to our flexibility in the initial distribution, the distributions of $(X_t^{(m)}, Y_t^{(m)})$ and $(X_t^{(m')}, Y_t^{(m')})$ need not coincide. Hence also $R_k^{(m)} = R_k^{(m')}$ does not hold in general, where $R_k^{(m)}$ is computed in a model started at some time $-m$, and $R_k^{(m')}$ in a model with a possibly different initial distribution. But all our bounds will only involve the transition matrix $Q$ and the stochastic kernel $\mathcal{K}$ which do not depend on the index $m$. So we shall omit this upper index in order not to overburden our notations.
We assume that the transition matrix of the chain is such that there exists a unique stationary probability distribution $\pi$, characterized by the property that if $Y_{-m}$ has the distribution $\pi$ then all later $Y_t$ have the same distribution $\pi$. For our chain with full past we consider the stationary setting where each $Y_t$ has the distribution $\pi$. We denote the risk in the stationary case with an additional superscript $st$, writing $R_k^{st}$.
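A minimal sketch of how such a stationary distribution can be computed numerically, by iterating an arbitrary initial distribution under the transition matrix; the matrix is an illustrative example, not from the paper:

```python
# Sketch: computing the stationary distribution pi of a transition matrix Q
# (pi Q = pi) by power iteration.  Q is an illustrative example.
Q = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def stationary(Q, steps=200):
    """Iterate an arbitrary initial distribution under Q until it is
    (numerically) invariant."""
    n = len(Q)
    pi = [1.0 / n] * n                 # any initial distribution works
    for _ in range(steps):
        pi = [sum(pi[i] * Q[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(Q)
# pi is now numerically invariant: applying Q once more changes nothing.
```

Since all entries of this $Q$ are positive, the iteration converges geometrically fast regardless of the starting distribution, which is exactly the uniform ergodicity used later in this paper.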
Without loss of generality we let the probability measures $\mathcal{K}(i, \cdot)$, $i = 1, \dots, M$, be given by densities $f_i$ with respect to some $\sigma$-finite measure $\nu$. So we have for all $i$ and measurable $A$
$$\mathcal{K}(i, A) = \int_A f_i \, d\nu.$$
This provides a unified treatment for the case of discrete observables where $\nu$ might be the counting measure, and for the case of Lebesgue densities where $\nu$ might be $d$-dimensional Lebesgue measure.
In Section 2 we shall show under a suitable assumption that $\lim_{k \to \infty} R_k$ exists and is independent of the particular sequence of initial distributions, hence may be written as $R_\infty$. Furthermore this convergence is exponentially fast and we provide a bound for $|R_k - R_\infty|$ in this respect. Let us remark that, as we are looking backwards in time, the usual geometric ergodicity forward in time does not seem to yield an immediate proof. In Section 3 we introduce kernel classification rules with memory and discuss their theoretical and practical performance. Our findings indicate that it might be useful to include a small number of preceding observations, starting with $k = 1$, to increase the performance of classification rules with an acceptable increase in computational complexity. Various technical proofs are given in Section 4.
2 Exponential Convergence
We consider a hidden Markov model as described in the Introduction.
For this model we make the following assumption:
All entries $q_{ij}$, $i, j = 1, \dots, M$, of the transition matrix are $> 0$. All densities $f_i$ are $> 0$.
Furthermore, with $M$ denoting the number of classes, let
The following result provides the main technical tool. Its proof will be given in Section 4. We use the notation for and in the same manner we use .
Let . Consider a hidden Markov model which starts in some time point . Let and fix . Set
Then for all
where for and for .
In the following corollary the probabilities and are treated. The first will pertain to a hidden Markov model which starts in some time point $-m$, the second to one which starts in some time point $-m'$. Note that terms of the form are identical in both models due to the identical transition matrix $Q$ and the identical stochastic kernel $\mathcal{K}$.
Let , and . Then
using Theorem 2.2.
We now introduce the constants used for the exponential bound.
For the following result we need additional constants which arise from basic Markov process theory. A transition matrix is called uniformly ergodic if there exists a unique stationary probability distribution $\pi$ and there exist constants $c > 0$, $0 < \gamma < 1$ such that for any Markov chain $(Y_t)_{t \geq -m}$ with this transition matrix and any initial distribution at time $-m$
$$\| \mathcal{L}(Y_t) - \pi \| \leq c \, \gamma^{t + m} \quad \text{for all } t \geq -m$$
in total variation norm $\| \cdot \|$. With the same meaning, also the process is called uniformly ergodic. The assumption above implies uniform ergodicity, so that we have for each Markov chain constants $c, \gamma$ as above, see [10, Chapter 16] for a general treatment.
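This geometric decay in total variation can be observed numerically. Below is a Python sketch (transition matrix and all numbers illustrative) that iterates a point-mass initial distribution and records its total variation distance to the stationary distribution:

```python
# Sketch: observing uniform ergodicity numerically.  For a transition matrix
# with all entries positive, the distribution at time t approaches the
# stationary distribution geometrically fast in total variation, uniformly
# over the initial distribution.  Q is an illustrative example.
Q = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]

def step(p, Q):
    """One transition: p Q."""
    n = len(Q)
    return [sum(p[i] * Q[i][j] for i in range(n)) for j in range(n)]

def tv(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# approximate the stationary distribution by long iteration
pi = [1/3, 1/3, 1/3]
for _ in range(500):
    pi = step(pi, Q)

p = [1.0, 0.0, 0.0]       # a point mass as "worst-case" initial distribution
dists = []
for _ in range(10):
    dists.append(tv(p, pi))
    p = step(p, Q)
# dists decreases roughly geometrically in t.
```

The contraction rate here is controlled by the Dobrushin coefficient of $Q$; with the entries above, the distance at least halves in every step.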
There exist constants such that for all
if and come from the same model started at some time point ,
in the general case that and come from possibly different models, the first started in some time point , the second in some time point .
The constants will be those introduced in Remark 2.4.
Let us first consider the case that both risks stem from the same model. Using the generic symbol $p$ to denote densities in this model we write the joint density as $p(y_0, x_0, \dots, x_{-k})$ and the joint conditional density as $p(y_0 \mid x_0, \dots, x_{-k})$. With this notation we have
We obtain from Corollary 2.3 and conditional independence
Let us now look at the general case with models 1 and 2, one risk stemming from model 1, the other from model 2 respectively. Then
From the first part of the assertion
To treat the remaining terms we note that the conditional Bayes risks for time lag given are the same in both models, hence the unconditional risks differ by at most the total variation distance between the two distributions of in the two models. This quantity is exponentially small since both models have been running for at least $\min(m, m') - k$ time points, hence
From this we easily obtain our main result.
There exist constants such that for all
in particular for
3 Kernel Classification With Memory
Optimal classification procedures provide benchmarks for the actual behaviour of data driven classification procedures which do not require knowledge of the underlying distribution. A general principle from statistical classification involves the availability of a training sequence
$$(X_1, Y_1), \dots, (X_n, Y_n),$$
where the patterns $X_i$ have been recorded together with their true classes $Y_i$. This training sequence is used for the construction of a regression estimator, i.e. an estimator $\hat p_n(y \mid x)$ of the a posteriori probabilities, which leads to the classification rule
$$\hat g_n(x) = \arg\max_{y} \hat p_n(y \mid x).$$
When we choose a kernel
$$K : \mathbb{R}^d \to [0, \infty)$$
and use the common kernel regression estimate we arrive at the kernel classification rule
$$\hat g_n(x) = \arg\max_{y} \sum_{i=1}^{n} 1_{\{Y_i = y\}} K\!\left(\frac{x - X_i}{h_n}\right)$$
with bandwidth $h_n > 0$.
The asymptotic behaviour, as the size of the training sequence tends to infinity, has been thoroughly investigated for such classification rules and in particular for kernel classification rules. In the i.i.d. case or more generally under suitable mixing conditions, such procedures are risk consistent in the following sense: kernel classification rules asymptotically achieve the minimal risk of misclassification $R^*$ if the size $n$ of the training sequence tends to $\infty$ and the bandwidth sequence $(h_n)_n$ satisfies $h_n \to 0$ and $n h_n^d \to \infty$. As remarked in the Introduction, this type of consistency follows from the consistency of the corresponding regression estimator, hence any result on regression consistency translates into a result on risk consistency.
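A minimal Python sketch of such a kernel classification rule, using the naive (window) kernel and illustrative one-dimensional training data; all numbers are made up:

```python
# Sketch: the kernel classification rule without memory.  Training data,
# kernel and bandwidth are illustrative.
def kernel_rule(x, train, h):
    """train: list of (x_i, y_i).  Choose the class with the largest
    kernel-weighted vote  sum_i 1{y_i = y} K((x - x_i)/h)."""
    K = lambda u: 1.0 if abs(u) <= 1 else 0.0   # naive kernel on [-1, 1]
    classes = set(y for _, y in train)
    votes = {y: sum(K((x - xi) / h) for xi, yi in train if yi == y)
             for y in classes}
    return max(votes, key=votes.get)

# Illustrative training data: class 1 clusters near 0, class 2 near 2.
train = [(0.0, 1), (0.2, 1), (0.3, 1), (2.0, 2), (2.1, 2), (2.4, 2)]
```

With bandwidth $h = 0.5$, an observation at $0.1$ collects votes only from the class-1 cluster and is classified accordingly.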
It is the aim of this section to discuss the applicability of kernel classification with memory in hidden Markov models. Assume that the training sequence $(X_1, Y_1), \dots, (X_n, Y_n)$ is generated according to a hidden Markov model and that there is a sequence of observations
$$\dots, X'_{-2}, X'_{-1}, X'_0,$$
which stems from the stationary hidden Markov model with the same transition matrix $Q$ and the same kernel $\mathcal{K}$ and is stochastically independent of the training sequence. For the classification of $X'_0$ the usual kernel classification rule as described above would classify $X'_0$ as belonging to the class
$$\arg\max_{y} \sum_{i=1}^{n} 1_{\{Y_i = y\}} K\!\left(\frac{X'_0 - X_i}{h_n}\right).$$
This ignores the Markovian structure, so we want to use memory as in the optimal classification of the preceding Section 2. We propose the following procedure. Fix some memory $k \geq 1$ prescribing the number of preceding observations used in the classification of the current one. Use a kernel
$$K : \mathbb{R}^{(k+1)d} \to [0, \infty)$$
and, assuming a training sequence of size $n > k$, classify observation $X'_0$ as originating from the class
$$\arg\max_{y} \sum_{i=k+1}^{n} 1_{\{Y_i = y\}} K\!\left(\frac{(X'_0, X'_{-1}, \dots, X'_{-k}) - (X_i, X_{i-1}, \dots, X_{i-k})}{h_n}\right).$$
Compared to rules without memory the role of $X'_0$ is taken by $(X'_0, X'_{-1}, \dots, X'_{-k})$ whereas the role of the dimension $d$ is taken by $(k+1)d$.
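The rule with memory can be sketched in the same way: the pattern and each training point are replaced by windows of $k + 1$ consecutive observables. Data, kernel and bandwidth below are illustrative:

```python
# Sketch: the kernel classification rule with memory k.  Windows of k+1
# consecutive observables replace single observations.  All data illustrative.
def window_rule(obs_window, xs, ys, k, h):
    """obs_window: (x'_0, x'_{-1}, ..., x'_{-k}).  xs, ys: training sequence
    in temporal order.  Vote with training windows (x_i, x_{i-1}, ..., x_{i-k})."""
    def K(u):                           # product of naive kernels per coordinate
        return 1.0 if all(abs(v) <= 1 for v in u) else 0.0
    votes = {y: 0.0 for y in set(ys)}
    for i in range(k, len(xs)):
        win = tuple(xs[i - j] for j in range(k + 1))
        diff = [(a - b) / h for a, b in zip(obs_window, win)]
        votes[ys[i]] += K(diff)
    return max(votes, key=votes.get)

# Illustrative alternating training sequence: class 2 always follows class 1.
xs = [0.0, 2.0, 0.1, 2.1, 0.2, 2.2]
ys = [1, 2, 1, 2, 1, 2]
```

With memory $k = 1$ the rule matches both the current observation and its predecessor against training windows, so the alternating structure of the sequence enters the vote.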
The approach we propose here leads to a risk consistent procedure for hidden Markov models, i.e. the risk converges to the corresponding Bayes risk when, for fixed $k$, the size $n$ of the training sample tends to $\infty$. The proof of this risk consistency adapts the methods of proof for the i.i.d. case to the Markov model we have here. We present the basic facts here and refer to  for a detailed treatment; see also [4, Chapter 13].
The kernel has to satisfy that for any
for a.a. $x$ as $n \to \infty$. Any kernel $K$ such that $K \geq 0$ is bounded with bounded support, and there exist $c, r > 0$ such that $K(x) \geq c$ for $\|x\| \leq r$, fulfills the above condition, see, e.g., [2, 10.1]. We call such a kernel regular.
Next note that we consider a uniformly ergodic transition matrix for our Markov chain. Looking at the hidden Markov model forward in time, the process $(Y_t, X_t)_t$ forms a Markov chain with state space $\{1, \dots, M\} \times \mathbb{R}^d$ in discrete time. The stationary distribution $\mu$ for this process is given by
$$\mu(\{i\} \times A) = \pi_i \int_A f_i \, d\nu.$$
It follows immediately that this process is again uniformly ergodic such that for $c, \gamma$, the constants for the $(Y_t)$-process, it holds that for all $t$
In exactly the same manner, the process $\big((Y_t, \dots, Y_{t+k}), (X_t, \dots, X_{t+k})\big)_t$ is a Markov chain with state space $\{1, \dots, M\}^{k+1} \times \mathbb{R}^{(k+1)d}$ and stationary probability distribution given by
$$\pi_{i_0} \, q_{i_0 i_1} \cdots q_{i_{k-1} i_k} \int_{A_0} f_{i_0} \, d\nu \cdots \int_{A_k} f_{i_k} \, d\nu,$$
the $q_{ij}$'s denoting the transition probabilities for the original chain. It is easily seen that this process is again uniformly ergodic where, with the same constants $c, \gamma$, we have for all $t$
Finally we note that any uniformly ergodic process is geometrically mixing in the sense that there exist constants $a > 0$ and $0 < \beta < 1$ such that for all $s$ and $t$
for any measurable events and any initial distribution, see [10, Theorem 16.1.5].
Using the foregoing notations we can obtain the following result.
Let $K$ be regular and let $(h_n)_n$ be such that $h_n \to 0$ and $n h_n^{(k+1)d} \to \infty$. Denote the risk of the kernel classification rule with memory $k$ by $R_{n,k}$. Then as $n \to \infty$
$$R_{n,k} \to R_k^{st}.$$
The proof is based on the observation that classification is easier than regression function estimation, see [2, 6.7]. To adapt this to our setting fix . Let for
This is the kernel regression function estimator of size for
with corresponding kernel classification rule
Now to show it is enough to show that as
in probability for almost all arguments, see [2, Theorem 6.5]. For this we may apply [8, Theorem 1] and the application to kernel regression estimators in ibid, part 3, in particular the representation on p. 138. We then use uniform ergodicity, geometric mixing, regularity of the kernel, together with $h_n \to 0$ and $n h_n^{(k+1)d} \to \infty$, to infer that the conditions to apply [8, Theorem 1 (i)] are fulfilled. This then shows the assertion. ∎
This method of using past information to construct a classification procedure seems generally applicable. We simply have to replace $X'_0$ by $(X'_0, X'_{-1}, \dots, X'_{-k})$ and the learning sequence $(X_i, Y_i)$ by $\big((X_i, X_{i-1}, \dots, X_{i-k}), Y_i\big)$. E.g. for a nearest neighbor classification we would look for the nearest neighbor among the $(X_i, X_{i-1}, \dots, X_{i-k})$ and use the resulting $Y_i$ for classification. To show consistency, we can proceed in the same way as for kernel classification using the nearest neighbor regression estimate, compare [8, Part 4].
So asymptotically, as $n$ tends to infinity, the kernel classification rule with memory performs as the optimal rule of Section 2 and may be used as a typical nonparametric rule to test the usefulness of invoking preceding information.
From a practical point of view we now comment on the performance in simulations and in recognition problems for isolated handwritten letters which points to a saving in misclassifications.
In the following we report on some typical results in our studies of the actual behaviour of the kernel classification rule as proposed in this paper. As a general experience we point out that memory $k \geq 2$ did not lead to significant improvement over $k = 1$, so that we only compare the cases $k = 0, 1$.
(i) In simulations 1 and 2 we choose $(Y_t)$ as a Markov chain with 4 states and transition probabilities
with stationary distribution $\pi$.
In simulation 3 we choose the $Y_t$ i.i.d. following the stationary distribution.
So there is good distinction between all classes in simulation 1 with easy classification; there is poor distinction between classes 3 and 4 in simulations 2 and 3. The following table gives the error rates for classification, with the size $n$ of the training sequence in the first row. A normal kernel is used.
This shows that use of the Markov structure in simulation 2 through memory $k = 1$ leads to the possibility of distinguishing between classes 3 and 4. In the i.i.d. case of simulation 3 an appeal to memory of course does not help.
(ii) The classification of handwritten isolated capital letters was performed using kernel methods. Features were obtained by transforming handwritten letters into grey-value matrices. The learning sequence was obtained by merging samples from seven different persons.
The following typical error rates resulted from the classification of the word SAITE (German for 'string'), where error rates are writer dependent. A normal kernel was used.
Use of the Markov structure through memory $k = 1$ seems to lead to improved performance. Of course, the incorporation of memory can be applied to any procedure of pattern recognition. In particular we have also looked into nearest neighbor rules with memory $k = 1$. Our findings have been similar to those for the kernel rule as discussed above and also advocate the use of memory $k = 1$.
4 Proofs for Section 2
Let $k \geq 0$. Consider a hidden Markov model which starts in some time point $-m$.
For any and we have