Classification error in multiclass discrimination from Markov data

09/22/2015 · by Sören Christensen, et al.

As a model for an on-line classification setting we consider a stochastic process (X_{-n}, Y_{-n})_n, the present time-point being denoted by 0, with observables ..., X_{-n}, X_{-n+1}, ..., X_{-1}, X_0 from which the pattern Y_0 is to be inferred. So in this classification setting, in addition to the present observation X_0, a number l of preceding observations may be used for classification, thus taking into account a possible dependence structure as it occurs, e.g., in an ongoing classification of handwritten characters. We treat the question of how the performance of classifiers is improved by using such additional information. For our analysis, a hidden Markov model is used. Letting R_l denote the minimal risk of misclassification using l preceding observations, we show that the difference sup_k |R_l - R_{l+k}| decreases exponentially fast as l increases. This suggests that a small l might already lead to a noticeable improvement. To pursue this point we look at the use of past observations for kernel classification rules. Our practical findings in simulated hidden Markov models and in the classification of handwritten characters indicate that using l = 1, i.e. just the last preceding observation in addition to X_0, can lead to a substantial reduction of the risk of misclassification. So, in the presence of stochastic dependencies, we advocate using X_{-1}, X_0 for finding the pattern Y_0 instead of only X_0, as one would in the independent situation.


1 Introduction

In pattern recognition, the following basic situation is considered: A random variable (X, Y) consists of an observed pattern X, typically with values in R^d, from which we want to infer the unobservable class Y, which belongs to a given finite set of classes. Consider the case that the distribution of (X, Y) is known. Then the classification rule which chooses the class having maximum a posteriori probability given the observed pattern has minimal risk of misclassification. This optimal rule is given by

f*(x) = argmax_y P(Y = y | X = x),

where argmax takes, in a measurable way, some value y with P(Y = y | X = x) = max_{y'} P(Y = y' | X = x). The minimal risk of misclassification, often termed the Bayes risk, is given by

R = P(f*(X) ≠ Y).

Even though in many problems of pattern recognition the distribution of (X, Y) will not be known, the Bayes risk is a quantity of major importance as it provides the benchmark behaviour against which any other procedure is judged.
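As an illustration, when the joint distribution is a small discrete table, the optimal rule and its Bayes risk can be computed directly. The following sketch uses a made-up toy distribution, not one from the paper:

```python
import numpy as np

# A minimal sketch of the Bayes-optimal rule for a known discrete joint
# distribution of (X, Y).  The table below is an illustrative toy example.
joint = np.array([  # joint[x, y] = P(X = x, Y = y): 3 patterns, 2 classes
    [0.30, 0.05],
    [0.10, 0.20],
    [0.05, 0.30],
])

def bayes_rule(x):
    """Choose the class with maximum a posteriori probability given X = x."""
    return int(np.argmax(joint[x]))  # argmax over y of P(Y = y | X = x)

# Bayes risk: P(f*(X) != Y) = sum_x (P(X = x) - max_y P(X = x, Y = y))
bayes_risk = sum(joint[x].sum() - joint[x].max() for x in range(joint.shape[0]))
print([bayes_rule(x) for x in range(3)], round(bayes_risk, 2))  # → [0, 1, 1] 0.2
```

Any other rule misclassifies at least the probability mass outside the row maxima, which is why the Bayes risk is the benchmark.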

Let us briefly recall the i.i.d. model of supervised learning which has provided a main direction of research, see, e.g., the monograph [2]. There, in addition to (X, Y), we have a learning sequence (X_1, Y_1), ..., (X_n, Y_n) of independent copies of (X, Y), i.e. having the same distribution. This sequence is sampled independently of (X, Y) and is used for learning purposes, in a statistical sense for the estimation of unknown distributions, to construct the classification procedure.

In this paper we take a different approach which is motivated by an on-line classification setting which we model in the following way: There is given a stochastic process (X_{-n}, Y_{-n})_n, the present time-point being denoted by 0, with observables in temporal order

..., X_{-n}, X_{-n+1}, ..., X_{-1}, X_0,

from which the pattern Y_0 is to be inferred. The time parameter belongs to some set of the form {-m, ..., 0} or, for mathematical purposes, to {..., -1, 0}. So in this classification setting, previous observations may be used to classify the present observation. If (X_0, Y_0) is independent of the past then clearly previous observations carry no information on Y_0 and our optimal classification would be given by f*(X_0).

But in a variety of classification problems we encounter dependence. Looking, e.g., at the on-line classification of handwritten characters, the dependence structure in natural language could be taken into account. In this situation, X_0 would be the current handwritten character to be classified and Y_0 the unknown true character; the foregoing handwritten character would be X_{-1} with unknown true character Y_{-1}, and in general X_{-n} would be the n-th character preceding X_0, with unknown Y_{-n}. So, there is a well-known dependence between the Y_{-n}'s, described by linguists using Markov models (see, e.g., [3] for early discussions), and this dependence is of course inherited by the X_{-n}'s. A popular model for this situation is given by a hidden Markov model, which we shall also use in this paper.

Coming back to the general model, we prescribe to use the present and in addition the last l preceding observables. Then a classification rule with memory l takes the form

g_l(X_{-l}, ..., X_0)

for some measurable g_l. The optimal rule is given by

f_l(x_{-l}, ..., x_0) = argmax_y P(Y_0 = y | X_{-l} = x_{-l}, ..., X_0 = x_0),

with Bayes risk

R_l = P(f_l(X_{-l}, ..., X_0) ≠ Y_0).

Obviously

R_0 ≥ R_1 ≥ R_2 ≥ ...,

since each additional preceding observation can only decrease the minimal risk. Assume that we have such a process with full past for our mathematical model. By martingale convergence it follows that R_l converges for l → ∞.

Here it is important to point out that this paper centers around the behaviour of the optimal classification procedure in dependence on l, the number of past observations used. This differs markedly from one of the main lines of research in the i.i.d. model of supervised learning, where the focus is on the behaviour of classification procedures in dependence on n, the size of the training sequence.

To investigate the behaviour of procedures which incorporate preceding information into classification rules we will use the setting of hidden Markov models. This class of models has been of considerable interest in the theory and applications of pattern recognition, see the monographs by [9] and by [7] from a more practical viewpoint. It provides a class which allows good modelling for various problems with dependence and still may be handled well from the analytical, the algorithmic, and the statistical point of view, see the monograph by [1]. The applications range from biology to speech recognition to finance; the above monographs contain a wealth of such examples.

A theoretical contribution to pattern recognition for such models was given by [6], where the asymptotic risk of misclassification for nearest neighbor rules in dependent models including hidden Markov models was derived. Similar models were treated in [11] to obtain consistency for certain classes of procedures, i.e. convergence of the risk of misclassification to the Bayes risk. As consistency for classification follows from consistency in the corresponding regression problem, see e.g. [2, 6.7], any result on regression consistency yields a result on classification consistency, and a wealth of such results is available, e.g. under mixing conditions. All these results invoke the convergence of the size of a training sequence to infinity and do not cover the topic of this paper. Closer to our paper is the problem of predicting Y_0 from the preceding Y's for stationary and ergodic time series, see e.g. [5, Chapter 27]. Our treatment differs as in on-line pattern recognition we do not have knowledge of the preceding Y's (just guesses of them), only that of the preceding X's.

The hidden Markov model as it will be used in this paper takes the following form. We assume that for each m we have, written in their temporal order, observables X_{-m}, ..., X_0 and unobservables Y_{-m}, ..., Y_0. The unobservables form a Markov chain. The observables are conditionally independent given the unobservables, the conditional distribution of each X_{-n} depending only on Y_{-n} in the form of

P(X_{-n} ∈ A | Y_{-m}, ..., Y_0) = K(Y_{-n}, A)

for some stochastic kernel K; the observables are not Markovian in general. This stochastic kernel and the transition matrix of the chain are assumed to be the same for each m. But we allow for the flexibility that, for each m, a different initial distribution, i.e. distribution of Y_{-m}, may occur. Note that -m stands for the time point in the past where our model would be started and the distribution of Y_{-m} would not be known.

To be completely precise we would have to attach an upper index for the starting time, since, due to our flexibility in initial distribution, the distributions underlying the risks for different starting points need not coincide. Hence equality of the risks does not hold in general when one is computed in a model started at some time -m and the other in a model with a possibly different initial distribution. But all our bounds will only involve the transition matrix and the stochastic kernel, which do not depend on this index. So we shall omit this upper index in order not to overburden our notations.
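For concreteness, the posterior P(Y_0 = · | X_{-l}, ..., X_0) defining the optimal rule with memory can be computed by a standard forward recursion. The sketch below assumes a finite observation alphabet; the transition matrix, emission kernel, and initial distribution are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Sketch: computing P(Y_0 = y | X_{-l}, ..., X_0) in a hidden Markov model.
# P is the transition matrix of the chain (Y_n), K the emission kernel
# (K[y, x] = P(X = x | Y = y)), pi the initial distribution at time -l.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
K = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])

def posterior(observations):
    """Forward recursion: returns P(Y_0 = . | X_{-l}, ..., X_0)."""
    alpha = pi * K[:, observations[0]]     # time -l
    for x in observations[1:]:             # times -l+1, ..., 0
        alpha = (alpha @ P) * K[:, x]      # Markov step, then emission weight
    return alpha / alpha.sum()             # normalize to a posterior

post = posterior([0, 0, 1])                # observations X_{-2}, X_{-1}, X_0
print(post, int(np.argmax(post)))
```

The optimal rule with memory l picks the argmax of this posterior; the rule without memory would condition on the last observation only.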

We assume that the transition matrix of the chain is such that there exists a unique stationary probability distribution μ, characterized by the property that if Y_{-m} has the distribution μ then all later Y_{-n} have the same distribution μ. For our chain with full past we consider the stationary setting where each Y_{-n} has the same distribution μ; the risk in the stationary case is denoted with an additional bar, R̄_l.

Without loss of generality we let the probability measures K(y, ·) be given by densities h_y with respect to some σ-finite measure ν on R^d. So we have for all y and measurable A

P(X_0 ∈ A | Y_0 = y) = ∫_A h_y dν.

This provides a unified treatment for the case of discrete observations, where ν might be the counting measure, and for the case of Lebesgue densities, where ν might be d-dimensional Lebesgue measure.

In Section 2 we shall show under a suitable assumption that lim_{l→∞} R_l exists and is independent of the particular sequence of initial distributions; we denote this limit by R_∞. Furthermore this convergence is exponentially fast and we provide a bound for |R_l - R_∞| in this respect. Let us remark that, as we are looking backwards in time, the usual geometric ergodicity forward in time does not seem to yield an immediate proof. In Section 3 we introduce kernel classification rules with memory and discuss their theoretical and practical performance. Our findings indicate that it might be useful to include a small number of preceding observations, starting with l = 1, to increase the performance of classification rules with an acceptable increase in computational complexity. Various technical proofs are given in Section 4.

2 Exponential Convergence

We consider a hidden Markov model as described in the Introduction. For this model we make the following assumption:

All entries p_{ij} of the transition matrix are > 0. All densities h_y are > 0 on R^d.

This assumption will be assumed to hold throughout Sections 2 and 4. It implies the finiteness of the following quantities, which will be used in our bounds.

Remark 2.1.

Set

Then .

Furthermore, with denoting the number of classes, let

Then .

The following result provides the main technical tool. Its proof will be given in Section 4. We use the notation for and in the same manner we use .

Theorem 2.2.

Let . Consider a hidden Markov model which starts in some time point . Let and fix . Set

and

Then for all

where for and for .

In the following corollary the probabilities and are treated. The first will pertain to a hidden Markov model which starts in some time point , the second to one which starts in some time point . Note that terms of the form are identical in both models due to the identical transition matrix and the identical kernel .

Corollary 2.3.

Let , and . Then

Proof.

We obtain

using Theorem 2.2.

We now introduce the constants used for the exponential bound.

Remark 2.4.
  1. Set

    Then

    For all

  2. For the following result we need additional constants which arise from basic Markov process theory. A transition matrix is called uniformly ergodic if there exists a unique stationary probability distribution μ and there exist constants γ > 0 and 0 < ρ < 1 such that for any Markov chain (Y_n) with this transition matrix and any initial distribution at time 0

    ‖L(Y_n) − μ‖ ≤ γ ρ^n for all n,

    in total variation norm ‖·‖. With the same meaning, also the process is called uniformly ergodic. The assumption above implies uniform ergodicity, so that we have for each Markov chain constants γ, ρ as above; see [10, Chapter 16] for a general treatment.

Theorem 2.5.

There exist constants such that for all

if and come from the same model started at some time point ,

in the general case that and come from possibly different models, the first started in some time point , the second in some time point .

Proof.

The constants will be those introduced in Remark 2.4.
Let us firstly consider the case that and stem from the same model. Using the generic symbol to denote densities in this model we write the joint density as and the joint conditional density as . With this notation we have

furthermore

We obtain from Corollary 2.3 and conditional independence

Let us now look at the general case with models 1 and 2, stemming from model 1, from model 2 respectively. Then

From the first part of the assertion

To treat and we note that the conditional Bayes risks for time lag given are the same in both models hence the unconditional risks differ by at most the total variation distance between the two distributions of in the two models. This quantity is since both models have been running for at least time points, hence

From this we easily obtain our main result.

Theorem 2.6.

There exist constants such that for all

in particular for

Proof.

As in Theorem 2.5, the constants are those of Remark 2.4. Recall that R̄_l is the Bayes risk in the stationary case. As already stated earlier, R̄_l converges by martingale convergence. Theorem 2.5 shows the stated exponential bound

for all l, proving the assertion.

3 Kernel Classification With Memory

Optimal classification procedures provide benchmarks for the actual behaviour of data-driven classification procedures which do not require knowledge of the underlying distribution. A general principle from statistical classification involves the availability of a training sequence

(X_1, Y_1), ..., (X_n, Y_n),

where the labels Y_i have been recorded together with the patterns X_i. This training sequence is used for the construction of a regression estimator m_n(x, y), an estimator of P(Y = y | X = x), which leads to the classification rule

g_n(x) = argmax_y m_n(x, y).

When we choose a kernel K: R^d → [0, ∞) with bandwidth h_n > 0 and use the common kernel regression estimate we arrive at the kernel classification rule

g_n(x) = argmax_y Σ_{i=1}^n 1{Y_i = y} K((x − X_i)/h_n).

The asymptotic behaviour, as the size n of the training sequence tends to infinity, has been thoroughly investigated for such classification rules and in particular for kernel classification rules. In the i.i.d. case, or more generally under suitable mixing conditions, such procedures are risk consistent in the following sense: Kernel classification rules asymptotically achieve the minimal risk of misclassification if the size n of the training sequence tends to ∞ and the bandwidth h_n satisfies h_n → 0 and n h_n^d → ∞. As remarked in the Introduction, this type of consistency follows from the consistency of the corresponding regression estimator, hence any result on regression consistency translates into a result on risk consistency.
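A minimal sketch of such a kernel classification rule, with a Gaussian kernel and made-up one-dimensional training data rather than the paper's setup:

```python
import numpy as np

# Sketch of the kernel classification rule: classify a pattern x by a
# kernel-weighted vote of the training labels.  The Gaussian kernel and the
# toy training data below are illustrative assumptions.
rng = np.random.default_rng(0)

def kernel_rule(x, X, Y, h, classes):
    """argmax over y of sum_i 1{Y_i = y} K((x - X_i)/h), with K Gaussian."""
    w = np.exp(-np.sum((x - X) ** 2, axis=1) / (2 * h ** 2))
    return classes[int(np.argmax([w[Y == y].sum() for y in classes]))]

# toy training sample: class 0 scattered around -1, class 1 around +1
X = np.concatenate([rng.normal(-1, 0.3, (50, 1)), rng.normal(1, 0.3, (50, 1))])
Y = np.array([0] * 50 + [1] * 50)
print(kernel_rule(np.array([0.9]), X, Y, h=0.5, classes=[0, 1]))  # near class 1
```

The rule needs no knowledge of the underlying distribution; only the training sequence and a bandwidth enter.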

It is the aim of this section to discuss the applicability of kernel classification with memory in hidden Markov models. Assume that the training sequence is generated according to a hidden Markov model and that there is a sequence of observations

..., X'_{-1}, X'_0,

which stems from the stationary hidden Markov model with the same transition matrix and the same kernel and is stochastically independent of the training sequence. For the classification of X'_0 the usual kernel classification rule as described above would classify X'_0 as belonging to the class

argmax_y Σ_{i=1}^n 1{Y_i = y} K((X'_0 − X_i)/h_n).

This ignores the Markovian structure, so we want to use memory as in the optimal classification of the preceding Section 2. We propose the following procedure. Fix some memory l prescribing the number of preceding observations used in the classification of the current one. Use a kernel

K: R^{d(l+1)} → [0, ∞)

and, assuming a training sequence of size n, classify observation X'_0 as originating from the class

argmax_y Σ_{i=l+1}^n 1{Y_i = y} K((X'_{-l} − X_{i−l})/h_n, ..., (X'_0 − X_i)/h_n).

Compared to rules without memory, the role of X_i is taken by the block (X_{i−l}, ..., X_i), whereas the role of the current pattern X'_0 is taken by (X'_{-l}, ..., X'_0).

The approach we propose here leads to a risk consistent procedure for hidden Markov models, i.e. the risk converges to the corresponding Bayes risk when, for fixed memory l, the size n of the training sample tends to ∞. The proof of this risk consistency adapts the methods of proof for the i.i.d. case to the Markov model we have here. We present the basic facts here and refer to [8] for a detailed treatment; see also [4, Chapter 13].
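The rule with memory replaces each training point by a block of l+1 consecutive observations. A sketch under the same illustrative assumptions (Gaussian kernel on the stacked blocks, hypothetical toy data):

```python
import numpy as np

# Sketch of the kernel classification rule with memory l: the role of X_i is
# taken by the block (X_{i-l}, ..., X_i) and the role of the current pattern
# by (x_{-l}, ..., x_0).  Kernel choice and data are illustrative assumptions.
def kernel_rule_memory(x_block, X, Y, h, classes, l):
    blocks = np.stack([X[i - l:i + 1].ravel() for i in range(l, len(X))])
    labels = Y[l:]                       # each block is labeled by its last Y
    w = np.exp(-np.sum((x_block.ravel() - blocks) ** 2, axis=1) / (2 * h ** 2))
    return classes[int(np.argmax([w[labels == y].sum() for y in classes]))]

# toy sequence: the observation equals the class, memory l = 1
X = np.array([[0.], [1.], [0.], [1.], [1.], [0.]])
Y = np.array([0, 1, 0, 1, 1, 0])
print(kernel_rule_memory(np.array([[1.], [1.]]), X, Y, h=0.5, classes=[0, 1], l=1))
```

Computationally the only change is that the kernel is evaluated on vectors of dimension d(l+1) instead of d, which is why a small l is attractive.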

The kernel has to satisfy a standard approximation condition ensuring that the kernel regression estimate converges for almost all patterns as n → ∞. Any kernel K such that K ≥ 0, K is bounded with bounded support, and there exist r, b > 0 such that K(x) ≥ b for ‖x‖ ≤ r, fulfills this condition, see, e.g., [2, 10.1]. We call such a kernel regular.

Next note that we consider a uniformly ergodic transition matrix for our Markov chain. Looking at the hidden Markov model forward in time, the process (X_n, Y_n) forms a Markov chain in discrete time with state space R^d × Y, where Y denotes the finite set of classes. The stationary distribution for this process is given by

It follows immediately that this process is again uniformly ergodic such that for γ, ρ, the constants for the underlying chain (Y_n), it holds that for all n

since

In exactly the same manner, the process of blocks ((X_{n−l}, ..., X_n), (Y_{n−l}, ..., Y_n)) is a Markov chain with state space R^{d(l+1)} × Y^{l+1} and stationary probability distribution given by

μ({y_{-l}}) p_{y_{-l} y_{-l+1}} ··· p_{y_{-1} y_0} K(y_{-l}, dx_{-l}) ··· K(y_0, dx_0),

the p_{ij}'s denoting the transition probabilities for the original chain. It is easily seen that this process is again uniformly ergodic where, with the same constants γ, ρ, we have for all n

Finally we note that any uniformly ergodic process is geometrically mixing in the sense that there exist such that for all

for any measurable and any initial distribution, see [10, Theorem 16.1.5].

Using the foregoing notations we can obtain the following result.

Theorem 3.1.

Let the kernel K be regular and let the bandwidths h_n satisfy h_n → 0 and n h_n^{d(l+1)} → ∞. Denote the risk of the kernel classification rule with memory l and training size n by R_{l,n}. Then R_{l,n} → R̄_l as n → ∞.

Proof.

The proof is based on the observation that classification is easier than regression function estimation, see [2, 6.7]. To adapt this to our setting fix . Let for

This is the kernel regression function estimator of size for

with corresponding kernel classification rule

Now to show it is enough to show that as

(1)

in probability for almost all sequences of observations, see [2, Theorem 6.5]. For this we may apply [8, Theorem 1] and the application to kernel regression estimators in ibid., part 3, in particular the representation for the estimator, p. 138. We then use uniform ergodicity, geometric mixing, and regularity of the kernel, together with the bandwidth conditions, to infer that the conditions to apply [8, Theorem 1 (i)] are fulfilled. This then shows the assertion. ∎

Remark 3.2.
  1. The more complicated result of almost sure convergence in (1) needs additional conditions; see [8, Corollary 4], [4, Chapter 13]. But here, we only need convergence in probability.

  2. This method of using past information to construct a classification procedure seems generally applicable. We simply have to replace the current observation X'_0 by the block (X'_{-l}, ..., X'_0) and the learning sequence (X_i, Y_i) by ((X_{i−l}, ..., X_i), Y_i). E.g. for a nearest neighbor classification we would look for the nearest neighbor among the blocks (X_{i−l}, ..., X_i) and use the resulting Y_i for classification. To show consistency, we can proceed in the same way as for kernel classification using the nearest neighbor regression estimate, compare [8, Part 4].

So asymptotically, as the training size n tends to infinity, the kernel classification rule performs like the optimal rule of Section 2 and may be used as a typical nonparametric rule to test the usefulness of invoking preceding information.

From a practical point of view we now comment on the performance in simulations and in recognition problems for isolated handwritten letters which points to a saving in misclassifications.

Performance studies

In the following we report some typical results from our studies of the actual behaviour of the kernel classification rule as proposed in this paper. As a general experience we point out that memory l ≥ 2 did not lead to significant improvement over l = 1, so that we only compare the cases l = 0 and l = 1.

(i) In simulations 1 and 2 we choose the class process as a Markov chain with 4 states and transition probabilities

with stationary distribution .

In simulation 3 we choose the classes i.i.d. following the stationary distribution. The observations have a three-dimensional normal distribution with identical covariance matrix and class-dependent mean vectors.

So there is good distinction between all classes in simulation 1, with easy classification, and poor distinction between classes 3 and 4 in simulations 2 and 3. The following table gives the error rates for classification, with the size n of the training sequence in the first row. A normal kernel is used.

sim   l    n=100   n=300   n=500
1     0    0.03    0.03    0.03
1     1    0.01    0.03    0.02
2     0    0.21    0.19    0.21
2     1    0.05    0.05    0.04
3     0    0.30    0.28    0.31
3     1    0.34    0.33    0.30

This shows that use of the Markov structure in simulation 2 through memory l = 1 leads to the possibility of distinguishing between classes 3 and 4. In the i.i.d. case of simulation 3 an appeal to memory of course does not help.
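The simulation setup can be reproduced along the following lines; since the paper's transition probabilities and mean vectors did not survive extraction, the values below are stand-ins only:

```python
import numpy as np

# Sketch of the simulation setup: a 4-state Markov chain of classes with
# 3-dimensional normal emissions and identity covariance.  The transition
# matrix and mean vectors are hypothetical placeholders, not the paper's.
rng = np.random.default_rng(1)
P = np.full((4, 4), 0.1) + 0.6 * np.eye(4)   # rows sum to 1: 0.7 diag, 0.1 off
means = np.array([[0., 0, 0], [3, 0, 0], [0, 3, 0], [0, 0, 3]])

def sample_hmm(n):
    """Sample (X_1, Y_1), ..., (X_n, Y_n) from the hidden Markov model."""
    y = rng.integers(4)
    X, Y = [], []
    for _ in range(n):
        y = rng.choice(4, p=P[y])            # Markov step for the class
        X.append(rng.normal(means[y], 1.0))  # normal emission, unit variances
        Y.append(y)
    return np.array(X), np.array(Y)

X, Y = sample_hmm(500)
print(X.shape, Y.shape)
```

Feeding such samples to the kernel rules with l = 0 and l = 1 reproduces the kind of comparison reported in the table above, with error rates depending on the chosen means.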

(ii) The classification of handwritten isolated capital letters was performed using kernel methods. Features were obtained by transforming handwritten letters into grey-value matrices. The learning sequence was obtained by merging samples from seven different persons.

The following typical error rates resulted from the classification of the word SAITE (German for 'string'), where error rates are writer dependent. A normal kernel was used with bandwidths h = 1.0 and h = 0.25.


writer   l    h=1.0    h=0.25
1        0    0.261    0.083
1        1    0.012    0.007
2        0    0.334    0.115
2        1    0.025    0.022
3        0    0.166    0.075
3        1    0.030    0.024

Use of the Markov structure through memory l = 1 seems to lead to improved performance. Of course, the incorporation of memory can be applied to any procedure of pattern recognition. In particular we have also looked into nearest neighbor rules with memory l = 1. Our findings have been similar to those for the kernel rule as discussed above and also advocate the use of memory l = 1.

4 Proofs for Section 2

Lemma 4.1.

Let . Consider a hidden Markov model which starts in some time point and let .

For any and we have

(i) for

(ii) for