Neural eliminators and classifiers

Wlodzislaw Duch et al.

Classification may not be reliable for several reasons: noise in the data, insufficient input information, overlapping class distributions, or sharply defined classes. Faced with several possibilities, a neural network may in such cases still be useful if, instead of classification, elimination of improbable classes is performed. Eliminators may be constructed from classifiers by assigning new cases to a pool of several classes instead of a single winning class. Elimination may also be done with the help of several classifiers using modified error functions. A real-life medical application of neural networks is presented, illustrating the usefulness of elimination.


1 Introduction

Neural, fuzzy and machine learning systems are usually applied as classifiers or approximators. In real-world problems the designation of classes may be problematic due to the approximate nature of linguistic concepts labeling cases that change in a continuous way. For example, medical databases contain names of diseases that may develop in time, from mild to severe cases, with intermediate or mixed forms. The corresponding class distributions will strongly overlap, requiring fuzzy class labels. The information provided in the database may be insufficient to distinguish the classes, although they may be separable by some unknown features (for example, results of a new medical test). In such situations reliable classification is not possible and comparison of results based on the number of classification errors may be quite misleading.

If soft class labels are needed, or if an insufficient number of classes is defined, some conclusions can still be drawn by looking at the classification probabilities. For example, the system may assign the new case given for evaluation to the overlapping region where two or more classification probabilities have significant values, in effect creating new, mixed or border classes. Introduction of new classes cannot be done automatically and requires close collaboration with domain experts. An alternative way of solving such problems is to eliminate improbable classes, predicting that the unknown case belongs to a subset of k classes out of the K possible ones. To account for the possibility of class distributions overlapping in different ways in different regions of the input space, the number k should not be fixed. Such systems may be called eliminators, since their primary goal is to eliminate with high confidence classes that are improbable.

Any model M that estimates the probabilities of classification p(C_i|X; M) may be used to create new, soft class labels or to eliminate some classes, predicting that X belongs to two or more classes. In particular, neural and neurofuzzy systems are well suited for this purpose, although they should be modified to optimize the elimination of several classes rather than the prediction of a single class. Some other classification systems, such as statistical discrimination methods, support vector machines [1], decision trees or the nearest neighbor methods, provide only sharp yes/no classification decisions [2]. Detailed interpretation of a given case is possible if methods of exploratory data analysis displaying the new case in relation to the cases stored in the training database are used, or if classification confidence intervals are calculated [3].

Our goal in this paper is twofold. In the next section problems specific to class elimination in neural networks are discussed, followed by the presentation of a universal method for the estimation of probabilities that is applicable to any classifier. A real-life example of a difficult medical problem is presented in the fourth section and a short discussion concludes the paper.

2 Elimination instead of prediction

Consider a classification problem in N dimensions, with two overlapping classes described by Gaussian distributions with equal covariance matrices \Sigma:

p(x|C_k) = (2\pi)^{-N/2} |\Sigma|^{-1/2} \exp\left( -\tfrac{1}{2} (x - \mu_k)^T \Sigma^{-1} (x - \mu_k) \right), \quad k = 1, 2

Using Bayes' theorem the posterior probability for the first class is [4]:

p(C_1|x) = \frac{p(x|C_1)P(C_1)}{p(x|C_1)P(C_1) + p(x|C_2)P(C_2)}    (1)

The P(C_k) are a priori class probabilities. Thus p(C_1|x) = \sigma(y(x)), where the logistic function \sigma is:

\sigma(y) = \frac{1}{1 + e^{-y}}    (2)

where

y(x) = \ln \frac{p(x|C_1)P(C_1)}{p(x|C_2)P(C_2)} = w^T x + w_0    (3)

and w = \Sigma^{-1}(\mu_1 - \mu_2), with w_0 determined by \mu_1, \mu_2, \Sigma and the priors. The posterior probability is thus given by a specific logistic output function. For more than two classes, normalized exponential functions (also called softmax functions) are obtained by the same reasoning:

p(C_k|x) = \frac{\exp(a_k(x))}{\sum_j \exp(a_j(x))}, \quad a_k(x) = \ln\left( p(x|C_k)P(C_k) \right)    (4)

These normalized exponential functions may be interpreted as probabilities. They are provided in a natural way by multilayer perceptron networks (MLPs). If one of the probabilities is close to 1, the situation is clear. Otherwise x belongs to the border area and a unique classification may not be possible. The domain expert should then decide whether it makes sense to introduce a new, mixed class, or to acknowledge that the available information is insufficient for accurate classification.
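To make the elimination step concrete, here is a minimal sketch (ours, not from the paper) of how softmax outputs of the form (4) can be turned into an eliminator: keep the smallest set of classes whose probabilities jointly exceed a threshold and eliminate the rest. The function names and the keep_mass threshold are illustrative assumptions.

```python
import numpy as np

def softmax(a):
    """Normalized exponential outputs, Eq. (4)."""
    e = np.exp(a - a.max())      # shift by max for numerical stability
    return e / e.sum()

def eliminate(a, keep_mass=0.95):
    """Keep the smallest set of top classes whose probabilities sum to
    at least keep_mass; every other class is eliminated.
    keep_mass is an illustrative threshold, not from the paper."""
    p = softmax(a)
    order = np.argsort(p)[::-1]  # classes sorted by decreasing probability
    kept, mass = [], 0.0
    for k in order:
        kept.append(int(k))
        mass += p[k]
        if mass >= keep_mass:
            break
    return kept, p

kept, p = eliminate(np.array([2.0, 1.8, -1.0, -2.5]))
print(kept, np.round(p, 3))  # the two top classes survive, the rest are eliminated
```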

2.1 Measures of classifier performance

Measures of classifier performance based on accuracy or on confusion matrices do not allow one to evaluate their usefulness in this setting. Introduction of risk matrices or the use of receiver operating characteristic (ROC) curves [5] does not solve the problem either.

If the standard approach fails to provide sufficiently accurate results for some classes, one should either attempt to create new classes or minimize the number of errors for a temporary new class composed of two or more distinct classes. This requires a modification of the standard cost function. Let C(X) be the true class of the vector X and p(C_i|X; M) the probability of class C_i calculated using the model M. The neural cost function should minimize the error:

E(M) = \sum_X \sum_i H\left( p(C_i|X; M) - \delta(C_i, C(X)) \right)

where i runs over all different classes and X over all training vectors, C(X) is the true class of the vector X, and the function H(\cdot) should be monotonic and positive; most often the quadratic function or an entropy-based function is used. M specifies all adaptive parameters and variable procedures of the classification model that may affect the cost function.

A risk matrix of the overall classification may easily be included in this cost function by weighting the terms H(\cdot). The elements R(C_i, C_j) of the risk matrix are proportional to the risk of assigning the class C_i when the true class is C_j; in the simplest case R(C_i, C_j) = 1 - \delta_{ij}. Regularization terms aimed at minimizing the complexity of the classification model are frequently added to the cost function, allowing one to avoid overfitting problems. To improve generalization the sum should run over all training examples, but the model used to compute p(C_i|X; M) should be created without the vector X in the training set.
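As an illustration of how such a risk matrix enters the cost, the following numpy sketch (ours) weights each quadratic error term by the risk row of the true class; the quadratic choice of H and the zero-based indexing are our assumptions.

```python
import numpy as np

def risk_weighted_cost(P, y, R):
    """Quadratic cost with a risk matrix, as described above.
    P: (n, K) array of class probabilities p(C_i|X; M);
    y: integer true labels C(X);
    R[i, j]: risk of assigning class C_j when the true class is C_i."""
    T = np.eye(P.shape[1])[y]           # one-hot targets delta(C_i, C(X))
    return np.sum(R[y] * (P - T) ** 2)  # each sample weighted by its risk row

# simplest case from the text: unit risk for any error, zero on the diagonal
K = 4
R = 1.0 - np.eye(K)
```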

Another form of the cost function is also useful:

E(M) = \sum_X K\left( C(X), C_m(X) \right)    (5)

where C_m(X) corresponds to the best recommendation of the classifier and the kernel function K(\cdot, \cdot) measures the similarity of the classes. A general expression is:

E(M) = \sum_X \sum_i K\left( C(X), C_i \right) H\left( p(C_i|X; M) - \delta(C_i, C(X)) \right)    (6)

For example, in local regression based on minimal distance approaches [6] the error function is:

E(M) = \sum_i K\left( D(X_i, X) \right) \left( Y_i - M(X_i) \right)^2    (7)

where Y_i are the desired values for X_i and M(X_i) are the values predicted by the model M. Here the kernel function K(D(X_i, X)) measures the influence of the reference vectors on the total error. If K(D) has a sharp high peak around D = 0, the function M(X) will fit the values corresponding to the reference input vectors almost exactly and will make large errors for other values. In classification problems the kernel function will determine the size of the neighborhood (around the known cases) in which accurate classification is required.

Suppose that both off-diagonal elements P(C_1, C_2) and P(C_2, C_1) of the confusion matrix are large, i.e. that the first two classes are frequently mixed. These two classes may be separated from all the others using an independent classifier. The joint class is designated C_{1,2}, and the model is trained with the following cost function:

E(M) = \sum_X \left[ H\left( p(C_1|X; M) + p(C_2|X; M) - \delta(C(X), C_{1,2}) \right) + \sum_{i>2} H\left( p(C_i|X; M) - \delta(C_i, C(X)) \right) \right]    (8)

where \delta(C(X), C_{1,2}) is 1 if C(X) is 1 or 2, and 0 otherwise.

Training with such an error function provides new, possibly simpler, decision borders. In practice one should use the classifier first, and only if the classification is not sufficiently reliable (several probabilities are almost equal) try to eliminate subsets of classes. If joining pairs of classes is not sufficient, triples and higher combinations may be considered.
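A minimal numpy sketch (ours) of the joint-class cost of Eq. (8): the probabilities of the merged pair are summed into one column and a quadratic H is applied; zero-based indexing and the quadratic H are our choices.

```python
import numpy as np

def joint_class_cost(P, y, pair=(0, 1)):
    """Quadratic version of the joint-class cost, cf. Eq. (8).
    P: (n, K) probabilities p(C_i|X; M); y: integer true classes;
    pair: zero-based indices of the two classes merged into C_{1,2}."""
    i, j = pair
    P2 = np.delete(P, j, axis=1)
    P2[:, i] = P[:, i] + P[:, j]       # p(C_{1,2}|X) = p(C_1|X) + p(C_2|X)
    y2 = np.where(y == j, i, y)        # delta(C(X), C_{1,2}) = 1 for both members
    y2 = np.where(y2 > j, y2 - 1, y2)  # shift labels above the removed column
    T = np.eye(P2.shape[1])[y2]        # one-hot targets for the merged classes
    return np.sum((P2 - T) ** 2)
```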

In the image analysis community two coefficients, \kappa (kappa) and \tau (tau), are commonly used to measure a classifier's performance. The \kappa coefficient [19] corrects the accuracy for chance agreement and is calculated as:

\kappa = \frac{ N \sum_{i=1}^{C} x_{ii} - \sum_{i=1}^{C} x_{i+} x_{+i} }{ N^2 - \sum_{i=1}^{C} x_{i+} x_{+i} }    (9)

where N is the number of classified cases, C is the number of classes (including the "unknown" or rejected class, i.e. C is the number of rows in the confusion matrix), x_{ii} is the number of cases correctly assigned to class i, x_{i+} is the row sum for row i and x_{+i} is the column sum for column i. The \tau coefficient [20] is calculated as:

\tau = \frac{P_o - P_r}{1 - P_r}, \quad P_o = \frac{1}{N} \sum_{i=1}^{C} x_{ii}    (10)

where P_o is the overall accuracy, x_{ii} is the confusion matrix element, and P_r is the base rate (the maximum a priori probability of class membership). This coefficient is zero for prediction accuracies equal to the base rate, negative if the predictions are below the base rate, and reaches one for perfect predictions. Confidence intervals for \kappa and \tau may be taken as [20]:

\kappa \pm Z_{\alpha/2} \sigma_\kappa    (11)

\tau \pm Z_{\alpha/2} \sigma_\tau, \quad \sigma_\tau^2 = \frac{P_o (1 - P_o)}{N (1 - P_r)^2}    (12)

Then, comparing two results, the Z-score:

Z = \frac{\tau_1 - \tau_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}    (13)

for statistically significant differences between these results at the 95% confidence level corresponds to Z > 1.96.
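The coefficients of Eqs. (9), (10) and (13) are straightforward to compute from a confusion matrix; a short sketch (ours, with our function names) follows.

```python
import numpy as np

def kappa(cm):
    """Kappa coefficient of agreement, Eq. (9).
    cm: confusion matrix (rows = predicted, columns = true)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    chance = np.dot(cm.sum(axis=1), cm.sum(axis=0))  # sum_i x_{i+} x_{+i}
    return (n * np.trace(cm) - chance) / (n ** 2 - chance)

def tau(cm, base_rate):
    """Tau coefficient, Eq. (10); base_rate = P_r, the maximum
    a priori class probability."""
    cm = np.asarray(cm, dtype=float)
    p_o = np.trace(cm) / cm.sum()                    # overall accuracy P_o
    return (p_o - base_rate) / (1.0 - base_rate)

def z_score(t1, s1, t2, s2):
    """Z-score of Eq. (13); |Z| > 1.96 marks a significant difference
    at the 95% confidence level."""
    return (t1 - t2) / np.hypot(s1, s2)
```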

Although these coefficients are useful and should be used instead of quoting accuracy alone, the problem lies in creating new classes and eliminating other classes when reliable classification is not possible. An approach to image analysis in which arbitrarily created class names are joined together has been described [21]. It is based on evaluation of class groupings using the Jeffreys-Matusita distance [22] for measuring the separation of two distributions. Although this may be a useful approach in remote sensing applications, in other applications the problem lies not in joining whole classes but rather in recognizing the border cases.

3 Calculation of probabilities

Some classifiers do not provide probabilities, and therefore it is not clear how to optimize them for the elimination of classes instead of the selection of the most probable class. A universal solution, independent of the classifier used, is described below.

Real input values are obtained from measurements that are carried out with finite precision. The brain uses not only large receptive fields for categorization, but also small receptive fields to extract feature values. Instead of a crisp number x, a Gaussian distribution G_x centered around x with dispersion s_x should be used. Probabilities p(C_k|X; M) may be computed for any classification model M by performing Monte Carlo sampling from the joint Gaussian distribution of all continuous features x_i. The dispersions s_i define the volume of the input space around X that has an influence on the computed probabilities. One way to "explore the neighborhood" of X and see the probabilities of alternative classes is to increase the fuzziness, defining s_i = s (x_{i,max} - x_{i,min}), where the parameter s defines the percentage of fuzziness relative to the range of feature values.

With increasing s the probabilities p(C_k|X; s) change. For sufficiently large s the a priori class probabilities should be recovered. Even if a crisp rule-based classifier is used, non-zero probabilities of classes alternative to the winning class will gradually appear. The way in which these probabilities change shows how reliable the classification is and which alternatives are worth remembering. If the probability changes rapidly around some value of s, the case is near a classification border and an analysis of p(C_k|X; s_i) as a function of each s_i is needed to see which features have a strong influence on the classification. Displaying such probabilities allows for precise evaluation of new data also in cases where the analysis of rules is too complicated. A more detailed analysis of these probabilities, based on confidence intervals and probabilistic confidence intervals, has recently been presented by Jankowski [7]. Confidence intervals are calculated individually for a given input vector, while logical rules are extracted for the whole training set.
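The Monte Carlo procedure itself is classifier-agnostic; a minimal sketch (ours, assuming a crisp decision function and our own names) follows.

```python
import numpy as np

def mc_probabilities(classify, x, s, n_classes, n_samples=2000, seed=0):
    """Monte Carlo estimate of p(C_k|x; s) for any crisp classifier.
    classify(v) -> class index for a single input vector v;
    s: vector of dispersions s_i of the Gaussian G_x around x.
    The sample count is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_classes)
    for _ in range(n_samples):
        counts[classify(x + rng.normal(0.0, s))] += 1  # sample from G_x
    return counts / n_samples

def dispersions(X_train, s):
    """s_i = s * (x_{i,max} - x_{i,min}): fuzziness as a fraction s
    of each feature's range, as in the text."""
    return s * (X_train.max(axis=0) - X_train.min(axis=0))
```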

Confidence intervals measure the maximal deviation from a given feature value x_i (assuming that the other features of the vector X are fixed) that does not change the most probable classification of the vector X. If this vector lies near the class border the confidence intervals are narrow, while for vectors that are typical of their class the confidence intervals should be wide. These intervals facilitate precise interpretation and allow one to analyze the stability of sets of rules.

For some classification models the probabilities may be calculated analytically. For crisp rule classifiers [8] a rule R_{[a,b]}(x), which is true if x \in [a, b] and false otherwise, is fulfilled by a Gaussian number G_x with probability:

p\left( R_{[a,b]}(G_x) = T \right) \approx \sigma\left( \beta (x - a) \right) - \sigma\left( \beta (x - b) \right)    (14)

where the logistic function \sigma(\beta x) = 1/(1 + e^{-\beta x}) has slope \beta inversely proportional to the dispersion s_x. For large uncertainty s_x this probability is significantly different from zero well outside the interval [a, b]. Thus crisp logical rules for data with a Gaussian distribution of errors are equivalent to fuzzy rules with "soft trapezoid" membership functions, defined by the difference of two sigmoids, used with crisp input values. The slope of these membership functions, determined by the parameter \beta, is inversely proportional to the uncertainty of the inputs.

In the C-MLP2LN neural model [9] such membership functions are computed by the network "linguistic units" L(x; a, b) = \sigma(\beta(x - a)) - \sigma(\beta(x - b)). Relating the slope \beta to the input uncertainty allows one to calculate probabilities that are in agreement with the Monte Carlo sampling. Another way of calculating probabilities, based on the softmax neural outputs, has been presented in [7].
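The soft trapezoid of Eq. (14), which doubles as the linguistic unit L(x; a, b), is a two-line function; a sketch (ours) follows.

```python
import numpy as np

def soft_trapezoid(x, a, b, beta):
    """Soft trapezoid membership of Eq. (14): the probability that a
    Gaussian number centered at x fulfills the crisp rule x in [a, b].
    beta is inversely proportional to the input uncertainty s_x."""
    sigma = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sigma(beta * (x - a)) - sigma(beta * (x - b))
```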

Probabilities p(C_k|X; M) depend in a continuous way on the intervals [a, b] defining the linguistic variables. The error function:

E(M; s) = \sum_X \sum_k \left( p(C_k|X; M) - \delta(C_k, C(X)) \right)^2    (15)

depends also on the uncertainties s_i of the inputs. Several variants of such models may be considered, with Gaussian or conical (triangular-shaped) assumptions for the input distributions, or neural models with bicentral transfer functions in the first hidden layer. A confusion matrix computed using these probabilities, instead of counting yes/no errors, allows for optimization of the error function using gradient-based methods. This minimization may be performed directly or may be cast as a neural network problem with a special network architecture.

Uncertainties s_i of the feature values may be treated as additional adaptive parameters for optimization. To avoid introducing too many new adaptive parameters, the optimization of all individual uncertainties, or of a few groups of them, is replaced by common factors defining the percentage of assumed uncertainty for each group.

This approach leads to the following important improvements for any rule-based system:

  • Crisp logical rules provide a basic description of the data, giving maximal comprehensibility.

  • Instead of 0/1 decisions, probabilities of classes are obtained.

  • Inexpensive gradient methods are used, allowing for optimization of very large sets of rules.

  • Uncertainties of inputs provide additional adaptive parameters.

  • Rules with wider classification margins are obtained, overcoming the brittleness problem of some rule-based systems.

Wide classification margins are desirable to optimize the placement of decision borders, improving the generalization of the system. If a vector X of unknown class is quite typical of one of the classes, increasing the uncertainties s_i of the inputs to a reasonable value (several times the real uncertainty estimated for the given data) should not decrease the probability p(C|X; s) significantly. If it does, the case may be close to the class border and an analysis of p(C|X; s_i) as a function of each s_i is needed. These probabilities allow one to evaluate the influence of different features on the classification. If simple rules are available, such an explanation may be satisfactory.

Otherwise, to gain an understanding of the whole data, a similarity-based approach to classification and explanation is worth trying. Prototype vectors R_i are constructed using a clusterization, dendrogram or decision tree algorithm. Positions of the prototype vectors R_i, parameters of the similarity measures and other adaptive parameters of the system are then optimized using a general framework for similarity-based methods [10]. This approach includes radial basis function networks, clusterization procedures, vector quantization methods and generalized nearest neighbor methods as special cases. An explanation in this case is given by pointing out the similarity of the new case X to one or more of the prototype cases R_i.

A similar result is obtained if linear discriminant analysis (LDA) is used: instead of a sharp decision border in the direction perpendicular to the LDA hyperplane, a soft logistic function is used, corresponding to a neural network with a single neuron. The weights and bias are fixed by the LDA solution; only the slope of the logistic function is optimized.
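A two-class sketch of this idea (ours, using scikit-learn's LDA and labels in {0, 1}): the LDA weights and bias stay fixed and only the slope beta of the logistic output is tuned, here by a simple grid search, which is our simplification.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def soft_lda(X, y, betas=np.logspace(-2, 2, 41)):
    """Fit LDA, then tune only the slope beta of a logistic output
    on the fixed LDA projection. y must contain labels 0 and 1."""
    lda = LinearDiscriminantAnalysis().fit(X, y)
    z = X @ lda.coef_.ravel() + lda.intercept_[0]   # fixed LDA projection
    sq_err = lambda b: np.sum((1.0 / (1.0 + np.exp(-b * z)) - y) ** 2)
    beta = min(betas, key=sq_err)                   # slope with smallest error
    return lda, beta
```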

4 Real-life example

The hepatobiliary disorders data, used previously in several studies [11, 12, 16, 17], contains medical records of 536 patients admitted to a university-affiliated hospital in Tokyo, with four types of hepatobiliary disorders: alcoholic liver damage (AL), primary hepatoma (PH), liver cirrhosis (LC) and cholelithiasis (CH). Each record includes the results of 9 biochemical tests and the sex of the patient. The same 163 cases as in [17] were used as the test data.

In previous work three fuzzy sets per input were assigned following the recommendations of medical experts. A fuzzy neural network was constructed and trained until 100% correct answers were obtained on the training set. The accuracy on the test set varied from less than 60% to a peak of 75.5%. Although we quote this result in Table 1, it seems impossible to find good criteria predicting when training should be stopped to give the best generalization. Fuzzy rules equivalent to the fuzzy network were derived, but their accuracy on the test set was not given. This data has also been analyzed by Mitra et al. [18, 16] using a knowledge-based fuzzy MLP system, with results on the test set in the range from 33% to 66.3%, depending on the actual fuzzy model used.

For this dataset classification using crisp rules was not very successful. The initial 49 rules obtained by the C-MLP2LN procedure gave 83.5% accuracy on the training set and 63.2% on the test set. Optimization did not improve these results significantly. On the other hand, fuzzy rules derived using the FSM network, with Gaussian as well as with triangular functions, gave similar accuracies of 75.6-75.8%. The fuzzy neural network used over 100 neurons to achieve 75.5% accuracy, indicating that good decision borders in this case are quite complex and many logical rules will be required. Various results for this dataset are summarized in Table 1.

Method                            Training set   Test set
---------------------------------------------------------
FSM-50, 2 most prob. classes          96.0         92.0
FSM-50, classes 2+3 combined          96.0         87.7
FSM-50, classes 1+2 combined          95.4         86.5
Neurorule [11]                        85.8         85.6
Neurolinear [11]                      86.8         84.6
1-NN, weighted (ASA)                  83.4         82.8
FSM, 50 networks                      94.1         81.0
1-NN, 4 features                      76.9         80.4
K* method                               –          78.5
kNN, k=1, Manhattan                   79.1         77.9
FSM, Gaussian functions               93           75.6
FSM, 60 triangular functions          93           75.8
IB1c (instance-based)                   –          76.7
C4.5 decision tree                    94.4         75.5
Fuzzy neural network [16, 18]        100           75.5
Cascade Correlation                     –          71.0
MLP with RPROP                          –          68.0
Best fuzzy MLP model [12]             75.5         66.3
C4.5 decision rules                   64.5         66.3
DLVQ (38 nodes)                      100           66.0
LDA (statistical)                     68.4         65.0
49 crisp logical rules                83.5         63.2
FOIL (inductive logic)                99           60.1
T2 (rules from decision tree)         67.5         53.3
1R (rules)                            58.4         50.3
Naive Bayes                             –          46.6
IB2-IB4                            81.2-85.5    43.6-44.6

Table 1: Results for the hepatobiliary disorders data: accuracy on the training and test sets, in %. The top results are achieved by eliminating classes or predicting pairs of classes. All calculations are ours except where noted.

FSM creates about 60 Gaussian or triangular membership functions, achieving an accuracy of 75.5-75.8%. Rotation of these functions (i.e. introducing linear combinations of inputs in the rules) does not improve this accuracy. We have also made 10-fold cross-validation tests on the mixed data (training plus test data), achieving similar results. Many methods give rather poor results on this dataset, including various variants of instance-based learning (IB2-IB4, except for IB1c, which is specifically designed to work with continuous input data), statistical methods (naive Bayes, LDA) and pattern recognition methods (LVQ).

The best classification results were obtained with a committee of 50 FSM neural networks [14, 15] (shown in Table 1 as FSM-50), reaching 81%. The k-nearest neighbors (kNN) with k=1, the Manhattan distance function and feature selection gives 80.4% accuracy (for details see [13]), and 82.8% after feature weighting (the training accuracy of kNN is estimated using the leave-one-out method). The K* method, based on algorithmic complexity optimization, gives 78.5% on the test set, with other methods giving significantly worse results.

The confusion matrix obtained on the training data from the FSM system, averaged over 5 runs and rounded to integer values (rows: predicted, columns: required), shows that the main problem comes from predicting AL or LC when the true class is PH. The number of vectors that are classified incorrectly with high confidence (probability over 0.9) is 10 in the training data and 7 in the test data (only 4.3%). Rejection of these cases increases confidence in the classification, as shown in Fig. 1.

Figure 1: Relation between the accuracy of classification and the rejection rate.
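A curve of the kind shown in Fig. 1 can be computed directly from the class probabilities; a short sketch (ours, with our own threshold grid) follows.

```python
import numpy as np

def accuracy_rejection_curve(P, y, thresholds=np.linspace(0.5, 1.0, 11)):
    """Accuracy vs. rejection rate: reject every case whose top class
    probability falls below a confidence threshold.
    P: (n, K) class probabilities, y: true labels."""
    P, y = np.asarray(P), np.asarray(y)
    pred, top = P.argmax(axis=1), P.max(axis=1)
    curve = []
    for t in thresholds:
        keep = top >= t
        acc = (pred[keep] == y[keep]).mean() if keep.any() else float("nan")
        curve.append((t, 1.0 - keep.mean(), acc))  # (threshold, rejection, accuracy)
    return curve
```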

In [11, 12] a “relaxed success criterion” was used, counting as a success the cases in which the two most strongly excited output neurons include the correct class. This is equivalent to the elimination of two classes, leaving the combination of the other two as the most probable. With this criterion accuracy improves, reaching about 90%. In [11] two rule extraction methods, Neurorule and Neurolinear, are used, and the best test set results reach 88.3% and 90.2%, respectively. Unfortunately the true classification accuracies of these methods are significantly worse than those quoted in Table 1, reaching only 48.4% (Neurorule) and 54.4% (Neurolinear) [11] on the test set.

We have used the elimination approach here, first defining a committee of 50 FSM networks that classifies 81% of the cases correctly with high reliability; cases which cannot be reliably classified are passed to the second stage, in which the elimination of pairs of classes (1+2 or 2+3) is made. Training such a “supersystem”, with the error function given by Eq. (8), which tries to obtain the true class as one of the two most probable classes, gives 92% correct answers on the test set and 96% on the training set. This high accuracy unfortunately drops to 87% if a threshold is introduced for the probability of the second class. In any case, a reliable diagnosis of about 80% of the test cases is possible, and for half of the remaining cases one can eliminate two classes and assign the case under consideration to a mixture of the remaining two classes.

5 Discussion

Even when classification in a multi-class problem is poor, useful decision support can still be provided using a classifier that is able to predict some cases with high confidence, together with an eliminator that can reliably eliminate several classes. The case under consideration then most probably belongs to a mixture of the remaining classes. Eliminators are built by analyzing confusion matrices and training classifiers with modified error functions.

Since not all classifiers provide probabilities, and thus allow one to estimate the confidence of their decisions, we have described here a universal way to obtain probabilities using Monte Carlo estimation. Since usually only one new case is evaluated at a time (for example in medical applications), the cost of the Monte Carlo simulations is not a serious concern. For rule-based systems these probabilities may be determined analytically. Application of these ideas allowed a committee of neural networks to achieve excellent results on medical data that is quite difficult to classify. Further research to determine the best ways of eliminating some classes and reliably predicting mixtures of classes is under way.

References

  • [1] N. Cristianini, J. Shawe-Taylor, An introduction to support vector machines (and other kernel-based learning methods). Cambridge University Press, 2000.
  • [2] D. Michie, D.J. Spiegelhalter, C.C. Taylor (eds.), Machine Learning, Neural and Statistical Classification. Ellis Horwood, New York, 1994.
  • [3] W. Duch, Y. Hayashi, “Computational intelligence methods and data understanding,” International Symposium on Computational Intelligence, In: Quo Vadis computational Intelligence? New trends and approaches in computational intelligence. Eds. P. Sincak, J. Vascak, Springer studies in fuzziness and soft computing, Vol. 54 (2000), pp. 256-270
  • [4] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
  • [5] J.A. Swets, “Measuring the accuracy of diagnostic systems.” Science, Vol. 240, pp. 1285-93, 1988.
  • [6] C.G. Atkeson, A.W. Moore and S. Schaal, “Locally weighted learning,” Artificial Intelligence Review, Vol. 11, pp. 75-113, 1997.
  • [7] W. Duch, R. Adamczak, K. Grąbczewski, N. Jankowski, “Neural methods of knowledge extraction.” Control and Cybernetics 29(4), 997-1018, 2000.
  • [8] W. Duch, R. Adamczak, K. Grąbczewski, “Methodology of extraction, optimization and application of logical rules." Intelligent Information Systems VIII, Ustroń, Poland, pp. 22-31, June 1999.
  • [9] W. Duch, R. Adamczak, K. Grąbczewski, “Extraction of logical rules from backpropagation networks." Neural Processing Letters Vol. 7, 1-9, 1998.
  • [10] W. Duch, “Similarity based methods: a general framework for classification, approximation and association”. Control and Cybernetics 29(4), 937-968, 2000.
  • [11] Y. Hayashi, R. Setiono and K. Yoshida, “A comparison between two neural network rule extraction techniques for the diagnosis of hepatobiliary disorders.” Artificial Intelligence in Medicine 20(3):205-216, 2000.
  • [12] Y. Hayashi, R. Setiono and K. Yoshida, “Diagnosis of hepatobiliary disorders using rules extracted from artificial neural networks.” In: Proc. 1999 IEEE International Fuzzy Systems Conference, Seoul, Korea, August 1999, vol. I, pp. 344-348.
  • [13] W. Duch, R. Adamczak, K. Grąbczewski, G. Żal, Y. Hayashi, “Fuzzy and crisp logical rule extraction methods in application to medical data." Computational Intelligence and Applications. Springer Studies in Fuzziness and Soft Computing, Vol. 23 (ed. P.S. Szczepaniak), Springer 2000, pp. 593-616.
  • [14] W. Duch, G.H.F. Diercksen, “Feature Space Mapping as a universal adaptive system". Computer Physics Communication 87, 341–371, 1995.
  • [15] W. Duch, R. Adamczak, N. Jankowski, “New developments in the Feature Space Mapping model." 3rd Conf. on Neural Networks, Kule, Poland, pp. 65-70, Oct. 1997.
  • [16] S.K. Pal, S. Mitra, Neuro-Fuzzy Pattern Recognition. J. Wiley, New York, 1999.
  • [17] Y. Hayashi, A. Imura, K. Yoshida, “Fuzzy neural expert system and its application to medical diagnosis". In: 8th International Congress on Cybernetics and Systems, New York City, pp. 54-61, 1990.
  • [18] S. Mitra, R. De, S. Pal, “Knowledge based fuzzy MLP for classification and rule generation", IEEE Transactions on Neural Networks Vol. 8, pp. 1338-1350, 1997.
  • [19] W.D. Hudson, C.W. Ramm. Correct Formulation of the Kappa Coefficient of Agreement. Photogrammetric Engineering and Remote Sensing, 53(4):421-422, April 1987.
  • [20] Z. Ma, R.L. Redmond. Tau Coefficients for Accuracy Assessment of Classification of Remote Sensing Data. Photogrammetric Engineering and Remote Sensing, 61(4):435-439, April 1995.
  • [21] L.V. Dutra, R. Huber, “Feature Extraction and Selection for ERS-1/2 InSAR Classification." International Journal of Remote Sensing, Vol. 20, No. 5, pp. 993-1016, March 1999.

  • [22] K. Fukunaga, Introduction to Statistical Pattern Recognition. Academic Press, 1990.