Bayesian Conditional Gaussian Network Classifiers with Applications to Mass Spectra Classification

08/28/2013
by   Victor Bellon, et al.
CSIC
Uni Halle

Classifiers based on probabilistic graphical models are very effective. In continuous domains, maximum likelihood is usually used to assess the predictions of those classifiers. When data is scarce, this can easily lead to overfitting. In any probabilistic setting, Bayesian averaging (BA) provides theoretically optimal predictions and is known to be robust to overfitting. In this work we introduce Bayesian Conditional Gaussian Network Classifiers, which efficiently perform exact Bayesian averaging over the parameters. We evaluate the proposed classifiers against the maximum likelihood alternatives proposed so far over standard UCI datasets, concluding that performing BA improves the quality of the assessed probabilities (conditional log likelihood) whilst maintaining the error rate. Overfitting is more likely to occur in domains where the number of data items is small and the number of variables is large. These two conditions are met in the realm of bioinformatics, where the early diagnosis of cancer from mass spectra is a relevant task. We provide an application of our classification framework to that problem, comparing it with the standard maximum likelihood alternative, where the improvement of quality in the assessed probabilities is confirmed.


1 Introduction

Supervised classification is a basic task in data analysis and pattern recognition. It requires the construction of a classifier, i.e. a function that assigns a class label to instances described by a set of variables. There are numerous classifier paradigms, among which those based on probabilistic graphical models (PGMs) [1] are very effective and well known in domains with uncertainty.

A widely used assumption is that data follows a multidimensional Gaussian distribution

[2]. This is adapted for classification problems by assuming that data follows a multidimensional Gaussian distribution that is different for each class, encoding the resulting distribution as a Conditional Gaussian Network (CGN) [3]. In [4], Larrañaga, Pérez and Inza introduce and evaluate classifiers based on CGNs, with a more detailed description in [5]. They analyze different methods to identify a Bayesian network structure and a set of parameters such that the resultant CGN performs well in the classification task. In [5] the same authors propose to estimate the parameters directly from the sample mean and sample covariance matrix in the data, that is, using maximum likelihood (ML). Following this strategy can lead to model overfitting when data is scarce. In bioinformatics, models are sought in domains where the number of data items is small and the number of variables is large, such as classification of mass spectrograms or microarrays. To try to avoid overfitting, we propose classifiers based on CGNs that, instead of estimating the parameters by ML, perform exact Bayesian averaging over them, and we conclude that they provide more accurate estimates of probabilities without sacrificing accuracy.

We start the paper by introducing Bayesian networks and reviewing their use for classification in section 2. After that, we define CGNs formally in section 3. Then, in section 4, we review the theoretical results from [1] that provide the foundation to assess parameters in CGNs using the ML principle. In section 5 we review and state more formally some results appearing in [3] for averaging over parameters in CGNs. In section 6 we compare the results of both strategies over UCI datasets, and in section 7 we compare them for the case of early diagnosis of ovarian cancer from mass spectra. We conclude by providing future research lines in section 8.

The main contribution of the paper is noticing that (i) current state-of-the-art work on CGN classifiers [5, 4] disregards the possibility of performing Bayesian averaging, and that (ii) the quality of the estimated probabilities significantly improves if we use it. Thus, we restate the results of [3] for the specific case of classification and with a clear algorithmic perspective, so that they can be easily applied by other researchers interested in reaping the benefits of Bayesian averaging in CGN classifiers.

2 Bayesian network classifiers

In this section we introduce the notation to be used in the paper, discuss what Bayesian networks are, and review different approximations to learn classifiers based on Bayesian networks in the literature.

2.1 Notation

The notation used in the paper is very similar to the one used by Bøttcher and Lauritzen in [1, 3]. Let

be a set of random variables used to describe an object. We define a set of indexes

, one for each variable, that is, . In this paper, we deal with two different types of random variables: discrete and continuous. We use and

to refer to the set of discrete and continuous random variables respectively. We assume that the set of indexes

, where and are the disjoint sets of discrete and continuous variable indexes respectively. Each discrete variable takes values over a finite set (infinite discrete random variables are not considered in this work) and each continuous variable takes values over the real line. In the following, represents an assignment of values to the variables in . Furthermore, , where is an assignment of values to the variables in , and is an assignment of values to the variables in . Given a set of indexes , (resp. , ) represents the restriction of (resp. , ) to variables with index in (resp. , ).

2.2 Bayesian networks

A Bayesian network is a probabilistic graphical model [1, 6]

that encodes the joint probability distribution for a set of random variables

. A directed acyclic graph (DAG) , where is the set of vertexes and is the set of edges, encodes the structure of the Bayesian network. Each vertex is associated with a random variable . Let be the set of parents of in . To each vertex , we attach a probability distribution . The probability distribution encoded in the Bayesian network is

(1)

Usually, the probability distribution for each vertex is part of a parametric family, that is, depends exclusively on a set of parameters associated to vertex , which we will note . The set contains the parameters of the Bayesian network, while is its structure.

Many works on Bayesian networks assume that data contains only discrete variables. Two different alternatives are usually considered in the literature when data contains both discrete and continuous variables. On the one hand, continuous variables can be discretized so as to use discrete Bayesian network classifiers. Alternatively, continuous variables can be modeled directly. This is usually done by assuming that the conditional distribution of a continuous variable given its parents belongs to a parametric family. The most widely used distributional assumption is conditional Gaussianity. Bayesian networks making this assumption are known as conditional Gaussian networks (CGNs) and are the models studied in this work.

2.3 Bayesian network classifiers

The task of classification consists in assigning an input value to one class of a given set of classes; for example, determining whether a picture should be classified as landscape or non-landscape. Constructing a classifier that produces a posterior probability is very useful in practical recognition situations, since it allows decisions to be taken based on a utility model [7]. Bayesian networks have been successfully used to construct classifiers [5, 8, 9].

Several strategies are possible to apply Bayesian networks to classification. These strategies differ in how we deal with structures and with parameters.

When several structures are possible, the simplest alternative is to select a single one and then apply any of the parameter learning strategies to the selected structure. An example, when we restrict structures to trees, is the Tree Augmented Naive Bayes classifier [8]. However, we can also perform Bayesian learning simultaneously over both structures and parameters, as is done in [10, 9]. Table 1 shows some examples of the alternatives for Bayesian network classifiers.

Classifier Structure Parameters
NB [11] Fixed Point estimate by ML
LR [11] Fixed Point estimate by MCL
BIBL [12] Fixed Bayesian
TAN [8] Point estimate by ML among trees Point estimate by ML
NMA [10] Bayesian among Selective NB Bayesian
TBMATAN [9] Bayesian among trees Bayesian
Table 1: Different Bayesian network classifiers according to their strategies to learn structure and parameters.

For datasets with mixed variables (both discrete and continuous), CGN classifiers have been proposed in [13, 5]. There, several different heuristic procedures are proposed to select a single classifier structure. Then, a point estimate for the parameters is provided using ML. In this paper we propose to perform exact Bayesian learning over the parameters in conditional Gaussian network (CGN) classifiers, making use of the results of Bøttcher in [3]. Bayesian learning is the best-founded alternative to fit a model to data from the point of view of probability theory. Furthermore, as argued in [14], “the central feature of Bayesian inference, the direct quantification of uncertainty, means that there is no impediment in principle to fit problems with many parameters”.

The objective of this paper is to show that the theoretical results in [3] allow for a rigorous theoretical treatment of the parameter learning process in CGN classifiers. As a result, CGN classifiers that use Bayesian learning over parameters provide:

  • Equivalent results in terms of accuracy (0-1 loss).

  • Significantly more accurate results in the quality of the probabilities (measured by the average of the CLL of the correct class).

  • More flexible modeling, since we can incorporate prior knowledge into the classification process by means of the prior distribution assumed over parameters.

We start by formally introducing the conditional Gaussian network model.

3 Conditional Gaussian networks

Conditional Gaussian networks (CGNs) allow for efficient representation, inference and learning in Bayesian networks that have both discrete and continuous variables. Given a variable index , let be the set of discrete parents of , and be the set of continuous parents of . In a CGN, discrete variables are not allowed to have continuous parents. That is, for each we have that As a consequence, the joint probability distribution factorizes as

(2)

where (resp. ) denotes the values of the discrete random variables which are parents of (resp. ), and denotes the values of the continuous random variables which are parents of .

Furthermore, in a CGN, the local probability distributions are restricted to conditional multinomial distributions for the discrete nodes and conditional Gaussian linear regressions for the continuous nodes. In the following we provide a parameterization of a CGN.

3.1 Distribution over discrete variables

For each discrete variable index and for each cell of its parents (), its conditional distribution follows a multinomial distribution (a reference for the distributions used in the paper can be found in Appendix A). We can parameterize it by a vector such that

(3)
(4)

Thus, the joint distribution over discrete variables can be parameterized by the set

(5)

and in this parameterization we have that

(6)

3.2 Distribution over continuous variables

The conditional distribution for each continuous variable index follows a Gaussian linear regression model (reviewed in Appendix C) with parameters , where is the vector of regression coefficients (one for each continuous parent of plus one for the intercept) and is the conditional variance. That is,

(7)

where

The set includes the parameters for the model of each continuous variable:

(8)

Summarizing, a CGN model is defined by: (i) its structure , (ii) the parameters for the discrete variables , and (iii) the parameters for the continuous variables .

4 Parameter learning in conditional Gaussian networks: maximum likelihood

In this section we succinctly review the results in [1], providing an answer to the following question:

If we assume that our data follows a CGN model with structure , when and how can we find maximum likelihood estimates for its parameters from a sample of data ?

We want to estimate the CGN parameters from a data sample that contains observations . Each observation contains discrete and continuous variables, .

We introduce the following notation, where is a set of discrete variable indexes and is a set of continuous variable indexes :

It is known (as a consequence of Proposition 6.33 in [1]) that the ML parameters for this model can be assessed independently for the conditional distribution of each variable. Similarly to what is described in [5], for each variable we apply a composition of the results from [1] (Proposition 6.9, together with the transformation formulas on page 165) to assess the ML parameters for that variable. However, in order to assess the parameters by ML, we need our sample to satisfy certain conditions. These conditions are detailed in the following definition:

Definition 1 (Acceptable sample).

Let be a CGN structure and be a sample. We say that is acceptable for if and only if

  1. For each discrete variable , for each cell we have that

  2. For each continuous variable index , for each cell we have that

    • and

    • is positive definite.

Intuitively, a sample is acceptable provided that it has enough observations of each used cell. Thus, the larger the number of dependencies in structure , the larger the size required for a sample to be acceptable.

The following result summarizes how to assess the ML parameters of the conditional distribution of each variable, if we are given an acceptable sample.

Proposition 1.

Provided is an acceptable sample for , the following procedure assesses the parameters and that maximize likelihood:

  1. For each discrete variable

    (9)
  2. For each continuous variable index , and for each

    (10)
    (11)

    where is the matrix and .
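As a concrete illustration of Proposition 1, the following sketch computes ML parameters for one discrete variable (relative frequencies over its states, Equation 9 style) and for one continuous variable within a fixed discrete-parent cell (least-squares regression coefficients plus residual variance, Equations 10 and 11 style). Function names and the data layout are our own assumptions, not the paper's notation.

```python
import numpy as np

def ml_discrete(values, n_states):
    """ML multinomial parameters: relative frequencies of each state."""
    counts = np.bincount(values, minlength=n_states)
    return counts / counts.sum()

def ml_gaussian_regression(Xp, y):
    """ML Gaussian linear regression of y on its continuous parents:
    least-squares coefficients (intercept first) and residual variance."""
    X = np.column_stack([np.ones(len(y)), Xp])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid @ resid / len(y)
```

In a full CGN, these estimators would be applied once per variable and per configuration of its discrete parents, over an acceptable sample in the sense of Definition 1.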

4.1 Conditional Gaussian network classifiers

For the problem of classification, given a CGN structure and an acceptable sample for , the ML parameters can be found using Proposition 1, completing a CGN model that can be used to classify by assessing for each possible class , its probability given the value of all the other attributes, , as

(12)

where can be assessed using Equations 2, 6, and 7 and the values of the ML parameters in .
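Equation 12 amounts to evaluating the joint probability of each class together with the observed attributes and then normalizing. A minimal sketch of that normalization in log space (to avoid underflow when many variables contribute factors); the function name is ours:

```python
import math

def classify(log_joint_per_class):
    """Normalize per-class log joint scores log p(c, x) into posterior
    class probabilities p(c | x), using the log-sum-exp trick for
    numerical stability."""
    m = max(log_joint_per_class)
    w = [math.exp(v - m) for v in log_joint_per_class]
    s = sum(w)
    return [v / s for v in w]
```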

5 Assessing exact BA probabilities in CGNs

An alternative to estimating the parameters by ML is performing Bayesian learning over them. Bayesian learning assumes a prior probability distribution

over the parameters and refines this knowledge from a data sample , to obtain a posterior probability distribution .

(13)

After that, the classifier assesses as

(14)
(15)

In order to use Bayesian learning, we need that both the assessment of the posterior given at Equation 13 and prediction (Equation 15) can be done efficiently. This can be accomplished if we define a family of probability distributions over the parameters that is conjugate to our model.

For conditional Gaussian networks, Bøttcher proposed such a family in [3]. In the remainder of this section we provide the details. We start by defining a distribution over the parameters, the Directed Hyper Dirichlet Normal Inverse Gamma. Then, we show that its hyperparameters can be efficiently updated after observing a data sample, and finally we show that it is easy to assess the posterior predictive probabilities when the parameters follow this distribution.

5.1 The Directed Hyper Dirichlet Normal Inverse Gamma distribution

The distribution (detailed in Definition 2) assumes that the parameters of the conditional distribution of each variable in the CGN are independent. Furthermore, it assumes that the parameters of each discrete variable follow a Dirichlet distribution for each configuration of its discrete parents and that the parameters of each continuous variable follow a normal inverse Gamma distribution for each configuration of its discrete parents.

Definition 2 ().

The parameters and of a CGN with structure follow a distribution with (hyper)parameters , noted as , where

(16)

provided that

  • For each discrete variable index and for each cell of its parents (), the parameters of the multinomial follow a Dirichlet distribution with parameters :

    (17)
  • For each continuous variable index and for each cell of its discrete parents (), the parameters of the Gaussian linear regression follow a with hyperparameters :

    (18)

5.2 Learning

Proposition 2 summarizes how the hyperparameters of a distribution should be updated provided that we observe a sample .

Proposition 2.

Given a CGN structure and assuming the parameters follow a , the posterior probability over parameters follows a , where for each and , each and each we have

(19)

and for each and each we have

(20)
(21)
(22)
(23)

The result follows from the fact that the distribution factorizes over the structure and from the results for multinomial distributions and Gaussian linear regressions provided in Appendices B and C.
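For each continuous variable and discrete-parent cell, Proposition 2 reduces to the standard conjugate Normal-Inverse-Gamma update for Bayesian linear regression. The sketch below uses the textbook update formulas in our own notation (m, V, a, b for the NIG hyperparameters); it illustrates the kind of update involved, not the paper's exact parameterization.

```python
import numpy as np

def nig_posterior(m0, V0, a0, b0, X, y):
    """Conjugate Normal-Inverse-Gamma update for Bayesian linear
    regression.  X already contains an intercept column; (m0, V0)
    parameterize the prior on the coefficients, (a0, b0) the prior
    on the conditional variance."""
    n = X.shape[0]
    V0_inv = np.linalg.inv(V0)
    Vn_inv = V0_inv + X.T @ X
    Vn = np.linalg.inv(Vn_inv)
    mn = Vn @ (V0_inv @ m0 + X.T @ y)
    an = a0 + n / 2.0
    bn = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - mn @ Vn_inv @ mn)
    return mn, Vn, an, bn
```

With a very flat prior the posterior mean of the coefficients approaches the least-squares solution, which is consistent with BA and ML agreeing when data is plentiful.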

5.3 Predicting

Proposition 3 shows how we can determine the probability of a new observation in a CGN whose parameters follow a distribution.

Proposition 3.

Given a CGN structure and assuming the parameters follow a , the probability of an observation can be assessed as

(24)

where

(25)

and

(26)

where

Again, the result follows from the fact that the distribution factorizes over the structure and from the results for multinomial distributions and Gaussian linear regressions provided in Appendices B and C.
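For a single continuous variable whose Normal-Inverse-Gamma posterior has hyperparameters (m_n, V_n, a_n, b_n), the posterior predictive density in Proposition 3 is a Student-t. The sketch below, again in our own notation rather than the paper's, returns the log predictive density: a Student-t with 2*a_n degrees of freedom, location given by the regression mean, and squared scale (b_n/a_n)(1 + x*ᵀ V_n x*).

```python
import math
import numpy as np

def log_predictive(xstar, mn, Vn, an, bn):
    """Log posterior predictive density of y* given parent values x*
    (including the intercept entry) under an NIG posterior: a Student-t
    with df = 2*an degrees of freedom."""
    df = 2.0 * an
    mean = xstar @ mn
    scale2 = (bn / an) * (1.0 + xstar @ Vn @ xstar)
    def logpdf(y):
        z2 = (y - mean) ** 2 / scale2
        return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
                - 0.5 * math.log(df * math.pi * scale2)
                - (df + 1) / 2 * math.log1p(z2 / df))
    return logpdf
```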

5.4 Suggested hyperparameters

Along the line proposed in [3], we propose to use the following prior, inspired by the assumption that all the variables are independent.

  1. For each discrete variable, initialize the Dirichlet distribution hyperparameters to a small positive value. For our experiments we have chosen

    (27)
  2. For each continuous variable, initialize the hyperparameters to

    (28)
    (29)
    (30)
    (31)

    where is the empirical mean, and is a diagonal matrix containing the value (the variance of variable ) in its diagonal for each variable

5.5 Algorithm and discussion

Given a sample and a structure, the procedure to create a classification model using BA starts by initializing the hyperparameters of the prior distribution as suggested in section 5.4. After that, it uses Proposition 2 to assess the posterior distribution. Finally, it assesses the probability of each class given the posterior distribution, as

(32)

where is calculated using Proposition 3.

In [15], we presented the Hyper Dirichlet Normal Inverse Wishart distribution as a tool to perform exact Bayesian averaging over parameters in Markov networks when the structure was decomposable. The classifiers presented in this section are a generalization of those presented in [15], since (i) a decomposable Markov network can be represented as a Bayesian network, and (ii) the can be reparameterized as a in that Bayesian network. For the same reason, the results in [16] can be seen as a particular case of the results presented in this section.

6 Experimental comparison

In this section we compare CGN classifiers that use ML and Bayesian learning methods for the parameters. A recent thorough analysis of CGN classifiers based on ML is provided by Pérez in [5]. There, several heuristic structure learning algorithms are compared, concluding that wrapper-based algorithms based on Joint Augmented Naïve Bayes (JAN) structures perform better than the rest. Since we are only interested in comparing parameter learning strategies, we restrict our comparison to these structures. Furthermore, we use the same datasets as [5] for the comparison. Next, we briefly review JAN structures and the heuristic procedures described in [5] to learn them.

6.1 Joint Augmented Naïve Bayes structures

The Naïve Bayes classifier makes the assumption that each of the attributes is independent from all the other attributes given the class. The encoding of this strong independence assumption as a BN appears in Figure 1(a). Unfortunately, data rarely satisfies the assumption. Thus, better BN classifiers can be obtained by introducing dependencies between the attributes. JAN structures can be seen as naive Bayes classifiers where the variables are partitioned into groups. The new assumption is that each group of variables is independent from all other groups given the class. However, no independencies are assumed inside each of the groups. An example of a BN encoding the dependencies of a JAN with three groups is shown in Figure 1(b).

(a) Naïve Bayes classifier
(b) Joint Augmented Naïve Bayes classifier
Figure 1: Example of a Naïve Bayes classifier and a Joint Augmented Naïve Bayes classifier with six variables.

Wrapper algorithms [17] have a long tradition in machine learning. For the task of structure learning, the wrapper algorithm analyzes several structures, using the training set to evaluate their performance and selecting the structure that maximizes the performance measure. In [5], the accuracy of the structure in a 10-fold cross validation over the training set is used as the performance measure.

The wrapper algorithms proposed in [5] follow a greedy search approach summarized in Algorithm 1 and differ only in the initial structure and the set of candidates considered at each step of the algorithm. Three different algorithms are proposed: the forward wrapper Gaussian joint augmented naïve Bayes (fwGJAN), the backward wrapper Gaussian joint augmented naïve Bayes (bwGJAN) and the wrapper condensed Gaussian joint augmented naïve Bayes (wcGJAN).

The fwGJAN algorithm starts with a structure containing only the class node. At each iteration, the candidate set is constructed from the best structure so far by considering the addition of each attribute not present in the current structure, either inside one of the already existing variable groups or creating a new group of its own.

The bwGJAN algorithm starts with a naïve Bayes structure with all the attributes in the dataset. At each iteration, the candidate set is constructed from the best structure so far by (i) considering the removal of a single variable in the structure and (ii) joining two groups of variables in the structure.

The wcGJAN algorithm starts with a complete structure. At each iteration, the candidate set is constructed from the best structure so far by removing a variable from the classifier.

function LearnStructure(D)
      S ← InitialStructure()
      repeat
          S_prev ← S
          C ← GetCandidates(S_prev)
          S ← best structure in C according to the performance measure
      until S_prev is better than S
      return S_prev
end function
Algorithm 1 General wrapper structure search algorithm
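Algorithm 1 can be sketched in a few lines, where `score` is the wrapper's performance measure (e.g. cross-validated training accuracy) and `get_candidates` plays the role of one of the fwGJAN/bwGJAN/wcGJAN neighbourhood generators. All names here are illustrative, not taken from the paper's implementation.

```python
def wrapper_search(initial_structure, get_candidates, score):
    """Greedy wrapper structure search: keep moving to the best-scoring
    candidate neighbour, stop when no candidate improves the score."""
    best = initial_structure
    best_score = score(best)
    while True:
        candidates = get_candidates(best)
        if not candidates:
            return best
        cand = max(candidates, key=score)
        if score(cand) <= best_score:
            return best
        best, best_score = cand, score(cand)
```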

6.2 Comparison Results

Following [5], we use 9 UCI repository data sets, which contain only continuous predictor variables. The characteristics of each dataset appear in Table 2.

# Dataset # classes # variables # observations
1 Balance 3 4 625
2 Block 5 10 5474
3 Haberman 2 3 307
4 Iris 3 4 150
5 Liver 2 6 345
6 Pima 2 8 768
7 Vehicle 4 19 846
8 Waveform 3 21 5000
9 Wine 3 13 179
Table 2: Characteristics of the different datasets.

For each dataset, we ran 10 repetitions of 10-fold cross validation and assessed two measures: the accuracy, i.e. the ratio of the number of instances classified correctly to the total number of instances classified, and the conditional log-likelihood (CLL), i.e. the sum of the logarithms of the probabilities assigned by the classifier to the real class. While accuracy tells us how many instances are correctly classified, CLL measures how accurately the probabilities for each class are estimated, which is very relevant for adequate decision making.
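Both measures can be computed directly from the per-instance predicted class distributions; a minimal helper of our own, with the distributions as plain lists of probabilities:

```python
import math

def accuracy_and_cll(probs, labels):
    """Accuracy and conditional log-likelihood (CLL): probs[i] is the
    predicted distribution over classes for instance i, labels[i] is
    the true class index."""
    correct = sum(1 for p, y in zip(probs, labels)
                  if max(range(len(p)), key=p.__getitem__) == y)
    cll = sum(math.log(p[y]) for p, y in zip(probs, labels))
    return correct / len(labels), cll
```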

For each repetition of the experiment we have used the three different JAN structure learning algorithms proposed in Section 6.1. Each proposed structure is evaluated in terms of the accuracy obtained using ML for learning the parameters of the corresponding classifier. The best structure is used for the final classifier, whose parameters are learned using both ML and BA. Since we are interested in classifiers that provide good results when data is scarce, we have performed the experiments twice: first learning from the complete training set, and then discarding 80% of the data in the training set.

Datasets # 1 2 3 4 5 6 7 8 9 Total
CLL 100% 7/2
CLL 20% 9/0
ACC 100% 4/5
ACC 20% 5/4
Table 3: Summary of tests for bwGJAN structures. ✔ denotes a win for BA, ✗ a loss and = a tie.

In order to analyze the results we have performed a Mann-Whitney paired test between BA and ML for each dataset and structure. We recorded a parameter learning method as the winner every time the test was significant at a significance level of and its rank was greater than its counterpart's. If the test was not significant we recorded a tie. We provide a summary of wins and losses for each structure in Tables 3-5.

Datasets # 1 2 3 4 5 6 7 8 9 Total
CLL 100% 7/2
CLL 20% 9/0
ACC 100% 5/4
ACC 20% = 5/3
Table 4: Summary of tests for wcGJAN structures. ✔ denotes a win for BA, ✗ a loss and = a tie.
Datasets # 1 2 3 4 5 6 7 8 9 Total
CLL 100% 7/2
CLL 20% 9/0
ACC 100% = 4/4
ACC 20% 4/5
Table 5: Summary of tests for fwGJAN structures. ✔ denotes a win for BA, ✗ a loss and = a tie.

The results are similar for the three structure learning methods. In most of the datasets, the classifiers learned using BA provide a higher CLL than those learned using ML. That is, BA provides more accurate probability predictions. The results for accuracy seem very similar for BA and ML. Furthermore, as shown by previous research [9], the advantages of using BA are clearer as we reduce the amount of learning data. In the next section we will see that this is confirmed in a problem with highly scarce data.

7 CGN classifiers for early diagnosis of ovarian cancer from mass spectra

In this section we compare CGN classifiers using ML and BA for the task of early prediction of ovarian cancer from mass spectra. Mass spectrometry is a scientific technique for measuring the mass of ions. For clinical purposes, the mass spectrum of a sample of blood or other substance of the patient can be obtained. Mass spectra provide a wealth of information about the molecules present in the sample. In particular, each mass spectrum can be understood as a huge histogram, where the number of molecules observed in the sample is reported for each mass/charge quotient (m/z). The objective pursued is to learn to automatically distinguish mass spectra of ovarian cancer patients from those of control individuals.

The data used has been obtained from the NIH and contains high resolution spectrograms coming from surface-enhanced laser desorption/ionization time of flight mass spectrometry (SELDI-TOF MS). The dataset contains a total of 216 spectrograms, 121 from cancer patients and 95 controls. The m/z values do not coincide along the different spectrograms. Thus, to create the variables, the m/z axis data has been discretized into different bins, creating a variable for each bin, for a total of 11300 variables. Thus, the number of variables largely exceeds the number of observations. For each spectrogram, the average of the values of each bin has been assigned to that bin’s variable.

7.1 Structures

Due to the large number of attributes, none of the structure learning algorithms reviewed in the previous section can be used. Instead, we have used two different families of structures for the CGN. Both are based on the hypothesis that variables representing close m/z values are more likely to have large correlations than variables whose m/z values are further apart.

A -BOX structure can be defined over an ordered set of variables . We say that a set of variables is contiguous whenever The -BOX structure divides the variables into disjoint contiguous sets of variables. The network structure can be seen in Figure 2(a) and the corresponding covariance matrix in Figure 2(b). In our case the ordering is provided by the m/z value.

(a) -BOX model
(b) -BOX model
Figure 2: In a) we show the graph of a 3-BOX model for six different variables. In b) we show the connectivity matrix for a general K-BOX model.
(a) -BAND model
(b) -BAND model
Figure 3: In a) we show the graph of a 3-BAND model for six different variables. In b) we show the connectivity matrix for a general K-BAND model.

The second structure proposed is the -BAND structure. In -BAND, we assume that each variable is independent of all the remaining variables given the variables that precede it and the class variable.

The covariance matrix for a -BAND structure is a band of size around the diagonal, as is shown in Figure 3(b). An example of the structure is shown in Figure 3(a).
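The connectivity pattern of a k-BAND model over n ordered variables is simply a band of half-width k around the diagonal, as in Figure 3(b); a small illustrative helper (names are ours):

```python
import numpy as np

def band_mask(n, k):
    """Boolean mask of the covariance pattern of a k-BAND model over n
    ordered variables: entries within k positions of the diagonal are
    allowed, everything further away is assumed independent."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= k
```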

We ran a sequence of experiments to compare the different structures (-BOX and -BAND) and parameter learning methods (BA and ML) varying the parameter from 1 to 50.

We performed 5 repetitions of 10-fold cross validation and assessed the accuracy and CLL.

(a) Accuracy
(b) CLL
Figure 4: Prediction of ovarian cancer. The horizontal axis shows the number of parameters in the model; the vertical axis shows the accuracy and the CLL.

Figure 4(a) shows the mean accuracy versus the number of parameters in the model. We see that -BAND models are more accurate than -BOX models. Furthermore, -BAND models learned using BA outperform those learned using ML when the number of parameters is large.

Figure 4(b) shows the mean CLL versus the number of parameters in each structure. Again, -BAND models outperform -BOX models. Furthermore for both -BOX and -BAND models, using BA significantly increases the quality of the probabilities predicted.

8 Conclusions and future work

We have analyzed two alternatives for dealing with parameters in CGNs: ML and BA. Our experiments confirm that BA results in a classifier that better estimates the probabilities of the different classes. Furthermore, we have seen that this effect shows up more clearly when the number of variables is large with respect to the number of instances. Since this situation is common in areas such as bioinformatics, we have provided an example application of this approach to the problem of diagnosing ovarian cancer from mass spectra. Finally, an open source implementation of the algorithms described in the paper is provided for free use at http://www.iiia.csic.es/~cerquide/pypermarkov.

In this work, we have focused on learning CGN classifiers from a generative approach. Directly maximizing the CLL following a discriminative approach is a future line of research, as is the study of priors for Bayesian linear regression other than the normal inverse gamma.

Appendix A Distributions

Definition 3 (Multivariate normal).

We say that $\boldsymbol{x} \in \mathbb{R}^d$ follows a multivariate normal distribution with parameters $\boldsymbol{\mu} \in \mathbb{R}^d$ and $\boldsymbol{\Sigma}$, symmetric and positive definite, if

$$p(\boldsymbol{x}) = (2\pi)^{-d/2} \, |\boldsymbol{\Sigma}|^{-1/2} \exp\left(-\tfrac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})\right) \qquad (33)$$
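For reference, the density in (33) can be evaluated numerically as follows (a minimal numpy sketch; the function name is ours, not from the paper's code):

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Density of a multivariate normal with mean mu and covariance Sigma,
    evaluated at x, following equation (33)."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)  # (x-mu)^T Sigma^{-1} (x-mu)
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm
```

Using `np.linalg.solve` instead of explicitly inverting $\boldsymbol{\Sigma}$ is the numerically preferable way to compute the quadratic form.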
Definition 4 (Inverse Gamma).

We say that $\sigma^2 > 0$ follows an inverse gamma distribution with parameters $a, b > 0$ if

$$p(\sigma^2) = \frac{b^{a}}{\Gamma(a)} \, (\sigma^2)^{-a-1} \exp\left(-\frac{b}{\sigma^2}\right) \qquad (34)$$
Definition 5 (Normal Inverse Gamma).

We say that $\boldsymbol{\beta}$ and $\sigma^2$ follow a normal inverse gamma distribution with parameters $\boldsymbol{\mu}$, $\boldsymbol{V}$, $a$, $b$, with $\boldsymbol{V}$ symmetric and positive definite, if

$$p(\boldsymbol{\beta}, \sigma^2) = \mathcal{N}\!\left(\boldsymbol{\beta} \mid \boldsymbol{\mu}, \sigma^2 \boldsymbol{V}\right) \, \mathrm{IG}\!\left(\sigma^2 \mid a, b\right) \qquad (35)$$
Definition 6 (Multivariate Student).

We say that $\boldsymbol{x} \in \mathbb{R}^d$ follows a multivariate Student distribution with parameters $\nu$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}$, with $\boldsymbol{\Sigma}$ symmetric and positive definite, if

$$p(\boldsymbol{x}) = \frac{\Gamma\left(\frac{\nu+d}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)(\nu\pi)^{d/2}|\boldsymbol{\Sigma}|^{1/2}} \left(1 + \frac{1}{\nu}(\boldsymbol{x}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})\right)^{-\frac{\nu+d}{2}} \qquad (36)$$
Definition 7 (Multinomial).

We say that an $n$-valued discrete random variable $X$ taking values in the set $\{1, \dots, n\}$ follows a multinomial distribution with parameters $\theta_1, \dots, \theta_n$, with $\theta_i \ge 0$ and $\sum_{i=1}^{n} \theta_i = 1$, if

$$P(X = i) = \theta_i \qquad (37)$$
Definition 8 (Dirichlet).

We say that $\boldsymbol{\theta} = (\theta_1, \dots, \theta_n)$ follows a Dirichlet distribution with parameters $\alpha_1, \dots, \alpha_n$, with $\alpha_i > 0$ for all $i$, if

$$p(\boldsymbol{\theta}) = \frac{\Gamma\left(\sum_{i=1}^{n} \alpha_i\right)}{\prod_{i=1}^{n} \Gamma(\alpha_i)} \prod_{i=1}^{n} \theta_i^{\alpha_i - 1} \qquad (38)$$

Appendix B Bayesian multinomial model

Let $X$ be a discrete random variable, with domain having $n$ different values, following a multinomial distribution:

$$P(X = i \mid \boldsymbol{\theta}) = \theta_i \qquad (39)$$

If we are uncertain about the values of $\boldsymbol{\theta}$, we are given a vector $\boldsymbol{x} = (x_1, \dots, x_N)$ of independent observations from $N$ experimental units, and we assume as prior $\boldsymbol{\theta} \sim \mathrm{Dirichlet}(\alpha_1, \dots, \alpha_n)$, the posterior after observing $\boldsymbol{x}$ is a Dirichlet distribution with

$$\alpha'_i = \alpha_i + N_i \qquad (40)$$

where $N_i$ is the number of times that value $i$ is observed in the sample.

Provided this posterior, we have that

$$P(X = i \mid \boldsymbol{x}) = \frac{\alpha_i + N_i}{\sum_{j=1}^{n} \left(\alpha_j + N_j\right)} \qquad (41)$$
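The posterior predictive in (41) amounts to a one-line computation (a minimal sketch; the function name is ours):

```python
import numpy as np

def dirichlet_predictive(alpha, counts):
    """Posterior predictive P(X = i | data) under a Dirichlet(alpha) prior:
    (alpha_i + N_i) / sum_j (alpha_j + N_j), per equation (41)."""
    post = np.asarray(alpha, dtype=float) + np.asarray(counts, dtype=float)
    return post / post.sum()
```

With a uniform prior $\alpha_i = 1$ this reduces to the familiar Laplace-smoothed relative frequencies.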

Appendix C Bayesian linear regression

In this section we summarize the Bayesian linear regression results needed in the paper. The main ideas come from the seminal paper of Lindley and Smith [18]. The results are provided here for easy reference. Proofs and intuitive explanations can be found in chapter 3 of [19] and in [20].

c.1 The Gaussian linear regression model

Let $Y, X_1, \dots, X_p$ be continuous random variables. The Gaussian linear regression model assumes that

$$y = \boldsymbol{\beta}^{T}\boldsymbol{x} + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \sigma^2) \qquad (42)$$

where $y$ is the observation of the dependent variable, $\boldsymbol{\beta}$ is the vector of regression coefficients, $\boldsymbol{x}$ is the vector of regressors, and $\sigma^2$ is the variance.

c.2 Bayesian learning with the Gaussian linear regression model

Assume we are uncertain about the values of $\boldsymbol{\beta}$ and $\sigma^2$. Furthermore, say that we are given a vector $\boldsymbol{y} = (y_1, \dots, y_n)$ of independent observations on the dependent variable (or response) from $n$ experimental units. Associated with each $y_i$ is a vector of regressors, say $\boldsymbol{x}_i$. Furthermore, $\boldsymbol{X}$ is the matrix of regressors with $i$-th column being $\boldsymbol{x}_i$. We expect to improve our knowledge about $\boldsymbol{\beta}$ and $\sigma^2$ from $\boldsymbol{y}$ and $\boldsymbol{X}$. First, we need to represent our initial uncertain knowledge as a probability distribution over $\boldsymbol{\beta}$ and $\sigma^2$. In this case the conjugate distribution is the normal inverse gamma. Thus, we assume as prior that

$$\boldsymbol{\beta}, \sigma^2 \sim \mathrm{NIG}(\boldsymbol{\mu}_0, \boldsymbol{V}_0, a_0, b_0) \qquad (43)$$

The posterior after observing $\boldsymbol{y}$ is

$$\boldsymbol{\beta}, \sigma^2 \mid \boldsymbol{y} \sim \mathrm{NIG}(\boldsymbol{\mu}_n, \boldsymbol{V}_n, a_n, b_n) \qquad (44)$$

where

$$\boldsymbol{V}_n = \left(\boldsymbol{V}_0^{-1} + \boldsymbol{X}\boldsymbol{X}^{T}\right)^{-1} \qquad (45)$$
$$\boldsymbol{\mu}_n = \boldsymbol{V}_n \left(\boldsymbol{V}_0^{-1}\boldsymbol{\mu}_0 + \boldsymbol{X}\boldsymbol{y}\right) \qquad (46)$$
$$a_n = a_0 + \frac{n}{2} \qquad (47)$$
$$b_n = b_0 + \frac{1}{2}\left(\boldsymbol{y}^{T}\boldsymbol{y} + \boldsymbol{\mu}_0^{T}\boldsymbol{V}_0^{-1}\boldsymbol{\mu}_0 - \boldsymbol{\mu}_n^{T}\boldsymbol{V}_n^{-1}\boldsymbol{\mu}_n\right) \qquad (48)$$
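The update equations (45)-(48) translate directly into numpy (a sketch under the column-wise regressor matrix convention; names are ours, not from the paper's implementation):

```python
import numpy as np

def nig_posterior(mu0, V0, a0, b0, X, y):
    """Normal inverse gamma posterior for Bayesian linear regression.
    X is the (p, n) matrix whose i-th column is the regressor vector x_i;
    y is the (n,) vector of responses. Implements equations (45)-(48)."""
    V0inv = np.linalg.inv(V0)
    Vn = np.linalg.inv(V0inv + X @ X.T)            # (45)
    mun = Vn @ (V0inv @ mu0 + X @ y)               # (46)
    an = a0 + len(y) / 2.0                         # (47)
    bn = b0 + 0.5 * (y @ y + mu0 @ V0inv @ mu0
                     - mun @ np.linalg.inv(Vn) @ mun)  # (48)
    return mun, Vn, an, bn
```

For production use one would avoid the explicit matrix inverses (e.g. via Cholesky solves), but the direct form mirrors the equations most clearly.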

c.3 Bayesian prediction with the Gaussian linear regression model

Let $\tilde{\boldsymbol{y}}$ be an unknown vector of $m$ independent observations on the dependent variable from new experimental units and $\tilde{\boldsymbol{X}}$ the corresponding observed matrix of regressors. If $\boldsymbol{\beta}, \sigma^2$ follow a normal inverse gamma with parameters $\boldsymbol{\mu}_n, \boldsymbol{V}_n, a_n, b_n$, the probability distribution for $\tilde{\boldsymbol{y}}$ given $\tilde{\boldsymbol{X}}$ is a multivariate Student:

$$\tilde{\boldsymbol{y}} \sim \mathrm{St}\!\left(2a_n,\; \tilde{\boldsymbol{X}}^{T}\boldsymbol{\mu}_n,\; \frac{b_n}{a_n}\left(\boldsymbol{I}_m + \tilde{\boldsymbol{X}}^{T}\boldsymbol{V}_n\tilde{\boldsymbol{X}}\right)\right) \qquad (49)$$
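Computing the parameters of the Student predictive in (49) is again a short numpy sketch (names and conventions ours, matching the column-wise regressor matrix above):

```python
import numpy as np

def predictive_params(mun, Vn, an, bn, Xnew):
    """Degrees of freedom, location, and scale matrix of the multivariate
    Student predictive distribution (49) for new responses whose regressors
    form the columns of Xnew (shape (p, m))."""
    m = Xnew.shape[1]
    df = 2.0 * an
    loc = Xnew.T @ mun
    scale = (bn / an) * (np.eye(m) + Xnew.T @ Vn @ Xnew)
    return df, loc, scale
```

These parameters can then be plugged into the density of Definition 6 to score new observations.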

References

  • [1] S. L. Lauritzen, Graphical models, Oxford University Press, 1996.
  • [2] D. Geiger, D. Heckerman, Learning Gaussian networks, in: Proceedings of the Tenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-94), 1994, pp. 235–243.
  • [3] S. G. Bøttcher, Learning Bayesian Networks with Mixed Variables, Ph.D. thesis, Aalborg University (2004).
  • [4] P. Larrañaga, A. Pérez, I. Inza, Supervised classification with conditional Gaussian networks: Increasing the structure complexity from naive Bayes, International Journal of Approximate Reasoning 43 (January) (2006) 1–25.
  • [5] A. Pérez, Supervised classification in continuous domains with Bayesian networks, Ph.D. thesis, Universidad del Pais Vasco (2010).
  • [6] D. Koller, N. Friedman, Probabilistic graphical models: principles and techniques, The MIT Press, 2009.
  • [7] R. O. Duda, P. E. Hart, D. G. Stork, Pattern Classification and Scene Analysis, Wiley-Interscience, 2000.
  • [8] N. Friedman, D. Geiger, M. Goldszmidt, Bayesian network classifiers, Machine learning (29) (1997) 131–163.
  • [9] J. Cerquides, R. Lopez De Mantaras, TAN Classifiers Based on Decomposable Distributions, Machine Learning 59 (3) (2005) 323–354.
  • [10] D. Dash, G. Cooper, Model averaging for prediction with discrete Bayesian networks, The Journal of Machine Learning Research 5 (2004) 1177–1203.
  • [11] A. Y. Ng, M. I. Jordan, On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes, Advances in Neural Information Processing Systems (2001) 841–848.
  • [12] P. Kontkanen, P. Myllymäki, T. Silander, H. Tirri, Bayes optimal instance-based learning, ECML (1998) 77–88.
  • [13] A. Pérez, P. Larrañaga, I. Inza, Supervised classification with conditional Gaussian networks : Increasing the structure complexity from naive Bayes, International Journal of Approximate Reasoning 43 (January) (2006) 1–25.
  • [14] A. Gelman, J. Carlin, H. Stern, D. Rubin, Bayesian Data Analysis, Chapman & Hall/CRC, 2004.
  • [15] V. Bellón, J. Cerquides, I. Grosse, Gaussian Join Tree classifiers with applications to mass spectra classification, in: Sixth European Workshop on Probabilistic Graphical Models (PGM 2012), 2012, pp. 19–26.
  • [16] J. Corander, T. Koski, T. Pavlenko, A. Tillander, Bayesian Block-Diagonal Predictive Classifier for Gaussian Data, in: Synergies of Soft Computing and Statistics for Intelligent Data Analysis, 2012, pp. 543–551.
  • [17] R. Kohavi, G. John, Wrappers for feature subset selection, Artificial Intelligence (97) (1997) 273–324.
  • [18] D. Lindley, A. Smith, Bayes estimates for the linear model, Journal of the Royal Statistical Society, Series B (Methodological) 34 (1) (1972) 1–41.
  • [19] G. Koop, Bayesian Econometrics, Wiley, 2003.
  • [20] S. Banerjee, Bayesian Linear Model: Gory Details (2012).