Fast Bayesian Feature Selection for High Dimensional Linear Regression in Genomics via the Ising Approximation

07/30/2014
by   Charles K. Fisher, et al.

Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach -- the Bayesian Ising Approximation (BIA) -- to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high dimensional regression by analyzing a gene expression dataset with nearly 30,000 features.


I Introduction

Linear regression is one of the most broadly and frequently used statistical tools. Despite hundreds of years of research on the subject Legendre (1805), modern applications of linear regression to large datasets present a number of new challenges. Modern applications of linear regression, such as Genome Wide Association Studies (GWAS), often consider datasets that have at least as many potential variables (or features) as there are data points McCarthy et al. (2008). Applying linear regression to high dimensional datasets often involves selecting a subset of relevant features, a problem known as feature selection in the literature on statistics and machine learning Guyon and Elisseeff (2003). Even for classical least-squares linear regression, it turns out that the associated feature selection problem is quite difficult Huo and Ni (2007).

The difficulties associated with feature selection are especially pronounced in genomics and GWAS. In general, the goal of many genomics studies is to identify a relationship between a small number of genes and a phenotype of interest, such as height or body mass index McCarthy et al. (2008); Peng et al. (2010); Burton et al. (2007); Subramanian et al. (2005); Wu et al. (2009). For example, many GWAS seek to identify specific genetic mutations (called single nucleotide polymorphisms or SNPs) that best explain the variation of a quantitative trait, such as height or body mass index, in a population Yang et al. (2012). Using various techniques, the trait is regressed against binary variables representing the presence or absence of the SNPs in order to find a subset of SNPs that are highly explanatory for the trait Wu et al. (2009); Peng et al. (2010). Although the number of individuals genotyped in such a study may be in the thousands or even tens of thousands, this pales in comparison to the number of potential SNPs, which can be in the millions McCarthy et al. (2008). Moreover, the presence or absence of various SNPs tends to be correlated due to chromosome structure and genetic processes that induce so-called linkage disequilibrium Yang et al. (2012). As a result, selecting the best subset of SNPs for the regression involves a search for the global minimum of a landscape that is both high dimensional (due to the large number of SNPs) and rugged (due to correlations between SNPs).

The obstacles that make feature selection difficult in GWAS also occur in many other applications of linear regression to big datasets. In fact, the task of finding the optimal subset of features has been proven to be NP-hard in general Huo and Ni (2007). Therefore, it is usually computationally prohibitive to search over all possible subsets of features, and one has to resort to other methods of feature selection. For example, forward (or backward) selection adds (or eliminates) one feature at a time to the regression in a greedy manner Guyon and Elisseeff (2003). Alternatively, one may use heuristic methods such as Sure Independence Screening (SIS) Fan and Lv (2008), which selects features independently based on their correlation with the response, or Minimum Redundancy Maximum Relevance (MRMR) Ding and Peng (2005), which penalizes features that are correlated with each other. The most popular approaches to feature selection for linear regression, however, are penalized least-squares methods Hoerl and Kennard (1970); Tibshirani (1996); Zou and Hastie (2005); Candes and Tao (2007) that augment the least-squares objective with a function penalizing large regression coefficients. Common choices for the penalty function include an L2 penalty, called ‘Ridge’ regression Hoerl and Kennard (1970), and an L1 penalty, commonly referred to as LASSO regression Tibshirani (1996).

Penalized methods for linear regression typically have natural interpretations as Bayesian approaches with appropriately chosen prior distributions. For example, L2 penalized regression can be derived by maximizing the posterior distribution obtained with a Gaussian prior on the regression coefficients. Similarly, L1 penalized regression can be derived by maximizing the posterior distribution obtained with a Laplace (i.e. double-exponential) prior on the regression coefficients. Within a Bayesian framework, relevant features are those with the highest posterior probabilities. However, calculating exact marginal posterior probabilities is generally intractable for high dimensional problems; as a result, the posterior distribution of feature relevance must be explored using Monte Carlo simulations, highlighting the crucial need for new approaches to feature selection George and McCulloch (1993); Li and Zhang (2010).

Inspired by the success of statistical physics approaches to hard problems in computer science Monasson et al. (1999); Mézard et al. (2002) and statistics Bialek et al. (1996); Nemenman and Bialek (2002); Balasubramanian (1997); Malzahn and Opper (2005); Periwal (1997), we study high dimensional regression in the “strongly-regularized regime” where the prior distribution has a profound influence on the estimators. In this regime, we show that the marginal posterior probabilities of feature relevance for L2 penalized regression are well-approximated by the magnetizations of an appropriately chosen Ising model. For this reason, we call our approach the Bayesian Ising Approximation (BIA) of the posterior distribution. Using the BIA, the posterior probabilities can be computed without resorting to Monte Carlo simulation using an efficient mean field approximation that facilitates the analysis of very high dimensional datasets. We envision the BIA as part of a two-stage procedure where the BIA is applied to rapidly screen irrelevant variables, i.e. those that have low rank in posterior probability, before applying a more computationally intensive cross validation procedure to infer the regression coefficients for the reduced feature set. Our work is especially well suited to modern feature selection problems where the number of features, p, is often larger than the sample size, n.

Our approach differs significantly from previous methods for feature selection. Traditionally, penalized regression and related Bayesian approaches have focused on the “weakly-regularized regime” where the effect of the prior is assumed to be negligible as the sample size tends to infinity. The underlying intuition for considering the weak-regularization regime is that, as long as the prior (i.e. the penalty parameter) is strong enough to regularize the inference problem, a less influential prior distribution should be better suited for feature selection and prediction tasks because it “allows the data to speak for themselves”. In the machine learning literature, the penalty parameter is usually chosen using cross validation to maximize out-of-sample predictive ability Tibshirani (1996); Zou and Hastie (2005). A similar aesthetic is also reflected in the abundant literature on ‘objective’ priors for Bayesian inference Ghosh et al. (2011). As expected, these weakly regularizing approaches perform well when the sample size exceeds the number of features (n > p). However, for high dimensional inference where the number of features can greatly exceed the sample size (p ≫ n), very strong priors may be required. Our BIA approach exploits the large penalty parameter in this strongly regularized regime to efficiently calculate marginal posterior probabilities using methods from statistical physics.

The paper is organized as follows: in Section II, we review Bayesian linear regression; in Section III, we derive the BIA using a series expansion of the posterior distribution and describe the associated algorithm for variable selection; and in Section IV we present (A) analytical results and simulations on the performance of the BIA for features with a constant correlation, (B) an analysis of a real dataset for predicting bodyfat percentage from 12 different body measurements, and (C) an analysis of a real dataset for predicting a quantitative phenotypic trait from data on the expression of 28,395 genes in soybeans.

II Bayesian Linear Regression

In this section, we briefly review the necessary aspects of Bayesian linear regression. This entire section follows standard arguments, the details of which can be found in many textbooks on Bayesian statistics, e.g. O’Hagan et al. (2004). The goal of linear regression is to infer the set of coefficients θ_j for j = 1, …, p that describe the relationship y = x · θ + ε from observations (y_i, x_i) for i = 1, …, n. Here, x is a (p × 1) vector of features and ε is a Gaussian distributed random variable with unknown variance σ². Without loss of generality, we will assume throughout this paper that the data are standardized so that the response and each of the features have mean zero and unit variance, and it is not necessary to include an intercept term in the regression. Penalized least-squares methods estimate the regression coefficients by minimizing a convex objective function of the form:

\hat{\theta} = \mathrm{argmin}_{\theta} \Big[ \sum_{i=1}^{n} ( y_i - \vec{x}_i \cdot \vec{\theta} )^2 + \lambda f(\vec{\theta}) \Big]    (1)

where f(θ) is a function that penalizes large regression coefficients and λ is the strength of the penalty. Common choices for the penalty function include f(θ) = Σ_j θ_j² for L2 penalized or ‘Ridge’ regression Hoerl and Kennard (1970), and f(θ) = Σ_j |θ_j| for L1 penalized or LASSO regression Tibshirani (1996). The standard least-squares (and maximum likelihood) estimate is recovered by setting λ = 0, where X is the design matrix with rows x_i. Adding a penalty to the least-squares objective function mitigates the instability that results from computing the inverse of the matrix X^T X. In the case of the L1 penalty, many of the regression coefficients end up being shrunk exactly to zero, resulting in a type of automatic feature selection Tibshirani (1996); Zou and Hastie (2005); Candes and Tao (2007).
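For illustration, the short Python sketch below contrasts the ordinary least-squares estimate with the closed-form L2 penalized ('Ridge') estimate on standardized synthetic data. This is not code from the paper; the data sizes and penalty value are arbitrary placeholders, and the point is simply that adding λ to the diagonal of X^T X stabilizes the matrix inverse.

    import numpy as np

    def least_squares(X, y):
        # Standard least-squares estimate (X^T X)^{-1} X^T y; ill-conditioned or
        # undefined when X^T X is singular, e.g. when p approaches or exceeds n.
        return np.linalg.solve(X.T @ X, X.T @ y)

    def ridge(X, y, lam):
        # L2 penalized ('Ridge') estimate: adding lam to the diagonal of X^T X
        # makes the matrix positive definite and the inverse well conditioned.
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    # Small example with standardized synthetic data (placeholder values).
    rng = np.random.default_rng(0)
    n, p, lam = 50, 40, 10.0
    X = rng.standard_normal((n, p))
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # features: mean zero, unit variance
    y = X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(n)
    y = (y - y.mean()) / y.std()                  # standardized response
    theta_ridge = ridge(X, y, lam)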

Bayesian methods combine the information from the data, described by the likelihood function, with a priori knowledge, described by a prior distribution, to construct a posterior distribution that describes one’s knowledge about the parameters after observing the data. In the case of linear regression, the likelihood function is a Gaussian:

P(y | X, \vec{\theta}, \sigma^2) = (2 \pi \sigma^2)^{-n/2} \exp\Big( -\frac{1}{2 \sigma^2} \sum_{i=1}^{n} ( y_i - \vec{x}_i \cdot \vec{\theta} )^2 \Big)    (2)

In this work, we will use standard conjugate prior distributions for θ and σ², given by P(θ, σ² | s) = P(θ | σ², s) P(σ²), where:

(3)
(4)

These distributions were chosen because they ensure that the posterior distribution can be obtained in closed form O’Hagan et al. (2004). Here, we have introduced a vector (s) of indicator variables such that s_i = 1 if feature i is included in the regression and s_i = −1 if it is not. We also have to specify a prior for the indicator variables, which we will set to a flat prior for simplicity. In principle, the hyperparameters of the prior over σ² and the penalty parameter on the regression coefficients, λ, are free parameters that must be specified ahead of time to reflect our prior knowledge. We will discuss these parameters in the next section.

We have set up the problem so that identifying which features are relevant is equivalent to identifying those features for which s_i = 1. Therefore, we need to compute the posterior distribution for s, which can be determined from Bayes’ theorem:

(5)

where C represents a constant and ESS(s) is the sum of the squared residual errors. In this expression, |s| is the number of variables with s_i = 1, I is the identity matrix, and X_s is a restricted design matrix that contains only the columns corresponding to features with s_i = 1. The sum of the squared residual errors is given by ESS(s) = ||y − X_s θ̂_s||², where θ̂_s = (λ I + X_s^T X_s)^{-1} X_s^T y is the Bayesian estimate for the regression coefficients corresponding to those variables for which s_i = 1.

In these expressions, notice that the Bayesian estimate for the regression coefficients conditioned on s (i.e. θ̂_s) is equivalent to an estimate obtained with L2 penalized regression. That is, we have specifically chosen these priors to correspond to a Bayesian formulation of L2 penalized regression. Furthermore, we note that the logarithm of the posterior distribution can be partitioned into an ‘entropic’ term measuring the variance of the posterior distribution and an ‘energetic’ term quantifying the fit to the data.
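To make the conditional estimate concrete, here is a small sketch (not the paper's code) that restricts the design matrix to the columns with s_i = +1 and computes the ridge-type estimate and squared residual error described above; the estimator form is assumed to be the standard L2 penalized solution that the text identifies with the Bayesian estimate conditioned on s.

    import numpy as np

    def restricted_fit(X, y, s, lam):
        """Ridge-type Bayesian estimate and residual error for indicator vector s.

        s is an array of +1/-1 entries; only columns with s == +1 are used.
        The estimator (lam*I + Xs^T Xs)^{-1} Xs^T y is the L2 penalized form
        that the text identifies with the Bayesian estimate conditioned on s.
        """
        cols = np.flatnonzero(s == 1)
        Xs = X[:, cols]
        theta_s = np.linalg.solve(lam * np.eye(len(cols)) + Xs.T @ Xs, Xs.T @ y)
        residuals = y - Xs @ theta_s
        ess = float(residuals @ residuals)   # sum of squared residual errors, ESS(s)
        return theta_s, ess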

III The Ising Approximation

III.1 Strongly Regularized Expansion

In principle, one can directly use Equation 5 to estimate the relevance of each feature using two different approaches. First, we could find the s that maximizes the posterior probability distribution. Alternatively, we could compute the marginal probabilities of feature relevance, P(s_i = 1 | y) = (1 + ⟨s_i⟩)/2, where ⟨s_i⟩ is the expectation value of s_i with respect to the posterior distribution, and select the features with the largest ⟨s_i⟩. In the Bayesian setting, these two point estimates result from the use of different utility functions Berger (1985). Here, we will focus on computing the latter, i.e. the expected value of s_i. The expectation values cannot be evaluated analytically due to the cumbersome restriction of the design matrix to those variables for which s_i = 1. Moreover, although the computation of the expectation values can be performed using Monte Carlo methods George and McCulloch (1993); Li and Zhang (2010), the numerical calculations often take a long time to converge for high dimensional inference problems.

Our main result – which we call the Bayesian Ising Approximation (BIA) of the posterior distribution for feature selection – is that a second order series expansion of Equation 5 in λ^{-1} corresponds to an Ising model described by:

(6)

where Tr denotes the matrix trace operator and the external fields h_i(λ) and couplings J_ij(λ) are defined as:

(7)
(8)

Here, r_ij is the Pearson correlation coefficient between variables x_i and x_j. In writing this expression we have assumed that the hyperparameters of the prior over σ² are small enough to neglect, though this assumption is not necessary. A detailed derivation of this result is presented in the Appendix.

The series expansion converges as long as λ is sufficiently large for all s, which defines the regime that we call ‘strongly regularized’. Since X_s is the restricted design matrix for standardized data, we can relate X_s^T X_s to the covariances between the x_i’s. In particular, Gershgorin’s Circle Theorem Varga (2010) implies that the series will converge as long as λ ≫ n(1 + max_i Σ_{j≠i} |r_ij|) (see Appendix). For large p, we can replace the maximum sum of correlations by (p − 1) times the root-mean-squared correlation between features, r̄. This defines a natural scale,

\lambda^* = n \big[ 1 + (p - 1) \bar{r} \big]    (9)

for the penalty parameter at which the BIA is expected to break down. We expect the BIA to be accurate when λ ≫ λ* and to break down when λ ≲ λ*.
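The following sketch computes the root-mean-squared inter-feature correlation and the resulting breakdown scale. The specific form lambda_star = n * (1 + (p - 1) * r_bar) is a reconstruction of Equation 9 from the Gershgorin argument in the Appendix, so it should be read as an assumption rather than a quotation of the paper.

    import numpy as np

    def breakdown_scale(X):
        # X is an (n x p) standardized design matrix.
        n, p = X.shape
        R = np.corrcoef(X, rowvar=False)              # p x p feature correlation matrix
        off_diag = R[~np.eye(p, dtype=bool)]
        r_bar = np.sqrt(np.mean(off_diag ** 2))       # root-mean-squared correlation
        lambda_star = n * (1.0 + (p - 1) * r_bar)     # assumed form of Eq. (9)
        return r_bar, lambda_star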

Because higher order terms in the series can be neglected, the strongly regularized expansion allows us to remove any references to the restricted design matrix and maps the posterior distribution to the Ising model, which has been studied extensively in the physics literature. To perform feature selection, we are interested in computing the marginal probabilities P(s_i = 1 | y) = (1 + m_i)/2, where we have defined the magnetizations m_i = ⟨s_i⟩. While there are many techniques for calculating the magnetizations of an Ising model, we focus on the mean field approximation, which leads to a self-consistent equation Opper and Winther (2001):

m_i = \tanh\Big( h_i(\lambda) + \sum_{j \neq i} J_{ij}(\lambda) m_j \Big)    (10)

This mean field approximation provides a computationally efficient tool that approximates Bayesian feature selection for linear regression, requiring only the calculation of the Pearson correlations and solution of Equation 10.
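A minimal fixed-point solver for the mean field condition, assuming the self-consistency takes the standard naive mean-field form m_i = tanh(h_i + Σ_j J_ij m_j); the fields h and couplings J are taken as inputs here because Equations 7 and 8 are not reproduced above, and the function name and damping factor are illustrative choices.

    import numpy as np

    def mean_field_magnetizations(h, J, m0=None, tol=1e-8, max_iter=10_000):
        """Solve m_i = tanh(h_i + sum_{j != i} J_ij m_j) by damped fixed-point iteration."""
        h = np.asarray(h, dtype=float)
        J = np.array(J, dtype=float)
        np.fill_diagonal(J, 0.0)                      # no self-coupling
        m = np.zeros(len(h)) if m0 is None else np.array(m0, dtype=float)
        for _ in range(max_iter):
            m_new = np.tanh(h + J @ m)
            if np.max(np.abs(m_new - m)) < tol:
                return m_new
            m = 0.5 * m + 0.5 * m_new                 # damped update improves stability
        return m

    # Posterior probabilities of feature relevance follow from the magnetizations:
    # P(s_i = 1 | y) = (1 + m_i) / 2.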

III.2 Computing the Feature Selection Path

As with other approaches to penalized regression, our expressions depend on a free parameter (λ) that determines the strength of the prior distribution. As it is usually difficult, in practice, to choose a specific value of λ ahead of time, it is often helpful to compute the feature selection path, i.e. to compute the posterior probabilities over a wide range of λ’s. Indeed, computing the variable selection path is a common practice when applying other feature selection techniques such as LASSO regression. To obtain the mean field variable selection path as a function of λ, we notice that m_i → 0 as λ → ∞ and so define the recursive formula:

(11)

which is iterated with a small step size in λ, warm-starting each solution from the previous one. The same step size was used in all of the examples presented below.
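A sketch of the path computation described above, reflecting one reading of the recursion in Equation 11 (whose exact step parameterization is not reproduced here): start deep in the strongly regularized regime, where the magnetizations vanish, and step λ downward, warm-starting each mean-field solve from the previous solution. The function names selection_path and fields_and_couplings are illustrative; fields_and_couplings(lam) stands in for Equations 7 and 8 and must be supplied by the reader, and solver can be the mean_field_magnetizations routine sketched earlier.

    import numpy as np

    def selection_path(fields_and_couplings, lambda_grid, solver):
        """Magnetizations m_i(lambda) along a decreasing grid of penalties.

        fields_and_couplings(lam) -> (h, J) is a user-supplied stand-in for
        Eqs. (7)-(8); solver is e.g. mean_field_magnetizations from above.
        """
        lambda_grid = np.sort(np.asarray(lambda_grid, dtype=float))[::-1]
        m = None                                     # m_i -> 0 as lambda -> infinity
        path = []
        for lam in lambda_grid:
            h, J = fields_and_couplings(lam)
            m = solver(h, J, m0=m)                   # warm start from previous lambda
            path.append((lam, (1.0 + m) / 2.0))      # posterior probabilities P(s_i = 1)
        return path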

III.3 Remarks

The BIA provides a computationally efficient framework to calculate posterior probabilities of feature relevance as a function of λ without Monte Carlo simulations. The local fields and couplings of the BIA (Eqs. 7, 8) are simple functions of Pearson correlation coefficients. The most challenging computational aspect of feature selection with the BIA is the large amount of memory required to store the coupling matrix for very high dimensional problems. One potential route for decreasing the memory requirement is to use adaptive thresholding estimators for the correlations to obtain a sparse coupling matrix, though we do not explore this idea further in this work because memory requirements did not cause a problem for our examples, even when considering datasets with tens of thousands of features.
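One simple way to implement the thresholding idea mentioned above is sketched below using scipy; the hard threshold value is an arbitrary placeholder and, for truly high dimensional problems, one would threshold the correlations block by block rather than forming the dense matrix first.

    import numpy as np
    from scipy import sparse

    def thresholded_correlations(X, threshold=0.1):
        """Sparse matrix of inter-feature correlations with |r_ij| <= threshold set to zero."""
        R = np.corrcoef(X, rowvar=False)
        np.fill_diagonal(R, 0.0)                     # couplings carry no diagonal term
        R[np.abs(R) <= threshold] = 0.0              # hard-threshold small correlations
        return sparse.csr_matrix(R)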

To first order in λ^{-1}, the posterior distribution corresponds to an Ising model with nonzero fields h_i but vanishing couplings J_ij = 0. That is, the spin variables representing feature relevance are independent, and the probability that a feature is relevant is only a function of its squared correlation with the response: ⟨s_i⟩ > 0 when the first-order field h_i is positive and ⟨s_i⟩ < 0 when it is negative. Therefore, the BIA demonstrates that methods that rank features by their squared Pearson correlation with the response, such as Sure Independence Screening Fan and Lv (2008), are actually performing a first order approximation to Bayesian feature selection in the strongly regularized limit.
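To first order, then, screening reduces to ranking features by their squared correlation with the response, as in the short sketch below (an illustration of the point, not code from the paper).

    import numpy as np

    def rank_by_squared_correlation(X, y, k=None):
        """Rank features by r_iy^2, the squared Pearson correlation with the response."""
        n = len(y)
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        r_iy = Xs.T @ ys / n                          # Pearson correlations with y
        order = np.argsort(r_iy ** 2)[::-1]           # most relevant features first
        return order if k is None else order[:k]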

The couplings between the spin variables representing feature relevance enter into the BIA at second order in λ^{-1}. A positive (or ‘ferromagnetic’) coupling between spins i and j favors models that include both features i and j, whereas a negative (or ‘antiferromagnetic’) coupling favors models that include one feature or the other, but not both. In general, the coupling terms are antiferromagnetic for highly correlated variables, which minimizes the redundancy of the feature set.

IV Examples

We have chosen three examples to illustrate different characteristics of the BIA for Bayesian feature selection. (A) First, we consider regression problems with features that have a constant correlation. We present some simple analytic expressions in the large-p limit that illustrate how different aspects of the problem affect feature selection performance, and study some simulated data. (B) Next, we analyze a dataset on the prediction of bodyfat percentage from various body measurements. The number of features (p = 12) is small enough that we can compute the exact posterior probabilities and, therefore, directly assess the accuracy of the BIA for these data. (C) Finally, we demonstrate the applicability of the BIA for feature selection on high dimensional regression problems by examining a dataset relating the expression of 28,395 genes to the susceptibility of soybean plants to a pathogen.

IV.1 Features with a Constant Correlation

Intuitively, one may expect that correlations between features are detrimental to feature selection. Indeed, previous observations on feature selection with LASSO have demonstrated the negative impact of inter-feature correlations on variable selection performance Tibshirani (1996); Zou and Hastie (2005). Given these observations, we use this section to analyze a simple model of BIA feature selection that allows us to examine many of the characteristics that influence feature selection performance. Specifically, we consider a simple, analytically tractable model in which we are given p features that are correlated with each other with a constant Pearson correlation coefficient. The response, y, is a linear function of the first several variables, which have equal true regression coefficients; that is, y is the sum of these relevant features plus a Gaussian noise. We are interested in studying the behavior of this model when the number of features is large (p → ∞). To simplify analytic expressions, it is helpful to measure the number of samples and the number of relevant features relative to the total number of features p. Furthermore, we assume that the correlation between features decreases as p grows so that the correlation between the response and each relevant feature stays constant in the large-p limit.
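A sketch for simulating this kind of equicorrelated model (an illustrative construction, with placeholder parameter values): features with a constant pairwise correlation can be generated by mixing independent noise with a single shared factor, and the response is a noisy linear combination of the first few features.

    import numpy as np

    def simulate_equicorrelated(n, p, rho, n_relevant, noise=1.0, seed=0):
        """Simulate features with constant pairwise correlation rho and a linear response."""
        rng = np.random.default_rng(seed)
        shared = rng.standard_normal((n, 1))                       # common factor
        noise_x = rng.standard_normal((n, p))
        X = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * noise_x   # corr(x_i, x_j) = rho
        y = X[:, :n_relevant].sum(axis=1) + noise * rng.standard_normal(n)
        X = (X - X.mean(axis=0)) / X.std(axis=0)                   # standardize features
        y = (y - y.mean()) / y.std()                               # standardize response
        return X, y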

Figure 1a presents an example feature selection path computed using the BIA for a simulation of this model. The variable selection path was generated for data simulated from the constant-correlation linear model described above. Figure 1a demonstrates that all but one of the relevant features (red) have higher posterior probabilities than the irrelevant features (black) as long as λ ≳ λ*. In fact, there is a clear gap in posterior probability separating the relevant and irrelevant features, and the correct features can be easily selected by visual inspection of the feature selection path in Figure 1a. Beyond the threshold λ* of the penalty parameter the BIA breaks down and its feature selection performance deteriorates, as demonstrated by the mixing of the probabilities for the relevant (red lines) and irrelevant (black lines) features in Figure 1a.

Figure 1: Performance of BIA feature selection. a) An example variable selection path as a function of decreasing regularization. The relevant variables are red, and the irrelevant variables are black. The dashed vertical line marks λ = λ*, the estimated breakdown point of the approximation. b) A phase diagram illustrating the regions of parameter space where the average feature selection performance is good (in the sense defined in the text).

Of course, we expect that the performance of the BIA for feature selection will vary depending on the characteristics of the problem. The model with equally correlated features provides a simple scenario to study which characteristics affect feature selection performance, because the correlations between the features and the standardized response can be easily computed analytically for the large sample size limit where we can neglect sample-to-sample fluctuations.

The spins characterizing the feature selection problem can be divided into two groups: relevant features, with nonzero true coefficients and magnetization m_+, and irrelevant features, with zero true coefficients and magnetization m_−. Note that an algorithm that performs perfect variable selection will have m_+ = 1 and m_− = −1. The Pearson correlation coefficient of a relevant feature with the standardized response is given by:

where is an inverse signal-to-noise ratio. Similarly, the Pearson correlation coefficient of an irrelevant variable () with the standardized response is:

If we choose λ to ensure that the problem is always in the strongly regularized regime, the magnetizations can be computed explicitly, giving:

In general, we say that feature selection performance is good, on average, as long as m_+ > 0 > m_−, because relevant features then have posterior probabilities greater than 1/2 and irrelevant features have posterior probabilities less than 1/2. Figure 1b shows that the average feature selection performance is good in this sense within a large volume of the phase space. Specifically, this occurs when:

However, m_+ > m_− even if the stronger condition m_+ > 0 > m_− is not satisfied. As a result, there is always a gap between the posterior probabilities of the relevant and irrelevant features.

Feature selection is most difficult if the features are correlated and if the number of relevant features is large compared to the sample size. Moreover, note that our choice of λ leads to small magnetizations in the large-p limit, indicating a high degree of uncertainty even in the regime in which the BIA is accurate and in which the signs of the magnetizations are correct. Our choice of λ provides much stronger regularization than the estimated breakdown point λ*. As a result, the absolute magnitudes of the magnetizations are small, even compared to other values of λ for which the BIA still holds.

IV.2 Bodyfat Percentage

Figure 2: Comparison of exact Bayesian marginal probabilities to the BIA for the bodyfat data. a) Exact Bayesian marginal probabilities for decreasing regularization. b) BIA approximations of the marginal probabilities for decreasing regularization. c) Root Mean Squared Error (RMSE) between the exact and BIA probabilities as a function of decreasing regularization. The dashed vertical line marks λ = λ*, the estimated breakdown point of the approximation. The variables are color coded (blue to red) by increasing squared Pearson correlation coefficient with the response (bodyfat percentage).

Bodyfat percentage is an important indicator of health, but obtaining accurate estimates of bodyfat percentage is challenging. For example, underwater weighing is one of the most accurate methods for measuring bodyfat percentage, but it requires special equipment, e.g. a pool. Here, we analyze a well-known dataset obtained from StatLib (http://lib.stat.cmu.edu/datasets/) on the relationship between bodyfat percentage and various body measurements from men Penrose et al. (1985). The features included in our regression are age and body mass index, as well as circumference measurements of the neck, chest, waist, hip, thigh, knee, ankle, upper arm, forearm and wrist. All of the data were standardized to have mean zero and variance one. Therefore, there are 2^12 = 4096 potential combinations of features.

For our purposes, the most interesting aspect of the bodyfat dataset is that the number of features is small enough to compute the posterior probabilities exactly using Equation 5 by enumerating all of the feature combinations. The exact posterior probabilities as a function of λ are shown in Figure 2a. In the figure, we have color coded the variables from blue to red in terms of increasing squared Pearson correlation coefficients with bodyfat percentage: (blue) ankle, body mass index, age, wrist, forearm, neck, upper arm, knee, thigh, hip, chest, waist (red). The posterior probabilities computed from the BIA (Equation 11) are shown in Figure 2b.
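For a dataset this small, exhaustive enumeration of all 2^12 indicator vectors is feasible. The sketch below does this with an assumed log-posterior of the form -(1/2) ln det(I + X_s^T X_s / λ) - (n/2) ln ESS(s), chosen to match the 'entropic plus energetic' structure described in Section II; the paper's Equation 5 may contain additional hyperparameter-dependent terms, so the numbers produced here are illustrative rather than a reproduction of Figure 2a. The function names are likewise illustrative.

    import itertools
    import numpy as np

    def log_posterior(X, y, s, lam):
        # Assumed form: entropic term (log-determinant) plus energetic term (residuals).
        n = len(y)
        cols = np.flatnonzero(s == 1)
        if len(cols) == 0:
            return -0.5 * n * np.log(float(y @ y))    # empty model: no entropic term
        Xs = X[:, cols]
        theta_s = np.linalg.solve(lam * np.eye(len(cols)) + Xs.T @ Xs, Xs.T @ y)
        resid = y - Xs @ theta_s
        entropic = -0.5 * np.linalg.slogdet(np.eye(len(cols)) + Xs.T @ Xs / lam)[1]
        energetic = -0.5 * n * np.log(float(resid @ resid))
        return entropic + energetic

    def exact_marginals(X, y, lam):
        """Exact P(s_i = 1 | y) by enumerating all 2^p models (p must be small)."""
        n, p = X.shape
        log_posts, configs = [], []
        for bits in itertools.product([-1, 1], repeat=p):
            s = np.array(bits)
            log_posts.append(log_posterior(X, y, s, lam))
            configs.append(s)
        log_posts = np.array(log_posts)
        w = np.exp(log_posts - log_posts.max())
        w /= w.sum()                                  # normalized posterior weights
        S = np.array(configs)
        return w @ (S == 1).astype(float)             # marginal probability per feature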

Comparing Figures 2a-b demonstrates that the posterior probabilities computed from the BIA are very accurate for λ ≫ λ*, where λ* is set by the sample size and the root-mean-squared correlation r̄ between features as in Equation 9. However, the approximation breaks down for λ ≲ λ* as expected. Figure 2c provides another representation of the breakdown of the BIA upon approaching the breakdown point of the penalty (λ*). The Root Mean Squared Error between the exact and BIA probabilities is sigmoidal as a function of decreasing regularization, with an inflection point close to λ*.

In the strongly regularized regime with λ ≫ λ*, the exact Bayesian probabilities and those computed using the BIA both rank waist and chest circumference as the most relevant features. Below the breakdown point of the penalty parameter, however, the BIA suggests solutions that are too sparse. That is, it underestimates many of the posterior probabilities describing whether or not the features are relevant. Far below the breakdown point of the penalty parameter (beyond the range of the graph in Figure 2), the BIA ranks age and body mass index as the most relevant variables even though these have some of the smallest correlations with the response. Age and body mass index also become increasingly important for small λ in the exact calculation, though they are never ranked as the most relevant variables. The change in the rankings of the features as a function of λ highlights the importance of the coupling terms J_ij that penalize correlated features.

IV.3 Gene Expression

In 2010, the Dialogue on Reverse Engineering Assessment and Methods (DREAM) Prill et al. (2010) initiative issued a challenge to predict the response of soybean plants to a pathogen from data on gene expression Zhou et al. (2009). The training data consist of the responses of different soybean plants to a pathogen along with the expressions of 28,395 genes. The team (Loh et al. Loh et al. (2011)) that achieved the highest rank correlation on a blind test set of 30 other soybean plants trained their model using elastic net regression to predict the ranks of the responses in the training set. The ranks were used rather than the actual values of the responses to mitigate the effects of outliers, and the value of the penalty parameter was chosen using cross validation. Loh et al. found that their cross validation procedure for elastic net regression favored sparse models with only a few features, and they highlighted 12 of these features that were frequently chosen by their procedure Loh et al. (2011).

Clearly, this DREAM-5 soybean dataset presents a severely underdetermined problem, with the number of features exceeding the sample size by two orders of magnitude. Therefore, it is unsurprising, perhaps, that even the best teams achieved only modest performance on the test data Loh et al. (2011). Nevertheless, the soybean gene expression dataset presents a good benchmark to compare Bayesian feature selection with the BIA to feature selection using cross validated penalized regression for a very high dimensional inference problem.

Figure 3: Feature selection path for the gene expression data. The problem is severely under-determined, involving the prediction of a quantitative phenotype from the expressions of 28,395 genes given a small sample size, and, therefore, the posterior probabilities remain close to 1/2. a) Features selected in a previous study (red lines) by cross validation with the elastic net have high ranking posterior probabilities. The gray scale represents the outer 10% quantiles (light gray), the outer 10% - 25% quantiles (gray), and the middle 50% quantiles (dark gray). b) The median (solid black line) and mean (dashed red line) percentage of features with higher posterior probabilities than those identified by Loh et al. The vertical axis is a logarithmic scale. The dashed vertical line marks λ = λ*, the estimated breakdown point of the approximation.

We used the BIA to compute the posterior probabilities for all of the features as a function of λ. Following the lead of the team that won the DREAM-5 challenge, we chose the response variable to be the ranks of the responses of the soybean plants to the pathogen rather than the actual values. As before, all of the data were standardized to have mean zero and variance one. It is not particularly helpful to plot the posterior probabilities of all 28,395 features. Therefore, Figure 3a compares the posterior probabilities of the 12 features highlighted by Loh et al. (red lines) to the distribution of posterior probabilities for all of the features (gray area). Here, the distribution of posterior probabilities is represented by quantiles; the gray scale represents the outer 10% quantiles (light gray), the outer 10% - 25% quantiles (gray), and the middle 50% quantiles (dark gray). Visual inspection of Figure 3a suggests that the 12 features identified by Loh et al. have some of the highest posterior probabilities among all features. Similarly, Figure 3b shows that only a small percentage of features have higher posterior probabilities than those identified by Loh et al., demonstrating that there is generally good agreement between features that are predictive (i.e. those that perform well in cross validation) and those with high posterior probabilities computed with the BIA.

Although our analysis of the soybean gene expression data identifies similar features as cross validated elastic net regression, the posterior probabilities all fall in a narrow range around 1/2. The small range of posterior probabilities around the value representing random chance (1/2) is consistent with the highly variable out-of-sample performance discussed by Loh et al. The fact that there is no strong evidence favoring the selection of any of the features is not really surprising considering the vastly underdetermined nature of the problem (p ≫ n). Moreover, the gene expression features have a moderate root-mean-square correlation r̄. As a result, the critical value of the penalty parameter λ* is huge compared to the breakdown point λ* = n that would apply if we were to assume that r̄ = 0. Given that smaller λ generally leads to magnetizations that are larger in absolute value, it is clear that ignoring the correlations between genes vastly inflates estimates of certainty in gene relevance. This highlights the importance of strong regularization procedures that specifically account for correlation between genes in high dimensional genomic studies.

V Discussion

To summarize, we have shown that Bayesian feature selection for L2 penalized regression, in the strongly regularized regime, corresponds to an Ising model, an approach we call the Bayesian Ising Approximation (BIA). Mapping the posterior distribution to an Ising model with simple expressions for the local fields and couplings using a controlled approximation opens the door to analytical studies of Bayesian feature selection using the vast number of techniques developed in physics for studying the Ising model. In fact, our analyses can be generalized to study Bayesian feature selection for many statistical techniques other than linear regression, as well as other prior distributions Fisher and Mehta (in prep.). From a practical standpoint, the BIA provides an algorithm to efficiently compute Bayesian feature selection paths for L2 penalized regression. Using our approach, it is possible to compute posterior probabilities of feature relevance for very high dimensional datasets such as those typically found in genomic studies.

Unlike most previous work on feature selection, the BIA is ideally suited for large genomic datasets where the number of features, p, can be much greater than the sample size, n. The underlying reason for this is that we work in the strongly-regularized regime where the prior always has a large influence on the posterior probabilities. This is in contrast to previous works on penalized regression and related Bayesian approaches that have focused on the “weakly-regularized regime” where the effect of the prior is assumed to be small. Moreover, we have identified a sharp threshold for the regularization parameter where the BIA is expected to break down. This threshold depends on the sample size, n, the number of features, p, and the root-mean-squared correlation between features, r̄. The threshold at which the BIA breaks down occurs precisely at the transition from the strongly-regularized to the weakly-regularized regime, where the prior and the likelihood have a comparable influence on the posterior distribution.

Our work also highlights the importance of accounting for correlations between features when assessing statistical significance in large data sets. In general, we have found that when the number of features is large, even small correlations can cause a huge reduction in the posterior probabilities of features. For example, our analysis of a dataset including the expression of 28,395 genes in soybeans demonstrates that the resulting posterior probabilities of gene relevance may be very close to the value representing random chance when p ≫ n and the genes are even moderately correlated. This is likely to have important implications for assessing the results of GWAS, where such correlations are often ignored.

Another implication of the small marginal posterior probabilities resulting from correlations among potential features is that it is probably not reasonable to choose a posterior probability threshold for judging significance on very high dimensional problems. Instead, the BIA can be used as part of a two-stage procedure in the same manner as Sure Independence Screening Fan and Lv (2008), where the BIA is applied to rapidly screen irrelevant variables, i.e. those that have low rank in posterior probability, before applying a more computationally intensive cross validation procedure to infer the regression coefficients. The computational efficiency of the BIA and the existence of a natural threshold for the penalty parameter below which the BIA breaks down make it ideally suited for such two-stage procedures.

VI Acknowledgements

VII Appendix

VII.1 Bayesian Linear Regression

In this section, we briefly review the necessary aspects of Bayesian linear regression. This entire section follows standard arguments, the details of which can be found in many textbooks on Bayesian statistics, e.g. O’Hagan et al. (2004), and also appears in the main text; we repeat it here so that the Appendix is self contained. The goal of linear regression is to infer the set of coefficients θ_j for j = 1, …, p that describe the relationship y = x · θ + ε from observations (y_i, x_i) for i = 1, …, n. Here, x is a (p × 1) vector of features and ε is a Gaussian distributed random variable with unknown variance σ². Without loss of generality, we will assume throughout this paper that the data are standardized so that the response and each of the features have mean zero and unit variance, and it is not necessary to include an intercept term in the regression. Penalized least-squares methods estimate the regression coefficients by minimizing a convex objective function of the form:

\hat{\theta} = \mathrm{argmin}_{\theta} \Big[ \sum_{i=1}^{n} ( y_i - \vec{x}_i \cdot \vec{\theta} )^2 + \lambda f(\vec{\theta}) \Big]    (S1)

where f(θ) is a function that penalizes large regression coefficients and λ is the strength of the penalty. Common choices for the penalty function include f(θ) = Σ_j θ_j² for L2 penalized or ‘Ridge’ regression Hoerl and Kennard (1970), and f(θ) = Σ_j |θ_j| for L1 penalized or LASSO regression Tibshirani (1996). The standard least-squares (and maximum likelihood) estimate is recovered by setting λ = 0, where X is the design matrix with rows x_i. Adding a penalty to the least-squares objective function mitigates the instability that results from computing the inverse of the matrix X^T X. In the case of the L1 penalty, many of the regression coefficients end up being shrunk exactly to zero, resulting in a type of automatic feature selection Tibshirani (1996); Zou and Hastie (2005); Candes and Tao (2007).

Bayesian methods combine the information from the data, described by the likelihood function, with a priori knowledge, described by a prior distribution, to construct a posterior distribution that describes one’s knowledge about the parameters after observing the data. In the case of linear regression, the likelihood function is a Gaussian:

P(y | X, \vec{\theta}, \sigma^2) = (2 \pi \sigma^2)^{-n/2} \exp\Big( -\frac{1}{2 \sigma^2} \sum_{i=1}^{n} ( y_i - \vec{x}_i \cdot \vec{\theta} )^2 \Big)    (S2)

In this work, we will use standard conjugate prior distributions for θ and σ², given by P(θ, σ² | s) = P(θ | σ², s) P(σ²), where:

(S3)
(S4)

These prior distributions were chosen so that the posterior distribution has a simple closed-form expression. Here, we have introduced a vector (s) of indicator variables such that s_i = 1 if feature i is included in the regression and s_i = −1 if it is not. We also have to specify a prior for the indicator variables, which we will set to a flat prior for simplicity. In principle, the hyperparameters of the prior over σ² and the penalty parameter, λ, are free parameters that must be specified ahead of time and reflect our prior knowledge. We will discuss these parameters more in the next section.

We have set up the problem so that identifying which features are relevant is equivalent to identifying those features for which s_i = 1. Therefore, we need to compute the posterior distribution for s, which can be determined from Bayes’ theorem:

(S5)

where C represents a constant and ESS(s) is the sum of the squared residual errors. In this expression, |s| is the number of variables with s_i = 1, I is the identity matrix, and X_s is a restricted design matrix that contains only the columns corresponding to features with s_i = 1. The sum of the squared residual errors is given by ESS(s) = ||y − X_s θ̂_s||², where θ̂_s = (λ I + X_s^T X_s)^{-1} X_s^T y is the Bayesian estimate for the regression coefficients corresponding to those variables for which s_i = 1.

VII.2 Strongly Regularized Expansion

Now, we will perturbatively study the model selection posterior distribution about the limit where λ is large. It is helpful to rewrite the expressions in terms of the inverse penalty λ^{-1}, which will be the small parameter that we use in the expansion. The log-posterior is:

where the remaining quantities are as defined above. We will expand in powers of λ^{-1} to second order. For now, we will assume that higher order terms can be neglected, and we will ask later on when this assumption breaks down. We need:

and

Now, we can calculate:

and

Therefore, (up to a constant term):

Putting things together the log-posterior is (up to a constant term):

To simplify things a little, we will assume that the hyperparameters of the prior over σ² can be neglected, which gives:

(S6)

VII.3 Breakdown of the Approximation

In a truly Bayesian setting, the hyperparameters of the prior over σ² and the penalty parameter λ are chosen ahead of time to reflect the prior knowledge of the statistician. By contrast, L2 penalized regression is also commonly used in a frequentist setting with λ chosen by cross-validation. In any case, the inclusion of the penalty parameter helps to regularize the inverse of X_s^T X_s, which is often of low rank. Indeed, in the high-dimensional setting with p > n, the matrix X^T X has a maximum rank of n and is, therefore, never invertible. Note, however, that we can always compute the inverse of λ I + X_s^T X_s for any λ > 0, because the sum of a positive definite matrix and a positive semi-definite matrix is positive definite. The convergence of the series expansion for this inverse is, by and large, the factor determining the convergence of the series expansion for the log-posterior. Let us expand the inverse of λ I + X_s^T X_s about λ^{-1} = 0 as

(\lambda I + X_s^T X_s)^{-1} = \lambda^{-1} \big[ I - \lambda^{-1} X_s^T X_s + \lambda^{-2} (X_s^T X_s)^2 - \cdots \big]
The geometric series converges as long as the largest eigenvalue of λ^{-1} X_s^T X_s is less than one, and truncating the series after the second order term leads to an error of third order in λ^{-1}. Thus, to ensure that the series converges for all s, we need λ to exceed the largest eigenvalue of X^T X, where X is the design matrix for all features.
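This convergence claim is easy to check numerically; the sketch below (an illustration, not the paper's code) compares the second-order truncation of the geometric series for (λI + A)^{-1}, with A = X^T X, to the exact inverse and shows that the truncation error is controlled by the ratio of the largest eigenvalue to λ.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 20
    X = rng.standard_normal((n, p))
    A = X.T @ X
    lam_max = np.linalg.eigvalsh(A).max()            # largest eigenvalue of X^T X

    for lam in [10.0 * lam_max, 0.5 * lam_max]:
        exact = np.linalg.inv(lam * np.eye(p) + A)
        # Second-order truncation of the geometric (Neumann) series.
        approx = (np.eye(p) - A / lam + (A @ A) / lam**2) / lam
        err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
        print(f"lambda / Lambda_max = {lam / lam_max:.2f}, relative error = {err:.3g}")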

The convergence of the series is dominated by the largest eigenvalue, Λ_max, of X^T X. Thus, we expect the BIA series to converge if λ ≫ Λ_max. We can place a bound on Λ_max using the Gershgorin Circle Theorem Varga (2010). Since the x_i are standardized variables, as the number of samples n goes to infinity the matrix element in row i and column j of n^{-1} X^T X converges to the correlation between x_i and x_j, r_ij. Plugging this result into the Gershgorin Circle Theorem gives a bound for the largest eigenvalue, namely Λ_max ≤ n(1 + max_i Σ_{j≠i} |r_ij|). This suggests that the BIA holds when λ greatly exceeds this bound. For large p, we can approximate the sum of correlations by (p − 1) times the root-mean-squared correlation between features, r̄. This defines a natural scale,

\lambda^* = n \big[ 1 + (p - 1) \bar{r} \big]    (S7)

for the penalty parameter at which the BIA is expected to break down. We expect the BIA to be accurate when λ ≫ λ* and to break down when λ ≲ λ*.

VII.4 Bayesian Ising Approximation

Equation S6 still contains design matrices (X_s) that are restricted to the features for which s_i = 1. To remove these restrictions, we express the restricted sums using the indicator variables s_i, with s_i = 1 if variable i is included in the model and s_i = −1 if it is not. Also, let r̂ denote the vector of correlations between the features and the response, and let C be the matrix with elements r_ij. Note that standardizing all of the data leads to n^{-1} X^T y = r̂ and n^{-1} X^T X = C, where r_ij is the Pearson correlation between variables x_i and x_j. Now, we will rewrite Eq. S6 in terms of the indicator variables (s), r̂, and C. We have: