The Phylogenetic LASSO and the Microbiome

07/29/2016
by Stephen T Rush, et al.

Scientific investigations that incorporate next generation sequencing involve analyses of high-dimensional data, where the need to organize, collate and interpret the outcomes is pressingly important. Currently, data can be collected at the microbiome level, leading to the possibility of personalized medicine whereby treatments can be tailored at this scale. In this paper, we lay down a statistical framework for this type of analysis with a view toward the synthesis of products tailored to individual patients. Although the paper applies the technique to data for a particular infectious disease, the methodology is sufficiently rich to be expanded to other problems in medicine, especially those in which coincident '-omics' covariates and clinical responses are simultaneously captured.

1 Introduction

The incorporation of the microbiome as a covariate for biological or clinical outcomes is becoming medically and scientifically important, especially in understanding the etiology and treatment of specific diseases. In addition, this aids in improved understanding of functional pathways such as the gut-brain axis; see for example [22] and [4]. Through sophisticated data capture methods, such as next generation sequencing, researchers are able to rapidly map the complex microbiome [20]. Nevertheless, this line of research is challenging due to the sheer high-dimensional nature of the microbiome, as well as the ultra-high-dimension, small-sample-size problem. An added complexity is the manner in which the covariates are associated with each other. In terms of the 'tree-of-life' schematic, bacterial groups at different taxon levels have an associated phylogeny. Recent works incorporating taxonomic information use various methods to identify important taxonomic features, [6], [23], [32], [13], and [24]. Common to all these papers is the incorporation of the underlying microbiology to establish sophisticated computational schemes over and above the already demanding computational methods required to obtain the microbiome data. Due to the complexity of these methods, validation is established mainly through simulations. In this paper we profile a variable selection method that also incorporates the tree-of-life schema, similar in scope to the above papers but different in terms of how the regularization is carried out. Extensive simulations are performed, but we also obtain theoretical oracle properties for our method, which, to our knowledge, is novel for microbiome research.

We now provide a summary of the paper. As motivation, Section 2 provides the microbiology background. Although we are addressing metagenomics through next generation sequencing of the 16S rRNA gene, the methodology is sufficiently rich to incorporate other '-omics' structures. In Section 3 we develop our penalization of the phylogenetic tree. An application to an infectious disease is discussed in Section 4. We include a detailed discussion because this infectious disease is currently a very important public health concern, and because it is the main motivation behind the development of the methods in this paper. Detailed simulation results are presented in Section 5, with comparisons made to other variable selection procedures. This is followed by Section 6, which establishes the main theoretical properties, with all of the proofs provided in Appendix A.

2 A microbiology primer

Metagenomics, like the other '-omics', exhibits structural relationships among its covariates. Bacteria exhibit a tree-like relationship with each other, one that is often violated by lateral gene transfer. As a result, their systematic taxonomy is constantly in flux; see for instance the paraphyletic Clostridium [28, 17]. Thus their OTU (operational taxonomic unit) proxies, typically based on the 16S rRNA gene, also exhibit a tree-like structure, with patterns of cycles at deep taxon classification. The attainable resolution depends on the length of the 16S rRNA region, the degree of lateral gene transfer, and the reliability of the reads.

In many problems, it is important to develop a means of selecting the OTUs having dominating roles in the microbial systems at hand. The relationship between OTUs is such that some of them may represent the same type of bacteria. Alternatively, some spurious OTUs are artifacts of the laboratory sequencing protocol. Rather than select OTUs on their individual merits, we want to select them based on their group affiliations.

Precisely identifying and characterizing bacteria currently requires genetic and metabolic analysis of clonal colonies in vitro. In studying the composition of microbiomes in vivo, we require a general descriptor. Absent molecular characterization, there is no general descriptor of bacterial groups, a property which constrained bacterial research for decades. A broader analysis of mixed bacterial communities is available, sacrificing the resolution of clonal studies. This involves the collection and sequencing of genes common to all bacteria, or to the group of interest. This sequencing may be scaled up to generate billions of sequences in tandem. We refer to this as MPS (massively parallel sequencing). For an in-depth description from a mathematical perspective, see [19].

In the present paper, we consider MPS data targeting the V3-V5 region of the 16S rRNA gene, whose merits we discuss in [9]. Crucially, this is a gene common to all bacteria with sufficient evolutionarily conserved variability to provide a basis for defining bacterial groups [29], albeit not the sole defining descriptor. MPS technologies are currently incapable of providing the full 16S sequence, so a hypervariable region is targeted, of which there are nine. 'Hypervariable' is understood as evolutionarily conserved but more variable than the average over the entire sequence. The various hypervariable regions provide differing resolutions in different bacterial groups. An analysis of the precision of each region for identifying pathogenic bacteria is presented in [3]. The V3-V5 region provides estimates similar to the full 16S gene in terms of species richness [30, 33], and it has been shown to provide accurate community structure and low-bias estimates of some taxa [1, 26]. A limitation is its poor species-level resolution, as found in [3].

To facilitate comparison between microbiomes, sequences are grouped into OTUs according to some similarity criterion. This is a data-driven proxy for species delineation; we use OTUs in an operational manner. OTUs consist of sequences which are phylogenetically close. Phylogenetic proximity between OTUs is determined by the inter-OTU phylogenetic divergence of the constituent sequences. This provides a structural relationship between the OTUs. One manner of presenting this structure is to assign phylotypes to the OTUs, that is, sequence-based consensus taxonomic classifications of the clusters. This induces a rooted tree hierarchy between the OTUs. This structure is exploited in the present article.

3 Generalized linear models and the Lasso

In this section, we provide details of what we call the phylogenetic LASSO (least absolute shrinkage and selection operator). As motivation, let us review the hierarchical H-LASSO, presented in [34] in the context of penalized least squares. Consider the linear model

(3.1)   $y = X\beta + \varepsilon$,

for response $y$, covariates $X$, parameter vector $\beta$, and error $\varepsilon$. Suppose the covariates may be assigned to mutually exclusive groups $G_1, \dots, G_K$. Let $\beta_{G_k}$ be the subvector of $\beta$ whose coefficients correspond to the covariates in $G_k$, $k = 1, \dots, K$, where $|G_k|$ denotes set cardinality. We have $p = \sum_{k=1}^{K} |G_k|$.

We can decompose $\beta$ by $\beta_j = \gamma_k \theta_j$ for $j \in G_k$, where $\gamma_k \ge 0$ and $\theta_j \in \mathbb{R}$. Clearly this decomposition is not unique, but this does not ultimately matter; see Lemma 6.2 below. Let $\gamma = (\gamma_1, \dots, \gamma_K)$, $\theta = (\theta_1, \dots, \theta_p)$, and define $T$ as the mapping $(\gamma, \theta) \mapsto \beta$ given by $\beta_j = \gamma_k \theta_j$, $j \in G_k$.

We define the H-LASSO estimator of (3.1) via $\hat\beta = T(\hat\gamma, \hat\theta)$, where $(\hat\gamma, \hat\theta)$ maximizes the penalized least squares function

$Q(\gamma, \theta) = -\tfrac{1}{2} \sum_{i=1}^{n} \bigl( y_i - x_i \cdot T(\gamma, \theta) \bigr)^2 \;-\; \lambda_\gamma \|\gamma\|_1 \;-\; \lambda_\theta \|\theta\|_1,$

where the tuning parameters $\lambda_\gamma, \lambda_\theta > 0$ are fixed, $\cdot$ denotes dot product, and $\|\cdot\|_q$ is the usual $\ell_q$-norm for $q \ge 1$. Here we penalize the groups by the middle term $\lambda_\gamma \|\gamma\|_1$ and the individual coefficients by the third term $\lambda_\theta \|\theta\|_1$.

It is shown in [34] that the penalty values redistribute geometrically so that we may replace them by a common coefficient $\lambda = \sqrt{\lambda_\gamma \lambda_\theta}$. This leads to the result that their regularization is equivalent to maximizing

$-\tfrac{1}{2}\, \| y - X\beta \|_2^2 \;-\; 2\lambda \sum_{k=1}^{K} \Bigl( \sum_{j \in G_k} |\beta_j| \Bigr)^{1/2}.$

The H-LASSO is thus a nonconcave variant of the group LASSO, [31].

While [34] directly treats least squares, their results generalize readily to arbitrary likelihood functions and possess an attractive oracle property; see Theorem 2 of [34]. Below, we provide the generalization to accommodate increased depth to the hierarchy by framing it in terms of a taxonomy.
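As a quick numerical check of this equivalence (a sketch of our own, using the notation assumed in the reconstruction above rather than the paper's exact symbols), the following compares the two-parameter penalty, minimized over decompositions $\beta_j = \gamma_k \theta_j$, with the equivalent group penalty $2\sqrt{\lambda_\gamma \lambda_\theta} \sum_k (\sum_{j \in G_k} |\beta_j|)^{1/2}$ on a toy grouping.

import numpy as np

# Toy coefficients and a grouping of the 6 covariates into K = 2 groups.
beta = np.array([0.5, -1.2, 0.0, 2.0, 0.3, -0.7])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
lam_gamma, lam_theta = 1.5, 0.6

# Equivalent single-penalty form: 2*sqrt(lam_gamma*lam_theta) * sum_k sqrt(sum_{j in G_k} |beta_j|).
equiv = 2.0 * np.sqrt(lam_gamma * lam_theta) * sum(
    np.sqrt(np.abs(beta[g]).sum()) for g in groups
)

# Two-parameter form, minimized over decompositions beta_j = gamma_k * theta_j:
# for fixed beta, the optimal gamma_k is sqrt(lam_theta * sum|beta_j| / lam_gamma).
two_param = 0.0
for g in groups:
    s = np.abs(beta[g]).sum()
    gamma_k = np.sqrt(lam_theta * s / lam_gamma)
    theta = beta[g] / gamma_k if gamma_k > 0 else np.zeros_like(beta[g])
    two_param += lam_gamma * gamma_k + lam_theta * np.abs(theta).sum()

print(equiv, two_param)  # the two penalty values agree (up to floating point)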

3.1 The Lasso

Our goal now is to create a hierarchical penalization scheme where there are multiple competing ways of grouping covariates. If these groupings have a nesting property, we can represent this as a tree; otherwise the graphical representation has cycles. In light of the above discussion, we use the convex log-likelihood function in lieu of least squares in the sequel. We frame variable selection in terms of OTU selection in metagenomic analysis, but stress that the method easily accommodates other '-omic' data forms, be they from metabolomics, proteomics, or transcriptomics. Further, as life betrays the ability to transmit genetic information horizontally, we incorporate the option to use taxonomies as degenerate as the tree-of-life itself. In particular, this allows us to consider overlapping classifications of the variables, extending analysis to, say, active metabolic pathways of microbial systems or human tissues. We introduce some terminology and notation to help bridge the mathematics and systematics at hand.

Definition 3.1

Let $x_1, \dots, x_p$ be the column vectors of the design matrix $X$. A taxon (plural taxa) is a subset $t$ of the indices $\{1, \dots, p\}$, and $X_t$ is the corresponding submatrix of $X$. A taxon level is a collection of pairwise disjoint taxa whose union is $\{1, \dots, p\}$.

Consider Table 1, below, as an example. We find, for instance, that the family Lactobacillaceae $= \{6, 7, 8\}$ and the class Clostridia $= \{9, \dots, 13\}$ are taxa, and that the collection of all families is a taxon level. In general, we subdivide the indices into taxa at each of the taxon levels, and refer to the $k$-th taxon of the $r$-th taxon level when indexing is needed.

Index Phylum Class Order Family OTU
1 Actinobacteria Actinobacteria Bifidobacteriales Bifidobacteriaceae
2 Actinobacteria Actinobacteria Bifidobacteriales Bifidobacteriaceae
3 Firmicutes Bacilli Lactobacillales Enterococcaceae
4 Firmicutes Bacilli Lactobacillales Enterococcaceae
5 Firmicutes Bacilli Lactobacillales Enterococcaceae
6 Firmicutes Bacilli Lactobacillales Lactobacillaceae
7 Firmicutes Bacilli Lactobacillales Lactobacillaceae
8 Firmicutes Bacilli Lactobacillales Lactobacillaceae
9 Firmicutes Clostridia Clostridiales Clostridiaceae 1
10 Firmicutes Clostridia Clostridiales Clostridiaceae 1
11 Firmicutes Clostridia Clostridiales Lachnospiraceae
12 Firmicutes Clostridia Clostridiales Lachnospiraceae
13 Firmicutes Clostridia Clostridiales Lachnospiraceae
Table 1: An example taxonomy generated for thirteen OTUs using five taxon levels.
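To make Definitions 3.1 through 3.3 concrete, the following small sketch (our own encoding, not taken from the paper) stores the Table 1 taxonomy as index sets and extracts taxa, taxon levels, and lineages; all function and variable names are illustrative.

from collections import defaultdict

# Table 1 as (phylum, class, order, family) labels for indices 1..13.
ROWS = {
    1: ("Actinobacteria", "Actinobacteria", "Bifidobacteriales", "Bifidobacteriaceae"),
    2: ("Actinobacteria", "Actinobacteria", "Bifidobacteriales", "Bifidobacteriaceae"),
    3: ("Firmicutes", "Bacilli", "Lactobacillales", "Enterococcaceae"),
    4: ("Firmicutes", "Bacilli", "Lactobacillales", "Enterococcaceae"),
    5: ("Firmicutes", "Bacilli", "Lactobacillales", "Enterococcaceae"),
    6: ("Firmicutes", "Bacilli", "Lactobacillales", "Lactobacillaceae"),
    7: ("Firmicutes", "Bacilli", "Lactobacillales", "Lactobacillaceae"),
    8: ("Firmicutes", "Bacilli", "Lactobacillales", "Lactobacillaceae"),
    9: ("Firmicutes", "Clostridia", "Clostridiales", "Clostridiaceae 1"),
    10: ("Firmicutes", "Clostridia", "Clostridiales", "Clostridiaceae 1"),
    11: ("Firmicutes", "Clostridia", "Clostridiales", "Lachnospiraceae"),
    12: ("Firmicutes", "Clostridia", "Clostridiales", "Lachnospiraceae"),
    13: ("Firmicutes", "Clostridia", "Clostridiales", "Lachnospiraceae"),
}
LEVEL_NAMES = ("phylum", "class", "order", "family")

def taxon_level(level):
    """A taxon level: pairwise disjoint taxa (index sets) covering all indices."""
    taxa = defaultdict(set)
    for idx, labels in ROWS.items():
        taxa[labels[LEVEL_NAMES.index(level)]].add(idx)
    return dict(taxa)

def lineage(idx):
    """The lineage of an index: its taxon at every level, ending in a singleton."""
    return ROWS[idx] + (frozenset({idx}),)

print(taxon_level("family")["Lactobacillaceae"])   # {6, 7, 8} -- a taxon
print(lineage(7))                                   # path from phylum down to the OTU itself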
Definition 3.2

Suppose we have a collection of taxon levels in which the final taxon level consists of the singletons $\{1\}, \dots, \{p\}$. We call such a collection a taxonomy.

There will be times when we wish to refer to those indices that belong to specific taxa at each level. We have:

Definition 3.3

Let be a -tuple where taxon belongs to taxon level . We refer to as a lineage, with associated indices .

We write in lieu of and define . When we wish to make it clear we are referring to a lineage’s taxon at a particular taxon level, we write , so that may be re-written .

In our example, the lineages may be read directly off Table 1, of which there are five. Figure 1 presents the taxonomy more intuitively. Each branch of the tree represents a taxon while each horizontal row represents a taxon level, numbered 1 through 5. Each lineage is a directed path from the root to a branch.

[Figure 1 shows the taxonomy of Table 1 as a rooted tree over taxon levels 1 through 5, with branches Actinobacteria → Actinobacteria → Bifidobacteriales → Bifidobacteriaceae; Firmicutes → Bacilli → Lactobacillales → Enterococcaceae, Lactobacillaceae; and Firmicutes → Clostridia → Clostridiales → Clostridiaceae 1, Lachnospiraceae; the parameter decomposition terms appear in parentheses at each branch.]

Figure 1: A graphical representation of the taxonomy as well as an illustration of the decomposition of the parameters following the taxonomy in Table 1.

Consider a generalized linear model with a known link function $g$, so that $g(\mathbb{E}[y \mid X]) = X\beta$. We decompose $\beta$ by attaching a group coefficient to every taxon at every non-singleton level: for an index $j$ with lineage $(t_1, \dots, t_{L-1}, \{j\})$, we write $\beta_j = \gamma_{t_1} \gamma_{t_2} \cdots \gamma_{t_{L-1}} \theta_j$, where the $\gamma$'s play the role of the group coefficients as before and $\theta_j$ is the individual effect. We write this decomposition as $\beta = T(\gamma, \theta)$, where $T$ is the map $(\gamma, \theta) \mapsto \beta$. This decomposition is illustrated in parentheses in Figure 1: the coefficients are recovered by multiplying the terms in a lineage. We extend the optimization criterion to:

(3.2)   $\ell(T(\gamma,\theta)) \;-\; \sum_{r=1}^{L-1} \lambda_r \sum_{k=1}^{K_r} |\gamma_{r,k}| \;-\; \lambda_L \sum_{j=1}^{p} |\theta_j|,$

where as usual $\ell$ is the log-likelihood, $\gamma_{r,k}$ is the group coefficient of the $k$-th taxon at level $r$, and $\lambda_r > 0$ for $r = 1, \dots, L$. The centre double sum is the groups penalty and addresses the increased depth of our taxonomy in contrast to [34]; $K_r$ is the number of taxa in taxon level $r$. When the lineage consists of a single taxon level, this reduces to the criterion in [34].

We appear to suffer an affluence of tuning parameters. Lemma 6.1, below, shows that criterion (3.2) is equivalent to the single tuning parameter criterion

(3.3)   $\ell(T(\gamma,\theta)) \;-\; \lambda \Bigl( \sum_{r=1}^{L-1} \sum_{k=1}^{K_r} |\gamma_{r,k}| \;+\; \sum_{j=1}^{p} |\theta_j| \Bigr).$

Let $\gamma_r$ denote the vector of group coefficients of the $r$-th taxon level. Intuitively, as the tuning parameters are redistributed, $\gamma$ and $\theta$ change in geometric response due to the relationship $\beta = T(\gamma, \theta)$. Choosing $\lambda_r = \lambda$, $r = 1, \dots, L$, we need only concern ourselves with one tuning parameter.
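To illustrate the decomposition and the single-tuning-parameter penalty, here is a short sketch of our own (the encoding, names, and numeric values are illustrative, and the penalty form follows the reconstruction of (3.3) above) that recovers each $\beta_j$ by multiplying the group coefficients along its lineage and evaluates the penalty for the Table 1 taxonomy.

import numpy as np

# Hypothetical encoding of the Table 1 taxonomy: for each non-singleton taxon level
# (phylum, class, order, family), map each of the 13 indices to a taxon label.
levels = {
    "phylum": ["Act"] * 2 + ["Fir"] * 11,
    "class":  ["Act"] * 2 + ["Bac"] * 6 + ["Clo"] * 5,
    "order":  ["Bif"] * 2 + ["Lac"] * 6 + ["Clos"] * 5,
    "family": ["Bifi"] * 2 + ["Ent"] * 3 + ["Lacto"] * 3 + ["Clo1"] * 2 + ["Lachno"] * 3,
}

def beta_from_decomposition(gamma, theta, levels):
    """Recover beta_j by multiplying the group coefficients along j's lineage
    with the individual effect theta_j (the product form sketched above)."""
    p = len(theta)
    beta = np.array(theta, dtype=float)
    for level, labels in levels.items():
        for j in range(p):
            beta[j] *= gamma[(level, labels[j])]
    return beta

def penalty(gamma, theta, lam):
    """Single-tuning-parameter penalty: lam * (sum of |group coefs| + sum of |theta|)."""
    return lam * (sum(abs(v) for v in gamma.values()) + np.abs(theta).sum())

# Example: unit group coefficients for every taxon, individual effects theta.
gamma = {(lvl, lab): 1.0 for lvl, labels in levels.items() for lab in set(labels)}
theta = np.zeros(13); theta[[5, 6, 7]] = 0.8   # a signal confined to Lactobacillaceae
print(beta_from_decomposition(gamma, theta, levels))
print(penalty(gamma, theta, lam=0.1))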

3.2 LASSO algorithm

The algorithm used to obtain the LASSO estimate relies on iterative adaptive reweighting, deriving from the reformulation of the LASSO in (3.3).

Let $(\hat\gamma(\lambda), \hat\theta(\lambda))$ be the map from $\lambda$ to the unique maximizer of (3.3) over $(\gamma, \theta)$. In this way $\hat\gamma$ and $\hat\theta$ may be viewed as projections onto the first and second elements of the maximizer. Let $\Lambda$ be a positive collection of tuning parameters. The pseudo-code is spelled out below. We remark that convergence is typically achieved quickly, so that the bottlenecks are the weighted LASSO problem and the calculation of the weights.


procedure phylasso
     for each tuning parameter in the grid do
          obtain an initial LASSO estimate
          for each iteration do
               update the weights from the current estimate
               solve the weighted LASSO for the new estimate with these weights
               break if the change in the estimate falls below a threshold
          end for
     end for
end procedure
Algorithm 1: The phylogenetic LASSO algorithm.
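As a rough illustration of how such an iterative reweighting loop can be organized (a sketch of our own, not the authors' implementation), the following uses scikit-learn's Lasso, absorbs per-coefficient weights by rescaling columns, and adopts an inverse-square-root group-mass weight update as an illustrative stand-in for the exact update used in Algorithm 1.

import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, weights, lam):
    """Solve a weighted LASSO by rescaling columns: penalizing w_j*|b_j| is the
    same as an ordinary LASSO on X_j / w_j followed by rescaling the coefficients."""
    Xs = X / weights                      # broadcast over columns
    fit = Lasso(alpha=lam, fit_intercept=True, max_iter=10000).fit(Xs, y)
    return fit.coef_ / weights

def phylasso_sketch(X, y, groups, lam, n_iter=20, tol=1e-6, eps=1e-8):
    """Illustrative reweighting loop: weights for each coefficient shrink as the
    l1 mass of its group grows, mimicking a group-level square-root penalty."""
    p = X.shape[1]
    beta = weighted_lasso(X, y, np.ones(p), lam)   # initial plain LASSO estimate
    for _ in range(n_iter):
        w = np.ones(p)
        for g in groups:                            # groups: list of index arrays
            w[g] = 1.0 / np.sqrt(np.abs(beta[g]).sum() + eps)
        new_beta = weighted_lasso(X, y, w, lam)
        if np.max(np.abs(new_beta - beta)) < tol:
            beta = new_beta
            break
        beta = new_beta
    return beta

# Example usage on synthetic data with two groups of five covariates.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
beta_true = np.r_[np.ones(5), np.zeros(5)]
y = X @ beta_true + 0.1 * rng.normal(size=60)
groups = [np.arange(0, 5), np.arange(5, 10)]
print(np.round(phylasso_sketch(X, y, groups, lam=0.05), 2))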

4 Application to C. difficile Infection

Clostridium difficile (C. difficile) infection (CDI) is the most frequent cause of healthcare-associated infections, and its rates are growing in the community [14, 2]. One of the major risk factors for developing CDI is the use of antibiotics. The healthy and diverse bacteria which reside within the colon are the major defense against the growth of C. difficile. Antibiotics kill these bacteria and allow C. difficile to multiply, produce toxins and cause disease. The available treatments for this infection are the antibiotics metronidazole, vancomycin and fidaxomicin. The efficacy of these antibiotics is limited, with high recurrence rates, [7].

An alternative to antibiotic therapy for CDI, in particular for recurrent and refractory diseases, is to infuse healthy gut bacteria directly into the colon of infected patients to combat C. difficile by a procedure known as faecal microbiota transplantation (FMT). FMT is a process in which a healthy donor’s stool is administered to an affected patient. This can be performed using a colonoscope, nasogastric tube, or enema. FMT serves to reconstitute the altered colonic flora, in contrast to treatment with antibiotic(s), which can further disrupt the establishment of key microbes essential in preventing recurrent CDI. The literature reveals a cumulative clinical success rate of over 90% in confirmed recurrent CDI cases, [7].

There is a growing interest in the microbiome of CDI patients following an FMT [22, 8, 25, 27, 21]. It is noted that there are vast differences in: the route of administration, with all forms covered; donor selection criteria, some used family members, while others used universal donors; sample sizes, although all studies had small sample sizes; and sequencing procedures and equipment. Despite these differences, there are two fundamental points of agreement across all studies. The first is that CDI patients have low diversity in their microbiome pre-FMT, and that after receiving an FMT their diversity increased. The second is that CDI patients who were successfully cured with FMT undergo changes in their microbiome which initially resemble the microbiome of their donors; see [22].

We will examine the microbiome data coming from a subset of the CDI patients treated with FMT by the second author [10], covering the period 2008–2012. Of the 17 selected patients, 13 responded to a single FMT. We used a taxonomy describing phylum, class, order, family, and genus. The predictors consisted of the relative abundances of 220 pre-FMT OTUs and 347 post-FMT OTUs. Here we consider logistic regression to model the response for two scenarios. First, we wish to know whether the pre-FMT microbiome could predict a clinical response to an FMT. Second, we wish to know whether the composition of the post-FMT microbiome could anticipate the need for additional FMTs. To select the tuning parameter for the -LASSO, we perform leave-one-out cross-validation (LOO-CV). The optimal tuning parameter was selected by AUC (area under the curve) and BS (Brier score). This is the same dataset in which phylum interaction was investigated, [15].
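For concreteness, here is a minimal sketch (not the paper's code) of leave-one-out cross-validation over a tuning-parameter grid scored by both AUC and Brier score; the plain L1-penalized logistic regression below is only a stand-in for the penalized fit used in the paper, and all names are ours.

import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.linear_model import LogisticRegression

def loo_cv_scores(fit_and_predict, X, y, lambdas):
    """Leave-one-out CV over a tuning-parameter grid, scored by AUC and Brier score.
    `fit_and_predict(X_train, y_train, x_test, lam)` is a user-supplied function
    returning the predicted probability for the held-out sample."""
    n = len(y)
    results = []
    for lam in lambdas:
        preds = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            preds[i] = fit_and_predict(X[keep], y[keep], X[i:i + 1], lam)
        results.append((lam, roc_auc_score(y, preds), brier_score_loss(y, preds)))
    return results

# Example usage with a plain L1-penalized logistic regression as a stand-in.
def l1_logistic(X_tr, y_tr, x_te, lam):
    model = LogisticRegression(penalty="l1", C=1.0 / lam, solver="liblinear")
    return model.fit(X_tr, y_tr).predict_proba(x_te)[0, 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(17, 30))                       # small n, as in the application
y = (X[:, 0] + 0.5 * rng.normal(size=17) > 0).astype(int)
for lam, auc, bs in loo_cv_scores(l1_logistic, X, y, lambdas=[0.1, 0.5, 1.0]):
    print(f"lambda={lam:.2f}  AUC={auc:.3f}  BS={bs:.3f}")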

We are challenged by a sparse predictor matrix as well as a small sample size. Since AUC tends to assign a perfect score to the null model by the way it handles ties, we incorporate BS to compensate for this effect. The results of logistic regression using the pre-FMT OTUs as covariates are captured in Figure 2. The globally optimal BS of 0.208 determines a corresponding tuning parameter. Relative to the BS, the locally optimal AUC value is 0.846, with its own corresponding tuning parameter. The globally optimal AUC value is 0.865, which would be associated with the null model. Below the figure, in Table 2, we display the family and genus of the selected OTUs. Using LOO-CV, the selection frequency along with the averaged estimate and LOO-CV standard errors are reported. Due to the small sample size, the variability in the parameter estimates is large, which is to be expected. Concentrating on the LOO-CV selection frequency, we notice that OTU 7, associated with the family Lactobacillaceae and the corresponding genus Lactobacillus, seems to have some positive predictability. This appears to be consistent with some earlier findings that the last author was involved in, using a different FMT patient cohort, [22]. We note that some family/genus identifications appear to be the same for different OTUs. This is interpreted to mean differences at the species taxon.

For the post-FMT OTUs, we obtain a globally optimal BS of 0.386, with corresponding AUC value 0.923. The globally optimal AUC is 0.961. This is displayed in Figure 3. In Table 3, we display the family and genus of the selected OTUs. Using LOO-CV, the selection frequency along with the averaged estimate and LOO-CV standard errors are reported. Again, due to the small sample size, the variability in the parameter estimates is large. Concentrating on the LOO-CV selection frequency, we notice that OTU 5, associated with the family Enterococcaceae and the corresponding genus Enterococcus, as well as OTU 17, corresponding to the family Bacteroidaceae and genus Bacteroides, appear to have some positive predictability. This again is consistent with some earlier results, [22].

We realize that the interpretation of these results from a bacteriological perspective is well beyond the scope of this medium, and the purpose of this section is to present a portrayal of the type of statistical analysis that comes from looking at the microbiome as a covariate. We highlight above some of the more obvious interpretations that are consistent with what has been observed in other studies. Indeed, results of this work will interest researchers and companies working toward refinement of FMT, via establishment of central stool banks or creation of synthetic stool. With results obtained using larger sample sizes it will be possible to select species that come from the genera identified and culture them in a laboratory biochemistry setting. Thus one could strategically pool samples to achieve a desired composition, to supplement stool with specific microorganisms, or to prepare recipient-specific synthetic stool as per the RePOOPulate project, [16]. At this point, however, we will end this discussion with the comment that a more in-depth metagenomic analysis of a recent clinical trial, [11], is under investigation.

Figure 2: AUC and BS obtained by leave-one-out cross-validation for pre-FMT OTUs. Labeled are the tuning parameters selected by (a) AUC and (b) BS.
Phylogeny
OTU  Family  Genus  |  Frequency  Estimate (SE)  |  Frequency  Estimate (SE)
1 Enterobacteriaceae Klebsiella 0.24 10.8 (41.6) 0.35 -4.65 (13.7)
2 Enterobacteriaceae Escherichia/Shigella 0.59 -49.4 (122) 0.88 -107 (143)
3 Streptococcaceae Streptococcus 0.82 -304 (323) 0.76 -395 (425)
4 Lachnospiraceae Blautia 0.82 239 (281) 0.18 20.4 (66.6)
5 Enterococcaceae Enterococcus 0.47 -94.9 (137) 0.65 -164 (264)
6 Lactobacillaceae Lactobacillus 0.53 -15.5 (25.2) 0.82 -29.8 (29.6)
7 Lactobacillaceae Lactobacillus 0.88 349 (335) 0.88 387 (415)
11 Lachnospiraceae unclassified 0.53 -60.9 (77.5) 0.59 -68.6 (85.1)
13 Veillonellaceae Veillonella 0.29 78.5 (127) - -
24 Veillonellaceae unclassified 0.65 -108 (141) - -
26 Veillonellaceae unclassified - - 0.47 -97.9 (142)
30 Veillonellaceae Veillonella 0.76 176 (258) - -
33 Veillonellaceae Veillonella - - 0.53 55.6 (107)
Table 2: The selected OTUs for pre-FMT microbiomes at the tuning parameters selected by AUC and BS leave-one-out cross-validation on relative abundances.
Figure 3: AUC and BS obtained by leave-one-out cross-validation for post-FMT OTUs. Labeled are the tuning parameters selected by (a) AUC and (b) BS.
Phylogeny
OTU  Family  Genus  |  Frequency  Estimate (SE)  |  Frequency  Estimate (SE)
1 Enterobacteriaceae Klebsiella 0.94 7.79 (28.8) 0.82 8.65 (28.9)
2 Enterobacteriaceae Escherichia/Shigella 0.82 -17.6 (80.7) 0.94 -17.1 (80.9)
3 Streptococcaceae Streptococcus 0.88 -22.6 (42.2) 0.94 -52.4 (82.3)
5 Enterococcaceae Enterococcus 0.94 98.5 (261) 0.94 147 (275)
17 Bacteroidaceae Bacteroides 0.94 109 (334) 0.94 111 (333)
21 Acidaminococcaceae Acidaminococcus 0.65 -1.34 (1.85) 0.71 -15.6 (33.9)
Table 3: The selected OTUs for post-FMT microbiomes at the tuning parameters selected by AUC and BS leave-one-out cross-validation on relative abundances.

5 Simulations

In this section, we report on simulations that approximate the covariance structure we may see in practice with respect to phylogenetic proximity of the OTUs. We consider a balanced taxonomy in which each taxon level is subdivided evenly into the next: a single 'phylum' followed by 'class', 'order', 'family', 'genus', and 'species' levels, leading to 4096 OTUs.

To each taxon we introduce taxon-wise covariation. This covariance structure is presented in Figure 4 for a single ‘class’ (1024 OTUs) using a 10,000 point sample. The true parameters in the simulation are represented by two classes, with one class dominating the other 3:1.

Figure 4: Heatmap of covariance matrix for 10,000 point validation set from the tuning parameter step. Displayed is the submatrix corresponding to a single ‘class’.

We consider random Gaussian data generated from a 4096-covariate sparse linear model in which 32 covariates have nonzero parameters and the rest are zero. We present the corresponding pruned tree in Figure 5.

Figure 5: Pruned taxonomy, from an initial 1024 leaves.

The species information is deliberately lost to simulate uncertainty in species assignment to OTUs, which mimics the current technology limitation in 16S rRNA sequencing; see [3]. Thus the deepest taxon level used in the fit is the genus level. We introduce class-, order-, family-, genus-, and species-wise covariation terms, each sampled from a normal distribution with mean 0 and a level-specific variance. The predictors are then obtained by adding these taxon-wise terms to draws from a 4096-dimensional multivariate normal distribution with mean vector 0 and identity covariance matrix.
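Since the exact level-wise variances and branching factor are not recoverable here, the following sketch uses assumed values (branching factor 4, illustrative standard deviations) purely to show the additive, taxon-wise random-effect construction just described.

import numpy as np

rng = np.random.default_rng(42)

def balanced_taxonomy(branching=4, depth=6):
    """Return, for each level, an array mapping each of branching**depth leaves
    (OTUs) to the index of its ancestor taxon at that level."""
    n_leaves = branching ** depth
    leaves = np.arange(n_leaves)
    return {level: leaves // (branching ** (depth - level))
            for level in range(1, depth + 1)}

def simulate_predictors(n, taxonomy, level_sd, noise_sd=1.0):
    """Additive taxon-wise covariation: every taxon at every level contributes a
    shared Gaussian term to all of its member OTUs, plus independent noise."""
    n_leaves = len(next(iter(taxonomy.values())))
    X = noise_sd * rng.normal(size=(n, n_leaves))
    for level, ancestors in taxonomy.items():
        n_taxa = ancestors.max() + 1
        shared = level_sd[level] * rng.normal(size=(n, n_taxa))
        X += shared[:, ancestors]          # broadcast each taxon's term to its OTUs
    return X

# Assumed (illustrative) standard deviations for class- through species-level terms.
taxonomy = balanced_taxonomy(branching=4, depth=6)     # 4096 leaves
level_sd = {1: 0.0, 2: 0.5, 3: 0.4, 4: 0.3, 5: 0.2, 6: 0.1}
X = simulate_predictors(n=100, taxonomy=taxonomy, level_sd=level_sd)
print(X.shape)                                          # (100, 4096)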

5.1 Tuning parameter selection

To select the tuning parameter for each setting, we replicate 100 data sets. We select the tuning parameter minimizing, across all models, the MSPE (mean squared prediction error) for an independent 10,000 data point validation set generated from the same distribution. We also fit SCAD models using the same data sets, using the same validation set to select the appropriate tuning parameter.

5.2 Performance

To evaluate performance of our models, we consider four measures for 100 replicates: SSE (sum of squared error), MSPE, 'recall', and 'precision'. Recall is defined as the proportion of true covariates that are selected, tp/(tp+fn), and precision is defined as the proportion of selected covariates that are true, tp/(tp+fp), where tp, fn, and fp denote true positives, false negatives, and false positives, respectively.
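A minimal helper for these two measures, comparing an estimated support with the true one (our own sketch):

import numpy as np

def recall_precision(beta_hat, beta_true, tol=1e-10):
    """Recall = tp/(tp+fn), precision = tp/(tp+fp), comparing the selected
    support of an estimate against the true nonzero support."""
    selected = np.abs(beta_hat) > tol
    truth = np.abs(beta_true) > tol
    tp = np.sum(selected & truth)
    fn = np.sum(~selected & truth)
    fp = np.sum(selected & ~truth)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 1.0
    return recall, precision

# Example: 32 true signals among 4096 covariates, a few misses and false alarms.
beta_true = np.zeros(4096); beta_true[:32] = 1.0
beta_hat = np.zeros(4096); beta_hat[:30] = 0.9; beta_hat[100:105] = 0.01
print(recall_precision(beta_hat, beta_true))   # recall 30/32, precision 30/35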

We compare the performance of the -LASSO to SCAD. We also consider OLS (ordinary least squares) for the oracle model, where the exact support is known. MSPE for performance is evaluated using a new 10,000 data point validation set generated independently of the tuning validation set.

5.3 Results: Tuning

Figure 6 presents recall and precision against the tuning parameters. Also displayed is the MSPE curve, scaled to the unit interval [0, 1]. Note that the selected parameters lie in the middle of a relatively flat MSPE region: perturbations of the tuning parameter would not overly affect predictive performance. The same holds with respect to SSE (not shown). The shapes of the MSPE curves agree in large measure, aside from the null models, which have lower SSE than the sparsest estimators. Recall and precision are at odds; for low sample sizes one must be sacrificed against the other. The selection of the tuning parameter favours higher recall over precision; the coefficients for false positives may be very small and hence affect prediction in a minor way, but false negatives are a complete loss of a structural signal. There is an interesting dip in recall for small sample sizes in the region preceding the low, stable MSPE region, where the estimators overfit the data. The clear separation of these two regions reflects well on the stability of the LASSO.

5.4 Results: Performance

The results for SSE and MSPE are presented in Table 4. SCAD struggles with the covariance structure, performing poorly in estimation. Incredibly, it appears to perform well in MSPE. While SCAD struggles to recall the correct OTUs due to high correlation within species, to its credit it is able to select some related taxa, where the covariance structure leads to similar predictive performance but more flexibility in fitting the model. The -LASSO exhibits none of SCAD's difficulties: it quickly converges to agreement with the oracle estimator in estimation and prediction.

The -LASSO quickly approaches near-perfect recall as sample size increases, whereas SCAD stalls at 25%. The -LASSO consistently improves in precision, dropping false positives. For SCAD, precision drops after an initial improvement, as it selects incorrect but related OTUs. When we present the -LASSO and SCAD estimates with a validation set consisting of uncorrelated covariates, there is negligible change in -LASSO performance, while SCAD's predictive performance matches its poor estimation performance.

5.5 Additional simulations

We comment on some additional simulations. In the low-dimensional regime with a simple sample-wise covariation structure, SCAD performs moderately better than the -LASSO, with the adaptive LASSO trailing behind. This is consistent across sample size and noise level. For alternative scenarios, the -LASSO performs well, although it experiences some difficulty with precision in certain settings. This is qualitatively different from the issue with SCAD in the previous section, as recall is nearly perfect. The false positives generally correspond to relatively small coefficients, so the problem appears to result from early exit from the fitting algorithm.

Figure 6: Median recall (solid) and precision (dashed) for the -LASSO against the tuning parameter, for each sample size. Included is the median MSPE curve scaled by its largest median value (dotted). The marked point indicates the chosen tuning parameter.
        SSE                                            MSPE
n       OLS            -LASSO          SCAD            OLS          -LASSO        SCAD
50 88.10(46.2) 167.00(62.50) 621.00(230.00) 3.20(1.21) 14.80(17.2) 167.00(162.00)
100 19.80(6.05) 37.70(12.00) 384.00(5.28) 1.47(0.14) 2.43(0.48) 9.24(0.56)
150 11.10(3.69) 15.90(6.14) 383.00(4.47) 1.25(0.07) 1.49(0.18) 8.66(0.38)
200 8.05(2.48) 10.70(4.29) 379.00(4.35) 1.18(0.05) 1.29(0.10) 8.40(0.24)
250 6.25(1.77) 7.54(2.45) 380.00(3.36) 1.13(0.03) 1.19(0.06) 8.27(0.20)
Table 4: Estimation and prediction error for the oracle estimator (OLS), -LASSO, and SCAD. Presented are the mean error (standard error).
        Recall                     Precision
n       -LASSO        SCAD         -LASSO        SCAD
50 0.64(0.11) 0.18(0.07) 0.52(0.12) 0.50(0.34)
100 0.92(0.04) 0.25(0.00) 0.71(0.09) 0.93(0.12)
150 0.98(0.02) 0.25(0.00) 0.86(0.07) 0.75(0.16)
200 0.99(0.01) 0.25(0.00) 0.92(0.06) 0.54(0.11)
250 0.99(0.01) 0.25(0.00) 0.95(0.04) 0.56(0.11)
Table 5: Recall and precision for the -LASSO and SCAD. Presented are the mean (standard error).

6 Theoretical Results

In this section, we consider the general objective function introduced above, where $\ell$ is the known but arbitrary log-likelihood, the remainder is the penalization, and $\|\cdot\|_q$ is the $\ell_q$-norm, $q \ge 1$. We will also make use of the notation $\top$ for matrix transpose, $O(\cdot)$ and $O_p(\cdot)$ to mean 'big oh' and 'big oh in probability', respectively, $o(\cdot)$ and $o_p(\cdot)$ to mean 'little oh' and 'little oh in probability', respectively, $\asymp$ to mean that the ratio of two sequences converges to a positive constant, $\rightarrow_d$ to mean convergence in distribution, and $\wedge$ to mean minimum. All proofs are provided in Appendix A.

The first result allows us to consider a single penalty parameter, that is, to set $\lambda_r = \lambda$ for all $r$.

Lemma 6.1 (Equivalence of optimization)

Let , be fixed. Then is a local maximizer of if and only if is a local maximizer of and hence .

The proof is similar to that of Lemma 1 in [34], where least-squares with -regularization is considered.

In the sequel we assume that $\lambda_r = \lambda$ for all $r$. This next result provides a relationship between the individual effects parameters and the group coefficients. Intuitively, the mass distributes itself geometrically to minimize the penalty terms.

Lemma 6.2 (Mass Equilibrium)

Let . Then is a global maximizer of over if and only if .

This conservation of mass result immediately leads to the following.

Corollary 6.3

The relationship from Lemma 6.2 identifies the unique maximizer of over .

Thus is an order polynomial in terms of .

Definition 6.4

Let be the map where is the unique optimizer of . We refer to as the partial inverse.

Corollary 6.5

Suppose the taxonomy has two levels, . Then the partial inverse is characterized by: if then and , if then and .

Corollary 6.5 is a generalization of Theorem 1 in [34].

6.1 LAN conditions and the Oracle

To ensure local asymptotic normality of the MLE for a diverging number of parameters, we adopt the regularity conditions from [12] as in [34]:

  • (A1) For all , the observations , , are independently and identically distributed according to the density , where has common support and the model is identifiable. Further,

  • (A2) The Fisher information matrix is positive definite with bounds

    where is the point spectrum of a matrix ,

  • (A3) There exists an open subset , the number of parameters, containing the true parameter point such that for almost all , is defined for all . Further, there exist functions such that

    for all and

We call conditions (A1)-(A3) the LAN conditions and note that they are not excessively restrictive and fall within the usual framework of this type of analysis.

Consider the parameters where grow with , and their partial inverse . Let if , otherwise. Then assuming the relationship from Lemma 6.2 we have the following equivalent expression,

(6.1)

For convenience we will write for the projection of onto .

We have the following consistency results for the LASSO.

Theorem 6.6 (Consistency)

Assume that the distribution satisfies the LAN conditions. If and , then there exists a -consistent local maximizer of , where .

A simple choice of the tuning parameter suffices; we find this provides us with taxon selection consistency.

Define by if and 1 otherwise. Let .

Definition 6.7

Let be an estimator for . Then is said to be consistent in group selection if as . Moreover, if for all , the estimator is model selection consistent.

Consider the true parameter (estimator) in . Suppose and define to be the restriction of to the set .

Definition 6.8

Let be a model selection consistent estimator of Suppose satisfies

where is an matrix such that , a positive semidefinite matrix, and is the Fisher information evaluated at , as . Then is said to have the oracle property.

We have the following.

Theorem 6.9 (Taxon selection consistency)

Assume that the distribution satisfies the LAN conditions. If and , then there exists a -consistent local maximizer of so that , the group sparsity property.

For the case of two taxon levels, the above result is Theorem 2 of [34]. Using an adaptive modification of their loss function analogous to [35], [34] obtains the full oracle property. We obtain the oracle property in an alternative manner, by a simple modification of the taxonomy of the variables. The full proofs are quite involved and are relegated to Appendix A. Since we have now shown that consistent taxon selection holds for an arbitrary number of taxon levels, we have the full oracle property in the following result using a simple modification of our taxonomy. This is possible as we are able to incorporate an arbitrary number of taxon levels.

Theorem 6.10 (Oracle Property)

In addition to the LAN conditions, suppose that the taxonomy contains an additional taxon level identical to the singleton level and . Then is a -consistent local maximizer of satisfying

  • the sparsity property as ,

  • asymptotic normality

    where is an matrix such that , a positive semidefinite matrix, and is the Fisher information matrix evaluated at , as .

A simple consequence of Theorem 6.10 is that all estimators obtained with an penalty, , , have the oracle property.

Corollary 6.11

Consider the estimator . Then has the oracle property.

This result follows immediately if we consider a taxonomy in which all taxa are singletons.

Remark 6.12

Alternatively, we can obtain the oracle property from the following modification of the objective function,

where we have replaced the LASSO penalty on by a Bridge penalty with . The -penalty yields an estimator equivalent to that obtained in Theorem 6.10.

Appendix A Appendix: Proofs

In this appendix we provide all proofs.

Proof of Lemma 6.1. Clearly

(A.1)

Let be a local maximizer of . Thus there exists a such that if satisfies , then . By the identity (A.1),

As this holds for in a neighbourhood of , is a local maximizer of .

The converse is similar.


From here on, we assume , . Before proving Lemma 6.2, we recall the Karush-Kuhn-Tucker (KKT) conditions from convex analysis, here taken from [18].

Definition A.1

Let be a convex subset of . Let be convex functions on for and affine functions on for . Consider the following problem,

We call (P) an ordinary convex program.

Lemma A.2 (KKT Conditions [18])

Let (P) be an ordinary convex program. Let . In order for to be a KKT vector for (P) and an optimal solution to (P), it is necessary and sufficient that be a saddle-point of the Lagrangian of (P). Moreover, this condition holds if and only if and the components of satisfy

  1. , , and ;

  2. ;

  3. , the subgradient of the Lagrangian at .

Proof of Lemma 6.2. Note that the log-likelihood component of the loss function depends on the parameters only through $\beta = T(\gamma, \theta)$, so that it remains constant over the decompositions under consideration. Without loss of generality, assume all components are nonnegative, since if a component is negative, we can multiply the variable in question by -1, and if it is zero we can exclude it from the current analysis, as it remains fixed. We further assume it is nonzero, since if it is zero, then again, it is fixed.

We have the following optimization program,

Let , , and . Then is equivalent to the following ordinary convex program,

where we have omitted since it is trivially satisfied.

Consider now the Lagrangian of ,

We remark that KKT conditions (i) and (ii) are immediately satisfied, (i) trivially and (ii) by construction. We need only consider (iii). By the Karush-Kuhn-Tucker theorem, is an optimal solution of if and only if , with gradient with respect to , since the subgradient is unique when the function is differentiable.

We have the following derivatives of the Lagrangian ,

(A.2)
(A.3)

Assume is a local minimum for . Then the KKT conditions are satisfied, and we can find . By (iii) the derivatives (A.2), (A.3) are zero, so that we obtain

(A.4)

A simple calculation yields .
Assume for all