1 Introduction
The incorporation of the microbiome as a covariate for biological or clinical outcomes is becoming medically and scientifically important, especially in understanding the etiology and treatment of specific diseases. It also aids in the understanding of functional pathways such as the gut-brain axis; see for example [22] and [4]. Through sophisticated data capture methods, such as next-generation sequencing, researchers are able to rapidly map the complex microbiome [20]. Nevertheless, this line of research is challenging due to the sheer high-dimensional nature of the microbiome, a setting with ultra-high dimension and small sample size. An added complexity is the manner in which the covariates are associated with each other. In terms of the ‘tree-of-life’ schematic, bacterial groups at different taxon levels have an associated phylogeny. Recent works incorporating taxonomic information use various methods to identify important taxonomic features: [6], [23], [32], [13], and [24]. Common to all these papers is the incorporation of the underlying microbiology to establish sophisticated computational schemes, over and above the already demanding computational methods required to obtain the microbiome data. Because of the complexity of these methods, validation is established mainly through simulations. In this paper we profile a variable selection method that also incorporates the tree-of-life schema, similar in scope to the above papers, but different in how the regularization is carried out. Extensive simulations are performed, but we also obtain theoretical oracle properties for our method, which, to our knowledge, is novel for microbiome research.
We now provide a summary of the paper. As motivation, Section 2 provides the microbiology background to this paper. Although we are addressing metagenomics through next generation sequencing of the 16S rRNA gene, the methodology is sufficiently rich to incorporate other ‘omics’ structures. In Section 3 we provide the development of our penalization of the phylogenetic tree. An application to an infectious disease is discussed in Section 4. We include a detailed discussion because this infectious disease is currently a very important public health concern, and because it is the main motivation behind the development of the methods in this paper. Further detailed simulation results are presented in Section 5 with comparisons made to other variable selection procedures. This is followed by Section 6 which establishes the main theoretical properties with all of the proofs provided in Appendix A.
2 A microbiology primer
Metagenomics, like the other ‘omics’, bears a structural relationship between covariates. Bacteria exhibit a tree-like relationship with each other, often violated via lateral gene transfer. As a result, their systematic taxonomy is constantly in flux; see for instance the paraphyletic Clostridium [28, 17]. Thus their OTU (operational taxonomic unit) proxies, typically based on the 16S rRNA gene, also exhibit a tree-like structure, with patterns of cycles at deep taxon classifications. Our resolution depends on the length of the 16S rRNA region sequenced, the degree of lateral gene transfer, and the reliability of the reads.
In many problems, it is important to develop a means of selecting the OTUs having dominating roles in the microbial systems at hand. The relationship between OTUs is such that some of them may represent the same type of bacteria. Alternatively, some spurious OTUs are artifacts of the laboratory sequencing protocol. Rather than select OTUs on their individual merits, we want to select them based on their group affiliations.
Precisely identifying and characterizing bacteria currently requires genetic and metabolic analysis of clonal colonies in vitro. In studying the composition of microbiomes in vivo, we require a more general descriptor. Absent molecular characterization, there is no general descriptor of bacterial groups, a limitation which constrained bacterial research for decades. A broader analysis of mixed bacterial communities is available, sacrificing the resolution of clonal studies. This involves the collection and sequencing of genes common to all bacteria, or to the group of interest. This sequencing may be scaled up to generate billions of sequences in tandem. We refer to this as MPS (massively parallel sequencing). For an in-depth description from a mathematical perspective, see [19].
In the present paper, we consider MPS data targeting the V3-V5 region of the 16S rRNA gene, whose merits we discuss in [9]. Crucially, this is a gene common to all bacteria with sufficient evolutionarily conserved variability to provide a basis for defining bacterial groups [29], albeit not the sole defining descriptor. MPS technologies are currently incapable of providing the full 16S sequence, so a hypervariable region is targeted, of which there are nine. Hypervariable is understood as evolutionarily conserved but more variable than the average over the entire sequence. The various hypervariable regions provide differing resolutions in different bacterial groups. An analysis of the precision of each region for identifying pathogenic bacteria is presented in [3]. The V3-V5 regions provide estimates similar to the full 16S gene in terms of species richness [30, 33], and they have been shown to provide accurate community structure and low-bias estimates of some taxa [1, 26]. A limitation is their poor species-level resolution, as found in [3]. To facilitate comparison between microbiomes, sequences are grouped into OTUs according to some similarity criterion. This is a data-driven proxy for species delineation; we use OTUs in an operational manner. OTUs consist of sequences which are phylogenetically close. Phylogenetic proximity between OTUs is determined by the inter-OTU phylogenetic divergence of the constituent sequences. This provides a structural relationship between the OTUs. One manner of presenting this structure is to assign phylotypes to the OTUs: sequence-based consensus taxonomic classifications of the clusters. This induces a rooted tree hierarchy between the OTUs. This structure is exploited in the present article.
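The OTU construction can be caricatured in a few lines. The greedy clustering scheme, the plain positionwise identity measure, and the 97% threshold below are illustrative assumptions, not the actual pipeline used for the data in this paper; real pipelines use alignment-based distances.

```python
# Toy greedy OTU picking: a sequence joins the first cluster whose
# representative it matches at or above the identity threshold,
# otherwise it founds a new OTU.

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(seqs, threshold=0.97):
    reps = []        # one representative sequence per OTU
    labels = []      # OTU index assigned to each input sequence
    for s in seqs:
        for k, r in enumerate(reps):
            if identity(s, r) >= threshold:
                labels.append(k)
                break
        else:
            reps.append(s)
            labels.append(len(reps) - 1)
    return reps, labels
```

Lowering the threshold collapses more divergent reads into the same OTU, coarsening the proxy for species delineation.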
3 Generalized linear models and the Lasso
In this section, we provide details of what we call the phylogenetic LASSO (least absolute shrinkage and selection operator). As motivation, let us review the hierarchical LASSO (HLASSO), presented in [34] in the context of penalized least squares. Consider the linear model
(3.1) \( y = X\beta + \varepsilon \)
for response \(y \in \mathbb{R}^n\), covariates \(X \in \mathbb{R}^{n \times p}\), parameter vector \(\beta \in \mathbb{R}^p\), and error \(\varepsilon\). Suppose the covariates may be assigned to mutually exclusive groups \(G_k\), \(k = 1, \dots, K\). Let \(\beta_{G_k}\) be the subvector of \(\beta\) whose coefficients correspond to the covariates in \(G_k\), \(|G_k|\) of them, where \(|\cdot|\) denotes set cardinality. We have \(p = \sum_{k=1}^{K} |G_k|\). We can decompose \(\beta\) by \(\beta_j = \gamma_k \theta_j\) for \(j \in G_k\), where \(\gamma_k \in \mathbb{R}\) and \(\theta_j \in \mathbb{R}\). Clearly this decomposition is not unique, but this does not ultimately matter; see Lemma 6.2 below. Let \(\gamma = (\gamma_1, \dots, \gamma_K)\), \(\theta = (\theta_1, \dots, \theta_p)\), and define \(T\) as the mapping \((\gamma, \theta) \mapsto \beta\) given by \(\beta_j = \gamma_k \theta_j\), \(j \in G_k\).
We define the HLASSO estimator of (3.1) via \(\hat\beta = T(\hat\gamma, \hat\theta)\), where \((\hat\gamma, \hat\theta)\) maximizes the penalized least-squares function
\[ Q(\gamma, \theta) = -\tfrac{1}{2}\|y - X\,T(\gamma,\theta)\|_2^2 \;-\; \lambda_\gamma \sum_{k=1}^{K} |\gamma_k| \;-\; \lambda_\theta \sum_{j=1}^{p} |\theta_j|, \]
where the tuning parameters \(\lambda_\gamma, \lambda_\theta \ge 0\) are fixed and \(\|\cdot\|_2\) is the usual norm for \(\mathbb{R}^n\). Here we penalize the groups by the middle term and the individual coefficients by the third term.
It is shown in [34] that the penalty values redistribute geometrically, so that we may replace them by a common coefficient \(\lambda\). This leads to the result that their regularization is equivalent to maximizing
\[ -\tfrac{1}{2}\|y - X\beta\|_2^2 \;-\; \lambda \sum_{k=1}^{K} \Big( \sum_{j \in G_k} |\beta_j| \Big)^{1/2}. \]
The HLASSO is thus a nonconcave variant of the group LASSO [31].
While [34] directly treats least squares, their results generalize readily to arbitrary likelihood functions and possess an attractive oracle property; see Theorem 2 of [34]. Below, we provide the generalization that accommodates increased depth of the hierarchy by framing it in terms of a taxonomy.
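As a numerical illustration, the sketch below evaluates the two forms of the HLASSO penalty: the decomposed penalty on the group and individual coefficients, and the single-parameter group penalty on the recovered coefficients. The function names and toy values are our own, and the exact form of the criterion should be checked against [34].

```python
import numpy as np

# Hedged sketch: the HLASSO decomposes beta_j = gamma_k * theta_j for
# j in group G_k and penalizes group coefficients and individual effects
# separately; the equivalent single-parameter form penalizes the square
# root of each group's L1 mass.

def penalty_decomposed(gamma, theta, groups, lam_gamma, lam_theta):
    pen = lam_gamma * np.sum(np.abs(gamma))
    pen += lam_theta * sum(np.sum(np.abs(theta[g])) for g in groups)
    return pen

def penalty_equivalent(beta, groups, lam):
    return lam * sum(np.sqrt(np.sum(np.abs(beta[g]))) for g in groups)

groups = [np.array([0, 1]), np.array([2])]
gamma = np.array([2.0, 1.0])          # one coefficient per group
theta = np.array([1.0, 0.5, 3.0])     # one effect per covariate
beta = np.array([gamma[0] * theta[0], gamma[0] * theta[1], gamma[1] * theta[2]])
```

Rescaling a group coefficient up while scaling its individual effects down leaves beta unchanged, which is the non-uniqueness the mass-equilibrium result resolves.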
3.1 The Lasso
Our goal now is to create a hierarchical penalization scheme in which there are multiple competing ways of grouping covariates. If these groupings have a nesting property, we can represent this as a tree; otherwise the graphical representation has cycles. In light of the above discussion, we use the convex log-likelihood function in lieu of least squares in the sequel. We frame variable selection in terms of OTU selection in metagenomic analysis, but stress that the method easily accommodates other ‘omic’ data forms, be they from metabolomics, proteomics, or transcriptomics. Further, as life betrays the ability to transmit genetic information horizontally, we incorporate the option to use taxonomies as degenerate as the tree-of-life itself. In particular, this allows us to consider overlapping classifications of the variables, extending analysis to, say, active metabolic pathways of microbial systems or human tissues. We introduce some terminology and notation to help bridge the mathematics and systematics at hand.
Definition 3.1
Let \(x_1, \dots, x_p\) be the column vectors of the design matrix \(X\). A taxon (plural taxa) is a subset \(t\) of the indices \(\{1, \dots, p\}\), and \(X_t\) is the corresponding submatrix of \(X\). A taxon level is a collection of pairwise disjoint taxa whose union is \(\{1, \dots, p\}\).
Consider Table 1, below, as an example. We find the taxa Actinobacteria = {1, 2} and Firmicutes = {3, …, 13}. The collection of these two taxa is a taxon level. In general, we subdivide the indices into taxa at taxon levels. We denote the th taxon of the th taxon level by or .
Index  Phylum  Class  Order  Family  OTU 

1  Actinobacteria  Actinobacteria  Bifidobacteriales  Bifidobacteriaceae  
2  Actinobacteria  Actinobacteria  Bifidobacteriales  Bifidobacteriaceae  
3  Firmicutes  Bacilli  Lactobacillales  Enterococcaceae  
4  Firmicutes  Bacilli  Lactobacillales  Enterococcaceae  
5  Firmicutes  Bacilli  Lactobacillales  Enterococcaceae  
6  Firmicutes  Bacilli  Lactobacillales  Lactobacillaceae  
7  Firmicutes  Bacilli  Lactobacillales  Lactobacillaceae  
8  Firmicutes  Bacilli  Lactobacillales  Lactobacillaceae  
9  Firmicutes  Clostridia  Clostridiales  Clostridiaceae 1  
10  Firmicutes  Clostridia  Clostridiales  Clostridiaceae 1  
11  Firmicutes  Clostridia  Clostridiales  Lachnospiraceae  
12  Firmicutes  Clostridia  Clostridiales  Lachnospiraceae  
13  Firmicutes  Clostridia  Clostridiales  Lachnospiraceae 
Definition 3.2
Suppose we have a collection of taxon levels, the last of which consists of singletons. We call such a collection a taxonomy.
There will be times when we wish to refer to those indices that belong to specific taxa at each level. We have:
Definition 3.3
Let be a tuple where taxon belongs to taxon level . We refer to as a lineage, with associated indices .
We write in lieu of and define . When we wish to make it clear we are referring to a lineage’s taxon at a particular taxon level, we write , so that may be rewritten .
In our example, the lineages may be read directly off Table 1, of which there are five. Figure 1 presents the taxonomy more intuitively. Each branch of the tree represents a taxon while each horizontal row represents a taxon level, numbered 1 through 5. Each lineage is a directed path from the root to a branch.
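For concreteness, the five lineages of Table 1 can be read off programmatically; the tuple representation below is our own bookkeeping, not the paper's notation.

```python
# Each row of Table 1 gives one OTU's taxa from phylum down to family;
# grouping identical root-to-leaf paths recovers the lineages and the
# (1-based) covariate indices associated with each.

rows = (
    [("Actinobacteria", "Actinobacteria", "Bifidobacteriales", "Bifidobacteriaceae")] * 2
    + [("Firmicutes", "Bacilli", "Lactobacillales", "Enterococcaceae")] * 3
    + [("Firmicutes", "Bacilli", "Lactobacillales", "Lactobacillaceae")] * 3
    + [("Firmicutes", "Clostridia", "Clostridiales", "Clostridiaceae 1")] * 2
    + [("Firmicutes", "Clostridia", "Clostridiales", "Lachnospiraceae")] * 3
)

def lineages(rows):
    """Map each distinct root-to-leaf path to the indices it covers."""
    out = {}
    for i, path in enumerate(rows, start=1):
        out.setdefault(path, []).append(i)
    return out

paths = lineages(rows)   # Table 1 yields five lineages
```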
Consider a generalized linear model with a known link function , so that . We decompose by , where as before, but now , for . We write this decomposition as . Let be the map . This decomposition is illustrated in parentheses in Figure 1 below: the coefficients are recovered by multiplying the terms in a lineage. We extend the optimization criterion to:
(3.2) 
where as usual is the log-likelihood, and for . The centre double sum is the group penalty and addresses the increased depth of our taxonomy in contrast to [34]; is the number of taxa in taxon level . When the lineage consists of a single taxon level , this reduces to the criterion in [34].
We appear to suffer an affluence of tuning parameters. Lemma 6.1, below, shows that criterion (3.2) is equivalent to the single tuning parameter criterion
(3.3) 
Let denote the vector of group coefficients of the th taxon level. Intuitively, as the tuning parameters are redistributed, and change in geometric response due to the relationship . Choosing , , we need only concern ourselves with one tuning parameter.
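The multiplicative recovery of a coefficient from its lineage, as illustrated in Figure 1, is just a running product; the toy values below are ours.

```python
# A covariate's coefficient is recovered by multiplying the group
# coefficients along its lineage by its individual effect. Rescaling
# one factor up and another down leaves the coefficient unchanged,
# which is the non-uniqueness resolved by the mass-equilibrium result
# (Lemma 6.2).

def recover_beta(lineage_gammas, individual):
    beta = individual
    for g in lineage_gammas:
        beta *= g
    return beta
```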
3.2 LASSO algorithm
The algorithm used to obtain the LASSO estimate relies on iterative adaptive reweighting, deriving from the reformulation of the LASSO in (3.3).
Let be the map from to the unique maximizer of (3.3) over . In this way and may be viewed as projections onto the first and second elements of . Let be a positive collection of tuning parameters . The pseudocode is spelled out below. We remark that convergence is typically achieved quickly, so that the bottlenecks are the weighted LASSO problem and calculation of .
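A schematic of the loop can be sketched as follows, with placeholder subroutines: `weighted_lasso` stands in for the weighted LASSO solver and `partial_inverse` for the group-coefficient update, the two bottlenecks noted above. Both placeholder bodies are assumptions for illustration, not the actual updates.

```python
import numpy as np

# Skeleton of the iterative adaptive reweighting loop. The placeholder
# solver is a weighted ridge in closed form; a real implementation
# would call a weighted LASSO routine and the partial inverse of
# Section 6.

def weighted_lasso(X, y, weights, lam):
    # placeholder solve; not an actual L1 solver
    return np.linalg.solve(X.T @ X + lam * np.diag(weights), X.T @ y)

def partial_inverse(beta):
    # placeholder: heavier weights for coefficients near zero
    return 1.0 / (np.abs(beta) + 1e-8)

def fit(X, y, lam, tol=1e-6, max_iter=200):
    beta = weighted_lasso(X, y, np.ones(X.shape[1]), lam)
    for _ in range(max_iter):
        new_beta = weighted_lasso(X, y, partial_inverse(beta), lam)
        if np.max(np.abs(new_beta - beta)) < tol:  # convergence is typically fast
            break
        beta = new_beta
    return beta
```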
4 Application to C. difficile Infection
Clostridium difficile (C. difficile) infection (CDI) is the most frequent cause of healthcareassociated infections and its rates are growing in the community [14, 2]. One of the major risk factors for developing CDI is use of antibiotics. The healthy and diverse bacteria which reside within the colon are the major defense against the growth of C. difficile. Antibiotics kill these bacteria and allow C. difficile to multiply, produce toxins and cause disease. The available treatments for this infection are the antibiotics: metronidazole, vancomycin and fidaxomicin. The efficacy of these antibiotics is limited with high recurrence rates, [7].
An alternative to antibiotic therapy for CDI, in particular for recurrent and refractory diseases, is to infuse healthy gut bacteria directly into the colon of infected patients to combat C. difficile by a procedure known as faecal microbiota transplantation (FMT). FMT is a process in which a healthy donor’s stool is administered to an affected patient. This can be performed using a colonoscope, nasogastric tube, or enema. FMT serves to reconstitute the altered colonic flora, in contrast to treatment with antibiotic(s), which can further disrupt the establishment of key microbes essential in preventing recurrent CDI. The literature reveals a cumulative clinical success rate of over 90% in confirmed recurrent CDI cases, [7].
There is a growing interest in the microbiome of CDI patients following an FMT [22, 8, 25, 27, 21]. The studies differ vastly in: the route of administration, with all forms covered; donor selection criteria, with some using family members while others used universal donors; sample sizes, although all studies were small; and sequencing procedures and equipment. Despite these differences, there are two fundamental points of agreement across all studies. The first is that CDI patients have low diversity in their microbiome pre-FMT, and that after receiving an FMT (or FMTs), their diversity increased. The second is that CDI patients who were successfully cured with FMT undergo changes in their microbiome which initially have similarities to that of their donors; see [22].
We will examine microbiome data from a subset of the CDI patients treated with FMT by the second author [10], covering the period 2008–2012.
Of the 17 selected patients, 13 responded to a single FMT. We used a taxonomy describing phylum, class, order, family, and genus. The predictors consisted of the relative abundances of 220 pre-FMT OTUs and 347 post-FMT OTUs. Here we consider logistic regression to model the response for two scenarios. First, we wish to know whether the pre-FMT microbiome can predict a clinical response to an FMT. Second, we wish to know whether the composition of the post-FMT microbiome can anticipate the need for additional FMTs. To select the tuning parameter for the LASSO, we perform leave-one-out cross-validation (LOOCV). The optimal tuning parameter was selected by AUC (area under the curve) and BS (Brier score). This is the same dataset in which phylum interaction was investigated [15]. We are challenged by a sparse predictor matrix as well as a small sample size. Since AUC tends to assign a perfect score to the null model by the way it handles ties, we incorporate the BS to compensate for this effect. The results of logistic regression using the pre-FMT OTUs as covariates are captured in Figure 2. Using the globally optimal BS of 0.208, we obtain the corresponding tuning parameter. Relative to the BS, the locally optimal AUC value is 0.846, with its own corresponding tuning parameter. The globally optimal AUC value is 0.865, which would be associated with the null model. Below the figure, in Table 2, we display the family and genus of the selected OTUs. Using LOOCV, the frequency along with the averaged estimate and LOOCV standard errors are reported. Due to the small sample size, the variability in the parameter estimates is large, which is to be expected. Concentrating on the LOOCV frequency, we notice that OTU 7, associated with the family Lactobacillaceae and the corresponding genus Lactobacillus, seems to have some positive predictability. This appears to be consistent with some earlier findings that the last author was involved in, using a different FMT patient cohort [22]. We note that some family/genus identifications appear to be the same for different OTUs. This is interpreted to mean differences at the species taxon. For the post-FMT OTUs, we obtain a globally optimal BS of 0.386, with corresponding AUC value 0.923. The globally optimal AUC is 0.961. This is displayed in Figure 3. In Table 3, we display the family and genus of the selected OTUs. Using LOOCV, the frequency along with the averaged estimate and LOOCV standard errors are reported. Again, due to the small sample size, the variability in the parameter estimates is large. Concentrating on the LOOCV frequency, we notice that OTU 5, associated with the family Enterococcaceae and the corresponding genus Enterococcus, as well as OTU 17, corresponding to the family Bacteroidaceae and genus Bacteroides, appear to have some positive predictability. This again is consistent with some earlier results [22].
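For reference, the two tuning criteria can be computed from the leave-one-out predicted probabilities as follows. These are generic textbook definitions, not the authors' code; in particular, this AUC counts ties as one half, and other tie conventions produce the inflated null-model scores mentioned above.

```python
# AUC (rank-based, ties counted as 1/2) and Brier score, evaluated on
# held-out predicted probabilities from leave-one-out cross-validation.

def auc(y, p):
    """Probability that a random positive outranks a random negative."""
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

def brier(y, p):
    """Mean squared difference between outcomes and predicted probabilities."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

# A constant (null-model) predictor gets AUC exactly 0.5 under this
# convention, while the Brier score still penalizes miscalibration.
```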
We realize that interpretation of these results from a bacteriological perspective is well beyond the scope of this medium; the purpose of this section is to portray the type of statistical analysis that comes from treating the microbiome as a covariate. We highlight above some of the more obvious interpretations that are consistent with what has been observed in other studies. Indeed, the results of this work will interest researchers and companies working toward refinement of FMT, via the establishment of central stool banks or the creation of synthetic stool. With results obtained using larger sample sizes, it will be possible to select species from the genera identified and culture them in a laboratory biochemistry setting. One could then strategically pool samples to achieve a desired composition, supplement stool with specific microorganisms, or prepare recipient-specific synthetic stool as per the RePOOPulate project [16]. At this point, however, we end this discussion with the comment that a more in-depth metagenomic analysis of a recent clinical trial [11] is under investigation.
Phylogeny  

OTU  Family  Genus  Frequency  Estimate (SE)  Frequency  Estimate (SE) 
1  Enterobacteriaceae  Klebsiella  0.24  10.8 (41.6)  0.35  4.65 (13.7) 
2  Enterobacteriaceae  Escherichia/Shigella  0.59  49.4 (122)  0.88  107 (143) 
3  Streptococcaceae  Streptococcus  0.82  304 (323)  0.76  395 (425) 
4  Lachnospiraceae  Blautia  0.82  239 (281)  0.18  20.4 (66.6) 
5  Enterococcaceae  Enterococcus  0.47  94.9 (137)  0.65  164 (264) 
6  Lactobacillaceae  Lactobacillus  0.53  15.5 (25.2)  0.82  29.8 (29.6) 
7  Lactobacillaceae  Lactobacillus  0.88  349 (335)  0.88  387 (415) 
11  Lachnospiraceae  unclassified  0.53  60.9 (77.5)  0.59  68.6 (85.1) 
13  Veillonellaceae  Veillonella  0.29  78.5 (127)     
24  Veillonellaceae  unclassified  0.65  108 (141)     
26  Veillonellaceae  unclassified      0.47  97.9 (142) 
30  Veillonellaceae  Veillonella  0.76  176 (258)     
33  Veillonellaceae  Veillonella      0.53  55.6 (107) 
Phylogeny  

OTU  Family  Genus  Frequency  Estimate (SE)  Frequency  Estimate (SE) 
1  Enterobacteriaceae  Klebsiella  0.94  7.79 (28.8)  0.82  8.65 (28.9) 
2  Enterobacteriaceae  Escherichia/Shigella  0.82  17.6 (80.7)  0.94  17.1 (80.9) 
3  Streptococcaceae  Streptococcus  0.88  22.6 (42.2)  0.94  52.4 (82.3) 
5  Enterococcaceae  Enterococcus  0.94  98.5 (261)  0.94  147 (275) 
17  Bacteroidaceae  Bacteroides  0.94  109 (334)  0.94  111 (333) 
21  Acidaminococcaceae  Acidaminococcus  0.65  1.34 (1.85)  0.71  15.6 (33.9) 
5 Simulations
In this section, we report on simulations that approximate the covariance structure we may see in practice with respect to phylogenetic proximity of the OTUs. We will consider taxon levels where each level is balanced and grows according to , . In taxonomy language we will consider a balanced taxonomy where we have a single ‘phylum’ () followed by ‘class’ (), ‘order’ (), ‘family’ (), ‘genus’ (), and ‘species’ (). This will then lead to 4096 OTUs ().
To each taxon we introduce taxon-wise covariation. This covariance structure is presented in Figure 4 for a single ‘class’ (1024 OTUs) using a 10,000-point sample. The true parameters in the simulation are represented by two classes, with one class dominating the other 3:1.
We consider random Gaussian data generated from a 4096 covariate sparse linear model where 32 covariates have corresponding parameter and the rest being zero. We present the corresponding pruned tree in Figure 5.
The species information is deliberately lost to simulate uncertainty in species assignment to OTUs, mimicking the current technological limitations of 16S rRNA sequencing; see [3]. Thus the deepest taxon level used in the fit is the genus level. We introduce class-, order-, family-, genus-, and species-wise covariation terms, sampled from normal distributions \(N(0, \sigma^2)\) with level-specific variances, where \(N(0, \sigma^2)\) denotes a normal distribution with mean 0 and variance \(\sigma^2\). The predictors used are then defined as the sum of these covariation terms and independent noise drawn from a 4096-dimensional multivariate normal distribution with mean vector 0 and identity covariance matrix.

5.1 Tuning parameter selection
To select the tuning parameter for each pair, we replicate 100 data sets , . We select the tuning parameter minimizing across all models the MSPE (mean squared prediction error) for an independent 10,000 data point validation set generated from the same distribution. We also fit SCAD models using the same datasets, using the same validation set to select the appropriate tuning parameter.
5.2 Performance
To evaluate performance of our models, we consider four measures for 100 replicates: SSE (sum of squared error), MSPE, ‘recall’, and ‘precision’. Recall is defined as the proportion of covariates correctly selected relative to true parameters, tp/(tp+fn), and precision is defined as the proportion of covariates correctly selected relative to total covariates selected, tp/(tp+fp), where tp is true positive, fn is false negative and fp is false positive.
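The two selection measures can be written directly from these definitions; the set representation of the selected and true supports is our own.

```python
# Recall = tp/(tp+fn), precision = tp/(tp+fp), computed from the
# selected covariate indices and the true support.

def recall_precision(selected, truth):
    tp = len(selected & truth)   # true positives
    fn = len(truth - selected)   # false negatives
    fp = len(selected - truth)   # false positives
    return tp / (tp + fn), tp / (tp + fp)
```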
We compare the performance of the LASSO to SCAD. We also consider OLS (ordinary least squares) for the oracle model, where the exact support is known. MSPE is evaluated using a new 10,000 data point validation set generated independently of the tuning validation set.

5.3 Results: Tuning
Figure 6 presents recall and precision against the tuning parameters. Also displayed is the MSPE curve, scaled to the unit interval. Note that the selected parameters lie in the middle of a relatively flat MSPE region: perturbations of the tuning parameter would not overly affect predictive performance. The same holds with respect to SSE (not shown). The shapes of the MSPE curves agree in large measure, aside from the null models, which have lower SSE than the sparsest estimators. Recall and precision are at odds; for low sample sizes one must be sacrificed against the other. The selection favours higher recall over precision; the coefficients for false positives may be very small and hence affect prediction in a minor way, but false negatives are a complete loss of a structural signal. There is an interesting dip in recall for sample sizes in the region preceding the low, stable MSPE region, where the estimators overfit the data. The clear separation of these two regions reflects well on the stability of the LASSO.

5.4 Results: Performance
The results for SSE and MSPE are presented in Table 5. SCAD struggles with the covariance structure, performing poorly in estimation. Remarkably, it nevertheless appears to perform well in MSPE. While SCAD struggles to recall the correct OTUs due to high correlation within species, to its credit it is able to select some related taxa, where the covariance structure leads to similar predictive performance but more flexibility in fitting the model. The LASSO exhibits none of the SCAD's difficulties: it quickly converges to agreement with the oracle estimator in both estimation and prediction.
The LASSO quickly approaches near-perfect recall as sample size increases, whereas SCAD stalls at 25%. The LASSO consistently improves in precision, dropping false positives. SCAD, after an initial improvement in precision, drops as it selects incorrect but related OTUs. When we present the LASSO and SCAD estimates with a validation set consisting of uncorrelated covariates, there is negligible change in LASSO performance, while SCAD's predictive performance matches its poor estimation performance.
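For reference, the SCAD penalty against which the comparison is made has the piecewise form of Fan and Li; the shape parameter a = 3.7 is the conventional default and an assumption here.

```python
# SCAD penalty: L1 near the origin, a quadratic blend in the middle,
# and constant beyond a*lam, so large coefficients are not shrunk.

def scad(beta, lam, a=3.7):
    b = abs(beta)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return (2 * a * lam * b - b * b - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2
```

The three branches meet continuously at b = lam and b = a*lam, which is what distinguishes SCAD's folded-concave shape from the LASSO's linear penalty.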
5.5 Additional simulations
We comment on some additional simulations. In the low-dimensional regime with a simple sample-wise covariation structure, SCAD performs moderately better than the LASSO, with the adaptive LASSO trailing behind. This is consistent across sample size and noise level. For alternative scenarios, the LASSO performs well, although it experiences some difficulty with precision for . This is qualitatively different from the issue with SCAD in the previous section, as recall is nearly perfect. The false positives generally correspond to relatively small coefficients, so the difficulty appears to result from early exit from the fitting algorithm.
SSE  MSPE  

n  OLS  LASSO  SCAD  OLS  LASSO  SCAD  
50  88.10(46.2)  167.00(62.50)  621.00(230.00)  3.20(1.21)  14.80(17.2)  167.00(162.00) 
100  19.80(6.05)  37.70(12.00)  384.00(5.28)  1.47(0.14)  2.43(0.48)  9.24(0.56) 
150  11.10(3.69)  15.90(6.14)  383.00(4.47)  1.25(0.07)  1.49(0.18)  8.66(0.38) 
200  8.05(2.48)  10.70(4.29)  379.00(4.35)  1.18(0.05)  1.29(0.10)  8.40(0.24) 
250  6.25(1.77)  7.54(2.45)  380.00(3.36)  1.13(0.03)  1.19(0.06)  8.27(0.20) 
Recall  Precision  

n  LASSO  SCAD  LASSO  SCAD  
50  0.64(0.11)  0.18(0.07)  0.52(0.12)  0.50(0.34) 
100  0.92(0.04)  0.25(0.00)  0.71(0.09)  0.93(0.12) 
150  0.98(0.02)  0.25(0.00)  0.86(0.07)  0.75(0.16) 
200  0.99(0.01)  0.25(0.00)  0.92(0.06)  0.54(0.11) 
250  0.99(0.01)  0.25(0.00)  0.95(0.04)  0.56(0.11) 
6 Theoretical Results
In this section, we consider the general objective function,
where is the known but arbitrary log-likelihood, the remainder is the penalization, and is the norm. We will also make use of the notation \('\) for matrix transpose, \(O(\cdot)\) and \(O_p(\cdot)\) to mean ‘big oh’ and ‘big oh in probability’, respectively, \(o(\cdot)\) and \(o_p(\cdot)\) to mean ‘little oh’ and ‘little oh in probability’, respectively, \(\asymp\) to mean the ratio of two sequences converges to a positive constant, \(\xrightarrow{d}\) to mean convergence in distribution, and \(\wedge\) to mean minimum. All proofs are provided in Appendix A.

The first result allows us to consider a single penalty parameter, that is, to set all the level penalties equal to a common value.
Lemma 6.1 (Equivalence of optimization)
Let , be fixed. Then is a local maximizer of if and only if is a local maximizer of and hence .
The proof is similar to that of Lemma 1 in [34], where least squares with regularization is considered.
In the sequel we assume that for all . This next result provides a relationship between the individual effects parameters and the group coefficients. Intuitively, the mass distributes itself geometrically to minimize the penalty terms.
Lemma 6.2 (Mass Equilibrium)
Let . Then is a global maximizer of over if and only if .
This conservation of mass result immediately leads to the following.
Corollary 6.3
The relationship from Lemma 6.2 identifies the unique maximizer of over .
Thus is an order polynomial in terms of .
Definition 6.4
Let be the map where is the unique optimizer of . We refer to as the partial inverse.
Corollary 6.5
Suppose the taxonomy has two levels, . Then the partial inverse is characterized by: if then and , if then and .
6.1 LAN conditions and the Oracle
To ensure local asymptotic normality of the MLE for diverging number of parameters, we adopt the regularity conditions from [12] as in [34]:

For all , the observations , , are independently and identically distributed according to the density , where has common support and the model is identifiable. Further,

The Fisher information matrix is positive definite with bounds
where is the point spectrum of a matrix ,

There exists an open subset , the number of parameters, containing the true parameter point such that for almost all , is defined for all . Further, there exist functions such that
for all and
We call conditions (A1)-(A3) the LAN conditions and note that they are not excessively restrictive and fall within the usual framework for this type of analysis.
Consider the parameters where grow with , and their partial inverse . Let if , otherwise. Then assuming the relationship from Lemma 6.2 we have the following equivalent expression,
(6.1) 
For convenience we will write for the projection of onto .
We have the following consistency results for the LASSO.
Theorem 6.6 (Consistency)
Assume that the distribution satisfies the LAN conditions. If and , then there exists a consistent local maximizer of , where .
A simple choice is available. We find this provides us with taxon selection consistency.
Define by if and 1 otherwise. Let .
Definition 6.7
Let be an estimator for . Then is said to be consistent in group selection if as . Moreover, if for all , the estimator is model selection consistent.
Consider the true parameter (estimator) in . Suppose and define to be the restriction of to the set .
Definition 6.8
Let be a model selection consistent estimator of Suppose satisfies
where is an matrix such that , a positive semidefinite matrix, and is the Fisher information evaluated at , as . Then is said to have the oracle property.
We have the following.
Theorem 6.9 (Taxon selection consistency)
Assume that the distribution satisfies the LAN conditions. If and , then there exists a consistent local maximizer of so that , the group sparsity property.
For the case of two taxon levels, the above result is Theorem 2 of [34]. Using an adaptive modification of their loss function analogous to [35], [34] obtains the full oracle property. We obtain the oracle property in an alternative manner, by a simple modification of the taxonomy of the variables. The full proofs are quite involved and are relegated to Appendix A. Since we have now shown that consistent taxon selection holds for an arbitrary number of taxon levels, we have the full oracle property in the following result using a simple modification of our taxonomy. This is possible as we are able to incorporate an arbitrary number of taxon levels.

Theorem 6.10 (Oracle Property)
In addition to the LAN conditions, suppose that the taxonomy contains an additional taxon level identical to the singleton level and . Then is a consistent local maximizer of satisfying

the sparsity property as ,

asymptotic normality
where is an matrix such that , a positive semidefinite matrix, and is the Fisher information matrix evaluated at , as .
A simple consequence of Theorem 6.10 is that all estimators obtained with an penalty, , , have the oracle property.
Corollary 6.11
Consider the estimator . Then has the oracle property.
This result follows immediately if we consider a taxonomy in which all taxa are singletons.
Remark 6.12
Alternatively, we can obtain the oracle property from the following modification of the objective function,
where we have replaced the LASSO penalty on by a Bridge penalty with . The penalty yields an estimator equivalent to that obtained in Theorem 6.10.
Appendix A Appendix: Proofs
In this appendix we provide all proofs.
Proof of Lemma 6.1. Clearly
(A.1) 
Let be a local maximizer of . Thus there exists a such that if satisfies , then . By the identity (A.1),
As this holds for in a neighbourhood of , is a local maximizer of .
The converse is similar.
From here on, we assume , . Before proving Lemma 6.2, we recall the Karush-Kuhn-Tucker (KKT) conditions from convex analysis, here taken from [18].
Definition A.1
Let be a convex subset of . Let be convex functions on for and affine functions on for . Consider the following problem,
We call (P) an ordinary convex program.
Lemma A.2 (KKT Conditions [18])
Let (P) be an ordinary convex program. Let . In order for to be a KKT vector for (P) and an optimal solution to (P), it is necessary and sufficient that be a saddlepoint of the Lagrangian of (P). Moreover, this condition holds if and only if and the components of satisfy

, , and ;

;

, the subgradient of the Lagrangian at .
Proof of Lemma 6.2. Note that the log-likelihood component of the loss function depends on through , so that it remains constant over . Without loss of generality, assume , since if a component is negative, we can multiply the variable in question by −1, and if it is zero we can exclude it from the current analysis, as it remains fixed. We further assume , since if it is zero, then again, it is fixed.
We have the following optimization program,
Let , , and . Then is equivalent to the following ordinary convex program,
where we have omitted since it is trivially satisfied.
Consider now the Lagrangian of ,
We remark that KKT conditions (i) and (ii) are immediately satisfied, (i) trivially and (ii) by construction. We need only consider (iii). By the Karush-Kuhn-Tucker theorem, is an optimal solution of if and only if , with the gradient taken with respect to , since the subgradient is unique when the function is differentiable.