# High-dimensional Interactions Detection with Sparse Principal Hessian Matrix

In statistical methods, interactions are the contributions from the products of the predictor variables to the response variable. In high-dimensional problems, detecting interactions is challenging due to the combinatorial complexity in conjunction with limited data information. We consider detecting interactions by exploring their connections with the principal Hessian matrix. To handle the challenging high dimensionality, a one-step synthetic approach is proposed to estimate the principal Hessian matrix by a penalized M-estimator. An alternating direction method of multipliers (ADMM) algorithm is proposed to efficiently solve the encountered regularized optimizations. Based on the sparse estimator, we propose a new method to detect the interactions by identifying its nonzero components. Our method directly targets the interactions, and it requires no structural assumption on the hierarchy between the interaction effects and those from the original predictor variables. We show that our estimator is theoretically valid, computationally efficient, and practically useful for detecting interactions in a broad spectrum of scenarios.

## 1 Introduction

The interaction effect is an important consideration in regression problems that are commonly encountered in machine learning. It refers to a general situation in which it is not adequate to build a model with the original explanatory variables alone in a simple additive way – referred to as the main effects. At the same time, it could be more effective to build a model incorporating constructed variables as the products between pairs of variables – referred to as the interactions between the variables; see, for example, the monograph of Jaccard and Turrisi (2003) for an overview. Since the interactions are constructed from the predictor variables, it is clear that conceptually they can be further expanded in a hierarchical manner. That is, the construction can continue searching for higher-order, multi-way interactions that can be defined based on all existing variables, including those constructed from the products.

Recently, handling high-dimensional regression problems has received increasing attention. For an overview of the current state of knowledge, we refer to the monographs of Hastie et al. (2015) and Bühlmann and van de Geer (2011), the review by Fan and Lv (2010), and the references therein. Conceptually speaking, existing penalized regression methods can be applied by incorporating the constructed interactions as new variables. However, because of the aforementioned interactive and hierarchical nature, the complexity of the problem scales up very quickly, both computationally for practical implementations and theoretically for analyzing the properties of the estimators. For example, in a typical situation with $p$ predictor variables, the number of possible two-way interactions is $p(p+1)/2$, growing at a polynomial rate of $p$, where $p$ itself is typically assumed to grow at some exponential rate of the sample size $n$. Consequently, on one hand, with the total number of predictor variables including interactions at the order of $p^2$, the computational complexity increases substantially for the statistical methods. On the other hand, due to the increased data dimensionality, the relative signal strength from the predictor variables becomes weaker.

Recently, there have been active developments on methods for detecting and estimating the interactions in high-dimensional regression problems. To manage the complexity of the problem, a main strategy is to assume some hierarchical structure between the main effects from the original predictor variables and the interactions. That is, in the absence of the original variables, the interactions alone do not play a role in the contributions to the model. This is referred to as the marginality principle (Nelder, 1977). In a class of existing methods guided by this principle, the interactions are detected and estimated with some two-step or iterative two-step procedures that first select the main effects with some penalized regression method, and then perform a second run of selection by incorporating all potential interactions from the selected variables. For examples of methods belonging to this class, we refer to Hao and Zhang (2014), Hao et al. (2016), Yuan et al. (2009), Zhao et al. (2009), Choi et al. (2010), Bien et al. (2013), Shah (2016), and Haris et al. (2016). More recently, She et al. (2018) investigate the problem with a group penalty approach incorporating the structural hierarchy. Requiring no hierarchical structure and in a setting with multiple responses, Kong et al. (2017) propose to search for interactions with a sure screening procedure based on the squared distance correlation measures. Their approach is computationally efficient, and only requires ranking with pairwise quantities. From the inverse modeling perspective, Jiang and Liu (2014) consider testing procedures with the sliced inverse regression approach (Li, 1991) for variable selection with the first- and second-order effects. For classification problems and general index models, Li and Liu (2017) study information criteria based approaches for variable and interaction selections. Recently, Fan et al. (2016) propose a method to detect the signals from the interactions by examining the pairwise covariances between the squared response and each predictor variable. Upon evaluating the covariances, Fan et al. (2016) propose a new two-step procedure that first conducts sure screening for the interactions, and then conducts a second step of penalized regression without requiring a hierarchical structure.

For those two-step procedures, detecting interactions in the second step upon selecting some variables such as the main effects has a clear limitation in high-dimensional scenarios. Particularly for the ones relying on some hierarchical structure, if the signal strength of the main effects relative to the noise is weak, then the penalized regression in the first step may miss the main effects. Subsequently, it becomes impossible to detect and estimate the interactions in the second step. This is more likely to happen for high-dimensional problems, when the effect from the noise aggregation becomes more substantial, causing relatively low signal strength.

In this paper, we propose a new method to detect the interaction effects in regression problems with a one-step penalized M-estimation. Our method does not assume a hierarchical structure, and it also does not require a screening step. Our method is developed by utilizing the so-called principal Hessian matrix (Li, 1992), which is defined as the expectation of the second order derivative of the mean function. To solve the challenging problem, we assume that the principal Hessian matrix is sparse, reflecting the reality that given the limited data information, only few meaningful interactions can be supported with good estimation accuracy. A sparsity-encouraging M-estimator is proposed by minimizing a dedicatedly crafted penalized squared loss function, and then an alternating direction method of multipliers (ADMM) algorithm is proposed to solve the optimizations efficiently. We show with simulations that the proposed method outperforms existing methods, and our theory confirms that the estimator works satisfactorily, allowing the dimensionality of the predictor variables to grow exponentially with the sample size.

The rest of this paper is organized as follows. The proposed method using the principal Hessian matrix for detecting interactions is outlined in Section 2, followed by the alternating direction method of multipliers (ADMM) for solving the optimizations, elaborated in Section 3. Numerical examples, including simulations and a real data example, are presented in Section 4 to demonstrate the promising performance of the method. Some theoretical analysis is given in Section 5, and the proofs are outlined in the Appendix.

## 2 Methods

### 2.1 Principal Hessian Matrix

The key device in our method is the principal Hessian matrix. In Li (1992), the principal Hessian matrix is proposed as a convenient device for investigating dimension reduction and data visualization; see also Cook (1998).

We consider the regression problem with the response variable $Y$ and predictor variable $X = (X_1,\dots,X_p)^{\mathrm T}$ that is taken to be a random vector. Let $m(x) = E(Y \mid X = x)$ be the conditional mean function; then a model for $Y$ can be written as $Y = m(X) + \varepsilon$ with $\varepsilon$ being a zero mean random variable independent of $X$. Then the principal Hessian matrix introduced in Li (1992) for the mean function $m(\cdot)$ is defined as

$$ \Psi = (\psi_{ij}) = E\left\{\frac{\partial^2 m(X)}{\partial X\,\partial X^{\mathrm T}}\right\}, $$

where the expectation is taken with respect to the distribution of $X$. For ease of presentation and without loss of generality, we consider that $X$ is centered so that all of its components are of zero mean. For demonstrating the connection of the principal Hessian matrix with the interactions, let us consider the following working model with both the main and interaction effects:

$$ Y = m(X) + \varepsilon = \sum_{i=1}^p\theta_iX_i + \sum_{i=1}^p\sum_{j=i}^p\beta_{ij}X_iX_j + \varepsilon, \tag{1} $$

where $\varepsilon$ has zero mean and is independent of $X$. Then, writing $\Psi = (\psi_{ij})$, we note from (1) that

$$ \psi_{ij} = \psi_{ji} = E\left\{\frac{\partial^2 m(X)}{\partial X_i\,\partial X_j}\right\} = \beta_{ij}. \tag{2} $$

That is, since $\Psi$ is the expectation of the second order derivative of $m(X)$, it automatically filters out the linear contributions from $X$. This is the intuition behind why the principal Hessian matrix is a very informative device for detecting interactions.
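To see this filtering effect concretely, here is a small numerical sketch (our own illustration, not from the paper; the function `m` and its coefficients are made up) recovering a cross-partial by central finite differences: the linear terms drop out and only the interaction coefficient survives.

```python
import numpy as np

def m(x):
    # toy mean function: main effects plus one interaction, beta_12 = 2
    theta = np.array([3.0, -1.0, 0.5])
    return theta @ x + 2.0 * x[0] * x[1]

def cross_partial(f, x, i, j, h=1e-4):
    """Second mixed partial d^2 f / (dx_i dx_j) by central differences."""
    e_i, e_j = np.eye(3)[i] * h, np.eye(3)[j] * h
    return (f(x + e_i + e_j) - f(x + e_i - e_j)
            - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)

x0 = np.array([0.7, -0.3, 1.2])
print(cross_partial(m, x0, 0, 1))   # ~2.0: the interaction coefficient
print(cross_partial(m, x0, 0, 2))   # ~0.0: no (1,3) interaction; linear terms vanish
```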

By an application of Stein's Lemma (Stein, 1981), Li (1992) shows that if $X \sim N(0, \Sigma_X)$, then

$$ \Psi = \Sigma_X^{-1}\Sigma_{YXX}\Sigma_X^{-1}, \tag{3} $$

where $\Sigma_{YXX} = E\{(Y - EY)XX^{\mathrm T}\}$, recalling that $X$ is centered. The connection between (3) and (2) based on model (1) suggests that (3) is an ideal device for interaction detection. The identity (3) is the foundation of our work. Hereinafter, the principal Hessian matrix we refer to is (3). The setting is reasonable for solving the challenging problem of large-scale interaction detection.

In the ideal case that $X$ follows a normal distribution, (3) holds, and under model (1) we have $\Psi = (\beta_{ij})$ provided that the third moments of $X$ vanish. The latter condition is satisfied by the normal and elliptically symmetric distributions. Moreover, if there is no linear contribution from $X$, or the main effects have been estimated and removed from $Y$, such a moment condition is not required for detecting the interactions. That is, consistent estimation of the main effects helps alleviate the conditions on the distribution of $X$.
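As a low-dimensional illustration of the identity (3), the following sketch (our own toy setup, not the paper's estimator: $Y = X_1X_2 + \varepsilon$ with standard normal $X$) forms the plug-in estimate $S^{-1}QS^{-1}$ from simulated data; the $(1,2)$ entry stands out while all other entries are near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 4
X = rng.standard_normal((n, p))                         # X ~ N(0, I_p)
Y = X[:, 0] * X[:, 1] + 0.5 * rng.standard_normal(n)    # one interaction: beta_12 = 1

Xc = X - X.mean(axis=0)
Yc = Y - Y.mean()
S = Xc.T @ Xc / n                    # sample covariance of X
Q = (Xc * Yc[:, None]).T @ Xc / n    # sample version of E{(Y - EY) X X^T}
S_inv = np.linalg.inv(S)
Psi_hat = S_inv @ Q @ S_inv          # plug-in estimate of (3)

print(np.round(Psi_hat, 2))          # ~1 at entries (1,2) and (2,1), ~0 elsewhere
```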

Rigorously, we have the following propositions connecting $\Psi$ in (3) with the interaction effects in the regression model (1).

###### Proposition 1.

If $X \sim N(0, \Sigma_X)$, then under model (1), $\psi_{ij} \neq 0$ if and only if $\beta_{ij} \neq 0$.

Furthermore, the normal distribution assumed in Proposition 1 can be more broadly generalized with the following conditions:

###### Condition 1.

$X$ satisfies:

1. $E(X_iX_jX_k) = 0$ for any $1 \le i, j, k \le p$;

2. Let $\sigma_{ij,kl} = \mathrm{cov}(X_iX_j, X_kX_l)$. It holds that $\sigma_{ij,kl} \neq 0$ if and only if $(i,j) = (k,l)$ or $(i,j) = (l,k)$.

###### Proposition 2.

Under Condition 1 and model (1), $\psi_{ij} \neq 0$ if and only if $\beta_{ij} \neq 0$.

Condition 1 is clearly valid for the normal distribution, and it remains true more broadly. For example, for the elliptically symmetric distributions whose characteristic functions take the form $\exp(\mathrm{i}t^{\mathrm T}\mu)\phi(t^{\mathrm T}\Sigma t)$, Condition 1 can be verified. Other practically appealing situations for Condition 1 include when $X$ contains independent components, and/or when only a few $\beta_{ij}$ in (1) are nonzero. We note that the essential role of C1 is to ensure that the linear contribution from $X$ in $m(X)$ does not interfere with that from the interactions. Condition C1 is not required if the linear contribution is estimated and removed before performing interaction detection. Existing high-dimensional penalized regression methods, e.g., those in Bühlmann and van de Geer (2011), Fan and Lv (2010), and Hastie et al. (2015), can be applied for such a purpose. Additionally, C2 of Condition 1 concerns the correlations between the products $X_iX_j$ and $X_kX_l$. It is not restrictive by observing that, for normally distributed $X$, $\mathrm{cov}(X_iX_j, X_kX_l) = \Sigma_{X,ik}\Sigma_{X,jl} + \Sigma_{X,il}\Sigma_{X,jk}$, implying zero correlations between $X_iX_j$ and $X_kX_l$ for $\{i,j\}\neq\{k,l\}$ when the components of $X$ are uncorrelated.

We also note that, by its definition, $\Psi$ is suitable for studying interactions between numerical predictor variables. For categorical variables, dummy variable coding is needed, resulting in sub-groups of the observations, so that our method and other penalized regression analyses can be equally applied within the resulting groups for detecting the interactions between numerical and categorical variables.

### 2.2 Sparse Estimation of Ψ

Our study intends to explore an effective estimator for $\Psi$ with high-dimensional data, and then to detect the interactions. In high-dimensional regression problems, sparsity is a useful notion for statistical inference; see, among others, Hastie et al. (2015). In the context of the interactions in (1), it means that the total number of contributing pairs of the predictor variables is small, so that $\Psi$ is sparse. Thus, exploring the interactions becomes a sparse matrix estimation problem.

By examining $\Psi$ in (3), we observe that a sparse estimation of this matrix is challenging. First, estimating $\Sigma_X^{-1}$ is difficult when $p$ is large. Though existing methods can be applied, e.g., those reviewed in Fan et al. (2016), there are two major concerns. The first is whether or not it is necessary to impose assumptions such as $\Sigma_X^{-1}$ being sparse. The second is that if one constructs an estimator of $\Psi$ from sparse estimators of $\Sigma_X^{-1}$ and $\Sigma_{YXX}$, the implied sparse components may not be the desirable ones in $\Psi$. As an example, we consider $p = 3$ and $Y = X_1X_2 + \varepsilon$, with $X \sim N(0, \Sigma_X)$, $(\Sigma_X)_{ij} = \rho^{|i-j|}$, and $\varepsilon$ a zero mean noise independent of $X$. Then we have

$$ \Sigma_X^{-1} = \frac{1}{1-\rho^2}\begin{pmatrix} 1 & -\rho & 0 \\ -\rho & 1+\rho^2 & -\rho \\ 0 & -\rho & 1 \end{pmatrix} \quad\text{and}\quad \Sigma_{YXX} = \begin{pmatrix} 2\rho & 1+\rho^2 & \rho+\rho^3 \\ 1+\rho^2 & 2\rho & 2\rho^2 \\ \rho+\rho^3 & 2\rho^2 & 2\rho^3 \end{pmatrix}. $$

Then, some algebra shows that in this case the principal Hessian matrix

$$ \Psi = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. $$

That is, $\Psi$ precisely reflects the signal from the interactions, despite that both $\Sigma_X^{-1}$ and $\Sigma_{YXX}$ can be dense matrices.
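The example can be verified numerically; a minimal check, assuming (consistently with the displayed $\Sigma_X^{-1}$) that $\Sigma_X$ is the Toeplitz matrix with entries $\rho^{|i-j|}$, here with $\rho = 0.5$:

```python
import numpy as np

rho = 0.5
Sigma_X = np.array([[1.0,    rho,    rho**2],
                    [rho,    1.0,    rho],
                    [rho**2, rho,    1.0]])          # Toeplitz covariance
Sigma_YXX = np.array([[2*rho,        1 + rho**2, rho + rho**3],
                      [1 + rho**2,   2*rho,      2*rho**2],
                      [rho + rho**3, 2*rho**2,   2*rho**3]])

S_inv = np.linalg.inv(Sigma_X)
Psi = S_inv @ Sigma_YXX @ S_inv
print(np.round(Psi, 6))   # ≈ [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```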

In our investigation, we consider the case that $\Psi$ is sparse, with no explicit requirements on $\Sigma_X$, $\Sigma_X^{-1}$, and $\Sigma_{YXX}$. All our technical development is based on assumptions directly on $\Psi$. Specifically, we aim at a one-step sparse estimator of $\Psi$. The key development is the following. First, we observe from (3) that $\Sigma_X\Psi\Sigma_X = \Sigma_{YXX}$. This motivates us to consider the weighted quadratic loss

$$ \ell(\Psi) = \mathrm{tr}\left\{\Sigma_X^{-1}(\Sigma_X\Psi\Sigma_X - \Sigma_{YXX})^{\mathrm T}\Sigma_X^{-1}(\Sigma_X\Psi\Sigma_X - \Sigma_{YXX})\right\}, \tag{4} $$

which is minimized at $\Psi$ satisfying (3). With some elementary algebra, it can be shown that $\ell(\Psi)$ depends on $\Psi$ only through

$$ \mathrm{tr}(\Psi^{\mathrm T}\Sigma_X\Psi\Sigma_X) - 2\,\mathrm{tr}(\Psi\Sigma_{YXX}). $$

We denote by $\{(Y_i, X_i)\}_{i=1}^n$ the observed data. Then, we propose to replace $\Sigma_X$ and $\Sigma_{YXX}$ by their sample counterparts $S = n^{-1}\sum_{i=1}^n(X_i - \bar X)(X_i - \bar X)^{\mathrm T}$ and $Q = n^{-1}\sum_{i=1}^n(Y_i - \bar Y)(X_i - \bar X)(X_i - \bar X)^{\mathrm T}$, with $\bar X = n^{-1}\sum_{i=1}^n X_i$ and $\bar Y = n^{-1}\sum_{i=1}^n Y_i$, and apply a sparsity-inducing penalty function. Concretely, we propose the M-estimator:

$$ \hat\Psi = \operatorname*{arg\,min}_{\Psi}\left\{\frac12\mathrm{tr}(\Psi^{\mathrm T}S\Psi S) - \mathrm{tr}(\Psi Q) + \lambda\|\Psi\|_1\right\}, \tag{5} $$

where $\|\Psi\|_1 = \sum_{i,j}|\psi_{ij}|$ is the elementwise $\ell_1$ norm of the matrix $\Psi$, and $\lambda > 0$ is a tuning parameter. With $\hat\Psi = (\hat\psi_{ij})$, we propose to detect the interactions as

$$ \hat{\mathcal M} = \{(i,j): \hat\psi_{ij} \neq 0,\ i \le j\}. \tag{6} $$

We note that the estimator $\hat\Psi$ may not be symmetric. For practical applications, we suggest symmetrizing it by $(\hat\Psi + \hat\Psi^{\mathrm T})/2$. Because only linear operators are involved in (5), with no inverse of a large matrix, evaluating the loss is computationally efficient and scalable. In the next section, we design an efficient algorithm to compute the estimator.
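A small sketch of this point (helper names are our own): the loss in (5) is evaluated with plain matrix products, and its quadratic term agrees with the vectorized Kronecker form $\mathrm{vec}(\Psi)^{\mathrm T}(S\otimes S)\mathrm{vec}(\Psi)$ used later in the algorithm.

```python
import numpy as np

def objective(Psi, S, Q, lam):
    """Penalized loss in (5): matrix products only, no large-matrix inverse."""
    return 0.5 * np.trace(Psi.T @ S @ Psi @ S) - np.trace(Psi @ Q) + lam * np.abs(Psi).sum()

# Check tr(Psi' S Psi S) = vec(Psi)' (S kron S) vec(Psi) for symmetric S.
rng = np.random.default_rng(1)
p = 5
A = rng.standard_normal((p, p))
S = A @ A.T / p                      # symmetric positive semidefinite
Psi = rng.standard_normal((p, p))
v = Psi.flatten(order="F")           # column-stacking vec(Psi)
lhs = np.trace(Psi.T @ S @ Psi @ S)
rhs = v @ np.kron(S, S) @ v
print(abs(lhs - rhs))                # ~0
```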

## 3 Algorithm

### 3.1 Alternating Direction Method of Multipliers

We first observe that the Hessian of problem (5) is $S\otimes S$, which is positive semidefinite, where $\otimes$ denotes the Kronecker product. The loss function in (5) is thus convex. We propose to solve (5) by the alternating direction method of multipliers (ADMM). Our algorithm is inspired by the algorithm developed in Jiang et al. (2016), in which a large matrix for classification with quadratic discriminant analysis is directly estimated in the same spirit as (5).

ADMM was first proposed in Glowinski and Marroco (1975), and it is essentially a splitting version of the augmented Lagrangian method for solving optimization problems with a separable objective under a linear constraint:

$$ \min_{\beta,\eta\in\mathbb R^d}\ g_1(\beta) + g_2(\eta), \quad \text{subject to } A\beta + B\eta = b, $$

where $A$ and $B$ are given matrices, $b$ is a given vector, and $g_1$ and $g_2$ are continuous convex functions. Recently, ADMM has found wide applications in different fields such as statistics, image processing, and machine learning. This is due to the algorithm's easy implementation and practical efficiency; see Boyd et al. (2011) and Eckstein and Yao (2012) for some recent reviews.

To apply the ADMM algorithm to compute our estimator, we rewrite problem (5) in the following equivalent form to facilitate the algorithm design:

$$ \min_{\Psi,\Phi}\ \frac12\mathrm{tr}(\Psi^{\mathrm T}S\Psi S) - \mathrm{tr}(\Psi Q) + \lambda\|\Phi\|_1, \quad \text{subject to } \Psi - \Phi = 0. \tag{7} $$

The augmented Lagrangian function associated with the above problem is

$$ L_\rho(\Psi,\Phi,\Lambda) = \frac12\mathrm{tr}(\Psi^{\mathrm T}S\Psi S) - \mathrm{tr}(\Psi Q) + \lambda\|\Phi\|_1 + \langle\Lambda,\ \Psi-\Phi\rangle + \frac{\rho}{2}\|\Psi-\Phi\|_{\mathrm F}^2, $$

where $\Lambda$ is the dual variable associated with the equality constraint, and $\rho > 0$ is a penalty parameter. The ADMM algorithm runs iteratively; at the $t$-th iteration, we update the solutions by

$$ \begin{aligned} \Psi^{t+1} &= \operatorname*{arg\,min}_{\Psi}\ L_\rho(\Psi,\Phi^t,\Lambda^t), \\ \Phi^{t+1} &= \operatorname*{arg\,min}_{\Phi}\ L_\rho(\Psi^{t+1},\Phi,\Lambda^t), \\ \Lambda^{t+1} &= \Lambda^t + \rho(\Psi^{t+1}-\Phi^{t+1}). \end{aligned} \tag{8} $$

Note that since our problem is convex and the objective value is lower bounded, the convergence of the algorithm has been well established in the existing literature; see Fang et al. (2015) and Sun et al. (2015) for examples. We further point out that in our simulation studies in Section 4, the algorithm performs well empirically.

### 3.2 Solving the Subproblems

The key to implementing the ADMM algorithm developed above efficiently is to solve the $\Psi$- and $\Phi$-subproblems in (8) efficiently. In this subsection, we derive efficient solutions for the two subproblems. For the $\Psi$-subproblem, we have that

$$ \begin{aligned} \Psi^{t+1} &= \operatorname*{arg\,min}_{\Psi}\ \frac12\mathrm{tr}(\Psi^{\mathrm T}S\Psi S) - \mathrm{tr}(\Psi Q) + \langle\Lambda^t,\Psi\rangle + \frac{\rho}{2}\|\Psi-\Phi^t\|_{\mathrm F}^2 \\ &= \operatorname*{arg\,min}_{\Psi}\ \frac12\mathrm{vec}(\Psi)^{\mathrm T}(S\otimes S)\mathrm{vec}(\Psi) - \mathrm{vec}(Q)^{\mathrm T}\mathrm{vec}(\Psi) + \mathrm{vec}(\Lambda^t)^{\mathrm T}\mathrm{vec}(\Psi) \\ &\qquad + \frac{\rho}{2}\|\mathrm{vec}(\Psi)-\mathrm{vec}(\Phi^t)\|_2^2, \end{aligned} $$

where $\otimes$ denotes the Kronecker product. Following Fang et al. (2015), we add a proximal term, so that we let

$$ \begin{aligned} \Psi^{t+1} = \operatorname*{arg\,min}_{\Psi}\ &\frac12\mathrm{vec}(\Psi)^{\mathrm T}(S\otimes S)\mathrm{vec}(\Psi) - \mathrm{vec}(Q)^{\mathrm T}\mathrm{vec}(\Psi) + \mathrm{vec}(\Lambda^t)^{\mathrm T}\mathrm{vec}(\Psi) \\ &+ \frac{\rho}{2}\|\mathrm{vec}(\Psi)-\mathrm{vec}(\Phi^t)\|_2^2 + \frac12\|\mathrm{vec}(\Psi)-\mathrm{vec}(\Psi^t)\|_G^2, \end{aligned} $$

where $\|x\|_G^2 = x^{\mathrm T}Gx$ is the norm induced by a positive definite matrix $G$. Letting $G = \tau I_{p^2} - S\otimes S$, where $\tau$ is greater than the largest eigenvalue of $S\otimes S$, we have

$$ \Psi^{t+1} = \operatorname*{arg\,min}_{\Psi}\ \frac{\rho+\tau}{2}\|\mathrm{vec}(\Psi)\|_2^2 - \left\{\mathrm{vec}(Q) - \mathrm{vec}(\Lambda^t) + \rho\,\mathrm{vec}(\Phi^t) + \tau\,\mathrm{vec}(\Psi^t) - (S\otimes S)\mathrm{vec}(\Psi^t)\right\}^{\mathrm T}\mathrm{vec}(\Psi). $$

Thus, we have

$$ \mathrm{vec}(\Psi^{t+1}) = \left\{\mathrm{vec}(Q) - \mathrm{vec}(\Lambda^t) + \rho\,\mathrm{vec}(\Phi^t) + \tau\,\mathrm{vec}(\Psi^t) - (S\otimes S)\mathrm{vec}(\Psi^t)\right\}/(\rho+\tau). $$

We point out that the most computationally expensive step is to compute the Kronecker product $S\otimes S$, but this only needs to be done once at the beginning.

Next, considering the $\Phi$-subproblem, we have

$$ \Phi^{t+1} = \operatorname*{arg\,min}_{\Phi} L_\rho(\Psi^{t+1},\Phi,\Lambda^t) = \operatorname*{arg\,min}_{\Phi}\ \frac{\rho}{2}\mathrm{tr}(\Phi^{\mathrm T}\Phi) - \rho\,\mathrm{tr}(\Phi^{\mathrm T}\Psi^{t+1}) - \mathrm{tr}(\Phi^{\mathrm T}\Lambda^t) + \lambda\|\Phi\|_1. $$

It is not difficult to see that this step admits a closed-form solution by soft thresholding:

$$ \Phi^{t+1} = \mathrm{shrinkage}(\Psi^{t+1} + \rho^{-1}\Lambda^t,\ \rho^{-1}\lambda), $$

where $\mathrm{shrinkage}(\cdot,\cdot)$ is the elementwise shrinkage operator such that, for a matrix $A = (a_{ij})$ and $b > 0$, the $(i,j)$-th entry of $\mathrm{shrinkage}(A, b)$ is $\mathrm{sign}(a_{ij})\max(|a_{ij}| - b,\ 0)$.
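In code, the shrinkage operator is a one-line helper (written here to return exact zeros below the threshold):

```python
import numpy as np

def shrinkage(A, b):
    """Elementwise soft-thresholding: sign(a) * max(|a| - b, 0)."""
    return np.maximum(A - b, 0.0) - np.maximum(-A - b, 0.0)

print(shrinkage(np.array([3.0, -0.5, 0.2, -2.0]), 1.0))   # entries 2, 0, 0, -1
```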

For the stopping criterion, we look at the primal and dual residuals. The primal residual, which measures the feasibility for problem (7), is defined as

$$ \eta_P^t = \|\Psi^t - \Phi^t\|_{\mathrm F}. $$

Meanwhile, the dual residual measures the convergence of the algorithm, where we take

$$ \eta_D^t = \max\left\{\|\Psi^{t+1}-\Psi^t\|_{\mathrm F},\ \|\Phi^{t+1}-\Phi^t\|_{\mathrm F},\ \|\Lambda^{t+1}-\Lambda^t\|_{\mathrm F}\right\}. $$

In our implementation, we stop the algorithm when both the primal and dual residuals are small, namely when

$$ \max\{\eta_P^t,\ \eta_D^t\} \le 10^{-3}. $$

We summarize the pseudo-code in Algorithm 1. As we have derived closed-form solutions for the $\Psi$- and $\Phi$-subproblems in (8), which can be easily computed, Algorithm 1 can be implemented efficiently.
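A compact sketch of the algorithm (our own implementation under simplifying assumptions: `S` and `Q` are given symmetric matrices, and $(S\otimes S)\mathrm{vec}(\Psi^t)$ is evaluated as $\mathrm{vec}(S\Psi^t S)$ so the Kronecker product is never formed explicitly):

```python
import numpy as np

def shrinkage(A, b):
    # elementwise soft-thresholding
    return np.maximum(A - b, 0.0) - np.maximum(-A - b, 0.0)

def admm_sparse_hessian(S, Q, lam, rho=1.0, tol=1e-3, max_iter=5000):
    """Linearized ADMM for (5), using the closed-form Psi- and Phi-updates."""
    p = S.shape[0]
    tau = np.linalg.eigvalsh(S).max() ** 2 + 1e-6   # > largest eigenvalue of S kron S
    Psi = np.zeros((p, p)); Phi = np.zeros((p, p)); Lam = np.zeros((p, p))
    for _ in range(max_iter):
        # Psi-update: (S kron S) vec(Psi) computed as vec(S Psi S)
        Psi_new = (Q - Lam + rho * Phi + tau * Psi - S @ Psi @ S) / (rho + tau)
        # Phi-update by soft-thresholding, then the dual update
        Phi_new = shrinkage(Psi_new + Lam / rho, lam / rho)
        Lam_new = Lam + rho * (Psi_new - Phi_new)
        eta_p = np.linalg.norm(Psi_new - Phi_new)                  # primal residual
        eta_d = max(np.linalg.norm(Psi_new - Psi),                 # dual residual
                    np.linalg.norm(Phi_new - Phi),
                    np.linalg.norm(Lam_new - Lam))
        Psi, Phi, Lam = Psi_new, Phi_new, Lam_new
        if max(eta_p, eta_d) <= tol:
            break
    return (Phi + Phi.T) / 2    # symmetrized sparse iterate

# Tiny deterministic check: Q = S Psi* S for a sparse Psi*, tiny lambda
p = 6
S = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))   # Toeplitz
Psi_star = np.zeros((p, p)); Psi_star[0, 1] = Psi_star[1, 0] = 1.0
Q = S @ Psi_star @ S
Psi_hat = admm_sparse_hessian(S, Q, lam=1e-4, tol=1e-6)
print(np.abs(Psi_hat - Psi_star).max())   # small
```

With a nearly vanishing penalty, the minimizer of (5) recovers the matrix that generated `Q`, so the check above exercises both updates and the stopping rule.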

## 4 Numerical Studies

### 4.1 Synthetic Data

In this section, we conduct extensive numerical studies to demonstrate and validate the performance of our proposed method in practice. We first conduct the investigation using synthetic data. In our simulation setup, we fix the sample size $n$, and we consider different dimensions $p$, including 200 and 300. Meanwhile, we generate the design matrix by generating each sample independently from a $p$-dimensional Gaussian distribution $N(0, \Sigma_X)$, where the covariance matrix $\Sigma_X$ is either the identity matrix or a Toeplitz matrix, i.e., $(\Sigma_X)_{ij} = \rho^{|i-j|}$ for some $\rho$. We then generate the noises $\varepsilon_i$'s independently from a normal distribution $N(0, \sigma^2)$, and we consider different $\sigma$'s. To thoroughly compare the proposed method with other methods, we consider the following eight models:

1. ,

2. ,

3. ,

4. ,

5. ,

6. ,

7. ,

8. ,

In Model 1, we consider the case where only the main effects are present. This is a benchmarking case in the sense that there should be no false inclusion of the interactions for a valid detection method. In the next four models, we consider the cases where only interaction terms are present under different scenarios. With the interaction-only models, we intend to show the advantage of the proposed one-step procedure. Then, we consider two models where some hierarchical structures exist, Model 6 and Model 7, with different signal strengths. Finally, we consider an example with heteroscedasticity in Model 8, in which the conditional variance of the response variable is not homogeneous.

We choose the tuning parameter $\lambda$ using 10-fold cross-validation, and we point out that, after extensive numerical investigations, we find that our method is insensitive to the tuning parameter selection. We report the empirical true positive rate (TPR) and the false positive rate (FPR) of (6) for each data generating scheme after repeating each scheme 200 times. In particular, let $\mathcal I$ be the true set of interaction terms, and let $\hat{\mathcal I}$ be the set of interaction terms selected by the estimator as in (6). TPR and FPR are defined as

$$ \mathrm{TPR} = \frac{|\mathcal I \cap \hat{\mathcal I}|}{|\mathcal I|} \quad\text{and}\quad \mathrm{FPR} = \frac{|\hat{\mathcal I}\setminus\mathcal I|}{\binom{p}{2} + p - |\mathcal I|}, $$

respectively.
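These rates are straightforward to compute from the selected set; a small helper sketch (function names are ours):

```python
import numpy as np

def detected_interactions(Psi_hat, tol=0.0):
    """Upper-triangular support of the symmetrized estimate, as in (6)."""
    Psi_sym = (Psi_hat + Psi_hat.T) / 2
    p = Psi_sym.shape[0]
    return {(i, j) for i in range(p) for j in range(i, p) if abs(Psi_sym[i, j]) > tol}

def tpr_fpr(I_true, I_hat, p):
    n_pairs = p * (p - 1) // 2 + p          # all (i, j) with i <= j
    tpr = len(I_true & I_hat) / len(I_true)
    fpr = len(I_hat - I_true) / (n_pairs - len(I_true))
    return tpr, fpr

I_true = {(0, 1), (2, 3)}
I_hat = {(0, 1), (1, 4)}
print(tpr_fpr(I_true, I_hat, p=5))   # (0.5, 1/13)
```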

We compare our method with the interaction pursuit (IP) method (Fan et al., 2016), which is a two-step method with a screening procedure as the first step, and the RAMP method (Hao et al., 2018), which is based on a hierarchical structure. We report the results in Tables 1-4, each containing the results for two of the above models.

First, as seen in Table 1 for Model 1, our one-step method performs very well, with very little false inclusion when no interactions are present. The other competing methods also perform very well in the sense of little or no false inclusion. In all other models for detecting the contributing interactions, we see that the proposed method outperforms the other two methods by large margins in all settings except Model 6, where a hierarchical structure is present with good signal strength, which favors the RAMP method when the main effects are correctly detected first. Nevertheless, we see that in Model 7, though a hierarchical model is present, due to the low signal strength of the main effects, our method still outperforms the RAMP method substantially. This clearly demonstrates the advantage of our one-step estimation based interaction detection method.

We further apply the proposed method to analyze the GPL96 microarray dataset analyzed in McCall et al. (2010); Wu et al. (2013); Fang et al. (2017). This dataset contains samples from more than 2,000 biological contexts generated from Affymetrix Human 133A (GPL96) arrays. The dataset was preprocessed and normalized using frozen-RMA (McCall et al., 2010) to reduce batch effects. We use the breast tumor samples and the normal samples in the analysis. To improve the efficiency in our demonstration, we conduct some pre-screening. In particular, we use the plug-in estimator $S^{+}QS^{+}$ to estimate $\Psi$, where $S^{+}$ denotes the Moore-Penrose pseudo-inverse of $S$. We then screen out the variables for which the $\ell_2$-norms of the corresponding columns of the plug-in estimator are small, and keep the remaining genes for the analysis.
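The pre-screening step can be sketched as follows (a hedged illustration on simulated data, not the gene data; `n_keep` and the whole setup are our own choices):

```python
import numpy as np

def prescreen(X, Y, n_keep):
    """Keep variables whose columns of pinv(S) @ Q @ pinv(S) have large l2-norms."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean()
    S = Xc.T @ Xc / n
    Q = (Xc * Yc[:, None]).T @ Xc / n
    S_pinv = np.linalg.pinv(S)                  # Moore-Penrose pseudo-inverse
    Psi_plug = S_pinv @ Q @ S_pinv
    scores = np.linalg.norm(Psi_plug, axis=0)   # column l2-norms
    return np.sort(np.argsort(scores)[::-1][:n_keep])

rng = np.random.default_rng(3)
n, p = 500, 20
X = rng.standard_normal((n, p))
Y = X[:, 2] * X[:, 7] + 0.1 * rng.standard_normal(n)
kept = prescreen(X, Y, n_keep=5)
print(kept)   # the kept set contains the interacting columns 2 and 7
```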

We treat the disease status as the response, and randomly split the dataset into a training set and a testing set. Each training set contains 100 samples from the breast tumor group and 150 samples from the normal group. We repeat the random split 100 times. We first report the averaged testing errors and their standard errors in Table 5. It is seen that the proposed method achieves better testing errors than the other two methods. We further provide the five most selected interaction terms in Table 6. It is seen that BRCA1 and BRCA2 are frequently selected by our proposed method, including the interaction between these two genes, while they are less frequently selected by the other two methods. It is well known in the literature that BRCA1 and BRCA2 are of fundamental importance for breast tumors, as shown in King et al. (2003). This shows the promising applicability of our method in practice.

## 5 Some Theoretical Analysis

### 5.1 Non-asymptotic Results

We now perform some theoretical analysis of the estimator (5). Outlines of the proofs are provided in the Appendix of the paper. Denote by $\Psi^*$ the unknown truth of the principal Hessian matrix, by $U = \{(i,j): \psi^*_{ij} \neq 0\}$ the support of $\Psi^*$, and by $u = |U|$ the cardinality of $U$. Denote by $\|A\|_{\max} = \max_{i,j}|a_{ij}|$ the max-norm of a matrix $A = (a_{ij})$.

The following lemma establishes a non-asymptotic result on the estimation error of $\hat\Psi$.

###### Lemma 1.

Let $\mathcal C = \{\Delta: \|\Delta_{U^c}\|_1 \le 3\|\Delta_U\|_1\}$ be the cone depending on the truth $\Psi^*$. We have that $\hat\Psi - \Psi^* \in \mathcal C$ provided that the tuning parameter satisfies $\lambda \ge 2\|S\Psi^*S - Q\|_{\max}$.

We also have the following lemma containing a non-asymptotic oracle inequality.

###### Lemma 2.

Let $\hat\Delta = \hat\Psi - \Psi^*$. If $\lambda \ge 2\|S\Psi^*S - Q\|_{\max}$, then

$$ 2\,\mathrm{tr}(\hat\Delta S\hat\Delta S) + \lambda\|\hat\Psi_U - \Psi^*_U\|_1 + \lambda\|\hat\Psi_{U^c}\|_1 \le 4\lambda\|\hat\Psi_U - \Psi^*_U\|_1. $$

To establish an estimation error bound for $\hat\Psi$, we need the following condition.

###### Condition 2.

(Restricted eigenvalue condition) Let $\mathcal C = \{\Delta: \|\Delta_{U^c}\|_1 \le 3\|\Delta_U\|_1\}$. For some $\gamma > 0$ and all $\Delta \in \mathcal C$,

$$ \mathrm{vec}(\Delta)^{\mathrm T}(S\otimes S)\mathrm{vec}(\Delta) \ge \gamma\|\Delta\|_2^2. \tag{9} $$

Then we have the following theorem, establishing a key non-asymptotic result on the estimation error bound of $\hat\Psi$.

###### Theorem 1.

Suppose that the restricted eigenvalue Condition 2 holds. If $\lambda \ge 2\|S\Psi^*S - Q\|_{\max}$, then $\|\hat\Psi - \Psi^*\|_2 \le 2\sqrt{u}\,\lambda/\gamma$.

The main message of Theorem 1 is the good non-asymptotic performance in the sense of a small error bound. The restricted eigenvalue Condition 2 essentially requires that on the set $\mathcal C$, the smallest eigenvalue of $S\otimes S$ is strictly bounded away from 0. We remark that condition (9) here is not essentially stronger than the analogous one for sparse estimators in high-dimensional linear models as in Hastie et al. (2015) and Bühlmann and van de Geer (2011), which requires that $\Delta^{\mathrm T}S\Delta \ge \gamma'\|\Delta\|_2^2$ for all $\Delta$ in an analogous restricted set depending on the unknown truth. Then, by observing the fact that the smallest eigenvalue of $S\otimes S$ is the square of the smallest eigenvalue of $S$, we see that Condition 2 is essentially of the same kind.

### 5.2 Asymptotic Results

The restricted eigenvalue condition (9) is known to be important for establishing the error bounds of penalized estimators in regression problems; see, among others, Negahban et al. (2012) and Hastie et al. (2015). Since our estimator (5) is the minimizer of the sum of a quadratic loss and a penalty, this type of restricted eigenvalue condition is expected. It is known in the literature that, in an asymptotic setting assuming that the entries of $X$ follow some distributions satisfying conditions on the tail probabilistic behavior, condition (9) holds with probability tending to 1. We refer to the discussion in Negahban et al. (2012) and the results in Raskutti et al. (2010), Raskutti et al. (2011), and Rudelson and Zhou (2011).

As in Lemma 1 and Theorem 1, the condition $\lambda \ge 2\|S\Psi^*S - Q\|_{\max}$ imposes the requirement on the tuning parameter $\lambda$. By their definitions, $S$ is an estimator of $\Sigma_X$, and $Q$ is an estimator of $\Sigma_{YXX}$. In an asymptotic setting with $n \to \infty$ and $p \to \infty$, they converge to the truths element-wise in probability under appropriate conditions on the tail distributions of $X$ and $Y$. Thus, the tuning parameter $\lambda$ is allowed to go to $0$ as $n \to \infty$, ensuring that $\lambda \ge 2\|S\Psi^*S - Q\|_{\max}$ holds, and $\|\hat\Psi - \Psi^*\|_2 \to 0$ in probability, i.e., consistency of $\hat\Psi$.

Formally, for showing the asymptotic properties of $\hat\Psi$, we impose the following condition.

###### Condition 3.

The random vectors $(Y_i, X_i^{\mathrm T})^{\mathrm T}$, $i = 1,\dots,n$, are independent and identically distributed. The largest eigenvalue of $\Sigma_X$ is strictly bounded away from infinity. Both $Y_i$ and the components $X_{ij}$ of $X_i$ satisfy the exponential tail property, i.e., there exist constants $b_1, b_2 > 0$ and $c_1, c_2 > 0$ such that

$$ P(|Y_i| > t) \le \exp\{-(t/b_1)^{c_1}\} \quad\text{and}\quad P(|X_{ij}| > t) \le \exp\{-(t/b_2)^{c_2}\}. $$

Condition 3 ensures the concentration property of $S$ and $Q$, so that by choosing $\lambda = C_1\sqrt{\log p/n}$, the requirement $\lambda \ge 2\|S\Psi^*S - Q\|_{\max}$ holds with probability at least $1 - C_2p^{-C_3}$ when $n \ge C_4\log p$, with some constants $C_1, C_2, C_3, C_4 > 0$.

Then we have the following asymptotic result on the error bound of .

###### Theorem 2.

Suppose that $\Psi^*$ is sparse with support $U$, and the cardinality $|U| = u$. Suppose additionally that Condition 2 holds with $\gamma$ bounded away from zero, and that $\log p = o(n)$. Then under Condition 3, by choosing $\lambda = C_1\sqrt{\log p/n}$, $\|\hat\Psi - \Psi^*\|_2 = O_P(\sqrt{u\log p/n})$.

To ensure that the zero components of $\Psi^*$ are correctly estimated as zero by $\hat\Psi$, we need the following irrepresentable condition.

###### Condition 4.

(Irrepresentable condition) Let $\Gamma = S\otimes S$, with sub-matrices $\Gamma_{UU}$ and $\Gamma_{U^cU}$ corresponding to the partition of $U$ and $U^c$. Then $\max_{j\in U^c}\|\Gamma_{jU}^{\mathrm T}\Gamma_{UU}^{-1}\|_1 \le 1 - \alpha$ for some $\alpha \in (0, 1]$, where $\Gamma_{jU}$ is the $j$-th column of $\Gamma_{UU^c}$.

###### Theorem 3.

By choosing $\lambda = C_1\sqrt{\log p/n}$ and assuming Conditions 3 and 4, $\hat\psi_{ij} = 0$ for all $(i,j) \notin U$ with probability tending to 1. Furthermore, if the minimal magnitude of the nonzero entries, $\min_{(i,j)\in U}|\psi^*_{ij}|$, is of a larger order than $\sqrt{u\log p/n}$, then $\min_{(i,j)\in U}|\hat\psi_{ij}|$ is bounded away from zero with probability tending to 1.

With Theorem 2 showing the consistency of $\hat\Psi$, and Theorem 3 showing the correctness in identifying the nonzero components of $\Psi^*$, we demonstrate the validity of our method in detecting the interactions.

## References

• Bickel and Levina (2008) Bickel, P. J. and Levina, E. (2008). Regularized estimation of large covariance matrices. Annals of Statistics, 36:199–227.
• Bien et al. (2013) Bien, J., Taylor, J., and Tibshirani, R. (2013). A lasso for hierarchical interactions. The Annals of Statistics, 41(3):1111–1141.
• Boyd et al. (2011) Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122.
• Bühlmann and van de Geer (2011) Bühlmann, P. and van de Geer, S. (2011). Statistics for High-dimensional Data: Methods, Theory and Applications. New York: Springer.
• Choi et al. (2010) Choi, N. H., Li, W., and Zhu, J. (2010). Variable selection with the strong heredity constraint and its oracle property. Journal of the American Statistical Association, 105(489):354–364.
• Cook (1998) Cook, R. D. (1998). Principal hessian directions revisited. Journal of the American Statistical Association, 93(441):84–94.
• Eckstein and Yao (2012) Eckstein, J. and Yao, W. (2012). Augmented lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results. RUTCOR Research Reports, 32:3.
• Fan and Lv (2010) Fan, J. and Lv, J. (2010). A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20:101–148.
• Fan et al. (2016) Fan, Y., Kong, Y., Li, D., and Lv, J. (2016). Interaction pursuit with feature screening and selection. Manuscript. arXiv:1605.08933.
• Fang et al. (2015) Fang, E. X., He, B., Liu, H., and Yuan, X. (2015). Generalized alternating direction method of multipliers: new theoretical insights and applications. Mathematical programming computation, 7(2):149–187.
• Fang et al. (2017) Fang, E. X., Li, M.-D., Jordan, M. I., and Liu, H. (2017). Mining massive amounts of genomic data: a semiparametric topic modeling approach. Journal of the American Statistical Association, 112(519):921–932.
• Glowinski and Marroco (1975) Glowinski, R. and Marroco, A. (1975). Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de dirichlet non linéaires. Revue française d’automatique, informatique, recherche opérationnelle. Analyse numérique, 9(R2):41–76.
• Hao et al. (2016) Hao, N., Feng, Y., and Zhang, H. H. (2016). Model selection for high-dimensional quadratic regression via regularization. Journal of the American Statistical Association, pages 1–11.
• Hao et al. (2018) Hao, N., Feng, Y., and Zhang, H. H. (2018). Model selection for high-dimensional quadratic regression via regularization. Journal of the American Statistical Association, pages 1–11.
• Hao and Zhang (2014) Hao, N. and Zhang, H. H. (2014). Interaction screening for ultrahigh-dimensional data. Journal of the American Statistical Association, 109(507):1285–1301.
• Haris et al. (2016) Haris, A., Witten, D., and Simon, N. (2016). Convex modeling of interactions with strong heredity. Journal of Computational and Graphical Statistics, 25(4):981–1004.
• Hastie et al. (2015) Hastie, T., Tibshirani, R., and Wainwright, M. (2015). Statistical Learning with Sparsity: The Lasso and Generalizations. Taylor & Francis Inc.
• Jaccard and Turrisi (2003) Jaccard, J. J. and Turrisi, R. (2003). Interaction Effects in Multiple Regression. SAGE Publications.
• Jiang and Liu (2014) Jiang, B. and Liu, J. S. (2014). Variable selection for general index models via sliced inverse regression. The Annals of Statistics, 42(5):1751–1786.
• Jiang et al. (2016) Jiang, B., Wang, X., and Leng, C. (2016). QUDA: A direct approach for sparse quadratic discriminant analysis. Manuscript. arXiv:1510.00084.
• King et al. (2003) King, M.-C., Marks, J. H., Mandell, J. B., et al. (2003). Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science, 302(5645):643–646.
• Kong et al. (2017) Kong, Y., Li, D., Fan, Y., and Lv, J. (2017). Interaction pursuit in high-dimensional multi-response regression via distance correlation. The Annals of Statistics, 45(2):897–922.
• Li (1991) Li, K.-C. (1991). Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316–327.
• Li (1992) Li, K.-C. (1992). On principal Hessian directions for data visualization and dimension reduction: Another application of Stein’s lemma. Journal of the American Statistical Association, 87(420):1025–1039.
• Li and Liu (2017) Li, Y. and Liu, J. S. (2017). Robust variable and interaction selection for logistic regression and general index models. Journal of the American Statistical Association, pages 1–16.
• McCall et al. (2010) McCall, M. N., Bolstad, B. M., and Irizarry, R. A. (2010). Frozen robust multiarray analysis (fRMA). Biostatistics, 11(2):242–253.
• Negahban et al. (2012) Negahban, S. N., Ravikumar, P., Wainwright, M. J., and Yu, B. (2012). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557.
• Nelder (1977) Nelder, J. A. (1977). A reformulation of linear models. Journal of the Royal Statistical Society. Series A (General), 140(1):48.
• Raskutti et al. (2010) Raskutti, G., Wainwright, M. J., and Yu, B. (2010). Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11(Aug):2241–2259.
• Raskutti et al. (2011) Raskutti, G., Wainwright, M. J., and Yu, B. (2011). Minimax rates of estimation for high-dimensional linear regression over $\ell_q$-balls. IEEE Transactions on Information Theory, 57(10):6976–6994.
• Rudelson and Zhou (2011) Rudelson, M. and Zhou, S. (2011). Reconstruction from anisotropic random measurements. Manuscript. arXiv:1106.1151.
• Shah (2016) Shah, R. D. (2016). Modelling interactions in high-dimensional data with backtracking. Journal of Machine Learning Research, 17(1):7225–7255.
• She et al. (2018) She, Y., Wang, Z., and Jiang, H. (2018). Group regularized estimation under structural hierarchy. Journal of the American Statistical Association, 113(521):445–454.
• Stein (1981) Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. The Annals of Statistics, 9(6):1135–1151.
• Sun et al. (2015) Sun, D., Toh, K.-C., and Yang, L. (2015). A convergent 3-block semiproximal alternating direction method of multipliers for conic programming with 4-type constraints. SIAM Journal on Optimization, 25(2):882–915.
• Wu et al. (2013) Wu, G., Yustein, J. T., McCall, M. N., Zilliox, M., Irizarry, R. A., Zeller, K., Dang, C. V., and Ji, H. (2013). ChIP-PED enhances the analysis of ChIP-seq and ChIP-chip data. Bioinformatics.
• Yuan et al. (2009) Yuan, M., Joseph, V. R., and Zou, H. (2009). Structured variable selection and estimation. The Annals of Applied Statistics, 3(4):1738–1757.
• Zhao et al. (2009) Zhao, P., Rocha, G., and Yu, B. (2009). The composite absolute penalties family for grouped and hierarchical variable selection. The Annals of Statistics, 37(6A):3468–3497.