Parametric Programming Approach for Powerful Lasso Selective Inference without Conditioning on Signs

04/21/2020
by Vo Nguyen Le Duy, et al.

In the past few years, Selective Inference (SI) has been actively studied for inference on the features of linear models that are adaptively selected by feature selection methods. A seminal work is that of Lee et al. (2016) for the Lasso. The basic idea of SI is to make inference conditional on the selection event. In Lee et al. (2016), the authors proposed a tractable way to conduct inference conditional on the selected features and their signs. Unfortunately, additionally conditioning on the signs leads to low statistical power because of over-conditioning. To improve the power, one currently available solution is to remove the conditioning on signs by considering the union over an exponentially large number of possible sign vectors, which incurs an unrealistically large computational cost unless the number of selected features is sufficiently small. To address this problem, we propose an efficient method to characterize the selection event without conditioning on signs by using parametric programming. The main idea is to compute the continuum path of Lasso solutions in the direction of a test statistic, and to identify the subset of the data space corresponding to the feature selection event by following the solution path. We conduct several experiments to demonstrate the effectiveness and efficiency of our proposed method.

1 Introduction

In many practical applications, especially in high-dimensional problems, model selection, i.e., selecting a subset of features that are highly related to the response variable, is crucially important. A popular approach to this task is the Lasso Tibshirani (1996). Although various properties of the Lasso have been extensively studied in the past decades (see, e.g., Hastie et al. (2015)), exact statistical inference, such as computing $p$-values or confidence intervals for features adaptively selected by the Lasso, has only recently begun to be actively studied Lee et al. (2016); Fithian et al. (2014).

A recent seminal work is the selective inference (SI, also known as post-selection inference) framework for the Lasso proposed by Lee et al. (2016). The main idea is to make inference on the selected features conditional on the selection event of the Lasso. By conditioning on the selection event, exact and valid inference on features adaptively selected by the Lasso is possible, in the sense that $p$-values with proper false positive rate control or confidence intervals with proper coverage guarantees can be obtained. After this seminal work, conditional inference-based SI has been actively studied and applied to various problems Bachoc et al. (2014); Fithian et al. (2014, 2015); Choi et al. (2017); Tian et al. (2018); Chen and Bien (2019); Hyun et al. (2018); Bachoc et al. (2018); Charkhi and Claeskens (2018); Loftus and Taylor (2014); Loftus (2015); Panigrahi et al. (2016); Tibshirani et al. (2016); Yang et al. (2016); Suzumura et al. (2017); Tanizaki et al. (2020); Duy et al. (2020).

The main challenge when developing a conditional inference-based SI method is to fully characterize the selection event. In the case of the Lasso, the authors in Lee et al. (2016) showed that the selection event can be characterized as a finite set of affine constraints, which leads to the sampling distribution of the relevant test statistic taking the form of a truncated Normal distribution when the error is normally distributed.

Let $\mathcal{A}(Y)$ be the subset of features selected by applying the Lasso to a random response vector $Y$ and $s_{\mathcal{A}}(Y)$ be their signs. Then, it has been shown that the selection event $\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs}),\; s_{\mathcal{A}}(Y) = s_{\mathcal{A}}(y^{\rm obs})\}$ is represented by a polytope in the data space, where $\mathcal{A}(y^{\rm obs})$ and $s_{\mathcal{A}}(y^{\rm obs})$ are the active set and the signs obtained from the observed response $y^{\rm obs}$ (see §2 for detailed notations and setup). However, it is well known that additionally conditioning on the signs leads to low statistical power because of over-conditioning, which is widely recognized as a major drawback of the current Lasso SI approach and of almost all subsequent studies Fithian et al. (2014); Lee and Taylor (2014); Fithian et al. (2015); Taylor and Tibshirani (2018).

The authors in Lee et al. (2016) discussed a solution to overcome this drawback by conducting inference conditional on the selection event $\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs})\}$ without the signs, which can be characterized as a union of $2^{|\mathcal{A}(y^{\rm obs})|}$ polytopes, one for each possible sign vector. If the number of selected features is moderate (e.g., up to 15), it is feasible to consider the whole set of affine constraints of all these polytopes. However, if $|\mathcal{A}(y^{\rm obs})|$ is large, it becomes impossible to enumerate all the affine constraints for the exponentially increasing number of polytopes.

In another direction, Liu et al. (2018) proposed an approach for improving the power in the case where the inference targets are the full-model coefficients. Unfortunately, this approach can be applied only to the full-model target case, which is often not the main interest in the post-selection inference literature, and it is not even applicable when the number of features $p$ is greater than the number of instances $n$. As other recent approaches to improve the power, Tian et al. (2018) and Terada and Shimodaira (2019) proposed methods using randomization. A drawback of these randomization-based approaches, including the simple data-splitting approach, is that additional randomness is introduced in both the feature selection and inference stages.

Contribution.

We present a general deterministic method for resolving the low statistical power issue of current Lasso SI by using parametric programming (a.k.a. homotopy methods) Ritter (1984); Allgower and George (1993); Gal (1995); Best (1996), which is motivated by Liu et al. (2018) and the discussion therein. Our main idea is to compute the continuum path of Lasso solutions in the direction of interest, and to compute the tail probability of the truncated Normal distribution by following the solution path. We show that the continuum path of Lasso solutions along the direction of the test statistic can be exactly and efficiently computed by piecewise-linear homotopy computation. One might wonder how we can circumvent the computational bottleneck of the exponentially increasing number of polytopes. Our experience suggests that, by focusing on the line along the test-statistic direction in the data space, we can skip the majority of the polytopes because they do not intersect with this line and hence do not affect the truncated Normal sampling distribution. We demonstrate the efficiency of the proposed method through experiments in which we show that Lasso SI without conditioning on signs can be done even when there are hundreds of selected features. Parametric programming has been used in various statistical and machine learning problems Osborne et al. (2000); Efron and Tibshirani (2004); Hastie et al. (2004); Rosset (2005); Bach et al. (2006); Rosset and Zhu (2007); Tsuda (2007); Lee and Scott (2007); Takeuchi et al. (2009); Karasuyama and Takeuchi (2010); Hocking et al. (2011); Karasuyama et al. (2012). However, to the best of our knowledge, this is the first work showing that piecewise-linear parametric programming (the homotopy method) can be effectively used for characterizing selection events in SI. Figure 1 shows a schematic illustration and the efficiency of the method proposed in this paper.

(a) Schematic illustration of the proposed method
(b) The efficiency of the proposed method
Figure 1: Advantages of the proposed method. Panel (a) shows the schematic illustration. By applying the Lasso to the observed data $y^{\rm obs}$, we obtain the active set $\mathcal{A}(y^{\rm obs})$. The statistical inference for each selected feature is conducted conditional on the subspace of the data space whose elements yield the same active set as $y^{\rm obs}$. We introduce a novel parametric programming method for efficiently characterizing this conditional data space by searching along a parametrized line. Panel (b) shows the good performance of the proposed method in terms of computational efficiency. To retain high statistical power, existing studies have to consider a huge number of possible sign vectors ($2^{|\mathcal{A}(y^{\rm obs})|}$), which is unrealistic. With the proposed method, we can easily complete this task.

2 Problem Statement

To formulate the problem, we consider a random response vector

$Y = (Y_1, \dots, Y_n)^\top \sim N(\mu, \Sigma), \quad (1)$

where $n$ is the number of instances, the mean vector $\mu$ is modeled as a linear function of $p$ features, and $\Sigma$ is a covariance matrix which is known or estimable from independent data. The goal is to statistically quantify the significance of the relation between the features and the response while properly controlling the false positive rate. To achieve this goal, the authors in Lee et al. (2016) proposed a practical SI framework, in which a subset of features is first “selected” by the Lasso, and inference is then conducted for each selected feature.

Feature selection and its selection event.

Given an observed response vector $y^{\rm obs} \in \mathbb{R}^n$ sampled from the model (1), the Lasso optimization problem is given by

$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \tfrac{1}{2}\|y^{\rm obs} - X\beta\|_2^2 + \lambda\|\beta\|_1, \quad (2)$

where $X \in \mathbb{R}^{n \times p}$ is a feature matrix, and $\lambda > 0$ is a regularization parameter. Since the Lasso produces sparse solutions, the active set selected by applying the Lasso to $y^{\rm obs}$ is defined as

$\mathcal{A}(y^{\rm obs}) = \{j : \hat{\beta}_j \neq 0\}.$

Then, the event that the Lasso active set for a random response vector $Y$ is the same as $\mathcal{A}(y^{\rm obs})$ is written as

$\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs})\}.$

The authors in Lee et al. (2016) showed that the selection event can be characterized by a set of linear inequalities.
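As a concrete illustration of this feature selection step, the following sketch fits the Lasso in (2) and extracts the active set and its signs. The use of scikit-learn and the variable names (X, y_obs, lam) are our own choices for illustration, not the authors' implementation; note that scikit-learn's Lasso scales the squared loss by 1/(2n), so the penalty in (2) corresponds to alpha = lam / n.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_active_set(X, y_obs, lam):
    """Fit the Lasso (2) and return the estimate, the active set, and its signs."""
    n = X.shape[0]
    # sklearn minimizes (1/(2n))||y - X b||^2 + alpha ||b||_1, hence alpha = lam / n.
    model = Lasso(alpha=lam / n, fit_intercept=False)
    model.fit(X, y_obs)
    beta_hat = model.coef_
    active = np.flatnonzero(beta_hat != 0)   # active set A(y_obs)
    signs = np.sign(beta_hat[active])        # sign vector s_A(y_obs)
    return beta_hat, active, signs
```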

Statistical inference for the selected feature.

For inference on a selected feature $j \in \mathcal{A}(y^{\rm obs})$, we consider the statistical test

$H_0: \beta_j = 0 \quad \text{vs.} \quad H_1: \beta_j \neq 0. \quad (3)$

A natural choice of test statistic is $\eta_j^\top Y$, where $\eta_j = X_{\mathcal{A}}(X_{\mathcal{A}}^\top X_{\mathcal{A}})^{-1} e_j$, in which $e_j$ is a unit vector whose $j$-th element is 1 and whose other elements are 0. Since the hypothesis is generated from the data, selection bias exists. In order to correct the selection bias, we have to remove the information that has been used in the initial hypothesis-generating process. This is achieved by considering the sampling distribution of the test statistic conditional on the selection event, i.e.,

$\eta_j^\top Y \mid \{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs}),\; q(Y) = q(y^{\rm obs})\}, \quad (4)$

where $q(Y) = (I_n - c\,\eta_j^\top)Y$ with $c = \Sigma\eta_j(\eta_j^\top\Sigma\eta_j)^{-1}$. The second condition is additionally added for technical tractability Fithian et al. (2014); Lee et al. (2016); it states that the component $q(Y)$, which is independent of the test statistic, takes the same value for the random response vector $Y$ as for $y^{\rm obs}$.
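The direction of interest can be formed directly from the selected design columns. The sketch below follows the standard construction above; the function name and arguments are ours, and `j_pos` denotes the position of the tested feature within the active set.

```python
import numpy as np

def test_direction(X, active, j_pos):
    """eta_j = X_A (X_A^T X_A)^{-1} e_j for the feature at position j_pos in A."""
    XA = X[:, active]
    e_j = np.zeros(len(active))
    e_j[j_pos] = 1.0
    return XA @ np.linalg.solve(XA.T @ XA, e_j)

# Observed test statistic for this feature: stat_obs = test_direction(X, active, j_pos) @ y_obs
```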

Once the selection event is identified, we can easily compute the pivotal quantity

$F^{\mathcal{Z}}_{\eta_j^\top\mu,\, \eta_j^\top\Sigma\eta_j}\big(\eta_j^\top y^{\rm obs}\big), \quad (5)$

which is the c.d.f. of the truncated Normal distribution with mean $\eta_j^\top\mu$, variance $\eta_j^\top\Sigma\eta_j$, and truncation region $\mathcal{Z}$ calculated from the selection event. The pivotal quantity is crucial for calculating a $p$-value or obtaining a confidence interval. Based on the pivotal quantity, we can consider the selective type I error, or the selective $p$-value Fithian et al. (2014), in the form of

$P_j^{\rm selective} = 2\min\{\pi_j,\, 1 - \pi_j\}, \quad \pi_j = 1 - F^{\mathcal{Z}}_{0,\, \eta_j^\top\Sigma\eta_j}\big(\eta_j^\top y^{\rm obs}\big), \quad (6)$

which is valid in the sense that the probability of obtaining $P_j^{\rm selective} \le \alpha$ under the null hypothesis, conditional on the selection event, is at most $\alpha$ for any $\alpha \in [0, 1]$.

Furthermore, to obtain a $1 - \alpha$ confidence interval for any selected feature, we can invert the pivotal quantity in Equation (5) and find the smallest and largest values of $\eta_j^\top\mu$ such that the value of the pivotal quantity remains in the interval $[\tfrac{\alpha}{2}, 1 - \tfrac{\alpha}{2}]$ Lee et al. (2016).
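Given the truncation region as a union of disjoint intervals, the pivot (5) and the selective $p$-value (6) reduce to ratios of Normal c.d.f. values. The following sketch is a minimal implementation of that computation; it assumes the intervals are disjoint and that `sigma` is the standard deviation $\sqrt{\eta_j^\top\Sigma\eta_j}$, and a numerically more robust version (e.g., working on the log scale) may be needed far in the tails.

```python
import numpy as np
from scipy.stats import norm

def truncated_normal_cdf(x, intervals, mu, sigma):
    """CDF of N(mu, sigma^2) truncated to a union of disjoint intervals (lo, hi)."""
    num = den = 0.0
    for lo, hi in intervals:
        p_lo, p_hi = norm.cdf((lo - mu) / sigma), norm.cdf((hi - mu) / sigma)
        den += p_hi - p_lo
        num += norm.cdf((np.clip(x, lo, hi) - mu) / sigma) - p_lo
    return num / den

def selective_p_value(stat_obs, intervals, sigma):
    """Two-sided selective p-value under the null eta^T mu = 0."""
    F = truncated_normal_cdf(stat_obs, intervals, mu=0.0, sigma=sigma)
    return 2.0 * min(F, 1.0 - F)
```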

However, the main challenge is that characterizing the minimally-conditioned event $\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs})\}$ in Equation (4) is computationally intractable because we would have to consider $2^{|\mathcal{A}(y^{\rm obs})|}$ possible sign vectors. To overcome this issue, the authors in Lee et al. (2016) consider inference conditional not only on the selected features but also on their signs. Unfortunately, additionally conditioning on the signs leads to low statistical power because of over-conditioning. In the next section, we provide an efficient method for identifying $\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs})\}$, which enables us to easily obtain the minimum amount of conditioning, leading to high statistical power.

3 Proposed Method

In this section, we propose to use parametric programming for efficiently identifying the conditioning event $\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs})\}$. The schematic illustration of our idea is shown in Figure 1.

3.1 Characterization of Conditional Data Space

Let us define the conditional data space in Equation (4) as

$\mathcal{Y} = \big\{Y \in \mathbb{R}^n : \mathcal{A}(Y) = \mathcal{A}(y^{\rm obs}),\; q(Y) = q(y^{\rm obs})\big\}. \quad (7)$

According to the second condition, the data in $\mathcal{Y}$ is restricted to a line Fithian et al. (2014); Liu et al. (2018). Therefore, the set $\mathcal{Y}$ can be re-written, using a scalar parameter $z \in \mathbb{R}$, as

$\mathcal{Y} = \{y(z) = a + bz : z \in \mathcal{Z}\}, \quad (8)$

where $a = q(y^{\rm obs})$, $b = \Sigma\eta_j(\eta_j^\top\Sigma\eta_j)^{-1}$, and

$\mathcal{Z} = \big\{z \in \mathbb{R} : \mathcal{A}(a + bz) = \mathcal{A}(y^{\rm obs})\big\}. \quad (9)$
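The line (8) can be computed directly from the test direction and the covariance. The sketch below implements this parametrization under the construction given above; the function name and arguments are ours.

```python
import numpy as np

def conditional_line(y_obs, eta, Sigma):
    """Return (a, b) with y(z) = a + b*z: b = Sigma eta / (eta^T Sigma eta) and
    a = y_obs - b * (eta^T y_obs), so that eta^T y(z) = z and q(y(z)) = q(y_obs)."""
    b = Sigma @ eta / (eta @ Sigma @ eta)
    a = y_obs - b * (eta @ y_obs)
    return a, b
```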

Now, let us consider a random variable $Z \in \mathbb{R}$ and its observation $z^{\rm obs} \in \mathbb{R}$, which satisfy $Y = a + bZ$ and $y^{\rm obs} = a + b\,z^{\rm obs}$. The conditional inference in (4) is then re-written as the problem of characterizing the sampling distribution of

$Z \mid \{Z \in \mathcal{Z}\}. \quad (10)$

Since $Z = \eta_j^\top Y \sim N(0, \eta_j^\top\Sigma\eta_j)$ under the null hypothesis, the law of $Z \mid \{Z \in \mathcal{Z}\}$ follows a truncated Normal distribution. Once the truncation region $\mathcal{Z}$ is identified, the pivotal quantity in Equation (5) equals $F^{\mathcal{Z}}_{\eta_j^\top\mu,\, \eta_j^\top\Sigma\eta_j}(z^{\rm obs})$ and can be easily obtained. Thus, the remaining task is to characterize $\mathcal{Z}$.

Characterization of the truncation region $\mathcal{Z}$.

Let us introduce the optimization problem (2) with the parametrized response vector $y(z) = a + bz$ for $z \in \mathbb{R}$ as

$\hat{\beta}(z) = \arg\min_{\beta \in \mathbb{R}^p} \tfrac{1}{2}\|y(z) - X\beta\|_2^2 + \lambda\|\beta\|_1. \quad (11)$

The subdifferential of the $\ell_1$-norm at $\beta$ is defined as follows:

$\partial\|\beta\|_1 = \big\{s \in \mathbb{R}^p : s_j = \mathrm{sign}(\beta_j) \text{ if } \beta_j \neq 0,\; s_j \in [-1, 1] \text{ if } \beta_j = 0\big\},$

where we denote $\mathrm{sign}(\beta_j) = \beta_j / |\beta_j|$. Then, for any $z$ in $\mathbb{R}$, the optimality condition is given by

$X^\top\big(X\hat{\beta}(z) - y(z)\big) + \lambda\,\hat{s}(z) = 0, \quad \hat{s}(z) \in \partial\|\hat{\beta}(z)\|_1. \quad (12)$

To construct the truncation region $\mathcal{Z}$ in Equation (9), we have to 1) compute the entire path of $\hat{\beta}(z)$ for $z \in \mathbb{R}$, and 2) identify the set of intervals of $z$ on which $\mathcal{A}(y(z)) = \mathcal{A}(y^{\rm obs})$. However, it seems intractable to compute $\hat{\beta}(z)$ for infinitely many values of $z$. Our main idea to overcome this difficulty is a parametric programming method that efficiently computes the finite number of “transition points” at which the active set changes.

3.2 A Piecewise Linear Homotopy

We now derive the main technique of the proposed method. We show that $\hat{\beta}(z)$ is a piecewise-linear function of $z$. To make the notation lighter, we write $\mathcal{A}(z) = \mathcal{A}(y(z))$, and we denote the set of inactive features as $\mathcal{A}^c(z) = \{1, \dots, p\} \setminus \mathcal{A}(z)$.

Lemma 1.

Consider two real values $z_1$ and $z_2$ with $z_1 < z_2$, and write $\mathcal{A} = \mathcal{A}(z_1)$ and $\mathcal{A}^c = \mathcal{A}^c(z_1)$. Suppose that $\hat{\beta}_j(z_1) \neq 0$ for all $j \in \mathcal{A}$, that $|\hat{s}_k(z_1)| < 1$ for all $k \in \mathcal{A}^c$, and that $X_{\mathcal{A}}^\top X_{\mathcal{A}}$ is invertible. If $\hat{\beta}(z_1)$ and $\hat{\beta}(z_2)$ have the same active set and the same signs, then, for any $z \in [z_1, z_2]$, we have

$\hat{\beta}_{\mathcal{A}}(z) = \hat{\beta}_{\mathcal{A}}(z_1) + (z - z_1)\,\psi, \quad (13)$

$\hat{s}_{\mathcal{A}^c}(z) = \hat{s}_{\mathcal{A}^c}(z_1) + (z - z_1)\,\gamma, \quad (14)$

where $\psi = (X_{\mathcal{A}}^\top X_{\mathcal{A}})^{-1} X_{\mathcal{A}}^\top b$, and $\gamma = \frac{1}{\lambda}\,X_{\mathcal{A}^c}^\top\big(b - X_{\mathcal{A}}\psi\big)$.

Proof.

From the optimality condition (12) restricted to the active set $\mathcal{A}$, we have

$X_{\mathcal{A}}^\top\big(X_{\mathcal{A}}\hat{\beta}_{\mathcal{A}}(z_1) - y(z_1)\big) + \lambda\, s_{\mathcal{A}} = 0, \quad (15)$

$X_{\mathcal{A}}^\top\big(X_{\mathcal{A}}\hat{\beta}_{\mathcal{A}}(z) - y(z)\big) + \lambda\, s_{\mathcal{A}} = 0. \quad (16)$

Then, by subtracting (15) from (16) and using $y(z) - y(z_1) = (z - z_1)\,b$, we have

$\hat{\beta}_{\mathcal{A}}(z) - \hat{\beta}_{\mathcal{A}}(z_1) = (z - z_1)\,(X_{\mathcal{A}}^\top X_{\mathcal{A}})^{-1} X_{\mathcal{A}}^\top b.$

Thus, we achieve Equation (13). Next, from the optimality conditions of the Lasso restricted to the inactive set $\mathcal{A}^c$, we also have

$\lambda\,\hat{s}_{\mathcal{A}^c}(z_1) = X_{\mathcal{A}^c}^\top\big(y(z_1) - X_{\mathcal{A}}\hat{\beta}_{\mathcal{A}}(z_1)\big), \quad (17)$

$\lambda\,\hat{s}_{\mathcal{A}^c}(z) = X_{\mathcal{A}^c}^\top\big(y(z) - X_{\mathcal{A}}\hat{\beta}_{\mathcal{A}}(z)\big). \quad (18)$

Similarly, by subtracting (17) from (18) and using Equation (13), we can easily achieve Equation (14). ∎

Remark 1.

In this paper, we assume the uniqueness of the Lasso solution for $y^{\rm obs}$ as well as for $y(z)$ for all $z \in \mathbb{R}$, and the invertibility of $X_{\mathcal{A}}^\top X_{\mathcal{A}}$. These assumptions are justified by assuming that the columns of $X$ are in general position Tibshirani and others (2013). Parametric programming methods for handling the rare cases where these assumptions are not satisfied have been studied, e.g., in Best (1996), and can be applied to our problem setup. In practice, when the design matrix is not in general position, it is also common to introduce an additional ridge penalty term, resulting in the elastic net Zou and Hastie (2005). Our proposed method can be extended to the elastic net case (see the Appendix for details).

Computation of the transition point.

From Lemma 1, the solution $\hat{\beta}_{\mathcal{A}}(z)$ is a linear function of $z$ until $z$ reaches a transition point at which either an element of $\hat{\beta}_{\mathcal{A}}(z)$ becomes zero or a component of $\hat{s}_{\mathcal{A}^c}(z)$ reaches one in absolute value. We now introduce how the transition point is identified.

Lemma 2.

Let $z_t$ be a real value at which the Lasso solution has active set $\mathcal{A}$ and signs $s_{\mathcal{A}}$. Then, the active set and the signs remain the same for any real value $z$ in the interval $[z_t, z_{t+1})$, where the next transition point $z_{t+1} = \min\{t^{\beta}, t^{s}\}$ is determined by the smallest value greater than $z_t$ at which an active coefficient in (13) crosses zero,

(19)

and the smallest value greater than $z_t$ at which an inactive subgradient component in (14) reaches one in absolute value,

(20)

Here, we use the convention that, whenever the corresponding slope is zero, the candidate value is taken to be $+\infty$ so that it never attains the minimum.

Proof.

From Equation (13), $\hat{\beta}_{\mathcal{A}}(z)$ is a linear function of $z$. For a real value $z_t$, there exists $t^{\beta} > z_t$ such that, for any real value $z$ in $[z_t, t^{\beta})$, all elements of $\hat{\beta}_{\mathcal{A}}(z)$ keep the same signs as those of $\hat{\beta}_{\mathcal{A}}(z_t)$. Similarly, from Equation (14), $\hat{s}_{\mathcal{A}^c}(z)$ is a linear function of $z$, and there exists $t^{s} > z_t$ such that, for any real value $z$ in $[z_t, t^{s})$, all elements of $\hat{s}_{\mathcal{A}^c}(z)$ are smaller than 1 in absolute value. Finally, by taking $z_{t+1} = \min\{t^{\beta}, t^{s}\}$, we obtain the interval on which the active set and the signs of the Lasso solution remain the same. The remaining task is to compute $t^{\beta}$ and $t^{s}$; we defer the detailed derivations to the Appendix. ∎
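In code, the transition point is simply the smallest admissible root of the two families of linear equations implied by the piecewise-linear forms (13) and (14). The sketch below is our own rendering of this rule under the standard Lasso optimality conditions; `psi` and `gamma` denote the slopes of the active coefficients and of the inactive subgradient components along $z$, and all names are ours rather than the authors'.

```python
import numpy as np

def next_transition(z_t, beta_A, s_Ac, psi, gamma, eps=1e-10):
    """Smallest z > z_t at which an active coefficient hits zero or an inactive
    subgradient component reaches +/-1 (the role of Lemma 2)."""
    candidates = [np.inf]
    # Active coefficients: beta_j(z) = beta_j(z_t) + (z - z_t) * psi_j
    for b_j, p_j in zip(beta_A, psi):
        if abs(p_j) > eps:
            z_cross = z_t - b_j / p_j
            if z_cross > z_t + eps:
                candidates.append(z_cross)
    # Inactive subgradients: s_k(z) = s_k(z_t) + (z - z_t) * gamma_k
    for s_k, g_k in zip(s_Ac, gamma):
        if abs(g_k) > eps:
            for target in (1.0, -1.0):
                z_hit = z_t + (target - s_k) / g_k
                if z_hit > z_t + eps:
                    candidates.append(z_hit)
    return min(candidates)
```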

3.3 Algorithm

Input: $X$, $y^{\rm obs}$, $\lambda$
1:  Compute the Lasso solution for the data $(X, y^{\rm obs})$ and obtain the active set $\mathcal{A}(y^{\rm obs})$
2:  for each selected feature $j \in \mathcal{A}(y^{\rm obs})$ do
3:      Compute $\eta_j$, and then calculate $a$ and $b$ based on $\eta_j$ and Equation (8)
4:      $\{\hat{\beta}(z_t)\}_t, \{z_t\}_t \leftarrow$ compute_solution_path$(X, a, b, \lambda)$
5:      Identify the truncation region $\mathcal{Z}$
6:      Compute the selective $p$-value $P_j^{\rm selective}$ by Equation (6) (and/or the selective confidence interval of $\beta_j$)
7:  end for
Output: Selective $p$-values $\{P_j^{\rm selective}\}_{j \in \mathcal{A}(y^{\rm obs})}$ (and/or selective confidence intervals)
Algorithm 1 parametric_lasso_SI

Input: $X$, $a$, $b$, $\lambda$
1:  Initialization: $t = 0$, $z_0 = z_{\min}$, compute $\hat{\beta}(z_0)$
2:  while $z_t < z_{\max}$ do
3:      $\Delta z \leftarrow$ compute_step_size$(X, y(z_t), \lambda)$
4:      $z_{t+1} \leftarrow z_t + \Delta z$ ($z_{t+1}$ is the value of the next transition point)
5:      $t \leftarrow t + 1$, compute $\hat{\beta}(z_t)$
6:  end while
Output: $\{\hat{\beta}(z_t)\}_t$, $\{z_t\}_t$
Algorithm 2 compute_solution_path

Input: $X$, $y(z_t)$, $\lambda$
1:  Compute the primal and dual Lasso solutions for the data $(X, y(z_t))$
2:  Obtain the active set $\mathcal{A}(y(z_t))$
3:  Compute $\psi$ and $\gamma$ by Lemma 1, and $t^{\beta}$ and $t^{s}$ by Equations (19) and (20) in Lemma 2
4:  $\Delta z \leftarrow \min\{t^{\beta}, t^{s}\} - z_t$
Output: $\Delta z$
Algorithm 3 compute_step_size

In this section, we present the detailed algorithms of our proposed parametric programming method. In Algorithm 1, for the feature selection step, we simply apply the Lasso to the data $(X, y^{\rm obs})$ and obtain the active set $\mathcal{A}(y^{\rm obs})$. Then, we conduct SI for each selected feature. For testing feature $j$, we first obtain the direction of interest $\eta_j$, which can be easily computed as in §2. Second, the main task is to compute the solution path of (11) for the parametrized response vector $y(z) = a + bz$; note that the parametrized solutions differ among different $j$ since the direction of interest depends on $j$. This task is done by Algorithm 2. Finally, given the path, we can easily obtain the truncation region $\mathcal{Z}$, which is used to compute the selective $p$-value or the selective confidence interval.

In Algorithm 2, a sequence of transition points is computed one by one. The algorithm is initialized at $z_0 = z_{\min}$. At each $z_t$, the task is to find the next transition point $z_{t+1}$, at which the active set changes. This is done by computing the step size in Algorithm 3. This step is repeated until $z_t$ exceeds $z_{\max}$. The algorithm returns the sequences of Lasso solutions and transition points.
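The following sketch combines the pieces above into the line search over $z$. It reuses `next_transition` from the earlier sketch and, for simplicity, re-solves the Lasso at the start of each segment instead of propagating the solution with Lemma 1, so it trades speed for robustness; `active_obs`, `z_min`, and `z_max` are inputs of our own naming, not the authors' interface.

```python
import numpy as np
from sklearn.linear_model import Lasso

def truncation_intervals(X, a, b, lam, z_min, z_max, active_obs, eps=1e-8):
    """Scan y(z) = a + b*z over [z_min, z_max] and return the intervals of z on
    which the Lasso active set equals active_obs (an approximation of Z)."""
    n, p = X.shape
    intervals, z_t = [], z_min
    while z_t < z_max:
        y_z = a + b * z_t
        beta = Lasso(alpha=lam / n, fit_intercept=False).fit(X, y_z).coef_
        active = np.flatnonzero(np.abs(beta) > eps)
        inactive = np.setdiff1d(np.arange(p), active)
        XA, XI = X[:, active], X[:, inactive]
        # Slopes of the piecewise-linear solution and subgradient along z (Lemma 1)
        psi = np.linalg.solve(XA.T @ XA, XA.T @ b) if active.size else np.zeros(0)
        fit_slope = XA @ psi if active.size else np.zeros(n)
        gamma = XI.T @ (b - fit_slope) / lam
        fit = XA @ beta[active] if active.size else np.zeros(n)
        s_I = XI.T @ (y_z - fit) / lam        # inactive subgradient at z_t
        z_next = min(next_transition(z_t, beta[active], s_I, psi, gamma), z_max)
        if set(active) == set(active_obs):
            intervals.append((z_t, z_next))
        z_t = z_next + 1e-6                   # step just past the transition point
    return intervals
```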

Choice of $z_{\min}$ and $z_{\max}$.

According to Liu et al. (2018), very positive and very negative values of $z$ do not affect the inference. Therefore, it is reasonable to restrict the search to a range of values whose width is proportional to $\sigma$, where $\sigma$ is the standard deviation of the sampling distribution of the test statistic.

4 Numerical Experiments

False positive rate (FPR) and True positive rate (TPR).

We show the FPRs and TPRs of our proposed method for the following two cases of conditional inference:

  • TN-A: inference conditional only on the active set, $\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs})\}$.

  • TN-As: inference conditional on both the active set and the signs, $\{\mathcal{A}(Y) = \mathcal{A}(y^{\rm obs}),\; s_{\mathcal{A}}(Y) = s_{\mathcal{A}}(y^{\rm obs})\}$, where $s_{\mathcal{A}}(Y)$ is the sign vector of the Lasso solution on $Y$, and $s_{\mathcal{A}}(y^{\rm obs})$ is the sign vector of the Lasso solution on $y^{\rm obs}$.

We also show the FPRs and TPRs of the data splitting (DS) method, which is a commonly used procedure for correcting selection bias. In this approach, the data is randomly divided into two halves: the first half is used for model selection and the other half for inference.

We generated outcomes according to the model (1). For the FPR experiments, all elements of the true coefficient vector were set to 0; for the TPR experiments, the first two elements were set to 0.25. We fixed the regularization parameter $\lambda$ and the significance level $\alpha$, ran 100 trials for each setting, and repeated these experiments 10 times. The results are shown in Figure 2 and are consistent with the discussion in Lee et al. (2016). TN-A clearly has higher power than TN-As because inference is conducted conditional only on the set of selected features, i.e., with the minimum amount of conditioning. However, achieving this with the method proposed in Lee et al. (2016) requires a huge amount of computing time, while our parametric programming approach can easily complete the task. We demonstrate this advantage of our method in the latter part of this section.
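For readers who wish to reproduce the flavor of the FPR experiment, the following sketch estimates the false positive rate under the global null. The parameter values and the `run_selective_inference` callable (assumed to return one selective $p$-value per selected feature) are hypothetical placeholders for illustration, not the settings used in the paper.

```python
import numpy as np

def estimate_fpr(run_selective_inference, n=100, p=10, lam=1.0,
                 alpha=0.05, n_trials=100, seed=0):
    """Fraction of selective p-values below alpha when all true coefficients are zero."""
    rng = np.random.default_rng(seed)
    rejects = tests = 0
    for _ in range(n_trials):
        X = rng.standard_normal((n, p))
        y = rng.standard_normal(n)   # beta = 0: every rejection is a false positive
        for p_val in run_selective_inference(X, y, lam):
            tests += 1
            rejects += p_val <= alpha
    return rejects / max(tests, 1)
```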

Figure 2: Demonstration of False positive rate (FPR) and True positive rate (TPR).

Confidence Interval.

We generated outcomes according to the model (1), with the first 5 elements of the true coefficient vector set to 0.25. In the cases of TN-A and TN-As, 9 features were selected by the Lasso, while only 8 features were selected in the case of DS. Therefore, we show the 95% confidence intervals only for the features selected in both cases on the left side of Figure 3. We repeated this experiment 100 times and show the boxplot of the lengths of the confidence intervals on the right side of Figure 3.

Figure 3: Demonstration of confidence intervals. The left panel shows 95% confidence intervals for the regression coefficients of the features selected by the Lasso, and the right panel shows boxplots of the interval lengths for each type of conditional inference.

Efficiency of the proposed method.

We demonstrate the efficiency of the proposed method by comparing its computing time with that of the existing method proposed in Lee et al. (2016) when the number of active features is small. We then show the computing time of our method when the number of active features is large, a case in which the existing method incurs a huge computational cost or cannot complete the task in a realistic amount of time.

We considered two problem sizes. The outcomes were generated according to the model (1), and the regularization parameter $\lambda$ was set separately for each case. For the comparison with the existing method, the number of non-zero elements of the true coefficient vector was kept small so that the number of active features remained small; for the experiments showing the computing time of the proposed method with a large number of active features, the number of non-zero elements was increased accordingly. We ran 10 trials for each case. The results are shown in Figure 4.

Figure 4: Efficiency of the proposed method. In the first and second panels, we compare the time needed to test a selected feature with the existing method and with our parametric programming method. In the last panel, we show the computing time of the proposed method when the size of the active set is large. In this case, the existing method is computationally intractable since it requires enumerating an exponential number of possible sign vectors.

Uniformity verification of the pivotal quantity.

We generated outcomes according to the model (1), with the first two elements of the true coefficient vector set to 2. We ran 1,200 trials for each case of conditioning, TN-A and TN-As, and verified that the QQ-plot of the pivotal quantity obtained by our proposed method is uniform. The detailed results are deferred to the Appendix.

Overall, the results indicate that the proposed method is valid in the sense that the false finding probability is properly controlled under the pre-defined significance level (e.g., 0.05), and that it is significantly more efficient than the existing method in terms of computational cost. Since our method overcomes the computational challenge of conducting inference conditional only on the set of selected features, we can effectively preserve high statistical power while successfully controlling the false finding probability.

5 Conclusion

In this paper, we have introduced an efficient method for characterizing the selection event of Lasso SI by using piecewise-linear parametric programming. With the proposed method, we can maintain high statistical power by conditioning only on the selected features, without the need to enumerate all possible sign vectors. We conducted several experiments to demonstrate the good performance of the proposed method in terms of FPR control, power, and computational efficiency.

References

  • E. L. Allgower and K. George (1993) Continuation and path following. Acta Numerica 2, pp. 1–63. Cited by: §1.
  • F. R. Bach, D. Heckerman, and E. Horvitz (2006) Considering cost asymmetry in learning classifiers. Journal of Machine Learning Research 7, pp. 1713–1741. Cited by: §1.
  • F. Bachoc, G. Blanchard, P. Neuvial, et al. (2018) On the post selection inference constant under restricted isometry properties. Electronic Journal of Statistics 12 (2), pp. 3736–3757. Cited by: §1.
  • F. Bachoc, H. Leeb, and B. M. Pötscher (2014) Valid confidence intervals for post-model-selection predictors. arXiv preprint arXiv:1412.4605. Cited by: §1.
  • M. J. Best (1996) An algorithm for the solution of the parametric quadratic programming problem. Applied Mathematics and Parallel Computing, pp. 57–76. Cited by: §1, Remark 1.
  • A. Charkhi and G. Claeskens (2018) Asymptotic post-selection inference for the akaike information criterion. Biometrika 105 (3), pp. 645–664. Cited by: §1.
  • S. Chen and J. Bien (2019) Valid inference corrected for outlier removal. Journal of Computational and Graphical Statistics, pp. 1–12. Cited by: §1.
  • Y. Choi, J. Taylor, R. Tibshirani, et al. (2017) Selecting the number of principal components: estimation of the true rank of a noisy matrix. The Annals of Statistics 45 (6), pp. 2590–2617. Cited by: §1.
  • V. N. L. Duy, H. Toda, R. Sugiyama, and I. Takeuchi (2020) Computing valid p-value for optimal changepoint by selective inference using dynamic programming. arXiv preprint arXiv:2002.09132. Cited by: §1.
  • B. Efron and R. Tibshirani (2004) Least angle regression. Annals of Statistics 32 (2), pp. 407–499. Cited by: §1.
  • W. Fithian, D. Sun, and J. Taylor (2014) Optimal inference after model selection. arXiv preprint arXiv:1410.2597. Cited by: §1, §1, §1, §2, §2, §3.1.
  • W. Fithian, J. Taylor, R. Tibshirani, and R. Tibshirani (2015) Selective sequential model selection. arXiv preprint arXiv:1512.02565. Cited by: §1, §1.
  • T. Gal (1995) Postoptimal analysis, parametric programming, and related topics. Walter de Gruyter. Cited by: §1.
  • T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu (2004) The entire regularization path for the support vector machine. Journal of Machine Learning Research 5, pp. 1391–1415. Cited by: §1.
  • T. Hastie, R. Tibshirani, and M. Wainwright (2015) Statistical learning with sparsity: the lasso and generalizations. CRC press. Cited by: §1.
  • T. Hocking, J. P. Vert, F. Bach, and A. Joulin (2011) Clusterpath: an algorithm for clustering using convex fusion penalties. In Proceedings of the 28th International Conference on Machine Learning, pp. 745–752. Cited by: §1.
  • S. Hyun, K. Lin, M. G’Sell, and R. J. Tibshirani (2018) Post-selection inference for changepoint detection algorithms with application to copy number variation data. arXiv preprint arXiv:1812.03644. Cited by: §1.
  • M. Karasuyama, N. Harada, M. Sugiyama, and I. Takeuchi (2012) Multi-parametric solution-path algorithm for instance-weighted support vector machines. Machine Learning 88 (3), pp. 297–330. Cited by: §1.
  • M. Karasuyama and I. Takeuchi (2010) Nonlinear regularization path for quadratic loss support vector machines. IEEE Transactions on Neural Networks 22 (10), pp. 1613–1625. Cited by: §1.
  • G. Lee and C. Scott (2007) The one class support vector machine solution path. In Proc. of ICASSP 2007, pp. II521–II524. Cited by: §1.
  • J. D. Lee, D. L. Sun, Y. Sun, J. E. Taylor, et al. (2016) Exact post-selection inference, with application to the lasso. The Annals of Statistics 44 (3), pp. 907–927. Cited by: Parametric Programming Approach for Powerful Lasso Selective Inference without Conditioning on Signs, §1, §1, §1, §1, §2, §2, §2, §2, §2, §4, §4.
  • J. D. Lee and J. E. Taylor (2014) Exact post model selection inference for marginal screening. In Advances in neural information processing systems, pp. 136–144. Cited by: §1.
  • K. Liu, J. Markovic, and R. Tibshirani (2018) More powerful post-selection inference, with application to the lasso. arXiv preprint arXiv:1801.09037. Cited by: §1, §1, §3.1, §3.3.
  • J. R. Loftus and J. E. Taylor (2014) A significance test for forward stepwise model selection. arXiv preprint arXiv:1405.3920. Cited by: §1.
  • J. R. Loftus (2015) Selective inference after cross-validation. arXiv preprint arXiv:1511.08866. Cited by: §1.
  • M. R. Osborne, B. Presnell, and B. A. Turlach (2000) A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis 20 (20), pp. 389–404. Cited by: §1.
  • S. Panigrahi, J. Taylor, and A. Weinstein (2016) Bayesian post-selection inference in the linear model. arXiv preprint arXiv:1605.08824 28. Cited by: §1.
  • K. Ritter (1984) On parametric linear and quadratic programming problems. Mathematical Programming: Proceedings of the International Congress on Mathematical Programming, pp. 307–335. Cited by: §1.
  • S. Rosset and J. Zhu (2007) Piecewise linear regularized solution paths. Annals of Statistics 35, pp. 1012–1030. Cited by: §1.
  • S. Rosset (2005) Following curved regularized optimization solution paths. In Advances in Neural Information Processing Systems 17, pp. 1153–1160. Cited by: §1.
  • S. Suzumura, K. Nakagawa, Y. Umezu, K. Tsuda, and I. Takeuchi (2017) Selective inference for sparse high-order interaction models. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3338–3347. Cited by: §1.
  • I. Takeuchi, K. Nomura, and T. Kanamori (2009) Nonparametric conditional density estimation using piecewise-linear solution path of kernel quantile regression. Neural Computation 21 (2), pp. 539–559. Cited by: §1.
  • K. Tanizaki, N. Hashimoto, Y. Inatsu, H. Hontani, and I. Takeuchi (2020) Computing valid p-values for image segmentation by selective inference. Cited by: §1.
  • J. Taylor and R. Tibshirani (2018) Post-selection inference for $\ell_1$-penalized likelihood models. Canadian Journal of Statistics 46 (1), pp. 41–61. Cited by: §1.
  • Y. Terada and H. Shimodaira (2019) Selective inference after variable selection via multiscale bootstrap. arXiv preprint arXiv:1905.10573. Cited by: §1.
  • X. Tian, J. Taylor, et al. (2018) Selective inference with a randomized response. The Annals of Statistics 46 (2), pp. 679–710. Cited by: §1, §1.
  • R. Tibshirani (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) 58 (1), pp. 267–288. Cited by: §1.
  • R. J. Tibshirani et al. (2013) The lasso problem and uniqueness. Electronic Journal of Statistics 7, pp. 1456–1490. Cited by: Remark 1.
  • R. J. Tibshirani, J. Taylor, R. Lockhart, and R. Tibshirani (2016) Exact post-selection inference for sequential regression procedures. Journal of the American Statistical Association 111 (514), pp. 600–620. Cited by: §1.
  • K. Tsuda (2007) Entire regularization paths for graph data. In Proc. of ICML 2007, pp. 919–925. Cited by: §1.
  • F. Yang, R. F. Barber, P. Jain, and J. Lafferty (2016) Selective inference for group-sparse linear models. In Advances in Neural Information Processing Systems, pp. 2469–2477. Cited by: §1.
  • H. Zou and T. Hastie (2005) Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2), pp. 301–320. Cited by: §A.2, Remark 1.

Appendix A

A.1 Detailed Proof of Lemma 2

From Equation (13), $\hat{\beta}_{\mathcal{A}}(z)$ is a linear function of $z$. For a real value $z_t$, there exists $t^{\beta} > z_t$ such that, for any real value $z$ in $[z_t, t^{\beta})$, all elements of $\hat{\beta}_{\mathcal{A}}(z)$ keep the same signs as those of $\hat{\beta}_{\mathcal{A}}(z_t)$. Similarly, from Equation (14), $\hat{s}_{\mathcal{A}^c}(z)$ is a linear function of $z$, and there exists $t^{s} > z_t$ such that, for any real value $z$ in $[z_t, t^{s})$, all elements of $\hat{s}_{\mathcal{A}^c}(z)$ are smaller than 1 in absolute value. Finally, by taking $z_{t+1} = \min\{t^{\beta}, t^{s}\}$, we obtain the interval on which the active set and the signs of the Lasso solution remain the same. The remaining task is to compute $t^{\beta}$ and $t^{s}$.

We first show how to derive $t^{\beta}$. From Equation (13), we have

To guarantee that $\hat{\beta}_{\mathcal{A}}(z)$ and $\hat{\beta}_{\mathcal{A}}(z_t)$ have the same signs,

(21)

For a specific , we consider the following cases:

  • If , then .

    • If , then (This inequality always holds since the left hand side is positive while the right hand side is negative).

    • If , then .

  • If , then .

    • If , then .

    • If , then (This inequality always holds since the left hand side is positive while the right hand side is negative).

Finally, taking the smallest value of $z$ greater than $z_t$ satisfying the condition in Equation (21) yields $t^{\beta}$.

We next show how to derive $t^{s}$. From Equation (14), we have

To guarantee that all elements of $\hat{s}_{\mathcal{A}^c}(z)$ remain smaller than 1 in absolute value,

(22)

For a specific , we have the following cases:

  • If , then .

  • If , then .

Note that the first inequalities of the above two cases always hold since the left-hand side is negative while the right-hand side is positive. Then, taking the smallest value of $z$ greater than $z_t$ satisfying the condition in Equation (22) yields $t^{s}$.

Finally, we can compute the next transition point by taking $z_{t+1} = \min\{t^{\beta}, t^{s}\}$.

A.2 Extension to Elastic Net

In some cases, the Lasso solutions are unstable. One way to stabilize them is to add an $\ell_2$ penalty to the objective function, resulting in the elastic net Zou and Hastie (2005). Therefore, we extend our proposed method and provide a detailed derivation for testing the selected features in the elastic net case. We now consider the optimization problem with the parametrized response vector $y(z) = a + bz$ for $z \in \mathbb{R}$ as follows

(23)

For any $z$ in $\mathbb{R}$, the optimality condition is given by

(24)

Similar to the Lasso case, to construct the truncation region $\mathcal{Z}$, we have to 1) compute the entire path of the elastic net solution in Equation (23), and 2) identify the set of intervals of $z$ on which the active set coincides with $\mathcal{A}(y^{\rm obs})$.

Lemma 3.

Let us consider two real values and . If and have the same active set and the same signs, then we have

(25)
(26)

where