More Powerful and General Selective Inference for Stepwise Feature Selection using the Homotopy Continuation Approach

12/25/2020
by Kazuya Sugiyama, et al.

Conditional selective inference (SI) has been actively studied as a new statistical inference framework for data-driven hypotheses. For example, the conditional SI framework enables exact (non-asymptotic) inference on the features selected by the stepwise feature selection (SFS) method. The basic idea of conditional SI is to make inference conditional on the selection event. The main limitation of the existing conditional SI approach for the SFS method is the loss of power caused by over-conditioning, which is imposed for computational tractability. In this paper, we develop a more powerful and general conditional SI method for SFS that resolves the over-conditioning issue with a homotopy continuation approach. We conduct several experiments to demonstrate the effectiveness and efficiency of the proposed method.


1 Introduction

As machine learning (ML) is being applied to a greater variety of practical problems, ensuring the reliability of ML is becoming increasingly important. Among several potential approaches to reliable ML, conditional selective inference (SI) is recognized as a promising approach for evaluating the statistical reliability of data-driven hypotheses selected by ML methods. The basic idea of conditional SI is to make inference on a data-driven hypothesis conditional on the selection event that the hypothesis is selected by analyzing the data with the ML algorithm. Conditional SI has been actively studied especially in the context of feature selection. Notably, Lee et al. [19] and Tibshirani et al. [40] proposed conditional SI methods for exact conditional inference on the features selected by the Lasso and by stepwise feature selection (SFS), respectively. The basic idea of these conditional SI methods is to characterize the hypothesis selection event by a polytope, i.e., a set of linear inequalities, in the sample space. When a hypothesis selection event is characterized by a polytope, the authors of these studies developed practical methods for making inference on the selected hypotheses by deriving the exact (non-asymptotic) sampling distribution conditional on the polytope. In this paper, we call conditional SI based on a polytope polytope-based SI. These studies are regarded as a significant advance in the field of statistical inference since traditional statistical inference cannot cope with hypotheses selected after observing the data.

Unfortunately, however, polytope-based SI has several limitations because it can be used only when all the relevant selection events can be characterized by a polytope. In fact, in most existing polytope-based SI studies, extra conditioning is required in order for the selection event to be characterized as a polytope. For example, in the SI for the SFS method of Tibshirani et al. [40], the authors needed to condition not only on the selected features but also on extra events regarding the signs and the order of the features in the selection process. Such extra conditioning leads to a loss of power in the inference [11].

In this paper, we go beyond polytope-based SI and propose a novel SI method that uses homotopy continuation for conditional inference on the features selected by SFS. We call the proposed method homotopy-based SI. Compared to the polytope-based SI for the SFS method of [40], the proposed method is more powerful and more general. The basic idea of homotopy-based SI is to use the homotopy continuation approach to keep track of the hypothesis selection event when the data changes along the direction of the test statistic, which enables efficient identification of the subspace of the sample space in which the same hypothesis is selected. We demonstrate the effectiveness of the proposed homotopy-based SI method through intensive simulations and real data analyses.

Related Work

A fundamental issue in current data analysis, with its increasingly complicated datasets and algorithms, is post-selection inference. Traditional statistical inference presumes that, before observing the data, we have decided on a statistical model and a statistical target for which we seek to conduct inference. Therefore, if we apply traditional statistical inference methods to hypotheses selected after observing the data, a selection bias is introduced and the inferential results are no longer valid. This issue has been extensively discussed in the context of post-feature-selection inference. In fact, even for commonly-used feature selection methods such as SFS, it has been difficult to correctly assess the statistical reliability of the selected features.

There have been several approaches suggested in the literature toward addressing this problem [3, 21, 20, 2, 30, 4, 23, 37]. A particularly notable approach that has received considerable attention in the past few years is conditional selective inference, introduced in the seminal paper by Lee et al. [19]. In their work, the authors showed that, for a feature selected by the Lasso, the hypothesis selection event can be characterized as a polytope by conditioning on the selected set of features as well as extra events on their signs, and that exact conditional inference can be conducted by exploiting the polyhedral selection event (polytope-based SI). Furthermore, Tibshirani et al. [40] showed that polytope-based SI is also applicable to SFS by conditioning on the set of selected features and by imposing some extra conditions regarding the signs and the order in the feature selection process. Conditional SI has been actively studied in the past few years and extended in various directions [12, 6, 39, 5, 15, 24, 25, 29, 40, 42, 32, 36, 9, 7].

Note that in conditional SI, we typically prefer to condition on as little as possible [11], so that the resulting inference is more powerful. Often, there are statistical reasons that make it necessary to condition [11], but at other times the reasons are purely computational (e.g., conditioning on the signs and the order in the SFS process). Namely, the main limitations of current polytope-based SI methods are that excessive conditioning is required to represent the selection event with a single polytope, and that selection events that cannot be represented with a polytope cannot be handled properly. In the seminal paper by Lee et al. [19], the authors already discussed the issue of over-conditioning and explained how conditioning on signs can be omitted by exhaustively enumerating all possible signs and taking the union over the resulting polyhedra. However, such an exhaustive enumeration of an exponentially increasing number of sign combinations is feasible only when the number of selected features is fairly small. Several other approaches have been proposed to circumvent the drawbacks and restrictions of polytope-based SI. Loftus et al. [25] extended polytope-based SI so that it can handle selection events characterized by quadratic inequalities, but their method inherits the over-conditioning issue of polytope-based SI. To improve the power, Tian et al. [39] proposed an approach that randomizes the algorithm in order to condition on less. Terada et al. [38] proposed to use bootstrap re-sampling to characterize the selection event more generally. A drawback of these approaches is that additional randomness is introduced into the algorithm and/or the inference.

This study is motivated by Liu et al. [22] and Duy et al. [8]. The former studied Lasso SI for full-model parameters, while the latter extended the basic idea of the former so that it can also be applied to Lasso SI in more general settings, including inference on partial-model parameters (exactly the same problem setup as [19]). These two studies go beyond polytope-based SI to achieve more powerful Lasso SI without conditioning on the signs by carefully analyzing the convex optimality conditions of the Lasso solutions. Unfortunately, in SI for SFS, we cannot use the optimality conditions of the solutions because SFS is not formulated as a convex optimization problem. Furthermore, in SI for SFS, we need to consider extra conditioning not only on the signs but also on the order of the SFS process.

2 Problem Statement

We consider the forward stepwise feature selection (SFS) method for the regression problem. Let $n$ be the number of instances and $p$ be the number of original features. We denote the observed dataset as $(X, y^{\rm obs})$, where $X \in \mathbb{R}^{n \times p}$ is the design matrix and $y^{\rm obs} \in \mathbb{R}^{n}$ is the response vector. Following the problem setup in the existing conditional SI literature such as [19] and [40], we assume that the observed response $y^{\rm obs}$ is a realization of the following random response vector

(1)   $Y = (Y_1, \ldots, Y_n)^\top \sim \mathbb{N}(\mu, \Sigma),$

where $\mu \in \mathbb{R}^{n}$ is the unknown mean vector, $\Sigma \in \mathbb{R}^{n \times n}$ is the covariance matrix, which is known or estimable from independent data, and the design matrix $X$ is assumed to be non-random. For notational simplicity, we assume that each column vector of $X$ is normalized to have unit length.

Stepwise feature selection

We consider the standard forward SFS method as studied in [40]: at each step of the SFS method, the feature that most improves the fit is newly added. When each feature has unit length, this is equivalent to adding the feature that is most correlated with the residual of the least-squares regression model fitted with the previously selected features. For a response vector $y$ and a set of features $\mathcal{M}$, let $r(y, \mathcal{M})$ be the residual vector obtained by regressing $y$ onto the features in $\mathcal{M}$, i.e.,

$r(y, \mathcal{M}) = (I_n - X_{\mathcal{M}}(X_{\mathcal{M}}^\top X_{\mathcal{M}})^{-1} X_{\mathcal{M}}^\top)\, y,$

where $I_n$ is the $n$-by-$n$ identity matrix and $X_{\mathcal{M}}$ is the matrix with the columns corresponding to $\mathcal{M}$. Let $K$ be the number of features selected by the SFS method (we discuss a situation where the number of selected features $K$ is itself selected by cross-validation in §3.4; in the other parts of the paper, we assume that $K$ is determined before looking at the data). When we need to clarify the fact that features are selected by applying the $K$-step SFS method to a response vector $y$, we denote the set of selected features as $\mathcal{M}(y)$. Similarly, we denote the feature selected at step $k$ as $j_k(y)$ for $k = 1, \ldots, K$. In the SFS method, the selected feature at step $k$ is defined as

(2)   $j_k = \arg\max_{j \in [p] \setminus \mathcal{M}_{k-1}} \big| x_j^\top\, r(y, \mathcal{M}_{k-1}) \big|,$

where $\mathcal{M}_{k-1} = \{j_1, \ldots, j_{k-1}\}$ and $\mathcal{M}_0 = \emptyset$.
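As a concrete illustration of the selection rule (2), the following minimal Python sketch (our own, not the authors' implementation; the function name and interface are hypothetical) runs $K$ steps of forward SFS and records the order in which features enter together with the signs of their correlations at entry, which will later form the extra event:

```python
import numpy as np

def forward_sfs(X, y, K):
    """K-step forward stepwise feature selection (Eq. (2)): a minimal sketch,
    not the authors' implementation. Returns the selected features in the
    order they entered and the signs of their correlations at entry."""
    selected, signs = [], []
    for _ in range(K):
        if selected:
            XM = X[:, selected]
            # residual of y after a least-squares fit on the selected columns
            r = y - XM @ np.linalg.lstsq(XM, y, rcond=None)[0]
        else:
            r = y
        corr = X.T @ r
        corr[selected] = 0.0            # exclude already-selected features
        j = int(np.argmax(np.abs(corr)))
        selected.append(j)
        signs.append(int(np.sign(corr[j])))
    return selected, signs
```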

Statistical inference

In order to quantify the statistical significance of the relation between the selected features and the response, we consider a statistical test for each coefficient of the selected model parameters

$\hat\beta = (X_{\mathcal{M}}^\top X_{\mathcal{M}})^{-1} X_{\mathcal{M}}^\top\, y,$

where $X_{\mathcal{M}}$ is the matrix with the set of columns corresponding to $\mathcal{M}$. Note that the $j$-th coefficient is written as $\hat\beta_j = \eta_j^\top y$ by defining

(3)   $\eta_j = X_{\mathcal{M}} (X_{\mathcal{M}}^\top X_{\mathcal{M}})^{-1} e_j,$

where $e_j$ is a unit vector whose $j$-th element is 1 and 0 otherwise. Note that $\eta_j$ depends on $X$ and $\mathcal{M}$, but we omit the dependence for notational simplicity. We consider the following statistical test for each coefficient:

(4)   $\mathrm{H}_{0,j}: \beta_j = 0 \quad \text{vs.} \quad \mathrm{H}_{1,j}: \beta_j \neq 0,$

where $\beta_j$ is the $j$-th element of the population least squares $\beta = (X_{\mathcal{M}}^\top X_{\mathcal{M}})^{-1} X_{\mathcal{M}}^\top \mu$, i.e., the projection of $\mu$ onto the column space of $X_{\mathcal{M}}$.
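For illustration, the direction of interest $\eta_j$ in (3) can be computed as in the following sketch (names are ours; `selected` plays the role of the selected set $\mathcal{M}$):

```python
import numpy as np

def direction_of_interest(X, selected, j):
    """eta_j in Eq. (3) for the j-th coefficient of the selected model;
    `selected` is the selected feature set M (a sketch, names are ours)."""
    XM = X[:, selected]
    e = np.zeros(len(selected))
    e[j] = 1.0                          # unit vector picking coefficient j
    return XM @ np.linalg.solve(XM.T @ XM, e)
```

The test statistic is then simply the inner product with the response, e.g. `direction_of_interest(X, selected, j) @ y`, which equals $\hat\beta_j = \eta_j^\top y$.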

Conditional Selective Inference (SI)

Since the target of the inference is selected by observing the data, if we naively apply a traditional statistical test to the problem in (4) as if the inference target were pre-determined, the result is not valid (the type I error cannot be controlled at the desired significance level) due to selection bias. To address this issue, we consider the conditional selective inference (SI) framework introduced in [19] and [40]. In the conditional SI framework, the inference is conducted based on the following conditional sampling distribution of the test statistic:

(5)   $\eta_j^\top Y \mid \big\{ \mathcal{M}(Y) = \mathcal{M}(y^{\rm obs}),\ q(Y) = q(y^{\rm obs}) \big\},$

where $q(Y) = (I_n - c\,\eta_j^\top)Y$ with $c = \Sigma \eta_j (\eta_j^\top \Sigma \eta_j)^{-1}$ is the nuisance component, which is independent of the test statistic. The first condition in (5) indicates the event that the set of features selected by applying the $K$-step SFS method to a random response vector $Y$ is the same as that for $y^{\rm obs}$. The second condition indicates that the nuisance component for a random response vector $Y$ is the same as that for the observed vector $y^{\rm obs}$ (the nuisance component $q(Y)$ corresponds to the component $z$ in the seminal paper; see [19], Sec. 5, Eq. (5.2) and Theorem 5.2).

To conduct the conditional inference in (5), the main task is to identify the conditional data space

(6)   $\mathcal{Y} = \big\{ y \in \mathbb{R}^n : \mathcal{M}(y) = \mathcal{M}(y^{\rm obs}),\ q(y) = q(y^{\rm obs}) \big\}.$

Once $\mathcal{Y}$ is identified, we can easily compute the pivotal quantity

(7)   $F^{\mathcal{Z}}_{\eta_j^\top \mu,\ \eta_j^\top \Sigma \eta_j}\big(\eta_j^\top y^{\rm obs}\big),$

where $F^{\mathcal{Z}}_{m, s^2}$ is the c.d.f. of the truncated Normal distribution with mean $m$, variance $s^2$, and truncation region $\mathcal{Z}$. Later, we will explain how $\mathcal{Z}$ in (7) is defined. The pivotal quantity is crucial for calculating a $p$-value or obtaining a confidence interval. Based on the pivotal quantity, we can obtain the selective type I error or selective $p$-value [11] in the form of

(8)   $p_j = 2 \min\{\pi_j,\ 1 - \pi_j\},$

where $\pi_j = 1 - F^{\mathcal{Z}}_{0,\ \eta_j^\top \Sigma \eta_j}\big(\eta_j^\top y^{\rm obs}\big)$. This $p$-value is valid in the sense that

$\mathbb{P}_{\mathrm{H}_{0,j}}\big( p_j \le \alpha \mid \mathcal{M}(Y) = \mathcal{M}(y^{\rm obs}),\ q(Y) = q(y^{\rm obs}) \big) = \alpha, \quad \forall \alpha \in [0, 1].$

Furthermore, to obtain a $1-\alpha$ confidence interval for any $\alpha \in [0, 1]$, by inverting the pivotal quantity in (7), we can find the smallest and largest values of $\eta_j^\top \mu$ such that the value of the pivotal quantity remains in the interval $[\alpha/2,\ 1-\alpha/2]$ [19].
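For concreteness, once the truncation region $\mathcal{Z}$ is available as a union of intervals on the scale of the test statistic, the selective $p$-value in (8) can be evaluated with the c.d.f. of a truncated Normal distribution. The sketch below is our own (using scipy; the function name is hypothetical) and assumes the null value $\eta_j^\top \mu = 0$:

```python
import numpy as np
from scipy.stats import norm

def selective_p_value(stat, var, intervals):
    """Selective p-value of (8) from the truncated-normal pivot in (7).
    `intervals` is the truncation region Z (list of (lo, hi) pairs) on the
    scale of the test statistic; the mean is 0 under the null hypothesis.
    A sketch; a numerically robust implementation would work in log-space."""
    sd = np.sqrt(var)
    mass_below, mass_total = 0.0, 0.0
    for lo, hi in intervals:
        mass_total += norm.cdf(hi / sd) - norm.cdf(lo / sd)
        if stat > lo:
            mass_below += norm.cdf(min(hi, stat) / sd) - norm.cdf(lo / sd)
    cdf = mass_below / mass_total       # truncated-normal c.d.f. at the observation
    return 2.0 * min(cdf, 1.0 - cdf)    # two-sided selective p-value
```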

Characterization of the conditional data space

Using the second condition in (6), the data in $\mathcal{Y}$ is restricted to a line (see [22] and [11]). Therefore, the set $\mathcal{Y}$ can be re-written, using a scalar parameter $z \in \mathbb{R}$, as

(9)   $\mathcal{Y} = \{ y(z) = a + b z : z \in \mathcal{Z} \},$

where $a = q(y^{\rm obs})$, $b = \Sigma \eta_j (\eta_j^\top \Sigma \eta_j)^{-1}$, and

(10)   $\mathcal{Z} = \big\{ z \in \mathbb{R} : \mathcal{M}(a + b z) = \mathcal{M}(y^{\rm obs}) \big\}.$

Here, like $\eta_j$, the quantities $a$, $b$, and $\mathcal{Z}$ depend on $X$ and $\mathcal{M}(y^{\rm obs})$, but we omit the subscripts for notational simplicity. Now, let us consider a random variable $Z \in \mathbb{R}$ and its observation $z^{\rm obs} \in \mathbb{R}$ such that they respectively satisfy $Y = a + b Z$ and $y^{\rm obs} = a + b z^{\rm obs}$. The conditional inference in (5) is re-written as the problem of characterizing the sampling distribution of

(11)   $Z \mid Z \in \mathcal{Z}.$

Since $Y \sim \mathbb{N}(\mu, \Sigma)$, $Z \mid Z \in \mathcal{Z}$ follows a truncated Normal distribution. Once the truncation region $\mathcal{Z}$ is identified, the pivotal quantity in (7) is obtained as $F^{\mathcal{Z}}_{\eta_j^\top \mu,\ \eta_j^\top \Sigma \eta_j}(z^{\rm obs})$. Thus, the remaining task is reduced to the characterization of $\mathcal{Z}$.
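The quantities $a$, $b$, and $z^{\rm obs}$ of the parametrization (9) follow directly from the definitions above; a minimal sketch (ours, with hypothetical names):

```python
import numpy as np

def line_parametrization(y_obs, Sigma, eta):
    """Compute a, b and z_obs of Eq. (9): y(z) = a + b*z with z the test
    statistic (a sketch following the definitions above)."""
    b = Sigma @ eta / float(eta @ Sigma @ eta)   # direction of the statistic
    z_obs = float(eta @ y_obs)                   # observed test statistic
    a = y_obs - b * z_obs                        # nuisance component q(y_obs)
    return a, b, z_obs
```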

Extra-conditioning in existing conditional SI methods

Unfortunately, it has been considered computationally infeasible to fully identify the truncation region in conditional SI for the SFS method. Therefore, in existing conditional SI studies such as [40], the authors circumvent the computational difficulty by over-conditioning with extra conditions. Note that over-conditioning is not harmful for selective type-I error control, but it is known that over-conditioning leads to a loss of power in conditional SI [11]. In fact, the decrease in power due to over-conditioning is not an issue unique to the SFS method in [40], but is a major issue common to many existing conditional SI methods [19]. In the next section, we propose a method to overcome this difficulty by removing the extra conditions, yielding minimally-conditioned SI for SFS.

3 Proposed Method

Figure 1: A schematic illustration of the proposed method. By applying the $K$-step SFS algorithm to the observed data $y^{\rm obs}$, we obtain a set of selected features $\mathcal{M}(y^{\rm obs})$. Then, we parametrize the data with a scalar parameter $z$ in the dimension of the test statistic to identify the subspace $\mathcal{Y}$ whose data yield the same set of selected features as $y^{\rm obs}$, regardless of differences in signs and sequential orders. Finally, valid statistical inference is conducted conditional on $\mathcal{Y}$. We introduce a homotopy continuation approach for efficiently characterizing the conditional data space $\mathcal{Y}$.

As discussed in §2, to conduct the conditional SI, the truncation region $\mathcal{Z}$ in (10) must be identified. To construct $\mathcal{Z}$, our idea is 1) computing $\mathcal{M}(a + b z)$ for all $z \in \mathbb{R}$, and 2) identifying the set of intervals of $z$ on which $\mathcal{M}(a + b z) = \mathcal{M}(y^{\rm obs})$. However, it seems intractable to obtain $\mathcal{M}(a + b z)$ for infinitely many values of $z$. To overcome this difficulty, we combine two approaches, which we call extra-conditioning and homotopy continuation.

Figure 1 shows a schematic illustration of the proposed method. Our idea is motivated by the regularization path of the Lasso [28, 10], the SVM [13], and other similar methods [31, 1, 41, 18, 33, 35, 17, 14, 16, 27, 34], in which the solution path along the regularization parameter can be computed by analyzing the KKT optimality conditions of the parametrized convex optimization problems. Although SFS cannot be formulated as a convex optimization problem, by introducing the notion of extra-conditioning, a conceptually similar approach to homotopy continuation can be used to keep track of all possible changes of the selected features when the response vector changes along the direction of the test statistic.

3.1 Extra-Conditioning

First, we consider not only the feature selection event but also an extra event. In conditional SI for the SFS method, the extra event consists of two types of information, regarding orders and signs, which are necessary for characterizing the process of the SFS method. We use the symbol $\mathcal{O}$ for the former and $\mathcal{S}$ for the latter. Like the notation $\mathcal{M}(y)$, when we need to clarify the fact that they are obtained by applying the $K$-step SFS method to a response vector $y$, we denote them as $\mathcal{O}(y)$ and $\mathcal{S}(y)$, respectively. Thus, the extra event is denoted as $(\mathcal{O}(y), \mathcal{S}(y))$.

The former contains the information on the order in which the features are selected. Concretely, we define $\mathcal{O}(y) = (j_1(y), \ldots, j_K(y))$ to be the selected permutation of the set of selected features $\mathcal{M}(y)$. By combining $\mathcal{M}(y)$ and $\mathcal{O}(y)$, the history of how features have been selected so far, i.e., $\mathcal{M}_1(y), \ldots, \mathcal{M}_K(y)$, can be fully characterized.

The latter contains the information on the sign with which each feature enters the model. Concretely, we define $\mathcal{S}(y)$ to be a binary vector of length $K$ whose $k$-th element is defined as

(12)   $s_k(y) = \mathrm{sign}\big( x_{j_k}^\top\, r(y, \mathcal{M}_{k-1}) \big),$

which indicates the sign of the correlation between the feature selected at step $k$ and the residual vector after the first $k-1$ steps.

The following lemma tells us that, by conditioning not only on the feature selection event but also on the extra events regarding the signs and the order, the over-conditioned truncation region can be simply represented as an interval on the line $\{a + b z : z \in \mathbb{R}\}$.

Lemma 1.

For a response vector $y'$, the over-conditioned truncation region defined as

$\mathcal{Z}^{\rm oc}(y') = \big\{ z \in \mathbb{R} : \mathcal{M}(a + b z) = \mathcal{M}(y'),\ \mathcal{O}(a + b z) = \mathcal{O}(y'),\ \mathcal{S}(a + b z) = \mathcal{S}(y') \big\}$

is an interval in $\mathbb{R}$.

Proof.

By conditioning on the feature selection event and the extra event, the triplet is fixed as

$\mathcal{M}(a + b z) = \mathcal{M}(y'), \quad \mathcal{O}(a + b z) = \mathcal{O}(y'), \quad \mathcal{S}(a + b z) = \mathcal{S}(y').$

From the first two conditions, the history of selected feature sets $\mathcal{M}_0, \mathcal{M}_1, \ldots, \mathcal{M}_{K-1}$ and the winning feature $j_k$ at every step are fixed. Therefore, we have, for every step $k$ and every $j \notin \mathcal{M}_{k-1}$,

(13)   $\big| x_{j_k}^\top\, r(a + b z, \mathcal{M}_{k-1}) \big| \ \ge\ \big| x_j^\top\, r(a + b z, \mathcal{M}_{k-1}) \big|.$

By further conditioning on the sign information $s_k$, the condition is written as

$s_k\, x_{j_k}^\top\, r(a + b z, \mathcal{M}_{k-1}) \ \ge\ \pm\, x_j^\top\, r(a + b z, \mathcal{M}_{k-1}) \quad \text{and} \quad s_k\, x_{j_k}^\top\, r(a + b z, \mathcal{M}_{k-1}) \ \ge\ 0.$

Since $r(a + b z, \mathcal{M}_{k-1})$ is linear in $z$, each of these conditions is a linear inequality of the form $\kappa_i + \delta_i z \ge 0$. By restricting the data to the line $\{a + b z : z \in \mathbb{R}\}$, the range of $z$ is written as

(14)   $\mathcal{Z}^{\rm oc}(y') = \big[ L(y'),\ U(y') \big],$

where

$L(y') = \max_{i:\ \delta_i > 0} \Big(-\frac{\kappa_i}{\delta_i}\Big), \qquad U(y') = \min_{i:\ \delta_i < 0} \Big(-\frac{\kappa_i}{\delta_i}\Big)$

for the coefficients $(\kappa_i, \delta_i)$ of the linear inequalities. ∎
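To illustrate Lemma 1 computationally, the sketch below (our own construction, not the authors' code) collects the linear-in-$z$ inequalities obtained by fixing the order and the signs of a $K$-step SFS run on $y(z) = a + b z$, and intersects them into a single interval $[L, U]$:

```python
import numpy as np

def over_conditioned_interval(X, a, b, order, signs):
    """Interval of z on which the SFS run on y(z) = a + b*z reproduces the
    given `order` and `signs` (Lemma 1, sketched). Each step imposes
    s_k * x_{j_k}^T r(z) >= +/- x_j^T r(z) for every unselected feature j;
    since r(z) is linear in z, each constraint is kappa + delta * z >= 0."""
    p = X.shape[1]
    lo, hi = -np.inf, np.inf
    selected = []
    for jk, sk in zip(order, signs):
        if selected:
            XM = X[:, selected]
            P = XM @ np.linalg.solve(XM.T @ XM, XM.T)
            ra, rb = a - P @ a, b - P @ b      # residualized a and b
        else:
            ra, rb = a, b
        k_win, d_win = sk * (X[:, jk] @ ra), sk * (X[:, jk] @ rb)
        # the winner's correlation must carry sign s_k
        if d_win > 0:
            lo = max(lo, -k_win / d_win)
        elif d_win < 0:
            hi = min(hi, -k_win / d_win)
        for j in range(p):
            if j == jk or j in selected:
                continue
            for sgn in (+1.0, -1.0):           # beat both signs of competitor j
                kappa = k_win - sgn * (X[:, j] @ ra)
                delta = d_win - sgn * (X[:, j] @ rb)
                if delta > 0:
                    lo = max(lo, -kappa / delta)
                elif delta < 0:
                    hi = min(hi, -kappa / delta)
        selected.append(jk)
    return lo, hi
```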

Note that the original conditional SI for SFS in [40] considered exactly the over-conditioned case in the above lemma. The polytope-based SI is applicable to the SFS method only with the extra conditioning on $(\mathcal{O}, \mathcal{S})$. Although it is known that over-conditioning leads to a loss of power [11], it has been considered computationally infeasible to remove the extra conditioning on $(\mathcal{O}, \mathcal{S})$. In the following subsection, we overcome this computational difficulty with the homotopy continuation approach.

3.2 Homotopy Continuation

Consider all possible triplets $(\mathcal{M}(a + b z), \mathcal{O}(a + b z), \mathcal{S}(a + b z))$ for all possible response vectors $a + b z$ on the line. Clearly, the number of such triplets is finite. With a slight abuse of notation, we index the triplets by $t$ and denote them as $(\mathcal{M}_t, \mathcal{O}_t, \mathcal{S}_t)$ for $t = 1, \ldots, T$, where $T$ is the number of triplets. Lemma 1 indicates that each triplet corresponds to an interval on the line. Without loss of generality, we assume that the left-most interval corresponds to $t = 1$, the second left-most interval corresponds to $t = 2$, and so on. Then, using an increasing sequence of real numbers $z_0 < z_1 < \cdots < z_T$, we can write these intervals as $[z_{t-1}, z_t]$ for $t = 1, \ldots, T$. In practice, we do not have to consider the entire line; it suffices to consider a range $[z_{\min}, z_{\max}]$ whose width is a sufficiently large multiple of the standard error of the test statistic, since the probability mass outside this range is negligibly small. Thus, we set $z_0 = z_{\min}$ and $z_T = z_{\max}$ in our implementation and consider only the set of intervals within this range.

Our simple idea is to compute all possible triplets by keeping track of the intervals one by one, and then to compute the truncation region $\mathcal{Z}$ by collecting the intervals in which the set of selected features is the same as the set of features actually selected from the observed data $y^{\rm obs}$, i.e.,

$\mathcal{Z} = \bigcup_{t:\ \mathcal{M}_t = \mathcal{M}(y^{\rm obs})} [z_{t-1}, z_t].$

We call $z_0, z_1, \ldots, z_T$ breakpoints. We start by applying the SFS method to the response vector $a + b z_0$ and obtain the first triplet $(\mathcal{M}_1, \mathcal{O}_1, \mathcal{S}_1)$. Then, the next breakpoint $z_1$ is obtained as the upper end of the interval in (14) with $y' = a + b z_0$. Then, the second triplet $(\mathcal{M}_2, \mathcal{O}_2, \mathcal{S}_2)$ is obtained by applying the SFS method to the response vector $a + b (z_1 + \Delta z)$, where $\Delta z$ is a small value chosen such that $z_t + \Delta z < z_{t+1}$ for all $t$. This process is repeated until the next breakpoint becomes greater than $z_{\max}$.

3.3 Algorithm

Input: $X$, $y^{\rm obs}$, $K$
1:  $\mathcal{M}(y^{\rm obs}) \leftarrow$ apply the $K$-step SFS algorithm to $(X, y^{\rm obs})$
2:  for each selected feature $j \in \mathcal{M}(y^{\rm obs})$ do
3:      compute $\eta_j$ by Equation (3)
4:      compute $a$ and $b$ by Equation (9)
5:      $\mathcal{Z} \leftarrow$ compute_truncation_region$(X, a, b, z_{\min}, z_{\max})$
6:      compute the selective $p$-value $p_j$ by Equation (8) (and/or the selective confidence interval)
7:  end for
Output: $\{p_j\}_{j \in \mathcal{M}(y^{\rm obs})}$ (and/or selective confidence intervals)
Algorithm 1 SFS_conditional_SI

In this section, we present the details of the proposed method. In Algorithm 1, we apply the $K$-step SFS method to the observed dataset $(X, y^{\rm obs})$ and obtain the set of selected features $\mathcal{M}(y^{\rm obs})$. Then, for each feature $j \in \mathcal{M}(y^{\rm obs})$, we first compute the direction of interest $\eta_j$ by (3). Next, for each feature $j$, we compute the truncation region $\mathcal{Z}$ by homotopy continuation. Note that $a$, $b$, and $\mathcal{Z}$ differ among different $j$ since the direction of interest $\eta_j$ depends on $j$. This task is carried out by Algorithm 2. Finally, having obtained the truncation region $\mathcal{Z}$, we can compute selective $p$-values and selective confidence intervals. In Algorithm 2, the breakpoints are computed one by one. The algorithm is initialized at $z_{\min}$. At each breakpoint, the task is to find the next breakpoint, at which there is a change in the set of selected features $\mathcal{M}$, the order $\mathcal{O}$, or the signs $\mathcal{S}$. This step is repeated until the current breakpoint exceeds $z_{\max}$.

Input: $X$, $a$, $b$, $z_{\min}$, $z_{\max}$
1:  Initialization: $t \leftarrow 1$, $z_0 \leftarrow z_{\min}$, $\mathcal{Z} \leftarrow \emptyset$
2:  while $z_{t-1} < z_{\max}$ do
3:      $y \leftarrow a + b\,(z_{t-1} + \Delta z)$
4:      $(\mathcal{M}_t, \mathcal{O}_t, \mathcal{S}_t) \leftarrow$ apply the $K$-step SFS to $(X, y)$
5:      $z_t \leftarrow U(y)$ by Equation (14)
6:      if $\mathcal{M}_t = \mathcal{M}(y^{\rm obs})$ then
7:          $\mathcal{Z} \leftarrow \mathcal{Z} \cup [z_{t-1}, z_t]$
8:      end if
9:      $t \leftarrow t + 1$
10:  end while
Output: truncation region $\mathcal{Z}$
Algorithm 2 compute_truncation_region
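A minimal Python sketch of Algorithm 2 (ours, reusing the hypothetical helpers `forward_sfs` and `over_conditioned_interval` introduced above) reads:

```python
import numpy as np

def compute_truncation_region(X, a, b, z_min, z_max, K, M_obs, eps=1e-4):
    """Homotopy continuation over z (Algorithm 2, sketched): walk along
    y(z) = a + b*z breakpoint by breakpoint and keep the intervals on which
    the selected set equals the observed set M_obs."""
    intervals, z = [], z_min
    while z < z_max:
        y = a + b * (z + eps)                    # step just past the breakpoint
        order, signs = forward_sfs(X, y, K)      # current triplet (M_t, O_t, S_t)
        lo, hi = over_conditioned_interval(X, a, b, order, signs)
        hi = min(hi, z_max)
        if set(order) == set(M_obs):             # same selected set: part of Z
            intervals.append((max(lo, z), hi))
        z = max(hi, z + eps)                     # move to the next breakpoint
    return intervals
```

The returned list of intervals is the truncation region $\mathcal{Z}$ fed to the truncated-Normal pivot in (7).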

3.4 Selecting the number of steps by cross-validation

In this section, we introduce a method for SI that is also conditional on the selection of the number of selected features via cross-validation. Consider selecting the number of steps $K$ in the SFS method from a given set of candidates $\mathcal{K} = \{K_1, \ldots, K_C\}$, where $C$ is the number of candidates. Based on cross-validation on the observed dataset $(X, y^{\rm obs})$, suppose that $\hat K \in \mathcal{K}$ is selected as the best one. The test statistic for a selected feature $j \in \mathcal{M}_{\hat K}(y^{\rm obs})$ when applying the SFS method with $\hat K$ steps to $y^{\rm obs}$ is then defined as

(15)   $\eta_j^\top y^{\rm obs}$, where $\eta_j$ is defined as in (3) with $\mathcal{M} = \mathcal{M}_{\hat K}(y^{\rm obs})$.

The truncation region of the conditional data space in (9), with the event of selecting $\hat K$ taken into account, is then written as

(16)   $\mathcal{Z} = \big\{ z \in \mathbb{R} : \mathcal{M}_{\hat K}(a + b z) = \mathcal{M}_{\hat K}(y^{\rm obs}),\ \hat K(a + b z) = \hat K(y^{\rm obs}) \big\},$

where $\hat K(a + b z)$ denotes the number of steps selected by cross-validation when the response vector is $a + b z$. The truncation region $\mathcal{Z}$ can be obtained as the intersection of the following two sets:

$\mathcal{Z}_1 = \big\{ z : \mathcal{M}_{\hat K}(a + b z) = \mathcal{M}_{\hat K}(y^{\rm obs}) \big\} \quad \text{and} \quad \mathcal{Z}_2 = \big\{ z : \hat K(a + b z) = \hat K(y^{\rm obs}) \big\}.$

Since the former can be obtained by using the method described above, the remaining task is to identify the latter, $\mathcal{Z}_2$.

For notational simplicity, we consider the case where the dataset is divided into a training set and a validation set, and the latter is used for selecting $K$. The following discussion can be easily extended to the cross-validation scenario. Let us re-write the split as

$X = \begin{bmatrix} X_{\rm tr} \\ X_{\rm val} \end{bmatrix}, \qquad y(z) = a + b z = \begin{bmatrix} y_{\rm tr}(z) \\ y_{\rm val}(z) \end{bmatrix}.$

With a slight abuse of notation, for $K \in \mathcal{K}$, let $\mathcal{M}_K(y_{\rm tr}(z))$ be the set of features selected by applying the $K$-step SFS method to $(X_{\rm tr}, y_{\rm tr}(z))$. The validation error is then defined as

(17)   $E_K(z) = \big\| y_{\rm val}(z) - X_{{\rm val}, \mathcal{M}_K}\, \hat\beta_K(z) \big\|_2^2,$

where $\mathcal{M}_K = \mathcal{M}_K(y_{\rm tr}(z))$ and $\hat\beta_K(z) = \big( X_{{\rm tr}, \mathcal{M}_K}^\top X_{{\rm tr}, \mathcal{M}_K} \big)^{-1} X_{{\rm tr}, \mathcal{M}_K}^\top\, y_{\rm tr}(z)$. Then, we can write

$\mathcal{Z}_2 = \big\{ z : E_{\hat K}(z) \le E_K(z) \ \text{for all } K \in \mathcal{K} \big\}.$

Since the validation error in (17) is a piecewise-quadratic function of $z$, we have a corresponding piecewise-quadratic function of $z$ for each $K \in \mathcal{K}$. The truncation region $\mathcal{Z}_2$ can be identified as the intersection, over all the other $K \in \mathcal{K}$, of the intervals of $z$ in which the piecewise-quadratic function corresponding to $\hat K$ is not larger than the one corresponding to $K$.
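To see why $E_K(z)$ is piecewise-quadratic, note that on any interval of $z$ where the training-side selected set $\mathcal{M}_K(y_{\rm tr}(z))$ is fixed, both $\hat\beta_K(z)$ and the validation residual are linear in $z$. The following sketch (ours, with hypothetical names) computes the corresponding quadratic coefficients on such an interval:

```python
import numpy as np

def validation_error_quadratic(X_tr, X_val, a_tr, b_tr, a_val, b_val, M):
    """Coefficients (c2, c1, c0) with E(z) = c2*z^2 + c1*z + c0 on an interval
    where the K-step SFS run on the training data selects the fixed set M."""
    XtrM, XvalM = X_tr[:, M], X_val[:, M]
    # beta_hat(z) is linear in z: beta_a + beta_b * z
    G = np.linalg.inv(XtrM.T @ XtrM) @ XtrM.T
    beta_a, beta_b = G @ a_tr, G @ b_tr
    # validation residual r(z) = u + v*z is also linear in z
    u = a_val - XvalM @ beta_a
    v = b_val - XvalM @ beta_b
    return v @ v, 2 * (u @ v), u @ u   # ||u + v z||^2 = (v.v) z^2 + 2(u.v) z + (u.u)
```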

Loftus [26] already discussed the possibility of incorporating the cross-validation event into the conditional SI framework. However, his method is highly over-conditioned in the sense that, for each single run of the SFS method in the entire cross-validation process, extra conditioning on orders and signs is required. Our method described above is minimally-conditioned SI in the sense that our inference is conducted based exactly on the conditional sampling distribution of the test statistic given the selection event in (16), without any extra conditions.

4 Experiment

In this section, we demonstrate the performance of the proposed homotopy-based SI. We executed the experiments on an Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz. We report the false positive rates (FPRs), true positive rates (TPRs), and confidence intervals (CIs) for the following cases of conditional SI:

Active: only conditioning on the active set.

ActiveSign: conditioning on the active set and signs.

ActiveOrder: conditioning on the active set and the sequential order that the feature enters the active set.

ActiveSignOrder: conditioning on the active set, signs, and sequential order, which is exactly the same as the polytope-based SI in [40].

We also report the FPRs, TPRs, and CIs of the data splitting (DS) method, which is a commonly used procedure for selection bias correction. In this approach, the data is randomly divided into two halves: the first half is used for model selection and the other for inference.

We set the significance level for all the experiments. We generated outcomes as , where in which . We set for the FPR and TPR experiments, and was respectively set to and . We ran 100 trials and repeated these experiments 20 times. For the CI experiments, we set and .

The FPR and TPR results are shown in Figures 2(a) and 2(b). In all five cases, the FPRs are properly controlled under the significance level. Regarding the TPR comparison, it is obvious that Active has the highest power since it conditions only on the selected features. Figure 2(c) shows a demonstration of the CIs for each selected feature. Note that we only show the features commonly selected among all five cases of conditional inference. The lengths of the CIs obtained by Active are almost always the shortest. We repeated this experiment 100 times and show a dot plot of the lengths of the confidence intervals in Figure 2(d). In summary, the CI results are consistent with the TPR results. In other words, the short CIs in the case of Active indicate that Active has the highest power.

Figure 2: Experimental results comparing (a) FPR, (b) TPR, (c) a CI demonstration, and (d) the lengths of the CIs between the five cases of conditional inference (Active, ActiveSign, ActiveOrder, ActiveSignOrder, DS).

We also compare the TPRs and the CIs between the case where the number of steps $K$ is fixed and the cases where $K$ is selected from a candidate set using cross-validation. In this experiment, only the leading elements of the true coefficient vector were set to a non-zero value, and all the remaining elements were set to zero. Figure 3 shows that the TPR tends to decrease as the size of the candidate set increases. This is because, when we increase the size of the candidate set, we have to condition on more information, which leads to a shorter truncation interval and results in a lower TPR. The TPR results are consistent with the CI results shown in Figure 4, in which the CI length increases with the size of the candidate set.

Figure 3: Demonstration of TPR when accounting for the cross-validation selection event.
Figure 4: Demonstration of CI length when accounting for the cross-validation selection event.

Next, we demonstrate the computational efficiency of the proposed method. We show the result of comparing the computational time between the proposed method and the existing method on an artificial dataset in Figure 5. For the existing method, if we want to keep high statistical power, we have to enumerate a huge number of sign and order patterns, which is unrealistic. With the proposed method, we are able to significantly reduce the computational cost while keeping high power.

Figure 5: Comparison of the computational time between the proposed method and the existing method on an artificial dataset.

We also show, in Figure 6, a violin plot of the actual number of intervals of the test statistic involved in the construction of the truncated sampling distribution on an artificial dataset. In this case, the number of polytopes intersecting the line that we need to consider is larger than in the Lasso case [8], because we condition not only on the signs but also on the order to derive each polytope; however, it is much smaller than the total number of possible sign and order patterns. When $K$ is large, the ordering constraint becomes very strict, so the number of intervals of $z$ satisfying the condition including the order (ActiveOrder) becomes very small even after removing the sign constraint.

Figure 6: The number of polytopes intersecting the line that we need to consider. The solid lines show the sample averages.

We also report the computational time of the proposed method on artificial datasets and on high-dimensional real-world bioinformatics datasets in Figure 7 and Table 1. The real-world datasets are available at http://www.coepra.org/CoEPrA_regr.html.

Figure 7: The computational time of the proposed method on artificial datasets.
Dataset       n      p     K   time (sec)
Dataset 1    89   5787    50       799.13
Dataset 2    76   5144    50       572.34
Dataset 3   133   5787    50      1709.49
Table 1: The computational time of the proposed method on the high-dimensional real-world bioinformatics datasets.

5 Conclusion

In this paper, we proposed a more powerful and general conditional selective inference method for the stepwise feature selection method. We resolved the over-conditioning issue in the existing approach by introducing a homotopy continuation approach. The experimental results indicate that the proposed homotopy-based approach is more powerful and computationally efficient.

Acknowledgement

This work was partially supported by MEXT KAKENHI (20H00601, 16H06538), JST CREST (JPMJCR1502), RIKEN Center for Advanced Intelligence Project, and RIKEN Junior Research Associate Program.

References

  • [1] F. R. Bach, D. Heckerman, and E. Horvitz (2006) Considering cost asymmetry in learning classifiers. Journal of Machine Learning Research 7, pp. 1713–1741. Cited by: §3.
  • [2] Y. Benjamini, R. Heller, and D. Yekutieli (2009) Selective inference in complex research. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367 (1906), pp. 4255–4271. Cited by: §1.
  • [3] Y. Benjamini and D. Yekutieli (2005) False discovery rate–adjusted multiple confidence intervals for selected parameters. Journal of the American Statistical Association 100 (469), pp. 71–81. Cited by: §1.
  • [4] R. Berk, L. Brown, A. Buja, K. Zhang, L. Zhao, et al. (2013) Valid post-selection inference. The Annals of Statistics 41 (2), pp. 802–837. Cited by: §1.
  • [5] S. Chen and J. Bien (2019) Valid inference corrected for outlier removal. Journal of Computational and Graphical Statistics, pp. 1–12. Cited by: §1.
  • [6] Y. Choi, J. Taylor, R. Tibshirani, et al. (2017) Selecting the number of principal components: estimation of the true rank of a noisy matrix. The Annals of Statistics 45 (6), pp. 2590–2617. Cited by: §1.
  • [7] V. N. L. Duy, S. Iwazaki, and I. Takeuchi (2020) Quantifying statistical significance of neural network representation-driven hypotheses by selective inference. arXiv preprint arXiv:2010.01823. Cited by: §1.
  • [8] V. N. L. Duy and I. Takeuchi (2020) Parametric programming approach for powerful lasso selective inference without conditioning on signs. arXiv preprint arXiv:2004.09749. Cited by: §1, §4.
  • [9] V. N. L. Duy, H. Toda, R. Sugiyama, and I. Takeuchi (2020) Computing valid p-value for optimal changepoint by selective inference using dynamic programming. arXiv preprint arXiv:2002.09132. Cited by: §1.
  • [10] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani (2004) Least angle regression. Annals of Statistics 32 (2), pp. 407–499. Cited by: §3.
  • [11] W. Fithian, D. Sun, and J. Taylor (2014) Optimal inference after model selection. arXiv preprint arXiv:1410.2597. Cited by: §1, §1, §2, §2, §2, §3.1.
  • [12] W. Fithian, J. Taylor, R. Tibshirani, and R. Tibshirani (2015) Selective sequential model selection. arXiv preprint arXiv:1512.02565. Cited by: §1.
  • [13] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu (2004) The entire regularization path for the support vector machine. Journal of Machine Learning Research 5, pp. 1391–1415. Cited by: §3.
  • [14] T. Hocking, J. P. Vert, F. Bach, and A. Joulin (2011) Clusterpath: an algorithm for clustering using convex fusion penalties. In Proceedings of the 28th International Conference on Machine Learning, pp. 745–752. Cited by: §3.
  • [15] S. Hyun, K. Lin, M. G’Sell, and R. J. Tibshirani (2018) Post-selection inference for changepoint detection algorithms with application to copy number variation data. arXiv preprint arXiv:1812.03644. Cited by: §1.
  • [16] M. Karasuyama, N. Harada, M. Sugiyama, and I. Takeuchi (2012) Multi-parametric solution-path algorithm for instance-weighted support vector machines. Machine Learning 88 (3), pp. 297–330. Cited by: §3.
  • [17] M. Karasuyama and I. Takeuchi (2010) Nonlinear regularization path for quadratic loss support vector machines. IEEE Transactions on Neural Networks 22 (10), pp. 1613–1625. Cited by: §3.
  • [18] G. Lee and C. Scott (2007) The one class support vector machine solution path. In Proc. of ICASSP 2007, pp. II521–II524. Cited by: §3.
  • [19] J. D. Lee, D. L. Sun, Y. Sun, J. E. Taylor, et al. (2016) Exact post-selection inference, with application to the lasso. The Annals of Statistics 44 (3), pp. 907–927. Cited by: §1, §1, §1, §1, §2, §2, §2, §2, footnote 2.
  • [20] H. Leeb, B. M. Pötscher, et al. (2006) Can one estimate the conditional distribution of post-model-selection estimators?. The Annals of Statistics 34 (5), pp. 2554–2591. Cited by: §1.
  • [21] H. Leeb and B. M. Pötscher (2005) Model selection and inference: facts and fiction. Econometric Theory, pp. 21–59. Cited by: §1.
  • [22] K. Liu, J. Markovic, and R. Tibshirani (2018) More powerful post-selection inference, with application to the lasso. arXiv preprint arXiv:1801.09037. Cited by: §1, §2.
  • [23] R. Lockhart, J. Taylor, R. J. Tibshirani, and R. Tibshirani (2014) A significance test for the lasso. Annals of statistics 42 (2), pp. 413. Cited by: §1.
  • [24] J. R. Loftus and J. E. Taylor (2014) A significance test for forward stepwise model selection. arXiv preprint arXiv:1405.3920. Cited by: §1.
  • [25] J. R. Loftus and J. E. Taylor (2015) Selective inference in regression models with groups of variables. arXiv preprint arXiv:1511.01478. Cited by: §1, §1.
  • [26] J. R. Loftus (2015) Selective inference after cross-validation. arXiv preprint arXiv:1511.08866. Cited by: §3.4.
  • [27] K. Ogawa, M. Imamura, I. Takeuchi, and M. Sugiyama (2013) Infinitesimal annealing for training semi-supervised support vector machines. In International Conference on Machine Learning, pp. 897–905. Cited by: §3.
  • [28] M. R. Osborne, B. Presnell, and B. A. Turlach (2000) A new approach to variable selection in least squares problems. IMA journal of numerical analysis 20 (3), pp. 389–403. Cited by: §3.
  • [29] S. Panigrahi, J. Taylor, and A. Weinstein (2016) Bayesian post-selection inference in the linear model. arXiv preprint arXiv:1605.08824. Cited by: §1.
  • [30] B. M. Pötscher, U. Schneider, et al. (2010) Confidence sets based on penalized maximum likelihood estimators in gaussian regression. Electronic Journal of Statistics 4, pp. 334–360. Cited by: §1.
  • [31] S. Rosset and J. Zhu (2007) Piecewise linear regularized solution paths. Annals of Statistics 35, pp. 1012–1030. Cited by: §3.
  • [32] S. Suzumura, K. Nakagawa, Y. Umezu, K. Tsuda, and I. Takeuchi (2017) Selective inference for sparse high-order interaction models. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3338–3347. Cited by: §1.
  • [33] I. Takeuchi, K. Nomura, and T. Kanamori (2009) Nonparametric conditional density estimation using piecewise-linear solution path of kernel quantile regression. Neural Computation 21 (2), pp. 539–559. Cited by: §3.
  • [34] I. Takeuchi, T. Hongo, M. Sugiyama, and S. Nakajima (2013) Parametric task learning. Advances in Neural Information Processing Systems 26, pp. 1358–1366. Cited by: §3.
  • [35] I. Takeuchi and M. Sugiyama (2011) Target neighbor consistent feature weighting for nearest neighbor classification. In Advances in neural information processing systems, pp. 576–584. Cited by: §3.
  • [36] K. Tanizaki, N. Hashimoto, Y. Inatsu, H. Hontani, and I. Takeuchi (2020) Computing valid p-values for image segmentation by selective inference. Cited by: §1.
  • [37] J. Taylor, R. Lockhart, R. J. Tibshirani, and R. Tibshirani (2014) Post-selection adaptive inference for least angle regression and the lasso. arXiv preprint arXiv:1401.3889. Cited by: §1.
  • [38] Y. Terada and H. Shimodaira (2019) Selective inference after variable selection via multiscale bootstrap. arXiv preprint arXiv:1905.10573. Cited by: §1.
  • [39] X. Tian, J. Taylor, et al. (2018) Selective inference with a randomized response. The Annals of Statistics 46 (2), pp. 679–710. Cited by: §1, §1.
  • [40] R. J. Tibshirani, J. Taylor, R. Lockhart, and R. Tibshirani (2016) Exact post-selection inference for sequential regression procedures. Journal of the American Statistical Association 111 (514), pp. 600–620. Cited by: §1, §1, §1, §1, §2, §2, §2, §2, §3.1, §4.
  • [41] K. Tsuda (2007) Entire regularization paths for graph data. In Proc. of ICML 2007, pp. 919–925. Cited by: §3.
  • [42] F. Yang, R. F. Barber, P. Jain, and J. Lafferty (2016) Selective inference for group-sparse linear models. In Advances in Neural Information Processing Systems, pp. 2469–2477. Cited by: §1.