A Self-Penalizing Objective Function for Scalable Interaction Detection

11/24/2020
by Keli Liu, et al.

We tackle the problem of nonparametric variable selection, with a focus on discovering interactions between variables. With p variables there are O(p^s) possible order-s interactions, making exhaustive search infeasible. It is nonetheless possible to identify the variables involved in interactions with only linear computational cost, O(p). The trick is to maximize a class of parametrized nonparametric dependence measures, which we call metric learning objectives; the landscape of these nonconvex objective functions is sensitive to interactions, even though the objectives themselves do not explicitly model interactions. Three properties make metric learning objectives highly attractive: (a) the stationary points of the objective are automatically sparse (i.e., they perform selection), so no explicit ℓ_1 penalization is needed; (b) all stationary points of the objective exclude noise variables with high probability; (c) all signal variables are guaranteed to be recovered without needing to reach the objective's global maximum or any special stationary point. The second and third properties mean that all our theoretical results apply in the practical case where one uses gradient ascent to maximize the metric learning objective. While not all metric learning objectives enjoy good statistical power, we design an objective based on ℓ_1 kernels that does exhibit favorable power: it recovers (i) main effects with n ∼ log p samples, (ii) hierarchical interactions with n ∼ log p samples, and (iii) order-s pure interactions with n ∼ p^(2(s-1)) log p samples.
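To make the recipe concrete, here is a minimal sketch of the gradient-ascent procedure the abstract describes, assuming an HSIC-style dependence measure built from a weighted ℓ_1 (Laplace) kernel. The names (laplace_kernel, hsic, select_variables), the choice of HSIC as the dependence measure, and the finite-difference gradients are illustrative assumptions, not the paper's actual construction; the point is only that plain projected gradient ascent on the kernel weights can drive the weights of noise variables toward zero, with no explicit ℓ_1 penalty term in the objective.

import numpy as np

def laplace_kernel(X, w):
    # Weighted l1 (Laplace) kernel: K[i, j] = exp(-sum_d w[d] * |X[i, d] - X[j, d]|).
    diffs = np.abs(X[:, None, :] - X[None, :, :])      # shape (n, n, p)
    return np.exp(-(diffs * w).sum(axis=-1))

def hsic(K, L):
    # Biased HSIC estimate of dependence between two n x n kernel matrices.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n                # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def objective_and_grad(w, X, L, eps=1e-6):
    # Objective value and a simple finite-difference gradient in the weights w.
    K = laplace_kernel(X, w)
    val = hsic(K, L)
    grad = np.zeros_like(w)
    for d in range(len(w)):
        w_pert = w.copy()
        w_pert[d] += eps
        grad[d] = (hsic(laplace_kernel(X, w_pert), L) - val) / eps
    return val, grad

def select_variables(X, y, steps=200, lr=0.5):
    # Projected gradient ascent on the kernel weights; w >= 0 throughout.
    n, p = X.shape
    L = laplace_kernel(y.reshape(-1, 1), np.ones(1))   # fixed kernel on the response
    w = np.full(p, 1.0 / p)                            # dense initialization
    for _ in range(steps):
        _, g = objective_and_grad(w, X, L)
        w = np.clip(w + lr * g, 0.0, None)             # project back onto w >= 0
    return w                                           # near-zero weights flag noise variables

# Illustrative usage: a pure order-2 interaction between the first two variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] * X[:, 1]
w = select_variables(X, y)
print(np.argsort(w)[::-1][:2])                         # indices of the largest learned weights

In this sketch, selection happens through the nonnegativity projection and the geometry of the objective itself rather than through an added penalty, which mirrors the self-penalizing behavior claimed in the abstract.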


