
Large Dimensional Independent Component Analysis: Statistical Optimality and Computational Tractability

by Arnab Auddy, et al.

In this paper, we investigate the optimal statistical performance of independent component analysis (ICA) and the impact of computational constraints on it. Our goal is twofold. On the one hand, we characterize the precise role of dimensionality in sample complexity and statistical accuracy, and how computational considerations may affect them. In particular, we show that the optimal sample complexity is linear in dimensionality, and, interestingly, that the commonly used sample kurtosis-based approaches are necessarily suboptimal. However, the optimal sample complexity becomes quadratic in the dimension, up to a logarithmic factor, if we restrict ourselves to estimates that can be computed with low-degree polynomial algorithms. On the other hand, we develop computationally tractable estimates that attain both the optimal sample complexity and minimax optimal rates of convergence. We study the asymptotic properties of the proposed estimates and establish their asymptotic normality, which can be readily used for statistical inference. Our method is fairly easy to implement, and numerical experiments are presented to further demonstrate its practical merits.
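To illustrate the kind of sample kurtosis-based approach the abstract refers to, the following is a minimal NumPy sketch of a FastICA-style fixed-point update that maximizes the magnitude of the excess kurtosis of a projection. The mixing matrix, source distribution, and sample size here are made up for illustration; this is a generic textbook procedure, not the estimator proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 2

# Independent non-Gaussian sources: uniform on [-sqrt(3), sqrt(3)]
# (unit variance, negative excess kurtosis). Hypothetical example data.
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, d))
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # made-up mixing matrix
X = S @ A.T                             # observed mixtures

# Whiten the observations so the projected variance is fixed at 1
X = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(X.T @ X / n)
Z = X @ eigvec @ np.diag(eigval ** -0.5) @ eigvec.T

# Kurtosis-based fixed-point iteration for one unmixing direction:
# w+ = E[Z (w^T Z)^3] - 3 w, then renormalize.
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for _ in range(200):
    y = Z @ w
    w_new = (Z * (y ** 3)[:, None]).mean(axis=0) - 3 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(w_new @ w) > 1 - 1e-9  # sign flips are irrelevant
    w = w_new
    if converged:
        break

# y recovers one source up to sign and permutation
y = Z @ w
```

Because ICA is only identifiable up to sign and permutation, success is checked by correlating the recovered component with the true sources rather than comparing unmixing matrices directly.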

