Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses
This paper proposes a simple approach to deriving efficient error bounds for learning multiple components with sparsity-inducing regularization. Specifically, we show that for such regularization schemes, known decompositions of the Rademacher complexity over the components can be exploited more efficiently to yield tighter bounds with little additional effort. We illustrate the approach on switching regression and on center-based clustering/vector quantization. The complete workflow is then demonstrated on subspace clustering, for which decomposition results were not previously available. For all these problems, the proposed approach yields risk bounds with a mild dependence on the number of components, and it removes this dependence entirely for nonconvex regularization schemes that previous methods could not handle.
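As context for the decompositions mentioned above, a schematic form of a standard component-wise result is sketched below (the notation $F_k$, $K$, and $\widehat{\mathfrak{R}}_n$ is illustrative and not necessarily the paper's): the empirical Rademacher complexity of a class formed by taking a pointwise minimum over $K$ component classes is bounded by the sum of the component complexities.

\[
\widehat{\mathfrak{R}}_n\Bigl(\Bigl\{\, x \mapsto \min_{1\le k\le K} f_k(x) \;:\; f_k \in F_k \Bigr\}\Bigr)
\;\le\; \sum_{k=1}^{K} \widehat{\mathfrak{R}}_n(F_k).
\]

Applied naively with identical component classes $F_k = F$, the right-hand side grows linearly with $K$; the milder dependencies claimed in the abstract refer to improving on this kind of scaling.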