Improved Generalization Bound and Learning of Sparsity Patterns for Data-Driven Low-Rank Approximation

09/17/2022
by   Shinsaku Sakaue, et al.

Learning sketching matrices for fast and accurate low-rank approximation (LRA) has gained increasing attention. Recently, Bartlett, Indyk, and Wagner (COLT 2022) presented a generalization bound for learning-based LRA. Specifically, for rank-k approximation using an m × n learned sketching matrix with s non-zeros in each column, they proved an Õ(nsm) bound on the fat shattering dimension (Õ hides logarithmic factors). We build on their work and make two contributions. 1. We present a better Õ(nsk) bound (k ≤ m). En route to obtaining this result, we give a low-complexity Goldberg–Jerrum algorithm for computing pseudo-inverse matrices, which may be of independent interest. 2. We relax an assumption of the previous study that sketching matrices have a fixed sparsity pattern. We prove that learning the positions of the non-zeros increases the fat shattering dimension only by O(ns log n). In addition, experiments confirm the practical benefit of learning sparsity patterns.
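To make the setting concrete, the following NumPy sketch illustrates the standard sketch-and-solve pipeline for rank-k approximation that the bounds above apply to: an m × n sparse sketching matrix S with s non-zeros per column (here filled with random signs, standing in for learned values), applied to a data matrix A, followed by projection onto the top-k right singular subspace of SA. All sizes and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n data rows, d features, sketch size m,
# target rank k, and s non-zeros per column of the sketch.
n, d, m, k, s = 200, 50, 20, 5, 2

# Sparse sketching matrix S (m x n): s random-sign non-zeros per column.
# In the learned setting, the positions and/or values would be trained.
S = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    S[rows, j] = rng.choice([-1.0, 1.0], size=s)

# Synthetic data matrix A (n x d).
A = rng.standard_normal((n, d))

# Sketch-and-solve: compress A to SA (m x d), take the top-k right
# singular vectors of SA, and project A onto that subspace.
SA = S @ A
_, _, Vt = np.linalg.svd(SA, full_matrices=False)
V_k = Vt[:k].T                    # d x k orthonormal basis
A_hat = A @ V_k @ V_k.T           # rank-(at most k) approximation of A

sketch_err = np.linalg.norm(A - A_hat, "fro")

# Baseline: the optimal rank-k error from a full SVD of A.
U, sig, Vt_full = np.linalg.svd(A, full_matrices=False)
A_opt = (U[:, :k] * sig[:k]) @ Vt_full[:k]
opt_err = np.linalg.norm(A - A_opt, "fro")
```

The sketched approximation can only do as well as the optimal rank-k truncation (`sketch_err >= opt_err`); learning S aims to shrink that gap on a distribution of inputs, and the fat shattering bounds quantify how many training matrices suffice for the learned S to generalize.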

