On the algorithm of best approximation by low rank matrices in the Chebyshev norm

01/28/2022
by Stanislav Morozov et al.

The low-rank matrix approximation problem is ubiquitous in computational mathematics. Traditionally, it is solved in the spectral or Frobenius norm, where the accuracy of the approximation is tied to the rate of decay of the matrix's singular values. However, recent results indicate that rapid singular value decay is not necessary for accurate approximation in other norms. In this paper, we propose a method for solving the low-rank approximation problem in the Chebyshev norm that is capable of efficiently constructing accurate approximations of matrices whose singular values decay slowly or not at all.
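
The abstract does not spell out the proposed algorithm, so the following is only a minimal sketch of the problem setting, not the paper's method: a generic alternating scheme for minimizing ||A - U V^T|| in the Chebyshev (entrywise maximum) norm. With one factor fixed, the rows of the other factor decouple into independent l_inf linear regressions, each of which is solved here as a small linear program with scipy.optimize.linprog. The function names, matrix sizes, rank r, and sweep count are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def linf_fit(U, a):
    """Solve min_v ||a - U @ v||_inf as a linear program in (v, t)."""
    n, r = U.shape
    c = np.r_[np.zeros(r), 1.0]                    # objective: minimize t
    A_ub = np.block([[ U, -np.ones((n, 1))],       #  U v - t <=  a
                     [-U, -np.ones((n, 1))]])      # -U v - t <= -a
    b_ub = np.r_[a, -a]
    bounds = [(None, None)] * r + [(0.0, None)]    # v free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:r]


def alternating_chebyshev(A, r, sweeps=5):
    """Generic alternating l_inf minimization of ||A - U V^T||_C.

    NOT the paper's algorithm; a plain block-coordinate scheme.
    Starting U from the leading left singular vectors guarantees the
    result is never worse, in the Chebyshev norm, than the rank-r SVD.
    """
    n, m = A.shape
    U = np.linalg.svd(A, full_matrices=False)[0][:, :r]
    V = np.empty((m, r))
    for _ in range(sweeps):
        for j in range(m):             # columns of A decouple: fit rows of V
            V[j] = linf_fit(U, A[:, j])
        for i in range(n):             # rows of A decouple: fit rows of U
            U[i] = linf_fit(V, A[i, :])
    return U, V


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.sign(rng.standard_normal((60, 40)))     # slowly decaying spectrum
    r = 5
    U, V = alternating_chebyshev(A, r)
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    A_svd = u[:, :r] * s[:r] @ vt[:r]
    print("Chebyshev error, alternating fit:", np.max(np.abs(A - U @ V.T)))
    print("Chebyshev error, truncated SVD:  ", np.max(np.abs(A - A_svd)))
```

Because each half-step exactly minimizes the objective over one factor, the Chebyshev error is non-increasing from sweep to sweep; the SVD-based initialization merely provides a reproducible starting point that already matches the truncated SVD's entrywise error.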


