BMF: Block matrix approach to factorization of large scale data

01/02/2019
by Prasad G. Bhavana, et al.

Matrix factorization (MF) of large scale matrices is a computationally as well as memory intensive task. Alternative convergence techniques are needed when the size of the input matrix exceeds the available memory on a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU). While alternating least squares (ALS) convergence on a CPU can be prohibitively slow, loading all of the required matrices into GPU memory may not be possible when the dimensions are significantly large. Hence we introduce a novel technique that treats the entire data as a block matrix and relies on factorization at the block level.
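The block-level idea in the abstract can be sketched in a few lines: partition the input matrix into blocks, and at any moment keep only one block plus the matching slices of the factor matrices in memory. The sketch below is a minimal NumPy illustration under assumed details (a plain gradient update per block, with made-up hyperparameters), not the paper's actual algorithm:

```python
import numpy as np

def block_mf(R, rank=8, block=256, iters=50, lr=0.01, reg=0.01, seed=0):
    """Approximate R (m x n) as U @ V.T, visiting one block at a time.

    Hypothetical sketch: only one block of R and the matching slices
    of U and V need to reside in (GPU) memory at once.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.standard_normal((m, rank)) * 0.1
    V = rng.standard_normal((n, rank)) * 0.1
    for _ in range(iters):
        for i in range(0, m, block):
            for j in range(0, n, block):
                Rb = R[i:i+block, j:j+block]   # the one block held in memory
                Ub = U[i:i+block]
                Vb = V[j:j+block]
                E = Rb - Ub @ Vb.T             # block-level residual
                # gradient step on just the factor slices touched by this block
                U[i:i+block] += lr * (E @ Vb - reg * Ub)
                V[j:j+block] += lr * (E.T @ Ub - reg * Vb)
    return U, V
```

Because each update reads one block of `R` and writes only the corresponding row and column slices of the factors, the peak working set is bounded by the block size rather than by the full matrix.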


