Time complexity of in-memory solution of linear systems

05/09/2020
by Zhong Sun, et al.

In-memory computing with crosspoint resistive memory arrays has been shown to accelerate data-centric computations such as the training and inference of deep neural networks, thanks to the high parallelism afforded by the physical laws governing the electrical circuits. By connecting crosspoint arrays with negative-feedback amplifiers, it is possible to solve linear algebraic problems, such as linear systems and matrix eigenvectors, in just one step. Based on the theory of feedback circuits, we study the dynamics of the solution of linear systems within a memory array, showing that the time complexity of the solution has no direct dependence on the problem size N; instead, it is governed by the minimal eigenvalue of a matrix associated with the coefficient matrix. We show that, when the linear system is modeled by a covariance matrix, the time complexity is O(log N) or O(1). In the case of sparse positive-definite linear systems, the time complexity is determined solely by the minimal eigenvalue of the coefficient matrix. These results demonstrate the high speed of the circuit for solving linear systems in a wide range of applications, thus supporting in-memory computing as a strong candidate for future big-data and machine-learning accelerators.
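The abstract's central claim, that settling time depends on the minimal eigenvalue of the coefficient matrix rather than on the problem size N, can be illustrated with a simple continuous-time model. The sketch below is not the paper's circuit; it is a hypothetical software analogue that integrates the feedback dynamics dx/dt = b - Ax (whose fixed point is the solution of Ax = b) and counts the steps needed to settle. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def solve_feedback(A, b, dt=1e-3, tol=1e-6, max_steps=200_000):
    """Integrate dx/dt = b - A x with forward Euler (a toy model of a
    negative-feedback solver) until the residual norm drops below tol.
    Returns the solution estimate and the number of steps taken."""
    x = np.zeros_like(b, dtype=float)
    for step in range(1, max_steps + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, step
        x = x + dt * r  # stable for positive-definite A when dt < 2/lambda_max
    return x, max_steps

def make_spd(n, lam_min=0.5, lam_max=2.0, seed=0):
    """Random symmetric positive-definite matrix with a fixed eigenvalue
    range, so lambda_min is the same regardless of n."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q @ np.diag(np.linspace(lam_min, lam_max, n)) @ q.T

# Two systems of very different size N but the same minimal eigenvalue:
# the step count (a proxy for settling time) is of the same order for both.
for n in (10, 100):
    A = make_spd(n)
    b = np.ones(n)
    x, steps = solve_feedback(A, b)
    print(n, steps, np.linalg.norm(A @ x - b))
```

In this toy model the slowest decaying mode has time constant 1/lambda_min, so holding the eigenvalue range fixed while growing N leaves the settling time essentially unchanged, mirroring the size-independence result stated in the abstract.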

