On the Computation of Neumann Series

07/18/2017
by Vassil Dimitrov, et al.

This paper proposes new factorizations for computing the Neumann series. The factorizations are based on fast algorithms for series of small prime sizes and on splitting large sizes into several smaller ones. We propose bases for the factorizations other than the well-known binary and ternary bases. We show that it is possible to reduce the overall complexity from the 2log2(N)-2 multiplications of the usual binary decomposition to around 1.72log2(N)-2 using a basis of size five. By merging different bases, we demonstrate that fast algorithms can be built for particular sizes. We also show that, asymptotically, the number of multiplications can be reduced to around 1.70log2(N)-2. Simulations are performed for applications in wireless communications and image rendering, where inversion of large matrices is necessary.
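As background for the binary-decomposition baseline the abstract mentions, the truncated Neumann series S_N = I + A + A^2 + ... + A^(N-1) with N = 2^m can be computed via the doubling identity S_{2k} = (I + A^k) S_k, which costs about two matrix multiplications per doubling step, i.e. roughly 2log2(N)-2 in total. A minimal sketch (not the paper's proposed factorization, just the standard binary scheme it improves on; the function name is illustrative):

```python
import numpy as np

def neumann_series_pow2(A, m):
    """Compute S = I + A + A^2 + ... + A^(2^m - 1) by binary doubling.

    Uses the identity S_{2k} = S_k + A^k @ S_k, so each doubling step
    needs one multiply to update S and one to square the power of A,
    giving roughly 2*m matrix multiplications for N = 2^m terms.
    """
    n = A.shape[0]
    S = np.eye(n)   # S_1 = I
    P = A.copy()    # P holds A^k, starting at k = 1
    for _ in range(m):
        S = S + P @ S   # S_{2k} = S_k + A^k S_k
        P = P @ P       # A^k -> A^{2k}
    return S

# Example: for a contractive A, S_N approximates (I - A)^{-1} as N grows.
A = np.array([[0.0, 0.5], [0.25, 0.0]])
S = neumann_series_pow2(A, 5)   # 32 terms of the series
```

The factorizations proposed in the paper replace the purely binary split above with mixed bases (e.g. size five), trading the 2 multiplications per doubling for fewer multiplications per factor of N.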

