Run-Time-Reconfigurable Multi-Precision Floating-Point Matrix Multiplier Intellectual Property Core on FPGA

10/11/2019
by Arish S, et al.

In today's world, high-power computing applications such as image processing, digital signal processing, graphics, and robotics demand enormous computing power. These applications rely heavily on matrix operations, especially matrix multiplication, which consumes a large share of the computation time and is complex to implement in hardware. For such applications, field-programmable gate arrays (FPGAs) can serve as low-cost hardware accelerators alongside a low-cost general-purpose processor, in place of a high-cost application-specific processor. In this work, we employ Strassen's efficient algorithm for matrix multiplication and a highly efficient run-time-reconfigurable floating-point multiplier for multiplying the matrix elements. The floating-point multiplier is implemented with a custom floating-point format to support variable-precision applications, and its binary multiplier uses a very efficient combination of the Karatsuba algorithm and the Urdhva Tiryagbhyam algorithm. By reconfiguring itself at run time, the design adjusts its power and delay to match different accuracy requirements.
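For a quick, concrete picture of the two algorithm levels named above, the short Python sketch below shows Strassen's seven-product scheme for one 2x2 block and a Karatsuba-style operand split. It is a behavioral illustration under assumed function names, bit widths, and base case, not the paper's FPGA implementation; in the paper the small base-case products are formed with the Urdhva Tiryagbhyam method, for which an ordinary multiply stands in here.

# Illustrative sketch only (Python), not the paper's RTL; names and widths are assumptions.

def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    """Strassen's scheme for one 2x2 block: seven products instead of eight."""
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return (m1 + m4 - m5 + m7,  # c11
            m3 + m5,            # c12
            m2 + m4,            # c21
            m1 - m2 + m3 + m6)  # c22

def karatsuba(x, y, bits=24):
    """Split an unsigned multiply into three smaller products (Karatsuba).
    The base case stands in for the Urdhva Tiryagbhyam multiplier used in the paper."""
    if bits <= 8:                        # base case: direct small multiply
        return x * y
    half = bits // 2
    xh, xl = x >> half, x & ((1 << half) - 1)
    yh, yl = y >> half, y & ((1 << half) - 1)
    p_hh = karatsuba(xh, yh, bits - half)
    p_ll = karatsuba(xl, yl, half)
    p_mid = karatsuba(xh + xl, yh + yl, half + 1) - p_hh - p_ll
    return (p_hh << (2 * half)) + (p_mid << half) + p_ll

At the matrix level, each of the seven Strassen products becomes a block multiplication whose element products are carried out by the run-time-reconfigurable floating-point multiplier described in the abstract.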

