PLAM: a Posit Logarithm-Approximate Multiplier for Power Efficient Posit-based DNNs

02/18/2021
by Raul Murillo, et al.

The Posit Number System was introduced in 2017 as a replacement for floating-point numbers. Since then, the community has explored its application in neural network tasks and produced several unit designs, which are still far from competitive with their floating-point counterparts. This paper proposes a Posit Logarithm-Approximate Multiplication (PLAM) scheme to significantly reduce the complexity of posit multipliers, the most power-hungry units within Deep Neural Network architectures. Compared with state-of-the-art posit multipliers, experiments show that the proposed technique reduces the area, power, and delay of hardware multipliers by up to 72.86%.
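Logarithm-approximate multiplication classically follows Mitchell's approximation: since log2(a·b) = log2(a) + log2(b), a multiply can be replaced by an addition in the log domain, and log2(1+f) is approximated by f itself (so the fraction bits serve directly as the fractional log). The sketch below illustrates this general idea in Python on ordinary floats; it is not the paper's posit hardware design, and the function names are illustrative only.

```python
import math

def mitchell_log2(x):
    """Mitchell's approximation of log2 for x > 0.

    Write x = 2**k * (1 + f) with 0 <= f < 1;
    approximate log2(x) = k + log2(1 + f) by k + f.
    """
    k = math.floor(math.log2(x))
    f = x / (2 ** k) - 1.0
    return k + f

def mitchell_antilog2(y):
    """Approximate inverse: 2**(k + f) is approximated by 2**k * (1 + f)."""
    k = math.floor(y)
    f = y - k
    return (2 ** k) * (1.0 + f)

def approx_multiply(a, b):
    """Multiply via addition in the approximate log domain.

    In hardware this replaces a mantissa multiplier with an adder,
    which is the source of the area/power savings.
    """
    return mitchell_antilog2(mitchell_log2(a) + mitchell_log2(b))
```

For example, `approx_multiply(3.0, 5.0)` yields 14.0 instead of 15 (about 6.7% low); Mitchell's approximation always underestimates, with a worst-case relative error of roughly 11.1%, and is exact when both operands are powers of two.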

