Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization

10/24/2022
by   Alireza Bordbar, et al.

Despite the outstanding performance of deep neural networks across many applications, they remain computationally expensive and require large amounts of memory. This motivates further research on reducing the resources needed to implement such networks. One efficient approach to this problem is matrix factorization, which has proven effective for a variety of networks. In this paper, we employ binary matrix factorization and show that it substantially reduces the resources required by deep neural networks. In effect, this technique can enable the practical implementation of such networks on resource-constrained hardware.
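To make the idea concrete, here is a minimal sketch of binary matrix factorization for compressing a weight matrix: a dense matrix W is approximated as B @ S, where B has entries restricted to {-1, +1} (so it can be stored with one bit per entry) and S is a small real-valued matrix. This is a hypothetical illustration using simple alternating minimization, not the exact algorithm of the paper; the function name and parameters are assumptions for the example.

```python
import numpy as np

def binary_factorize(W, k=4, iters=20, seed=0):
    """Approximate W (m x n) as B @ S with B in {-1, +1}^(m x k).

    Illustrative alternating minimization (not the paper's algorithm):
    - fix B, solve least squares for the real factor S;
    - fix S, update each row of B by exhaustive search over all 2^k
      sign patterns (feasible because k is small).
    """
    rng = np.random.default_rng(seed)
    m, n = W.shape
    B = rng.choice([-1.0, 1.0], size=(m, k))
    # Enumerate all 2^k possible {-1, +1} rows once, up front.
    patterns = np.array([[1.0 if (p >> j) & 1 else -1.0 for j in range(k)]
                         for p in range(2 ** k)])
    for _ in range(iters):
        # Step 1: with B fixed, the optimal real S is a least-squares solve.
        S, *_ = np.linalg.lstsq(B, W, rcond=None)
        # Step 2: with S fixed, pick for each row of W the sign pattern
        # whose reconstruction (pattern @ S) has the smallest squared error.
        errs = ((patterns @ S)[None, :, :] - W[:, None, :]) ** 2
        B = patterns[errs.sum(axis=2).argmin(axis=1)]
    return B, S
```

After factorization, B needs only one bit per entry instead of 32, and S has k*n entries instead of m*n, which is the source of the memory savings; the k-bit patterns also allow multiplication by B to be implemented with additions and subtractions only.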


