
Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization

by Alireza Bordbar, et al.

Despite the outstanding performance of deep neural networks across applications, they remain computationally intensive and require a large amount of memory. This motivates further research on reducing the resources needed to implement such networks. One effective approach for this purpose is matrix factorization, which has been shown to work well on a variety of networks. In this paper, we employ binary matrix factorization and demonstrate its efficiency in reducing the number of resources required by deep neural networks. In effect, this technique can make the practical implementation of such networks feasible.
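The abstract does not spell out the factorization procedure, but the general idea can be sketched: a dense weight matrix W is approximated as the product B @ C, where B is constrained to binary values (here {-1, +1}) and C is a small real-valued factor. The following is a minimal illustrative sketch using alternating updates; the function name, the rank, and the update rules are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def semi_binary_factorize(W, rank, n_iter=30, seed=0):
    """Approximate W (m x n) as B @ C, with B in {-1, +1}^(m x rank).

    Illustrative sketch only: B is updated by binarizing an
    unconstrained least-squares solution, and C by ordinary
    least squares given B.
    """
    rng = np.random.default_rng(seed)
    m, n = W.shape
    B = np.sign(rng.standard_normal((m, rank)))
    for _ in range(n_iter):
        # Fix B, solve the real-valued factor C by least squares.
        C, *_ = np.linalg.lstsq(B, W, rcond=None)
        # Fix C, binarize the unconstrained solution for B.
        B = np.sign(W @ np.linalg.pinv(C))
        B[B == 0] = 1.0  # avoid zeros so B stays strictly binary
    # Re-solve C one last time so the returned pair is consistent.
    C, *_ = np.linalg.lstsq(B, W, rcond=None)
    return B, C

# Toy example: compress a 64x32 weight matrix to rank 16.
W = np.random.default_rng(1).standard_normal((64, 32))
B, C = semi_binary_factorize(W, rank=16)
err = np.linalg.norm(W - B @ C) / np.linalg.norm(W)
```

The resource saving comes from storage and compute: B can be stored with one bit per entry, and multiplying by a {-1, +1} matrix reduces to additions and subtractions, leaving only the small factor C in full precision.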


In-training Matrix Factorization for Parameter-frugal Neural Machine Translation

In this paper, we propose the use of in-training matrix factorization to...

Greenformer: Factorization Toolkit for Efficient Deep Neural Networks

While the recent advances in deep neural networks (DNN) bring remarkable...

Efficient NTK using Dimensionality Reduction

Recently, neural tangent kernel (NTK) has been used to explain the dynam...

Deep Neural Convolutive Matrix Factorization for Articulatory Representation Decomposition

Most of the research on data-driven speech representation learning has f...

LU-Net: Invertible Neural Networks Based on Matrix Factorization

LU-Net is a simple and fast architecture for invertible neural networks ...

Feature-Based Matrix Factorization

Recommender system has been more and more popular and widely used in man...

An Analysis of Dropout for Matrix Factorization

Dropout is a simple yet effective algorithm for regularizing neural netw...