Compressing Deep Neural Networks: A New Hashing Pipeline Using Kac's Random Walk Matrices

01/09/2018
by Jack Holder, et al.

The popularity of deep learning continues to grow, yet despite recent advances in hardware, deep neural networks remain computationally intensive. Recent work has shown that random feature maps which preserve the angular distance between vectors can reduce dimensionality without introducing bias into the estimator. We test a variety of established hashing pipelines as well as a new approach based on Kac's random walk matrices, and demonstrate that this method achieves accuracy comparable to existing pipelines.
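The abstract does not include code, but the core idea can be sketched. A Kac random walk applies a sequence of random Givens rotations, each acting on a single pair of coordinates; composing enough of them approximates a uniformly random rotation, while each individual step costs O(1). Pairing such a rotation with sign binarisation gives an angle-preserving hash in the SimHash family. The sketch below is a minimal illustration assuming NumPy; the function names (kac_walk_rotation, sign_hash) and the parameters n_steps and n_bits are hypothetical choices for this example, not taken from the paper.

import numpy as np

def kac_walk_rotation(x, n_steps, rng):
    """Return a copy of x rotated by n_steps steps of Kac's random walk.

    Each step draws a random coordinate pair (i, j) and a random angle
    theta, then rotates the (i, j) plane by theta. Composing many such
    Givens rotations approximates a uniformly random rotation.
    """
    d = x.shape[0]
    y = x.astype(np.float64).copy()
    for _ in range(n_steps):
        i, j = rng.choice(d, size=2, replace=False)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        yi, yj = y[i], y[j]
        y[i] = c * yi - s * yj
        y[j] = s * yi + c * yj
    return y

def sign_hash(x, n_bits, n_steps, seed=0):
    """Angle-preserving binary hash: rotate x with a Kac walk, then keep
    the signs of the first n_bits coordinates. Reseeding the generator
    on every call ensures all inputs see the same rotation sequence."""
    rng = np.random.default_rng(seed)
    y = kac_walk_rotation(x, n_steps, rng)
    return (y[:n_bits] > 0).astype(np.uint8)

# Example: nearby vectors should receive mostly matching hash bits.
rng = np.random.default_rng(42)
u = rng.normal(size=256)
v = u + 0.1 * rng.normal(size=256)
hu = sign_hash(u, n_bits=64, n_steps=256 * 8)
hv = sign_hash(v, n_bits=64, n_steps=256 * 8)
print(int(np.sum(hu != hv)))  # small Hamming distance for small angles

The fixed seed is what makes this a hash rather than a one-off projection: every vector must be rotated by the identical walk. The step count here is an arbitrary choice for illustration; known mixing-time analyses of Kac's walk suggest that on the order of d log d steps suffice for the composition to behave like a uniformly random rotation.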


