Efficient Training of Very Deep Neural Networks for Supervised Hashing

11/14/2015
by Ziming Zhang, et al.

In this paper, we propose training very deep neural networks (DNNs) for supervised learning of hash codes. Existing methods in this context train relatively "shallow" networks, limited both by issues arising in back-propagation (e.g. vanishing gradients) and by computational efficiency. We propose a novel and efficient training algorithm, inspired by the alternating direction method of multipliers (ADMM), that overcomes some of these limitations. Our method decomposes the training process into independent layer-wise local updates through auxiliary variables. Empirically, our training algorithm always converges, and its computational complexity is linear in the number of edges in the network. Using a single GPU, we train DNNs with 64 hidden layers and 1024 nodes per layer for supervised hashing in about 3 hours. Our proposed very deep supervised hashing (VDSH) method significantly outperforms the state of the art on several benchmark datasets.
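To make the layer-wise decomposition concrete, the following is a minimal numpy sketch of the general idea of training via auxiliary variables in an ADMM-like, block-coordinate fashion; it is not the authors' exact VDSH algorithm. The network size, penalty weight `rho`, ridge terms, ReLU activation, and the surrogate hashing targets are all illustrative assumptions. Each parameter block and each auxiliary activation block is updated by solving a small independent subproblem rather than by back-propagating through the whole network.

```python
# Illustrative sketch only: layer-wise training with auxiliary variables,
# in the spirit of ADMM. Not the paper's exact VDSH update rules.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features, k-bit supervised hash targets (assumed).
n, d, hidden, k = 200, 32, 64, 8
X = rng.standard_normal((n, d))
Y = np.sign(rng.standard_normal((n, k)))      # surrogate +/-1 targets

relu = lambda Z: np.maximum(Z, 0.0)
rho, lam = 1.0, 1e-3                          # penalty / ridge weights (assumed)

W1 = 0.1 * rng.standard_normal((d, hidden))
W2 = 0.1 * rng.standard_normal((hidden, k))
A1 = relu(X @ W1)                             # auxiliary variable for layer-1 output

for it in range(50):
    # Layer-wise weight updates: each reduces to an independent ridge regression
    # (the nonlinearity is handled only through the auxiliary variable A1).
    W1 = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ A1)
    W2 = np.linalg.solve(A1.T @ A1 + lam * np.eye(hidden), A1.T @ Y)

    # Auxiliary-variable update: minimize ||A1 W2 - Y||^2 + rho ||A1 - relu(X W1)||^2
    # over A1, which has a closed-form solution.
    M = W2 @ W2.T + rho * np.eye(hidden)
    A1 = np.linalg.solve(M, (Y @ W2.T + rho * relu(X @ W1)).T).T

    loss = np.mean((A1 @ W2 - Y) ** 2)

# Hash codes are taken as the signs of the final-layer outputs.
codes = np.sign(relu(X @ W1) @ W2)
print("final squared loss: %.4f" % loss)
```

Because every update above touches only one layer's weights or one block of auxiliary activations, the per-iteration cost scales with the number of edges in the network, which is the property the abstract highlights.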


