CNN with large memory layers

01/27/2021
by Victor Lempitsky, et al.

This work is centred around the recently proposed product key memory structure <cit.>, applied to a number of computer vision problems. The memory structure can be regarded as a simple computational primitive that can augment nearly any neural network architecture. The memory block provides sparse access to memory with complexity that scales as the square root of the memory capacity. This scaling is achieved by decomposing the key space into a Cartesian product of sub-key spaces for the nearest-neighbour search. We have tested the memory layer on classification, image reconstruction and relocalization problems and found that, for some of them, the memory layer provides significant speed/accuracy improvements with high utilization of the key-value elements, while others require more careful fine-tuning and suffer from dying keys. To tackle the latter problem, we introduce a simple memory re-initialization technique that removes unused key-value pairs from the memory and re-engages them in training. Our experiments show improvements in speed and accuracy for classification and PoseNet relocalization models. We show that re-initialization has a substantial impact on a toy example with randomly labelled data and yields some performance gains on image classification. We also demonstrate that the large memory layers preserve generalization on the relocalization problem, and we observe spatial correlations between the input images and the selected memory cells.
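To make the lookup mechanism concrete, below is a minimal PyTorch sketch of a product-key memory read in the spirit of <cit.>. The class name, layer sizes, and the omission of multi-head queries and query batch normalization are simplifications of ours, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProductKeyMemory(nn.Module):
    """Simplified product-key memory: K = n_sub_keys**2 value slots,
    addressed via two sub-key tables of size n_sub_keys = sqrt(K)."""

    def __init__(self, dim, n_sub_keys=128, value_dim=256, topk=4):
        super().__init__()
        self.topk = topk
        self.n_sub_keys = n_sub_keys
        half = dim // 2
        # Two sub-key tables; their Cartesian product spans n_sub_keys**2 keys.
        self.sub_keys = nn.Parameter(torch.randn(2, n_sub_keys, half))
        # One learnable value vector per slot of the product key space.
        self.values = nn.Embedding(n_sub_keys ** 2, value_dim)

    def forward(self, query):                        # query: (batch, dim)
        q1, q2 = query.chunk(2, dim=-1)              # split query into two halves
        # Scores against each sub-key table: (batch, n_sub_keys) each.
        s1 = q1 @ self.sub_keys[0].t()
        s2 = q2 @ self.sub_keys[1].t()
        # Top-k in each half: O(sqrt(K)) comparisons instead of O(K).
        v1, i1 = s1.topk(self.topk, dim=-1)
        v2, i2 = s2.topk(self.topk, dim=-1)
        # Cartesian product of the two candidate sets: topk*topk full keys.
        scores = (v1.unsqueeze(-1) + v2.unsqueeze(-2)).flatten(1)
        indices = (i1.unsqueeze(-1) * self.n_sub_keys + i2.unsqueeze(-2)).flatten(1)
        # Keep the overall top-k candidates and read the memory sparsely.
        best, pos = scores.topk(self.topk, dim=-1)
        slots = indices.gather(1, pos)               # (batch, topk)
        weights = F.softmax(best, dim=-1).unsqueeze(-1)
        return (weights * self.values(slots)).sum(dim=1)   # (batch, value_dim)

# Usage sketch: a 512-dimensional query reads a 256-dimensional value.
mem = ProductKeyMemory(dim=512)
out = mem(torch.randn(8, 512))                       # shape (8, 256)
```

Because each sub-key table holds only √K entries, the two top-k searches cost on the order of √K score computations each, yet their Cartesian product addresses all K = √K × √K memory slots, which is the square-root complexity scaling referred to above.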
