Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation

06/29/2023
by   Hanqiu Chen, et al.

Implicit Neural Representation (INR) is an approach for representing complex shapes or objects without explicitly defining their geometry or surface structure. Instead, INR represents objects as continuous functions. Previous research has demonstrated the effectiveness of neural networks as INRs for image compression, achieving performance comparable to traditional methods such as JPEG. However, INR holds potential for applications beyond image compression. This paper introduces Rapid-INR, a novel approach that uses INR to encode and compress images, thereby accelerating neural network training in computer vision tasks. Our method stores the whole dataset directly in INR format on the GPU, eliminating the significant data communication overhead between the CPU and GPU during training. The decoding process from INR to RGB format is highly parallelized and executed on-the-fly. To further improve compression, we propose iterative and dynamic pruning, as well as layer-wise quantization, building upon previous work. We evaluate our framework on the image classification task with a ResNet-18 backbone and three commonly used datasets of varying image sizes. Rapid-INR reduces memory consumption to only about 5% of the dataset size in RGB format and achieves a maximum 6× speedup over the PyTorch training pipeline, as well as a maximum 1.2× speedup over the DALI training pipeline, with only a marginal decrease in accuracy. Importantly, Rapid-INR can be applied to other computer vision tasks and backbone networks with reasonable engineering effort. Our implementation code is publicly available at https://github.com/sharc-lab/Rapid-INR.
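To make the core idea concrete: an INR replaces a stored pixel grid with the weights of a small coordinate network, and "decoding" means evaluating that network at every pixel location. The sketch below is a conceptual illustration only (not the authors' implementation): a tiny SIREN-style MLP with assumed layer sizes and random weights, decoding an image in pure Python. Because every pixel is evaluated independently, the decode loop is embarrassingly parallel, which is what makes the on-the-fly GPU decoding in Rapid-INR practical.

```python
import math
import random

random.seed(0)

# A tiny sine-activated MLP mapping a normalized (x, y) coordinate to one
# grayscale intensity. In an INR-based pipeline, the stored "image" is just
# these weights; layer sizes and activations here are illustrative assumptions.
IN, HID, OUT = 2, 16, 1
w1 = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(HID)]
b1 = [random.uniform(-1, 1) for _ in range(HID)]
w2 = [random.uniform(-1, 1) for _ in range(HID)]
b2 = random.uniform(-1, 1)

def inr_decode_pixel(x, y):
    """Evaluate the coordinate network at one (x, y) in [-1, 1]^2."""
    h = [math.sin(w1[i][0] * x + w1[i][1] * y + b1[i]) for i in range(HID)]
    out = sum(w2[j] * h[j] for j in range(HID)) + b2
    return 1.0 / (1.0 + math.exp(-out))  # squash to a [0, 1] intensity

def inr_decode_image(height, width):
    """Decode a full pixel grid. Each pixel depends only on its coordinate,
    so on a GPU this double loop maps to one thread per pixel."""
    return [[inr_decode_pixel(2 * c / (width - 1) - 1,
                              2 * r / (height - 1) - 1)
             for c in range(width)]
            for r in range(height)]

img = inr_decode_image(8, 8)
```

In a real training pipeline the network would first be fitted per image (encoding), and the parameter count, together with the pruning and quantization described above, determines the compression ratio.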


