
Memory Efficient Adaptive Attention For Multiple Domain Learning

10/21/2021
by   Himanshu Pradeep Aswani, et al.

Training CNNs from scratch on a new domain typically demands many labeled images and substantial computation, which is impractical on low-power hardware. One way to reduce these requirements is to modularize the CNN architecture and, after pre-training, freeze the weights of the heavier modules, i.e., the lower layers. Recent studies have proposed alternative modular architectures and training schemes that reduce the number of trainable parameters needed to match the accuracy of fully fine-tuned CNNs on new domains. Our work shows that a further reduction in the number of trainable parameters, by an order of magnitude, is possible. Furthermore, we propose that new modularization techniques for multiple-domain learning should also be compared on other realistic metrics: the number of interconnections needed between the fixed and trainable modules, the number of training samples required, the order of computation required, and the robustness to partial mislabeling of the training data. On all of these criteria, the proposed architecture either outperforms or matches the current state of the art.
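The core idea, freezing a heavy pre-trained backbone and training only a small module on the new domain, can be illustrated with a minimal sketch. This is not the paper's architecture; it is a toy NumPy example (random weights standing in for a pre-trained feature extractor, a hypothetical linear head as the trainable module) that shows how gradient updates touch only the small module, so the trainable-parameter count is a fraction of the total.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Heavy" frozen backbone: random weights stand in for pre-trained lower layers.
W_backbone = rng.standard_normal((8, 32))   # frozen, never updated
# Small trainable module (hypothetical linear head) for the new domain.
W_head = np.zeros((32, 1))                  # the only trainable parameters

def forward(x):
    h = np.maximum(x @ W_backbone, 0.0)     # frozen ReLU feature extractor
    return h @ W_head                       # trainable head

# Toy regression task standing in for the new domain.
X = rng.standard_normal((64, 8))
y = X.sum(axis=1, keepdims=True)

lr = 0.01
for _ in range(200):
    h = np.maximum(X @ W_backbone, 0.0)
    grad_head = h.T @ (h @ W_head - y) / len(X)  # gradient w.r.t. head only
    W_head -= lr * grad_head                     # backbone stays fixed

print(f"trainable params: {W_head.size} / {W_backbone.size + W_head.size}")
```

Only 32 of the 288 parameters are ever updated; in a real multi-domain setting, one such small module would be stored per domain while the frozen backbone is shared.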


Related research

06/17/2021 - LoRA: Low-Rank Adaptation of Large Language Models
06/19/2018 - Spatio-Temporal Channel Correlation Networks for Action Classification
12/15/2021 - An Experimental Study of the Impact of Pre-training on the Pruning of a Convolutional Neural Network
10/21/2019 - Separable Convolutional Eigen-Filters (SCEF): Building Efficient CNNs Using Redundancy Analysis
07/03/2019 - Spatially-Coupled Neural Network Architectures
07/11/2022 - Denoising single images by feature ensemble revisited