Feature-dependent Cross-Connections in Multi-Path Neural Networks

06/24/2020
by Dumindu Tissera et al.

Learning a particular task from a dataset whose samples originate from diverse contexts is challenging, and is usually addressed by deepening or widening standard neural networks. In contrast to conventional network widening, multi-path architectures restrict the quadratic growth in complexity to a linear scale. However, existing multi-column/multi-path networks and model-ensembling methods do not consider any feature-dependent allocation of parallel resources and therefore tend to learn redundant features. Given a layer in a multi-path network, if we restrict each path to learn a context-specific set of features and introduce a mechanism to intelligently allocate incoming feature maps to such paths, each path can specialize in a certain context, reducing redundancy and improving the quality of the extracted features. This ultimately leads to better use of the parallel resources. To achieve this, we propose inserting feature-dependent cross-connections between the parallel sets of feature maps in successive layers. The weights of these cross-connections are learned from the input features of the particular layer. Our multi-path networks show improved image recognition accuracy at similar complexity compared with conventional and state-of-the-art methods for deepening, widening and adaptive feature extraction, on both small- and large-scale datasets.
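
To make the mechanism concrete, below is a minimal PyTorch sketch of how such a feature-dependent cross-connection could be realized: each path's incoming feature map is pooled to a global descriptor, a small gating head predicts a row of cross-connection weights over the destination paths, and the next layer's inputs are the correspondingly weighted sums of the parallel feature maps. The module name, the pooling-plus-linear gate, and the softmax normalization are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' code) of a feature-dependent
# cross-connection between two multi-path layers. The gating scheme
# (global average pooling + linear + softmax) is an assumption made
# for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossConnect(nn.Module):
    """Mixes the outputs of `num_paths` parallel paths before they enter
    the next layer. The mixing (cross-connection) weights are predicted
    from the incoming feature maps themselves, so the routing is
    feature-dependent rather than fixed."""

    def __init__(self, num_paths: int, channels: int):
        super().__init__()
        self.num_paths = num_paths
        # One small gating head per source path: global context of that
        # path -> a row of cross-connection weights over destination paths.
        self.gates = nn.ModuleList(
            [nn.Linear(channels, num_paths) for _ in range(num_paths)]
        )

    def forward(self, feats):
        # feats: list of `num_paths` tensors, each of shape (B, C, H, W)
        rows = []
        for x, gate in zip(feats, self.gates):
            ctx = F.adaptive_avg_pool2d(x, 1).flatten(1)   # (B, C) global context
            rows.append(F.softmax(gate(ctx), dim=1))       # (B, num_paths)
        w = torch.stack(rows, dim=1)                       # (B, src, dst)

        stacked = torch.stack(feats, dim=1)                # (B, src, C, H, W)
        # Destination path j receives sum_i w[:, i, j] * feats[i].
        mixed = torch.einsum('bsd,bschw->bdchw', w, stacked)
        return [mixed[:, j] for j in range(self.num_paths)]


# Usage: two parallel paths with 64-channel feature maps.
cc = CrossConnect(num_paths=2, channels=64)
outs = cc([torch.randn(4, 64, 8, 8), torch.randn(4, 64, 8, 8)])
print([o.shape for o in outs])  # two tensors of shape (4, 64, 8, 8)
```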

Related research

- End-To-End Data-Dependent Routing in Multi-Path Neural Networks (07/06/2021)
- ThreshNet: An Efficient DenseNet using Threshold Mechanism to Reduce Connections (01/09/2022)
- VA-RED^2: Video Adaptive Redundancy Reduction (02/15/2021)
- Context-Aware Multipath Networks (07/26/2019)
- On The Energy Statistics of Feature Maps in Pruning of Neural Networks with Skip-Connections (01/26/2022)
- Multi-Scale Feature Fusion Transformer Network for End-to-End Single Channel Speech Separation (12/14/2022)
- Vision Backbone Enhancement via Multi-Stage Cross-Scale Attention (08/10/2023)
