Dynamic Clone Transformer for Efficient Convolutional Neural Networks

06/12/2021
by   Longqing Ye, et al.

Convolutional networks (ConvNets) have shown impressive capability in solving various vision tasks. Nevertheless, the trade-off between performance and efficiency remains a challenge for feasible model deployment on resource-constrained platforms. In this paper, we introduce a novel concept termed the multi-path fully connected pattern (MPFC) to rethink the interdependencies among topology pattern, accuracy, and efficiency for ConvNets. Inspired by MPFC, we further propose a dual-branch module named the dynamic clone transformer (DCT), in which one branch generates multiple replicas of the input and the other branch reforms those clones through a series of difference vectors conditioned on the input itself to produce more variants. This operation allows the self-expansion of channel-wise information in a data-driven way at little computational cost while providing sufficient learning capacity, making it a potential unit to replace the computationally expensive pointwise convolution used as the expansion layer in the bottleneck structure.
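The expansion scheme described in the abstract can be illustrated with a minimal PyTorch sketch. The module name `DynamicCloneExpansion`, the squeeze-style pooling, and the per-clone difference generator below are assumptions made for illustration, not the authors' exact design: one branch replicates the input channels, while a lightweight second branch predicts input-conditioned difference vectors that turn the replicas into distinct variants, in place of a 1x1 expansion convolution.

```python
# Hypothetical sketch of a DCT-style expansion block (details assumed, not the paper's exact design).
import torch
import torch.nn as nn


class DynamicCloneExpansion(nn.Module):
    """Expands C input channels to C * expand_ratio channels by cloning the input
    feature map and adding input-conditioned, per-clone difference vectors,
    rather than applying a pointwise (1x1) expansion convolution."""

    def __init__(self, in_channels: int, expand_ratio: int = 4, reduction: int = 4):
        super().__init__()
        self.expand_ratio = expand_ratio
        hidden = max(in_channels // reduction, 8)
        # Lightweight branch: squeeze the input globally, then predict one
        # difference vector of length C for each clone, conditioned on the input.
        self.diff_generator = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, in_channels * expand_ratio),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Branch 1: generate replicas of the input along the channel axis.
        clones = x.repeat(1, self.expand_ratio, 1, 1)            # (B, C*r, H, W)
        # Branch 2: input-conditioned difference vectors, one scalar per clone channel.
        diffs = self.diff_generator(x).view(b, c * self.expand_ratio, 1, 1)
        # Reform the clones into distinct variants at negligible extra FLOP cost.
        return clones + diffs


if __name__ == "__main__":
    block = DynamicCloneExpansion(in_channels=32, expand_ratio=4)
    y = block(torch.randn(2, 32, 56, 56))
    print(y.shape)  # torch.Size([2, 128, 56, 56])
```

In this sketch, the only learned parameters are the two small linear layers operating on globally pooled features, so the per-pixel cost is a channel-wise addition; a pointwise expansion convolution would instead cost roughly C * C * r multiply-accumulates per spatial location.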
