Triple Attention Mixed Link Network for Single Image Super Resolution
Single image super resolution is of great importance as a low-level computer vision task. Recent approaches based on deep convolutional neural networks have achieved impressive performance, but existing architectures are limited by their relatively simple structure and weaker representational power. In this work, to significantly enhance the feature representation, we propose the Triple Attention mixed link Network (TAN), which consists of 1) three different aspects of attention mechanisms (i.e., kernel, spatial, and channel) and 2) a fusion of powerful residual and dense connections (i.e., mixed link). Specifically, the multi-kernel network learns multiple hierarchical representations under different receptive fields. The output features are recalibrated by the kernel and channel attentions and fed into the next layer partly residually and partly densely, which filters the information and enables the network to learn more powerful representations. The features finally pass through the spatial attention in the reconstruction network, which generates a fusion of local and global information, letting the network restore more details and improving the quality of the reconstructed images. Thanks to the diverse feature recalibrations and the advanced information flow topology, our proposed model performs against state-of-the-art methods on the benchmark evaluations.
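As a rough illustration of two of the ideas in the abstract, the PyTorch sketch below combines a squeeze-and-excitation style channel attention with a mixed-link connection, in which part of a block's output is added back residually and the rest is concatenated densely. The module names, channel counts, growth rate, and the choice of squeeze-and-excitation attention are assumptions made for illustration only and are not taken from the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (a common formulation;
    the paper's exact attention design may differ)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Rescale each channel by a learned, globally pooled weight.
        return x * self.fc(self.pool(x))


class MixedLinkBlock(nn.Module):
    """Toy mixed-link block: part of the recalibrated output is summed with
    the input (residual path) and the rest is concatenated to it (dense path)."""
    def __init__(self, channels, growth):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels + growth, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.attn = ChannelAttention(channels + growth)

    def forward(self, x):
        out = self.attn(self.body(x))
        res, dense = out[:, : x.size(1)], out[:, x.size(1):]
        # Residual part is added, dense part is concatenated.
        return torch.cat([x + res, dense], dim=1)


# Usage: the channel count grows by `growth` after each block.
x = torch.randn(1, 64, 32, 32)
block = MixedLinkBlock(64, growth=16)
print(block(x).shape)  # torch.Size([1, 80, 32, 32])
```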