Boosting Mapping Functionality of Neural Networks via Latent Feature Generation based on Reversible Learning

10/21/2019
by Jongmin Yu, et al.

This paper presents a method for boosting the mapping functionality of neural networks in visual recognition tasks such as image classification and face recognition. We present reversible learning, a technique for generating and learning latent features using the network itself. By generating latent features that correspond to hard samples and feeding the generated features back into the training stage, reversible learning improves mapping functionality without additional data augmentation or explicit handling of dataset bias. We demonstrate the efficiency of the proposed method on MNIST, CIFAR-10/100, and the Extremely Biased and Poorly Categorized (EBPC) dataset. The experimental results show that the proposed method outperforms existing state-of-the-art methods in visual recognition, and extensive analysis shows that it efficiently improves the mapping capability of a network.
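The abstract does not spell out the training procedure, so the sketch below is only a hypothetical illustration of the general idea: latent features are perturbed toward higher loss to mimic hard samples ("using the network itself"), and the synthesized features are mixed into the training objective. The encoder/head split, the gradient-ascent generator, and the hyperparameters (steps, lr) are assumptions made for this example, not the paper's method.

```python
# Hypothetical sketch (not the paper's exact procedure): the abstract only
# says that latent features for hard samples are generated by the network
# itself and reused during training. Here that idea is illustrated by
# gradient ascent on latent vectors in a PyTorch toy setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU())  # x -> z
head = nn.Linear(64, 10)                                              # z -> logits

def generate_hard_latents(z, y, steps=5, lr=0.1):
    """Push latent features toward higher loss, mimicking hard samples."""
    z = z.detach().clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(head(z), y)
        (grad,) = torch.autograd.grad(loss, z)
        z = (z + lr * grad).detach().requires_grad_(True)  # ascend the loss
    return z.detach()

# One training step that mixes real and generated latent features.
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
x = torch.randn(32, 1, 28, 28)   # stand-in for an MNIST batch
y = torch.randint(0, 10, (32,))
z = encoder(x)
loss = (F.cross_entropy(head(z), y)
        + F.cross_entropy(head(generate_hard_latents(z, y)), y))
opt.zero_grad()
loss.backward()
opt.step()
```

Because the synthesized features live entirely in latent space, no extra input-space samples are created, which is consistent with the abstract's claim that the method needs no additional data augmentation.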

Related research

Reversible Adversarial Examples (11/01/2018)
Boosting Network Weight Separability via Feed-Backward Reconstruction (10/20/2019)
LatentAugment: Dynamically Optimized Latent Probabilities of Data Augmentation (05/04/2023)
Removing Undesirable Feature Contributions Using Out-of-Distribution Data (01/17/2021)
Generative Steganographic Flow (05/10/2023)
Reversible Adversarial Example based on Reversible Image Transformation (11/06/2019)
Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention (04/10/2022)
