Semantic Noise Modeling for Better Representation Learning

11/04/2016
by Hyo-Eun Kim, et al.

The latent representations learned by multi-layered neural networks through hierarchical feature abstraction underlie the recent success of deep learning. Under the deep learning framework, generalization performance depends heavily on the learned latent representation, which is obtained through an appropriate training scenario with a task-specific objective on a designed network model. In this work, we propose a novel latent space modeling method to learn better latent representations. We design a neural network model based on the assumption that a good base representation can be attained by maximizing the total correlation between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method that enables class-conditional perturbation in the latent space to enhance the representational power of the learned latent features. During training, the latent vector representation is stochastically perturbed by a modeled class-conditional additive noise while maintaining its original semantic content, which implicitly acts as semantic augmentation in the latent space. The proposed model can be learned straightforwardly by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method achieves performance benefits over various previous approaches. We also provide empirical analyses of the proposed class-conditional perturbation process, including t-SNE visualizations.
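The abstract leaves the noise model at a high level; the PyTorch sketch below is one illustrative reading of it, not the paper's implementation. It assumes a simple encoder-decoder with a classifier head, where a hypothetical learnable per-class scale (`class_log_scale`) shapes the additive noise applied to the latent code during training.

```python
# Illustrative sketch of class-conditional latent perturbation.
# The paper's exact noise model may differ; this only mirrors the
# mechanism described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbedAutoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.classifier = nn.Linear(latent_dim, num_classes)
        # Hypothetical learnable per-class log-scale for the additive noise.
        self.class_log_scale = nn.Parameter(torch.zeros(num_classes, latent_dim))

    def forward(self, x, y=None):
        z = self.encoder(x)
        if self.training and y is not None:
            # Class-conditional additive noise: the latent code is
            # stochastically perturbed while keeping its semantic class.
            scale = self.class_log_scale[y].exp()   # (batch, latent_dim)
            z = z + scale * torch.randn_like(z)
        return self.decoder(z), self.classifier(z)

def loss_fn(x, y, x_hat, logits):
    # Joint reconstruction + classification objective, trained end-to-end
    # by back-propagation as the abstract describes.
    return F.mse_loss(x_hat, x) + F.cross_entropy(logits, y)
```

A training step would call `x_hat, logits = model(x, y)` and back-propagate `loss_fn(x, y, x_hat, logits)`; because the perturbation preserves the class label, each pass effectively augments the latent space with new same-class samples.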

