MUTE: Data-Similarity Driven Multi-hot Target Encoding for Neural Network Design

10/15/2019
by Mayoore S. Jaiswal, et al.

Target encoding is an effective technique for improving the performance of conventional machine learning methods and, more recently, of deep neural networks as well. However, existing target encoding approaches require a significant increase in learning capacity, and thus demand more computation and more training data. In this paper, we present MUTE, a novel and efficient target encoding scheme that improves both the generalizability and the robustness of a target model by exploiting the inter-class characteristics of a target dataset. By extracting the confusion level between the target classes in a dataset, MUTE strategically optimizes the Hamming distances among the target encodings. Such optimized target encodings offer higher classification strength for neural network models with negligible computation overhead and without increasing the model size. When MUTE is applied to popular image classification networks and datasets, our experimental results show that it offers better generalization and a stronger defense against noise and adversarial attacks than existing solutions.
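The abstract gives no implementation details, but the core idea — assigning multi-hot binary codewords so that easily-confused classes end up far apart in Hamming distance — can be sketched with a simple greedy heuristic. Everything below (the function names, the greedy assignment order, the confusion-weighted score) is an illustrative assumption, not the paper's actual optimizer:

```python
import itertools
import random
import numpy as np

def pairwise_hamming(codes):
    """Pairwise Hamming distances between binary codewords (n x L array)."""
    codes = np.asarray(codes)
    return (codes[:, None, :] != codes[None, :, :]).sum(-1)

def assign_multihot_codes(confusion, code_len=8, n_bits_on=3, seed=0):
    """Greedily assign multi-hot codewords so that classes with high mutual
    confusion receive codes that are far apart in Hamming distance.

    `confusion[i, j]` is a symmetric confusion level between classes i and j,
    e.g. off-diagonal mass of a validation confusion matrix. This greedy
    scheme is a hypothetical stand-in for the paper's optimization; it
    assumes the candidate pool is large enough for all classes.
    """
    # Candidate pool: all length-`code_len` codewords with `n_bits_on` ones.
    pool = [np.array(c) for c in itertools.product([0, 1], repeat=code_len)
            if sum(c) == n_bits_on]
    random.Random(seed).shuffle(pool)
    n = confusion.shape[0]
    # Assign the most-confused classes first.
    order = np.argsort(-confusion.sum(axis=1))
    codes = [None] * n
    for cls in order:
        best, best_score = None, -1.0
        for cand in pool:
            # Score: Hamming distance to already-assigned neighbors,
            # weighted by how often this class is confused with them.
            score = sum(confusion[cls, j] * int((cand != codes[j]).sum())
                        for j in range(n) if codes[j] is not None)
        # Keep the candidate that best separates confusable classes.
            if score > best_score:
                best, best_score = cand, score
        codes[cls] = best
        pool = [c for c in pool if not np.array_equal(c, best)]
    return np.stack(codes)
```

Under this reading, the resulting codewords would replace the usual one-hot targets, with the network trained against them via an elementwise loss such as binary cross-entropy; since only the target vectors change, the model size stays fixed, matching the abstract's claim of negligible overhead.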

research · 06/05/2019
Multi-way Encoding for Robustness
Deep models are state-of-the-art for many computer vision tasks includin...

research · 05/01/2020
Defense of Word-level Adversarial Attacks via Random Substitution Encoding
The adversarial attacks against deep neural networks on computer vision...

research · 06/01/2020
Sampling Techniques in Bayesian Target Encoding
Target encoding is an effective encoding technique of categorical variab...

research · 09/19/2017
Verifying Properties of Binarized Deep Neural Networks
Understanding properties of deep neural networks is an important challen...

research · 06/04/2021
NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training
Balancing the needs of data privacy and predictive utility is a central ...

research · 04/14/2023
Phantom Embeddings: Using Embedding Space for Model Regularization in Deep Neural Networks
The strength of machine learning models stems from their ability to lear...

research · 04/15/2021
Geometry encoding for numerical simulations
We present a notion of geometry encoding suitable for machine learning-b...
