Make (Nearly) Every Neural Network Better: Generating Neural Network Ensembles by Weight Parameter Resampling

07/02/2018
by Jiayi Liu, et al.

Deep Neural Networks (DNNs) have become increasingly popular in computer vision, natural language processing, and other areas. However, training and fine-tuning a deep learning model is computationally intensive and time-consuming. We propose a new method to improve the performance of nearly every model, including pre-trained models. The proposed method uses an ensemble approach in which the networks in the ensemble are constructed by reassigning model parameter values based on the probabilistic distribution of these parameters, calculated towards the end of the training process. For pre-trained models, this approach requires only a short additional training step (usually less than one epoch). We perform a variety of analyses on the MNIST dataset and validate the approach on a number of DNN architectures using models pre-trained on the ImageNet dataset.
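
The sketch below illustrates one way the described procedure could look in PyTorch: collect a few weight snapshots during a short post-training phase, fit an independent Gaussian to each parameter, and build ensemble members by resampling weights from that distribution. The helper names (train_step, n_snapshots, n_members) and the per-parameter Gaussian fit are illustrative assumptions based on the abstract, not the authors' exact implementation.

```python
# Minimal sketch of weight-parameter-resampling ensembles (assumed details,
# not the authors' code). Requires a trained PyTorch model and a caller-supplied
# train_step(model) that performs one ordinary optimizer update.
import copy
import torch


def collect_snapshots(model, train_step, n_snapshots=10):
    """Run a few extra mini-batch updates (less than one epoch in total)
    and record the flattened weight vector after each one."""
    snapshots = []
    for _ in range(n_snapshots):
        train_step(model)  # one ordinary SGD/Adam step near the end of training
        vec = torch.nn.utils.parameters_to_vector(model.parameters())
        snapshots.append(vec.detach().clone())
    return torch.stack(snapshots)  # shape: (n_snapshots, n_params)


def sample_ensemble(model, snapshots, n_members=5):
    """Fit an independent Gaussian to each weight across the snapshots and
    draw ensemble members by resampling every weight from that distribution."""
    mean = snapshots.mean(dim=0)
    std = snapshots.std(dim=0)
    members = []
    for _ in range(n_members):
        member = copy.deepcopy(model)
        resampled = mean + std * torch.randn_like(std)  # reassign each weight
        torch.nn.utils.vector_to_parameters(resampled, member.parameters())
        members.append(member)
    return members


def ensemble_predict(members, x):
    """Average the softmax outputs of all ensemble members."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in members])
    return probs.mean(dim=0)
```

Averaging class probabilities is one common way to aggregate a classifier ensemble; the abstract does not state which aggregation the authors use.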
