Weight Freezing: A Regularization Approach for Fully Connected Layers with an Application in EEG Classification

06/09/2023
by Zhengqing Miao, et al.

In EEG decoding, improving the performance of artificial neural networks (ANNs) carries significant potential. This study introduces a novel approach, termed "weight freezing", anchored in the principles of ANN regularization and prior neuroscience knowledge. Weight freezing reduces the influence of certain neurons on the decision for a specific EEG task by freezing selected weights in the fully connected layer during backpropagation. This is realized with a mask matrix and a threshold that determines the proportion of weights to be frozen. By additionally setting the masked weights to zero, weight freezing not only yields sparse connections in networks that use a fully connected layer as the classifier, but also acts as an effective regularizer for fully connected layers. Through experiments with three distinct ANN architectures and three widely recognized EEG datasets, we validate the effectiveness of weight freezing: our method significantly surpasses previous peak classification accuracies on all examined datasets. Supplementary control experiments analyze the performance differences before and after applying weight freezing and examine the influence of the threshold in the freezing process. Our study demonstrates the superior efficacy of weight freezing over traditional fully connected networks for EEG feature classification, and this approach holds substantial promise for future advances in EEG decoding research.
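The sketch below illustrates the mechanism described in the abstract: a binary mask over the fully connected layer's weight matrix freezes a chosen proportion of weights, sets them to zero, and suppresses their gradients during backpropagation. It is a minimal, assumption-laden illustration, not the paper's implementation: the class and parameter names (FrozenLinear, freeze_ratio) are hypothetical, the mask is chosen at random purely for demonstration (the paper may select it differently, e.g. from neuroscience priors or the stated threshold rule), and the tensor dimensions are illustrative only.

```python
import torch
import torch.nn as nn


class FrozenLinear(nn.Module):
    """Fully connected classifier with a fraction of weights frozen at zero.

    A minimal sketch of the "weight freezing" idea: a binary mask selects a
    proportion (`freeze_ratio`) of the weights; those weights are set to zero
    and their gradients are zeroed during backpropagation, giving a sparse,
    regularized layer. The mask here is random for illustration only.
    """

    def __init__(self, in_features: int, n_classes: int, freeze_ratio: float = 0.5):
        super().__init__()
        self.fc = nn.Linear(in_features, n_classes)

        # 1 = trainable weight, 0 = frozen weight (roughly `freeze_ratio` of entries).
        mask = (torch.rand_like(self.fc.weight) >= freeze_ratio).float()
        self.register_buffer("mask", mask)

        # Zero out the frozen weights once at initialization (sparse connections).
        with torch.no_grad():
            self.fc.weight.mul_(self.mask)

        # Zero the corresponding gradients so frozen weights never receive updates.
        self.fc.weight.register_hook(lambda grad: grad * self.mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)


if __name__ == "__main__":
    # Toy usage: a 440-dim EEG feature vector, 4 classes (dimensions are made up).
    layer = FrozenLinear(in_features=440, n_classes=4, freeze_ratio=0.5)
    logits = layer(torch.randn(8, 440))
    logits.sum().backward()
    # Gradients at masked positions are exactly zero, so those weights stay at 0.
    assert torch.all(layer.fc.weight.grad[layer.mask == 0] == 0)
```

Under plain gradient descent this keeps the masked weights at zero throughout training; optimizers that add weight-independent updates (e.g. decoupled weight decay) would need the mask reapplied after each step.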

