Competitive Learning Enriches Learning Representation and Accelerates the Fine-tuning of CNNs

04/26/2018
by   Takashi Shinozaki, et al.

In this study, we propose integrating competitive learning into convolutional neural networks (CNNs) to improve representation learning and the efficiency of fine-tuning. Conventional CNNs rely on backpropagation, which enables powerful representation learning through discrimination tasks. However, it requires a huge amount of labeled data, and labeled data are much harder to acquire than unlabeled data. Efficient use of unlabeled data is therefore becoming crucial for DNNs. To address this problem, we introduce unsupervised competitive learning into the convolutional layers and utilize unlabeled data for effective representation learning. Validation experiments with a toy model demonstrated that competitive learning effectively extracted image bases into the convolutional filters from unlabeled data and accelerated the subsequent supervised fine-tuning by backpropagation. The benefit was more apparent when the number of filters was sufficiently large; in that case, the error rate dropped steeply in the initial phase of fine-tuning. The proposed method thus allows CNNs to use a larger number of filters, enabling a more detailed and generalized representation. It suggests the possibility of neural networks that are not only deep but also broad.
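The unsupervised pre-training step described above can be illustrated with a minimal sketch of winner-take-all competitive learning over image patches. This is not the authors' implementation; it assumes flattened patches as input, a simple Hebbian-like update of only the winning filter, and unit-norm filters, all of which are illustrative choices.

```python
import numpy as np

def competitive_learning(patches, n_filters=16, lr=0.01, epochs=5, seed=0):
    """Winner-take-all competitive learning of convolutional filter bases.

    patches   : (N, D) array of flattened, unlabeled image patches.
    n_filters : number of filters (bases) to learn.
    Returns an (n_filters, D) array of unit-norm filters.
    """
    rng = np.random.default_rng(seed)
    d = patches.shape[1]
    # Initialize filters randomly and normalize each to unit length.
    w = rng.standard_normal((n_filters, d))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patches:
            x = x / (np.linalg.norm(x) + 1e-8)
            # The "winner" is the filter with the largest response.
            k = int(np.argmax(w @ x))
            # Move only the winning filter toward the patch (Hebbian-like),
            # then renormalize so filters stay on the unit sphere.
            w[k] += lr * (x - w[k])
            w[k] /= np.linalg.norm(w[k])
    return w

# Toy usage: learn 8 bases from random 5x5 patches, then the resulting
# filters could initialize a convolutional layer before fine-tuning.
rng = np.random.default_rng(1)
toy_patches = rng.standard_normal((200, 25))
filters = competitive_learning(toy_patches, n_filters=8)
```

In a full pipeline, the learned `filters` would be reshaped into convolution kernels to initialize the first layer, after which supervised backpropagation fine-tunes the whole network on the (smaller) labeled set.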

