Regularization Through Simultaneous Learning: A Case Study for Hop Classification

Overfitting remains a prevalent challenge in deep neural networks, often leading to suboptimal real-world performance. Regularization techniques are a common strategy to counter this challenge and improve model generalization. This paper proposes Simultaneous Learning, a novel regularization approach that draws on principles from Transfer Learning and Multi-task Learning, applied here to the classification of hop varieties, an integral ingredient in beer production. Our approach harnesses auxiliary datasets in synergy with the target dataset to amplify the acquisition of highly relevant features. By modifying the model's final layer, we enable the simultaneous classification of both datasets without treating them as disparate tasks; to realize this, we formulate a loss function that includes an inter-group penalty. We evaluated the method experimentally with the InceptionV3 and ResNet50 architectures, designating the UFOP-HVD hop leaf dataset as the target and ImageNet and PlantNet as auxiliary datasets. The proposed method exhibited a substantial performance advantage over models trained without regularization and models using dropout, with accuracy improvements ranging from 5 to 22 percentage points. Additionally, we introduce an interpretability technique that assesses feature quality by analyzing correlations among class features in the network's convolutional layers.
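Based solely on the abstract's description, the core setup can be sketched as follows: the final layer is widened to cover the classes of both the target and auxiliary datasets, and the loss augments cross-entropy with an inter-group penalty. This is a minimal PyTorch sketch; the class counts, the exact form of the penalty, and the weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Illustrative class counts (assumptions, not the paper's actual numbers):
NUM_TARGET = 12    # hypothetical number of UFOP-HVD hop classes
NUM_AUX = 100      # hypothetical number of auxiliary (e.g., PlantNet) classes

# One backbone, one shared final layer spanning both class groups, so a
# single softmax classifies samples from the target and auxiliary datasets.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET + NUM_AUX)

def simultaneous_loss(logits, labels, is_target, lam=0.1):
    """Cross-entropy over the joint label space plus a hypothetical
    inter-group penalty. Auxiliary labels are assumed to be offset by
    NUM_TARGET so both datasets share one label space; `is_target` is a
    boolean mask marking samples drawn from the target dataset."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=1)
    target_mass = probs[:, :NUM_TARGET].sum(dim=1)  # mass on hop classes
    aux_mass = probs[:, NUM_TARGET:].sum(dim=1)     # mass on auxiliary classes
    # Penalize probability mass placed on the wrong dataset's class group.
    penalty = torch.where(is_target, aux_mass, target_mass).mean()
    return ce + lam * penalty
```

Training then proceeds on batches drawn from both datasets at once; because all classes share a single output layer, no task-specific heads or alternating optimization are needed, which is the distinction the abstract draws from standard multi-task learning.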
