Classification of Astronomical Bodies by Efficient Layer Fine-Tuning of Deep Neural Networks

05/14/2022
by   Sabeesh Ethiraj, et al.

The SDSS-IV dataset contains information about various astronomical bodies, such as galaxies, stars, and quasars, captured by observatories. Building on our earlier work on deep multimodal learning, which used transfer learning to classify the SDSS-IV dataset, we extend that research by studying how fine-tuning these architectures affects classification performance. Architectures such as ResNet-50, DenseNet-121, VGG-16, Xception, EfficientNetB2, MobileNetV2, and NASNetMobile were built using layer-wise fine-tuning at different depths. Our findings suggest that freezing all layers with ImageNet weights and adding a single trainable final layer may not be the optimal solution. Furthermore, in certain architectures, baseline models and models with a higher number of trainable layers performed similarly. Models need to be fine-tuned at different depths, and a specific trainable-layer ratio is required for a model to be considered ideal. Different architectures responded differently to changes in the number of trainable layers with respect to accuracy. While DenseNet-121, Xception, and EfficientNetB2 achieved relatively consistent peak accuracies with near-perfect training curves, ResNet-50, VGG-16, MobileNetV2, and NASNetMobile had lower, delayed peak accuracies with poorly fitting training curves. We also found that although mobile neural networks have fewer parameters and smaller model sizes, they may not always be ideal for deployment on low-compute devices, as they had consistently lower validation accuracies. Customized evaluation metrics, the Tuning Parameter Ratio and the Tuning Layer Ratio, are used for model evaluation.
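The layer-wise fine-tuning and the two custom metrics above can be sketched in framework-agnostic Python. This is a minimal illustration, not the authors' implementation: the exact definitions of Tuning Parameter Ratio and Tuning Layer Ratio are not given in the abstract, so here they are assumed to be the fraction of trainable parameters and of trainable layers, respectively; the `Layer` class and parameter counts are toy stand-ins for a pretrained backbone.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """Toy stand-in for one layer of a pretrained network."""
    name: str
    params: int
    trainable: bool = False

def freeze_up_to(layers, n_trainable):
    """Layer-wise fine-tuning: freeze everything except the last n_trainable layers."""
    for i, layer in enumerate(layers):
        layer.trainable = i >= len(layers) - n_trainable
    return layers

def tuning_parameter_ratio(layers):
    # Assumed definition: trainable parameters / total parameters.
    total = sum(l.params for l in layers)
    trainable = sum(l.params for l in layers if l.trainable)
    return trainable / total

def tuning_layer_ratio(layers):
    # Assumed definition: trainable layers / total layers.
    return sum(l.trainable for l in layers) / len(layers)

# Hypothetical backbone plus a new classification head.
model = [Layer("conv1", 9_408), Layer("block1", 215_808),
         Layer("block2", 1_219_584), Layer("head", 6_147)]
freeze_up_to(model, 1)  # only the new head is trainable
print(round(tuning_parameter_ratio(model), 4))  # → 0.0042
print(tuning_layer_ratio(model))                # → 0.25
```

Sweeping `n_trainable` from 1 up to the full depth reproduces the experimental axis the abstract describes: each setting yields a (Tuning Layer Ratio, validation accuracy) pair, from which the ideal fine-tuning depth per architecture can be read off.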

