Reducing Artificial Neural Network Complexity: A Case Study on Exoplanet Detection

02/27/2019
by Sebastiaan Koning, et al.

Despite their successes in the field of self-learning AI, Convolutional Neural Networks (CNNs) suffer from a large number of trainable parameters, which impacts computational performance. Several approaches have been proposed to reduce the number of parameters in the visual domain, the Inception architecture [Szegedy et al., 2016] being a prominent example. This raises the question of whether the number of trainable parameters in CNNs can also be reduced for 1D inputs, such as time-series data, without incurring a substantial loss in classification performance. We propose and examine two methods for complexity reduction in AstroNet [Shallue & Vanderburg, 2018], a CNN for automatic classification of time-varying brightness data of stars to detect exoplanets. The first method makes a tactical reduction of the number of layers in AstroNet, while the second method additionally modifies the original input data by means of a Gaussian pyramid. We conducted our experiments with various degrees of dropout regularization. Our results show no substantial loss in accuracy compared to the original AstroNet, while reducing training time by up to 85 percent. These results show potential for similar reductions in other CNN applications while largely retaining accuracy.
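The Gaussian pyramid mentioned above repeatedly low-pass filters a light curve and subsamples it, so the network sees progressively coarser views of the same signal. The sketch below is a minimal illustration of that idea for 1D data, assuming NumPy/SciPy; the function name gaussian_pyramid_1d, the sigma value, and the number of levels are illustrative assumptions and do not reproduce the paper's or AstroNet's actual preprocessing.

```python
# Illustrative 1D Gaussian pyramid for a light curve (not AstroNet's own code).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_pyramid_1d(flux, levels=3, sigma=1.0):
    """Return progressively smoothed and downsampled copies of `flux`."""
    pyramid = [np.asarray(flux, dtype=float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter1d(pyramid[-1], sigma=sigma)  # low-pass filter
        pyramid.append(smoothed[::2])                           # keep every other sample
    return pyramid

# Example: a synthetic light curve with a transit-like dip at t = 0.5.
time = np.linspace(0.0, 1.0, 2001)
flux = 1.0 - 0.01 * np.exp(-((time - 0.5) / 0.01) ** 2)
for level, f in enumerate(gaussian_pyramid_1d(flux, levels=3)):
    print(f"level {level}: {f.size} samples")
```

Each successive level halves the number of samples, so a classifier operating on the coarser levels needs fewer convolutional layers (and hence fewer trainable parameters) to cover the same temporal extent.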

Related research

06/10/2019 · Associative Convolutional Layers
Motivated by the necessity for parameter efficiency in distributed machi...

02/09/2023 · Complex Network for Complex Problems: A comparative study of CNN and Complex-valued CNN
Neural networks, especially convolutional neural networks (CNN), are one...

05/20/2018 · Low-Cost Parameterizations of Deep Convolutional Neural Networks
Convolutional Neural Networks (CNNs) filter the input data using a serie...

12/07/2017 · Take it in your stride: Do we need striding in CNNs?
Since their inception, CNNs have utilized some type of striding operator...

09/06/2018 · ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions
We consider a general framework for reducing the number of trainable mod...

04/01/2021 · Less is More: Accelerating Faster Neural Networks Straight from JPEG
Most image data available are often stored in a compressed format, from ...

08/03/2022 · Maintaining Performance with Less Data
We propose a novel method for training a neural network for image classi...