Effective, Fast, and Memory-Efficient Compressed Multi-function Convolutional Neural Networks for More Accurate Medical Image Classification

11/29/2018
by Luna M. Zhang, et al.

Convolutional Neural Networks (CNNs) usually use the same activation function, such as ReLU, for all convolutional layers, and relying on ReLU alone limits performance. To achieve better classification performance, reduce training and testing times, and reduce power consumption and memory usage, a new "Compressed Multi-function CNN" is developed. Google's Inception-V4, for example, is a very deep CNN consisting of 4 Inception-A blocks, 7 Inception-B blocks, and 3 Inception-C blocks, with ReLU used for all convolutional layers. A new "Compressed Multi-function Inception-V4" (CMI) that can use different activation functions is created with k Inception-A blocks, m Inception-B blocks, and n Inception-C blocks, where k ∈ {1, 2, 3, 4}, m ∈ {1, 2, 3, 4, 5, 6, 7}, n ∈ {1, 2, 3}, and k + m + n < 14. For performance analysis, a dataset for classifying brain MRI images into one of the four stages of Alzheimer's disease is used to compare three CMI architectures with Inception-V4 in terms of F1-score, training and testing times (related to power consumption), and memory usage (model size). Overall, simulations show that the new CMI models can outperform both the commonly used Inception-V4 and Inception-V4 variants that use different activation functions. In the future, other "Compressed Multi-function CNNs", such as "Compressed Multi-function ResNets and DenseNets" with a reduced number of convolutional blocks using different activation functions, will be developed to further increase classification accuracy, reduce training and testing times, reduce computational power, and reduce memory usage (model size) for building more effective healthcare systems, such as accurate and convenient disease-diagnosis systems on mobile devices with limited battery power and memory.
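The core idea above, assigning a different activation function to each block rather than using ReLU everywhere, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the scalar "blocks" below are hypothetical stand-ins for Inception-A/B/C blocks, and the specific activation assignments (ReLU, tanh, sigmoid) are assumptions chosen only to show the mechanism.

```python
import math

# Candidate activation functions; a multi-function CNN may assign a
# different one to each block instead of using ReLU everywhere.
def relu(x):
    return max(0.0, x)

def tanh(x):
    return math.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def compressed_multifunction_stack(x, block_plan):
    """Apply a stack of blocks, each paired with its own activation.

    block_plan: list of (block_fn, activation) pairs. Each block_fn here
    is a toy scalar transform standing in for a full Inception block.
    """
    for block_fn, activation in block_plan:
        x = activation(block_fn(x))
    return x

# A toy CMI-like plan with k=1, m=1, n=1 blocks (hypothetical transforms;
# the real blocks are multi-branch convolutional modules).
plan = [
    (lambda x: 2.0 * x, relu),     # stand-in for an Inception-A block
    (lambda x: x - 0.5, tanh),     # stand-in for an Inception-B block
    (lambda x: x + 1.0, sigmoid),  # stand-in for an Inception-C block
]

out = compressed_multifunction_stack(0.3, plan)
```

The point of the sketch is that the activation is a per-block parameter of the architecture, so a search over (k, m, n) block counts can be combined with a search over activation assignments.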


