A priori compression of convolutional neural networks for wave simulators

04/11/2023
by Hamza Boukraichi, et al.

Convolutional neural networks (CNNs) are now widely used in fields such as image classification, facial and object recognition, and medical image analysis. They also appear in applications such as physics-informed simulators, where accurate real-time forecasts with minimal lag are required. Current network designs contain millions of parameters, which makes deploying such complex models on memory-constrained devices difficult. Compression techniques can address this issue by reducing the number of parameters that drive model size and complexity. We propose a compressed tensor format for convolutional layers, applied a priori, before the network is trained: 3-way or 2-way kernels in convolutional layers are replaced by one-way filters. This also mitigates overfitting. With fewer parameters, both training time and prediction time are significantly reduced compared with the original CNN model. In this paper we present a method for a priori compression of convolutional neural networks for predicting physical data from finite element (FE) simulations, and we validate the compressed models on data from an FE model solving a 2D wave equation. We show that the proposed convolutional compression technique matches the performance of classical convolutional layers with fewer trainable parameters and a lower memory footprint.
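To illustrate the underlying idea of replacing multi-way kernels with one-way filters, the sketch below shows the rank-1 (separable) case for a single 2D kernel in plain NumPy: a `kh × kw` kernel that factors as an outer product of two 1D filters can be applied as two one-way convolutions, storing `kh + kw` parameters instead of `kh * kw`. This is a minimal sketch of the separable-convolution principle, not the authors' implementation; the function names are illustrative.

```python
import numpy as np

def conv2d_full(image, kernel):
    """Standard 2D cross-correlation in 'valid' mode with a full 2-way kernel."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def conv2d_separable(image, v, h):
    """Same operation for a rank-1 kernel outer(v, h), done as two 1D passes.

    Stores len(v) + len(h) parameters instead of len(v) * len(h).
    """
    H, W = image.shape
    kh, kw = len(v), len(h)
    # vertical one-way filter: contract each column window with v
    tmp = np.empty((H - kh + 1, W))
    for i in range(tmp.shape[0]):
        tmp[i] = v @ image[i:i + kh, :]
    # horizontal one-way filter: contract each row window with h
    out = np.empty((H - kh + 1, W - kw + 1))
    for j in range(out.shape[1]):
        out[:, j] = tmp[:, j:j + kw] @ h
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))
v = rng.standard_normal(5)   # one-way filter along rows
h = rng.standard_normal(5)   # one-way filter along columns

# The two formulations agree when the full kernel is the outer product of the filters,
# while the separable form uses 10 parameters instead of 25.
assert np.allclose(conv2d_full(image, np.outer(v, h)), conv2d_separable(image, v, h))
```

The same factorization extends to 3-way kernels (e.g. multi-channel convolutions), where the parameter savings compound across modes; the paper applies such a factorized format before training rather than compressing a trained network.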

