Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes

01/18/2021
by Mingxuan Li, et al.

Artificial neural networks have achieved state-of-the-art performance in fault detection on the Tennessee Eastman process, but they often require enormous memory to store their massive parameters. In order to implement online real-time fault detection, three deep compression techniques (pruning, clustering, and quantization) are applied to reduce the computational burden. We have extensively studied 7 different combinations of compression techniques; all methods achieve high model compression rates, over 64%, while maintaining high fault detection accuracy. The best result comes from applying all three techniques together, which reduces the model size by 91.5% while preserving detection accuracy. This result leads to a smaller storage requirement in production environments and makes real-world deployment smoother.
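Concretely, the three techniques can be illustrated with a minimal NumPy sketch applied to a single dense layer's weight matrix. This is an illustration of the general methods, not the authors' implementation; the layer shape, the 90% sparsity level, the 16-cluster codebook, and the 8-bit width are assumed values for demonstration.

import numpy as np

def prune(weights, sparsity=0.9):
    # Magnitude pruning: zero out the smallest-magnitude weights.
    threshold = np.quantile(np.abs(weights), sparsity)
    return weights * (np.abs(weights) > threshold)

def cluster(weights, n_clusters=16, n_iter=20):
    # Weight clustering: 1-D k-means over weight values, then
    # replace each weight with its shared centroid.
    flat = weights.ravel()
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            members = flat[assign == k]
            if members.size:
                centroids[k] = members.mean()
    return centroids[assign].reshape(weights.shape)

def quantize(weights, bits=8):
    # Uniform affine quantization to low-bit integers, returned here
    # in dequantized form to show the induced rounding error.
    qmax = 2 ** bits - 1
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q.astype(np.float32) * scale + lo

# Example: compress one hypothetical 128x64 dense layer.
w = np.random.randn(128, 64).astype(np.float32)
w_compressed = quantize(cluster(prune(w)), bits=8)

In a real deployment the storage savings come from what is persisted: the indices of the surviving nonzero weights, a small codebook of shared centroids, and low-bit integer codes, instead of a dense matrix of 32-bit floats.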

Related research:

- Compression strategies and space-conscious representations for deep neural networks (07/15/2020): Recent advances in deep learning have made available large, powerful con...
- Quantization Guided JPEG Artifact Correction (04/17/2020): The JPEG image compression algorithm is the most popular method of image...
- Internet of Things Fault Detection and Classification via Multitask Learning (07/03/2023): This paper presents a comprehensive investigation into developing a faul...
- Compact representations of convolutional neural networks via weight pruning and quantization (08/28/2021): The state-of-the-art performance for several real-world problems is curr...
- Pufferfish: Communication-efficient Models At No Extra Cost (03/05/2021): To mitigate communication overheads in distributed model training, sever...
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data (10/20/2022): Timely detected anomalies in the chemical technological processes, as we...
- Characterising Bias in Compressed Models (10/06/2020): The popularity and widespread use of pruning and quantization is driven ...
