Model Compression

05/20/2021
by Arhum Ishtiaq, et al.

Over time, machine learning models have grown in scope, functionality, and size. Consequently, these larger and more capable models require high-end hardware both for training and for subsequent inference. This paper explores the domain of model compression, discussing the efficiency of each approach by comparing model size and performance before and after compression.
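As an illustration of the kind of technique the paper surveys, the sketch below shows magnitude-based weight pruning, one common model-compression approach: the smallest-magnitude weights are zeroed out, trading a small accuracy loss for a sparser (and thus more compressible) model. This is a minimal NumPy sketch for illustration only; the specific methods and implementations evaluated in the paper may differ.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the
    smallest absolute value (a hypothetical helper, not from
    the paper)."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.5)
print(f"fraction zeroed: {np.mean(pruned == 0.0):.2f}")
```

After pruning, the zeroed weights can be stored in a sparse format, reducing model size at the cost of some representational capacity.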
