Diagonal Memory Optimisation for Machine Learning on Micro-controllers

10/04/2020
by Peter Blacker, et al.

As machine learning spreads into more and more application areas, micro-controllers and low-power CPUs are increasingly being used to perform inference with machine learning models. The capability to deploy onto these limited hardware targets is enabling machine learning models to be used across a diverse range of new domains. Optimising the inference process on these targets poses different challenges from either desktop CPU or GPU implementations, where the small amount of RAM available on these targets limits the size of model that can be executed. Analysis of the memory use patterns of eleven machine learning models was performed. Memory load and store patterns were observed using a modified version of the Valgrind debugging tool, identifying memory areas holding values necessary for the calculation as inference progressed. These analyses identified opportunities to optimise the memory use of these models by overlapping the input and output buffers of individual tensor operations. Three methods are presented which can calculate the safe overlap of input and output buffers for tensor operations, ranging from a computationally expensive approach able to operate on compiled layer operations to a versatile analytical solution which requires access to the original source code of the layer. The diagonal memory optimisation technique is described and shown to achieve memory savings of up to 34.5% on the tested models. Micro-controller targets are identified where it is only possible to deploy some models if diagonal memory optimisation is used.
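The abstract only outlines the overlap-calculation methods, so the following is a minimal sketch of the general idea: given a recorded trace of an operation's input reads and output writes (in the spirit of the Valgrind-based analysis and the "computationally expensive" approach mentioned above), search for the largest amount by which the output buffer can be slid back over the end of the input buffer without any write clobbering an input element that is still needed. The trace format and all names (AccessEvent, max_safe_overlap, the two example traces) are illustrative assumptions, not the paper's implementation.

```c
/* Hypothetical brute-force search for the largest safe overlap between an
 * operation's input and output buffers, based on a replayed access trace.
 * This is a sketch under assumed names and trace format, not the authors' code. */
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

typedef struct {
    bool   is_write;  /* true = write of an output element, false = read of an input element */
    size_t index;     /* element index within the input or output tensor */
} AccessEvent;

/* Arena layout (in elements) for a candidate overlap v:
 *   input  occupies [0, n_in)
 *   output occupies [n_in - v, n_in - v + n_out)
 * The overlap is safe if no write lands on an input element that is read
 * later in the trace. */
static bool overlap_is_safe(const AccessEvent *trace, size_t n_events,
                            size_t n_in, size_t overlap)
{
    for (size_t e = 0; e < n_events; ++e) {
        if (!trace[e].is_write)
            continue;
        size_t addr = n_in - overlap + trace[e].index;  /* arena address hit by the write */
        if (addr >= n_in)
            continue;                                   /* lands past the input buffer */
        for (size_t later = e + 1; later < n_events; ++later)
            if (!trace[later].is_write && trace[later].index == addr)
                return false;                           /* clobbers a still-needed input */
    }
    return true;
}

/* Largest safe overlap in elements (0 = buffers must stay disjoint). */
static size_t max_safe_overlap(const AccessEvent *trace, size_t n_events,
                               size_t n_in)
{
    for (size_t v = n_in; v > 0; --v)
        if (overlap_is_safe(trace, n_events, n_in, v))
            return v;
    return 0;
}

int main(void)
{
    /* Elementwise op (e.g. ReLU) over 4 elements: read i, then write i. */
    AccessEvent relu[8];
    for (size_t i = 0; i < 4; ++i) {
        relu[2 * i]     = (AccessEvent){ false, i };
        relu[2 * i + 1] = (AccessEvent){ true,  i };
    }

    /* 2x nearest-neighbour upsample, ascending order: read i, write 2i and 2i+1. */
    AccessEvent up[12];
    for (size_t i = 0; i < 4; ++i) {
        up[3 * i]     = (AccessEvent){ false, i };
        up[3 * i + 1] = (AccessEvent){ true,  2 * i };
        up[3 * i + 2] = (AccessEvent){ true,  2 * i + 1 };
    }

    printf("elementwise: safe overlap = %zu of 4 elements\n",
           max_safe_overlap(relu, 8, 4));
    printf("upsample   : safe overlap = %zu of 4 elements\n",
           max_safe_overlap(up, 12, 4));
    return 0;
}
```

Run on the two example traces, this reports full overlap (4 of 4 elements, i.e. fully in-place execution) for the elementwise case and no safe overlap for the ascending upsample, illustrating that the achievable overlap depends on the order in which an operation touches memory rather than on buffer sizes alone.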
