SMaLL: A Software Framework for Portable Machine Learning Libraries

03/08/2023
by Upasana Sridhar, et al.

Interest in deploying Deep Neural Network (DNN) inference on edge devices has led to an explosion in the number and types of available hardware platforms. While high-level programming interfaces, such as TensorFlow, can be readily ported across devices, high-performance inference implementations rely on a good mapping of the high-level interface to the target hardware platform. Commonly, this mapping uses optimizing compilers to generate code at compile time, or high-performance vendor libraries that have been specialized to the target platform. Both approaches rely on expert knowledge to produce the mapping, which can be time-consuming and difficult to extend to new architectures. In this work, we present a DNN library framework, SMaLL, that is easily extensible to new architectures. The framework uses a unified loop structure and a shared, cache-friendly data format across all intermediate layers, eliminating the time and memory overheads incurred by data transformations between layers. A layer is implemented simply by specifying its dimensions and a kernel, the key computing operation of that layer. The unified loop structure and kernel abstraction allow us to reuse code across layers and computing platforms; porting to a new architecture requires redesigning only the few hundred lines of kernel code. To demonstrate the benefits of this approach, we have developed software that supports a range of layer types and computing platforms and is easily extensible for rapidly instantiating high-performance DNN libraries. We evaluate our software by instantiating networks from the TinyMLPerf benchmark suite on five ARM platforms and one x86 platform (an AMD Zen 2). Our framework achieves end-to-end performance comparable to or better than machine learning frameworks such as TensorFlow, TVM, and LibTorch.
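To make the layer abstraction concrete, here is a minimal, hypothetical C++ sketch of the idea the abstract describes: a unified, platform-independent loop nest shared by layers, with the innermost computation delegated to a small kernel that is the only piece rewritten per architecture. The names (`conv_layer`, `dot_kernel`), the plain CHW layout, per-element kernel granularity, and the scalar kernel body are illustrative assumptions, not the SMaLL API or its blocked, cache-friendly data format.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Portable reference kernel: computes one output element as a dot product
// over the filter window. In a real port, this region (typically a few
// hundred lines) would be rewritten with SIMD intrinsics to produce a
// whole tile of outputs at once; everything else stays unchanged.
static float dot_kernel(const float *in, const float *wt,
                        std::size_t C_i, std::size_t H_i, std::size_t W_i,
                        std::size_t K, std::size_t h0, std::size_t w0)
{
    float acc = 0.0f;
    for (std::size_t c = 0; c < C_i; ++c)
        for (std::size_t kh = 0; kh < K; ++kh)
            for (std::size_t kw = 0; kw < K; ++kw)
                acc += in[(c * H_i + h0 + kh) * W_i + (w0 + kw)] *
                       wt[(c * K + kh) * K + kw];
    return acc;
}

// Unified loop structure: the same loop nest shape serves every layer
// type and platform; only the dimensions and the kernel differ.
// This computes a valid (no-padding) 2D convolution in CHW layout.
static void conv_layer(const std::vector<float> &in,
                       const std::vector<float> &wt,
                       std::vector<float> &out,
                       std::size_t C_o, std::size_t C_i,
                       std::size_t H_i, std::size_t W_i, std::size_t K)
{
    const std::size_t H_o = H_i - K + 1, W_o = W_i - K + 1;
    for (std::size_t co = 0; co < C_o; ++co)          // output channels
        for (std::size_t h = 0; h < H_o; ++h)         // output rows
            for (std::size_t w = 0; w < W_o; ++w)     // output columns
                out[(co * H_o + h) * W_o + w] =
                    dot_kernel(in.data(), wt.data() + co * C_i * K * K,
                               C_i, H_i, W_i, K, h, w);
}

int main()
{
    const std::size_t C_o = 2, C_i = 3, H = 5, W = 5, K = 3;
    std::vector<float> in(C_i * H * W, 1.0f), wt(C_o * C_i * K * K, 0.5f);
    std::vector<float> out(C_o * (H - K + 1) * (W - K + 1));
    conv_layer(in, wt, out, C_o, C_i, H, W, K);
    std::printf("out[0] = %.1f\n", out[0]); // 3 * 3 * 3 * 0.5 = 13.5
    return 0;
}
```

The design point this sketch illustrates is the separation of concerns: because the outer loops and the data layout are fixed across layers and platforms, a new layer only supplies dimensions and a kernel, and a new architecture only replaces the kernel body.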
