Accelerating Transfer Learning with Near-Data Computation on Cloud Object Stores

10/16/2022
by Arsany Guirguis et al.

Near-data computation techniques have been successfully deployed to mitigate the cloud network bottleneck between the storage and compute tiers. At Huawei, we are currently looking to get more value from these techniques by broadening their applicability. Machine learning (ML) applications are an appealing and timely target. This paper describes our experience applying near-data computation techniques to transfer learning (TL), a widely used ML technique, in the context of disaggregated cloud object stores. Our techniques benefit both cloud providers and users: they improve our operational efficiency while delivering the performance improvements users demand. The main practical challenge is that storage-side computational resources are limited. Our approach is to split the TL deep neural network (DNN) during the feature extraction phase, before training begins. This reduces the network transfers to the compute tier and decouples the feature-extraction batch size from the training batch size. The decoupling, in turn, enables our second technique, storage-side batch adaptation, which increases concurrency in the storage tier while avoiding out-of-memory errors. Guided by these insights, we present HAPI, our processing system for TL that spans the compute and storage tiers while remaining transparent to the user. Our evaluation with several state-of-the-art DNNs, such as ResNet, VGG, and Transformer, shows up to 11x improvement in application runtime and up to 8.3x reduction in the data transferred from the storage to the compute tier compared to running the computation entirely in the compute tier.
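To make the two ideas in the abstract concrete, the following minimal sketch illustrates them in PyTorch with a recent torchvision: splitting a pretrained DNN into a frozen "storage-side" feature extractor and a "compute-side" tail, plus a simple halve-on-OOM batch-adaptation loop for the storage side. The split point, the batch-size policy, and the function names are illustrative assumptions, not the HAPI implementation (which is not described at code level in the abstract).

import torch
import torch.nn as nn
from torchvision import models


def split_resnet18(split_after: int = 6):
    # Split a pretrained ResNet-18 into a frozen "storage-side" feature
    # extractor (the first `split_after` child modules) and a "compute-side"
    # tail that is later fine-tuned on the extracted features.
    full = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    children = list(full.children())  # conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc
    storage_side = nn.Sequential(*children[:split_after]).eval()
    for p in storage_side.parameters():
        p.requires_grad_(False)       # feature extraction only: no gradients needed
    compute_side = nn.Sequential(*children[split_after:-1], nn.Flatten(), children[-1])
    return storage_side, compute_side


@torch.no_grad()
def extract_features(storage_side, images, batch_size=256, min_batch=8):
    # Storage-side batch adaptation (illustrative policy): halve the batch
    # size and retry on out-of-memory errors instead of failing the request,
    # so many concurrent requests can share limited storage-side memory.
    feats = []
    i = 0
    while i < images.size(0):
        try:
            feats.append(storage_side(images[i:i + batch_size]))
            i += batch_size
        except RuntimeError as err:
            if "out of memory" not in str(err) or batch_size <= min_batch:
                raise
            batch_size //= 2          # back off and retry the same slice
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
    return torch.cat(feats)           # only these compact activations cross the network


# Hypothetical usage: the compute tier would train compute_side on the features.
storage_side, compute_side = split_resnet18()
images = torch.randn(64, 3, 224, 224)          # stand-in for a decoded object-store batch
features = extract_features(storage_side, images)
logits = compute_side(features)

Only the intermediate activations produced by the storage side would cross the network in such a design, which is where the reported reduction in data transferred comes from; in HAPI the split point and the storage-side batch size are chosen by the system rather than hard-coded as above.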

