Shared Mobile-Cloud Inference for Collaborative Intelligence

02/01/2020
by Mateen Ulhaq, et al.

As AI applications for mobile devices become more prevalent, there is an increasing need for faster execution and lower energy consumption in neural model inference. Historically, models run on mobile devices have been smaller and simpler than large state-of-the-art research models, which can only run on the cloud. However, cloud-only inference has drawbacks such as increased network bandwidth consumption and higher latency. In addition, it requires the input data (images, audio) to be fully transferred to the cloud, raising concerns about potential privacy breaches. We demonstrate an alternative approach: shared mobile-cloud inference. Partial inference is performed on the mobile device to reduce the dimensionality of the input data and arrive at a compact feature tensor, a latent-space representation of the input signal. The feature tensor is then transmitted to the server for further inference. This strategy can reduce inference latency, energy consumption, and network bandwidth usage, and it also offers privacy protection, because the original signal never leaves the mobile device. Further performance gains can be achieved by compressing the feature tensor before transmission.
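To make the split-inference idea concrete, here is a minimal NumPy sketch of the pipeline described above: an on-device stage reduces the input to a compact feature tensor, the tensor is quantized before transmission, and a server-side stage completes the inference. The network, its random weights, and the 8-bit uniform quantizer are illustrative stand-ins, not the models or codec used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split of a tiny feed-forward network: the first layer runs
# on the mobile device, the rest on the cloud. Weights are random stand-ins.
W_mobile = rng.standard_normal((3072, 256))  # 32x32x3 input -> 256-d feature tensor
W_cloud = rng.standard_normal((256, 10))     # feature tensor -> 10 class scores

def mobile_inference(x):
    """Partial on-device inference: map the input to a compact latent tensor."""
    return np.maximum(x @ W_mobile, 0.0)     # linear layer + ReLU

def compress(feature, bits=8):
    """Uniform scalar quantization of the feature tensor before transmission."""
    lo, hi = float(feature.min()), float(feature.max())
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid divide-by-zero on flat tensors
    q = np.round((feature - lo) / scale).astype(np.uint8)
    return q, lo, scale

def decompress(q, lo, scale):
    """Server-side dequantization of the received tensor."""
    return q.astype(np.float32) * scale + lo

def cloud_inference(feature):
    """Complete the inference on the server from the received feature tensor."""
    return feature @ W_cloud

x = rng.standard_normal(3072)                # e.g. a flattened 32x32 RGB image
f = mobile_inference(x)
q, lo, scale = compress(f)                   # 256 bytes on the wire vs. 3072 floats
logits = cloud_inference(decompress(q, lo, scale))
print(logits.shape)                          # (10,)
```

Only `q`, `lo`, and `scale` cross the network, so the original signal stays on the device; the bandwidth saving here comes both from the dimensionality reduction (3072 → 256) and from the 8-bit quantization of the feature tensor.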

