Sparks of GPTs in Edge Intelligence for Metaverse: Caching and Inference for Mobile AIGC Services

04/18/2023
by Minrui Xu, et al.

Aiming to achieve artificial general intelligence (AGI) for the Metaverse, pretrained foundation models (PFMs), e.g., generative pretrained transformers (GPTs), can effectively provide various AI services, such as autonomous driving, digital twins, and AI-generated content (AIGC) for extended reality. With the advantages of low latency and privacy preservation, serving PFMs for mobile AI services in edge intelligence, i.e., caching and executing PFMs on edge servers with limited computing resources and GPU memory, is a viable solution. However, PFMs typically consist of billions of parameters, making them computation- and memory-intensive for edge servers to load and execute. In this article, we investigate edge PFM serving problems for mobile AIGC services in the Metaverse. First, we introduce the fundamentals of PFMs and discuss their characteristic fine-tuning and inference methods in edge intelligence. Then, we propose a novel framework of joint model caching and inference for managing models and allocating resources to satisfy users' requests efficiently. Furthermore, considering the in-context learning ability of PFMs, we propose a new metric, the Age of Context (AoC), to evaluate the freshness and relevance between the examples in demonstrations and the executing tasks. Finally, we propose a least context (LC) algorithm for managing cached models at edge servers by balancing the tradeoff among latency, energy consumption, and accuracy.
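To make the caching idea concrete, the following is a minimal sketch of how an AoC-weighted "least context" eviction policy could work. The paper's formal definitions of AoC and the LC algorithm are not reproduced here; the linear age-decay formula, the class names, and all parameters (`decay`, `relevance`) are illustrative assumptions, not the authors' actual method.

```python
class CachedModel:
    """A PFM cached at an edge server, tracking the in-context examples it holds."""

    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb
        self.contexts = []  # (timestamp, relevance) pairs for cached examples

    def add_context(self, timestamp, relevance):
        self.contexts.append((timestamp, relevance))

    def context_value(self, now, decay=0.1):
        # Hypothetical AoC-style score: each example's relevance is
        # discounted linearly by its age, so stale examples contribute less.
        return sum(rel * max(0.0, 1.0 - decay * (now - ts))
                   for ts, rel in self.contexts)


def evict_least_context(cache, needed_gb, capacity_gb, now):
    """Least-context eviction sketch: remove the models whose cached
    contexts contribute the least value until the new model fits."""
    used = sum(m.size_gb for m in cache)
    victims = []
    for m in sorted(cache, key=lambda m: m.context_value(now)):
        if used + needed_gb <= capacity_gb:
            break
        cache.remove(m)
        used -= m.size_gb
        victims.append(m.name)
    return victims
```

Under these assumptions, a model whose demonstrations are old (high age) or weakly relevant is evicted first, which is how the AoC metric would feed the caching decision; the actual tradeoff among latency, energy, and accuracy in the paper involves more than this single score.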


