On Optimal Caching and Model Multiplexing for Large Model Inference

06/03/2023
by Banghua Zhu, et al.

Large Language Models (LLMs) and other large foundation models have achieved noteworthy success, but their size exacerbates existing resource-consumption and latency challenges. In particular, the large-scale deployment of these models is hindered by their significant resource requirements during inference. In this paper, we study two approaches for mitigating these challenges: employing a cache to store responses to previous queries, and learning a model multiplexer to choose from an ensemble of models when processing a query. Theoretically, we provide an optimal algorithm for jointly optimizing both approaches to reduce inference cost in both offline and online tabular settings. By combining a caching algorithm, namely Greedy Dual Size with Frequency (GDSF) or Least Expected Cost (LEC), with a model multiplexer, we achieve optimal rates in both the offline and online settings. Empirically, simulations show that the combination of our caching and model multiplexing algorithms greatly improves over the baselines, with up to a 50× improvement when the ratio between the maximum and minimum cost is 100. Experiments on real datasets show a 4.3× reduction in FLOPs over the baseline when the FLOPs ratio is 10, and a 1.8× improvement in latency when the ratio of average latencies is 1.85.
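To make the two ingredients concrete, the sketch below shows one minimal way they could fit together. It is an illustrative Python mock-up under stated assumptions, not the authors' implementation; the names LECCache, serve, cost_estimator, small_model, and large_model are hypothetical placeholders. The cache follows the Least Expected Cost idea of evicting the entry whose expected saved cost (hit frequency times recomputation cost) is smallest, and on a miss a cost-aware multiplexer routes the query to whichever model a learned estimator predicts is cheaper.

```python
# Hypothetical sketch: LEC-style caching combined with a cost-based model
# multiplexer. All names are illustrative, not the paper's actual code.
from typing import Callable, Dict, Tuple


class LECCache:
    """Least-Expected-Cost-style cache: evict the entry whose expected saved
    cost (hit frequency * recomputation cost) is smallest."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        # query -> (response, recomputation_cost, hit_count)
        self.store: Dict[str, Tuple[str, float, int]] = {}

    def get(self, query: str):
        if query in self.store:
            response, cost, freq = self.store[query]
            self.store[query] = (response, cost, freq + 1)  # record the hit
            return response
        return None

    def put(self, query: str, response: str, cost: float) -> None:
        if query not in self.store and len(self.store) >= self.capacity:
            # Evict the entry with the smallest expected saved cost.
            victim = min(self.store,
                         key=lambda q: self.store[q][1] * self.store[q][2])
            del self.store[victim]
        self.store[query] = (response, cost, 1)


def serve(query: str,
          cache: LECCache,
          cost_estimator: Callable[[str], Tuple[float, float]],
          small_model: Callable[[str], str],
          large_model: Callable[[str], str]) -> str:
    """On a cache hit, return the stored response at no inference cost; on a
    miss, route the query to whichever model the estimator predicts is cheaper."""
    cached = cache.get(query)
    if cached is not None:
        return cached
    small_cost, large_cost = cost_estimator(query)  # learned multiplexer scores
    if small_cost <= large_cost:
        response, cost = small_model(query), small_cost
    else:
        response, cost = large_model(query), large_cost
    cache.put(query, response, cost)
    return response
```

In practice, the cost estimates would come from the learned multiplexer (e.g., predicted FLOPs or latency per candidate model), and the cache key would typically be a canonicalized form of the query rather than the raw string.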


Related research

- Lower Bounds for Caching with Delayed Hits (05/30/2020): Caches are a fundamental component of latency-sensitive computer systems...
- RetroRenting: An Online Policy for Service Caching at the Edge (12/24/2019): The rapid proliferation of shared edge computing platforms has enabled a...
- Keep-Alive Caching for the Hawkes process (09/07/2023): We study the design of caching policies in applications such as serverle...
- Caching with Reserves (07/13/2022): Caching is a crucial component of many computer systems, so naturally it...
- Beyond Worst-case Analysis of Multicore Caching Strategies (11/03/2020): Every processor with multiple cores sharing a cache needs to implement a...
- Accelerating Deep Learning Inference via Freezing (02/07/2020): Over the last few years, Deep Neural Networks (DNNs) have become ubiquit...
- Practical Bounds on Optimal Caching with Variable Object Sizes (11/10/2017): Many recent caching systems aim to improve hit ratios, but there is no g...
