Energy-Efficient Runtime Adaptable L1 STT-RAM Cache Design

04/19/2019

by Kyle Kuan, et al.

Much research has shown that applications have variable runtime cache requirements. In the context of the increasingly popular Spin-Transfer Torque RAM (STT-RAM) cache, the retention time, which defines how long the cache can retain a cache block in the absence of power, is one of the most important cache requirements that may vary across applications. In this paper, we propose a Logically Adaptable Retention Time STT-RAM (LARS) cache that allows the retention time to be dynamically adapted to applications' runtime requirements. The LARS cache comprises multiple STT-RAM units with different retention times, with only one unit in use at a given time. LARS dynamically determines which STT-RAM unit to use at runtime, based on the executing application's needs. As an integral part of LARS, we also explore different algorithms to dynamically determine the best retention time under different cache design tradeoffs. Our experiments show that by adapting the retention time to different applications' requirements, the LARS cache can reduce the average cache energy by 25.31%, with minimal overheads.
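The abstract describes selecting among STT-RAM units with different retention times based on an application's runtime behavior. The idea can be sketched as follows; note this is an illustrative toy model, not the paper's actual algorithm, and all unit values, cost constants, and function names here are hypothetical.

```python
# Toy LARS-style retention-time selection: during a sampling phase, each
# retention unit's behavior is profiled, then the unit with the lowest
# estimated energy is used. All numbers below are hypothetical.

RETENTION_UNITS_MS = [10, 25, 50, 100]  # hypothetical retention times (ms)

def estimate_energy(unit_ms, writes, expiry_misses):
    """Toy energy model: longer retention makes each write costlier,
    while shorter retention causes more misses from blocks expiring
    before their next reuse."""
    write_cost_per_access = unit_ms * 0.02   # hypothetical write-cost scaling
    miss_penalty = 5.0                       # hypothetical per-miss energy cost
    return writes * write_cost_per_access + expiry_misses * miss_penalty

def pick_retention_unit(samples):
    """samples maps unit_ms -> (writes, expiry_misses) observed while
    sampling that unit; return the unit minimizing estimated energy."""
    return min(samples, key=lambda u: estimate_energy(u, *samples[u]))

# Hypothetical profile: shorter retention -> more expiry misses.
profile = {10: (1000, 400), 25: (1000, 120), 50: (1000, 30), 100: (1000, 5)}
best = pick_retention_unit(profile)  # selects the 25 ms unit for this profile
```

In this sketch the 25 ms unit wins because the higher write cost of longer-retention units outweighs their savings in expiry misses, mirroring the write-energy/retention tradeoff the abstract refers to.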

