Touché: Towards Ideal and Efficient Cache Compression By Mitigating Tag Area Overheads

09/02/2019
by   Seokin Hong, et al.

Compression is seen as a simple technique to increase the effective cache capacity. Unfortunately, compression techniques either incur tag area overheads or restrict data placement to only include neighboring compressed cache blocks to mitigate tag area overheads. Ideally, we should be able to place arbitrary compressed cache blocks without any placement restrictions and tag area overheads. This paper proposes Touché, a framework that enables storing multiple arbitrary compressed cache blocks within a physical cacheline without any tag area overheads. The Touché framework consists of three components. The first component, called the “Signature” (SIGN) engine, creates shortened signatures from the tag addresses of compressed blocks. Due to this, the SIGN engine can store multiple signatures in each tag entry. On a cache access, the physical cacheline is accessed only if there is a signature match (which has a negligible probability of false positive). The second component, called the “Tag Appended Data” (TADA) mechanism, stores the full tag addresses with data. TADA enables Touché to detect false positive signature matches by ensuring that the actual tag address is available for comparison. The third component, called the “Superblock Marker” (SMARK) mechanism, uses a unique marker in the tag entry to indicate the occurrence of compressed cache blocks from neighboring physical addresses in the same cacheline. Touché is completely hardware-based and achieves an average speedup of 12% (ideal 13%) when compared to an uncompressed baseline.
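
To make the lookup flow concrete, the sketch below models how the SIGN and TADA components could interact on a cache access. It is a minimal Python illustration under assumptions of our own (the signature width, hash function, class names, and data layout are invented for the example), not the paper's hardware design; the SMARK mechanism is not modeled.

# Minimal sketch of the Touché lookup flow (illustrative assumptions only:
# signature width, hash function, and data layout are not taken from the paper).

SIG_BITS = 8  # assumed width of a shortened signature

def make_signature(tag: int) -> int:
    """SIGN engine: fold a full tag address into a short signature."""
    sig, t = 0, tag
    while t:
        sig ^= t & ((1 << SIG_BITS) - 1)
        t >>= SIG_BITS
    return sig

class CompressedLine:
    """One physical cacheline holding several compressed blocks.
    TADA: each block is stored together with its full tag so that a
    signature hit can be verified against the real address."""
    def __init__(self):
        self.blocks = []      # list of (full_tag, compressed_data)
        self.signatures = []  # shortened signatures kept in the tag entry

    def insert(self, full_tag: int, data: bytes) -> None:
        """Store a compressed block and record its signature (SIGN)."""
        self.signatures.append(make_signature(full_tag))
        self.blocks.append((full_tag, data))

    def lookup(self, addr_tag: int):
        """Return the compressed block for addr_tag, or None on a miss."""
        if make_signature(addr_tag) not in self.signatures:
            return None  # no signature match: definite miss, data array untouched
        # Signature matched, but it may be a false positive, so read the
        # line and verify against the full tags stored with the data (TADA).
        for full_tag, data in self.blocks:
            if full_tag == addr_tag:
                return data  # true hit
        return None  # false-positive signature match, handled as a miss

# Example: two compressed blocks sharing one physical line.
line = CompressedLine()
line.insert(0x1A2B3C, b"compressed-A")
line.insert(0x4D5E6F, b"compressed-B")
assert line.lookup(0x1A2B3C) == b"compressed-A"
assert line.lookup(0x999999) is None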

research · 09/18/2020
MIRAGE: Mitigating Conflict-Based Cache Attacks with a Practical Fully-Associative Design
Shared caches in processors are vulnerable to conflict-based side-channe...

research · 04/07/2022
Forecasting lifetime and performance of a novel NVM last-level cache with compression
Non-volatile memory (NVM) technologies are interesting alternatives for ...

research · 04/20/2022
L2C2: Last-Level Compressed-Cache NVM and a Procedure to Forecast Performance and Lifetime
Several emerging non-volatile (NV) memory technologies are rising as int...

research · 07/04/2019
TicToc: Enabling Bandwidth-Efficient DRAM Caching for both Hits and Misses in Hybrid Memory Systems
This paper investigates bandwidth-efficient DRAM caching for hybrid DRAM...

research · 03/05/2019
FUSE: Fusing STT-MRAM into GPUs to Alleviate Off-Chip Memory Access Overheads
In this work, we propose FUSE, a novel GPU cache system that integrates ...

research · 06/24/2020
Fetch-Directed Instruction Prefetching Revisited
Prior work has observed that fetch-directed prefetching (FDIP) is highly...

research · 10/27/2022
Perception-aware Tag Placement Planning for Robust Localization of UAVs in Indoor Construction Environments
Tag-based visual-inertial localization is a lightweight method for enabl...
