Taming Process Variations in CNFET for Efficient Last Level Cache Design

08/11/2021
by Dawen Xu, et al.

Carbon nanotube field-effect transistors (CNFETs) are emerging as a promising alternative to CMOS transistors thanks to their much higher speed and energy efficiency, which makes the technology particularly suitable for building the energy-hungry last level cache (LLC). However, process variations (PVs) in CNFETs caused by imperfect fabrication lead to large timing variation, and the worst-case timing dramatically limits the LLC operating speed. In particular, we observe that the latency distribution of a CNFET-based cache is closely related to the LLC layout. For the two typical LLC layouts, in which the carbon nanotube (CNT) growth direction is aligned to the cache way direction and to the cache set direction respectively, we propose the variation-aware set-aligned (VASA) cache and the variation-aware way-aligned (VAWA) cache, combined with cache optimizations such as data shuffling and page mapping, to provide low-latency access to frequently used data. According to our experiments, the optimized LLC reduces the average access latency by 32% compared to the baseline designs on the two CNFET layouts, improves the overall performance by 6% and 9% respectively, and reduces the energy consumption by 4%. Furthermore, with both the NUCA-induced latency variation and the PV-incurred latency variation considered in a unified model, we extend the VAWA and VASA cache designs to CNFET-based NUCA; the proposed NUCA achieves both significant performance improvement and energy saving compared to a straightforward variation-aware NUCA.
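The abstract does not spell out the mechanism, but the core idea behind the VAWA/VASA designs, steering frequently used data toward the low-latency portions of a PV-affected cache, can be illustrated with a small sketch. The class below is an illustrative toy model and not the authors' implementation; the per-way latencies, hotness counters, and promotion policy are all assumptions made purely for demonstration.

```python
# Toy model of one set of a set-associative cache in which each way has a
# PV-induced access latency, and frequently accessed (hot) lines are shuffled
# into the fastest ways. Illustrative only; names and policies are assumptions.

class VariationAwareSet:
    def __init__(self, way_latencies):
        # way_latencies[i]: access latency (cycles) of way i after PV binning
        self.way_latencies = way_latencies
        self.tags = [None] * len(way_latencies)   # tag resident in each way
        self.hotness = [0] * len(way_latencies)   # access count per way

    def access(self, tag):
        """Return the latency of this access; allocate or promote on demand."""
        if tag in self.tags:
            way = self.tags.index(tag)
            self.hotness[way] += 1
            self._promote(way)
            return self.way_latencies[way]
        way = self._victim()
        self.tags[way] = tag
        self.hotness[way] = 1
        return self.way_latencies[way]

    def _victim(self):
        # Evict the least-used line, breaking ties toward the slowest way so
        # that fast ways stay reserved for hot data.
        order = sorted(range(len(self.tags)),
                       key=lambda w: (self.hotness[w], -self.way_latencies[w]))
        return order[0]

    def _promote(self, way):
        # Swap a hot line into a faster way whose resident line is colder.
        for fast in sorted(range(len(self.tags)),
                           key=lambda w: self.way_latencies[w]):
            if (self.way_latencies[fast] < self.way_latencies[way]
                    and self.hotness[fast] < self.hotness[way]):
                self.tags[fast], self.tags[way] = self.tags[way], self.tags[fast]
                self.hotness[fast], self.hotness[way] = \
                    self.hotness[way], self.hotness[fast]
                break


# Example: a 4-way set whose ways were binned into fast/slow latencies by PV.
cache_set = VariationAwareSet(way_latencies=[3, 3, 5, 7])
for addr in [0xA, 0xB, 0xA, 0xA, 0xC, 0xA]:
    cache_set.access(addr)
# After repeated hits, line 0xA ends up resident in one of the 3-cycle ways.
```

In the actual designs, the latency classes would presumably come from post-fabrication timing characterization, and the data shuffling and page mapping described in the paper would operate at coarser granularities than the per-line swap shown here.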


