Evaluating Row Buffer Locality in Future Non-Volatile Main Memories
DRAM-based main memories have read operations that destroy the read data, and as a result, must buffer large amounts of data on each array access to keep chip costs low. Unfortunately, system-level trends such as increased memory contention in multi-core architectures and data mapping schemes that improve memory parallelism may result in only a small fraction of the buffered data being accessed. This makes buffering large amounts of data on every memory array access energy-inefficient. Emerging non-volatile memories (NVMs) such as PCM, STT-RAM, and RRAM, however, do not have destructive read operations, opening up opportunities for employing small row buffers without incurring additional area penalty or design complexity. In this work, we discuss architectural changes to enable small row buffers at a low cost in NVMs. We provide a memory access protocol, energy model, and timing model to enable further system-level evaluation. We evaluate the system-level tradeoffs of employing different row buffer sizes in NVM main memories in terms of energy, performance, and endurance, under different data mapping schemes. We find that on a multi-core system, reducing the row buffer size can greatly reduce main memory dynamic energy compared to a DRAM baseline with large row buffer sizes, without greatly affecting endurance, and for some NVM technologies, leads to improved performance.
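To illustrate the tradeoff the abstract describes, the following is a minimal sketch of a first-order dynamic energy model, assuming hypothetical energy parameters (not values from the paper): a row buffer miss pays an activation cost proportional to the number of bits sensed, while a hit pays only a fixed buffer-access cost, so shrinking the row buffer lowers miss energy even if it also lowers the hit rate.

```python
# Hypothetical first-order model of average dynamic energy per memory access.
# All parameter values below are illustrative assumptions, not measured data.

def dynamic_energy_per_access(row_buffer_bytes, hit_rate,
                              activate_energy_per_byte_pj=1.0,  # assumed cost to sense one byte into the buffer
                              buffer_access_energy_pj=50.0):    # assumed fixed cost to read out of the buffer
    """Average dynamic energy (pJ) for one access, given a row buffer size and hit rate."""
    miss_rate = 1.0 - hit_rate
    activation_energy = miss_rate * activate_energy_per_byte_pj * row_buffer_bytes
    return activation_energy + buffer_access_energy_pj

# Example: shrinking the buffer from 8 KB to 512 B cuts activation energy per
# miss by 16x, so average energy can drop even if the hit rate also drops.
for size_bytes, hit_rate in [(8192, 0.5), (512, 0.3)]:
    print(f"{size_bytes} B buffer, hit rate {hit_rate}: "
          f"{dynamic_energy_per_access(size_bytes, hit_rate):.1f} pJ/access")
```

Under these assumed numbers, the smaller buffer's higher miss rate is more than offset by its much cheaper activations, which mirrors the qualitative result reported for NVM main memories.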