Cache Where you Want! Reconciling Predictability and Coherent Caching

09/11/2019
by Ayoosh Bansal, et al.

Real-time and cyber-physical systems need to interact with and respond to their physical environment within predictable time bounds. While multicore platforms provide immense computational power and throughput, they also introduce new sources of unpredictability. Large fluctuations in the latency to access data shared between multiple cores are an important contributor to overall execution-time variability. In addition to the temporal unpredictability introduced by caching, parallel applications with data shared across multiple cores pay additional latency overheads due to data coherence. Analyzing the impact of data coherence on the worst-case execution time of real-time applications is challenging because manufacturers reveal only scarce implementation details. This paper presents application-level control for caching data at different levels of the cache hierarchy. The rationale is that by caching data only in the shared cache, it is possible to bypass the private caches; the access latency to data present in caches then becomes independent of its coherence state. We discuss the existing architectural support as well as the hardware and OS modifications required to support the proposed cacheability control. We evaluate the system on an architectural simulator and show that the worst-case execution time for a single memory write request is reduced by 52%. Benchmark evaluations show that the proposed technique has a minimal impact on average performance.
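As a rough illustration of the idea of application-level cacheability control, the sketch below shows how an application might mark a shared buffer as cacheable only in the shared last-level cache so that private caches are bypassed. The set_cacheability() call and the CACHE_LLC_ONLY flag are hypothetical placeholders for the OS interface the paper describes, not its actual API; the kernel-side behavior is stubbed out.

/*
 * Minimal sketch (hypothetical API, not the paper's actual interface):
 * an application hints to the OS that a shared buffer should be cached
 * only in the shared last-level cache, bypassing private L1/L2 caches,
 * so that its access latency does not depend on coherence state.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical cacheability hint: cache this region only in the shared LLC. */
#define CACHE_LLC_ONLY 0x1

/* Placeholder for the proposed OS interface; a real implementation would
 * update the page attributes of [addr, addr + len) through the modified
 * kernel and hardware support.  Stubbed here to keep the sketch runnable. */
static int set_cacheability(void *addr, size_t len, int hint)
{
    (void)addr; (void)len; (void)hint;
    return 0; /* assume the kernel accepted the hint */
}

int main(void)
{
    size_t len = 4096;
    /* Buffer intended to be shared between cores (e.g. via threads). */
    char *shared_buf = aligned_alloc(4096, len);
    if (!shared_buf)
        return 1;

    /* Ask the OS to cache this region only in the shared cache, so writes
     * from different cores avoid coherence transitions in private caches
     * and the write latency stays predictable. */
    if (set_cacheability(shared_buf, len, CACHE_LLC_ONLY) != 0) {
        perror("set_cacheability");
        free(shared_buf);
        return 1;
    }

    memset(shared_buf, 0, len);
    printf("shared buffer marked LLC-only (sketch)\n");
    free(shared_buf);
    return 0;
}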

