R3-DLA (Reduce, Reuse, Recycle): A More Efficient Approach to Decoupled Look-Ahead Architectures

12/11/2018
by Sushant Kondguli, et al.

Modern societies have developed an insatiable demand for more computing capability. Exploiting implicit parallelism to provide automatic performance improvement remains a central goal in engineering future general-purpose computing systems. One approach is to use a separate thread context to perform continuous look-ahead and thereby improve the data and instruction supply to the main pipeline. Such a decoupled look-ahead (DLA) architecture can be quite effective in accelerating a broad range of applications and admits a relatively straightforward implementation. It also offers broad design flexibility, as the look-ahead agent need not be concerned with correctness constraints. In this paper, we explore a number of optimizations that make the look-ahead agent more efficient while extracting more utility from it. With these optimizations, a DLA architecture achieves an average speedup of 1.4 over a state-of-the-art microarchitecture across a broad set of benchmark suites, making it a powerful tool for enhancing single-thread performance.
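To make the idea concrete, the sketch below (not from the paper; the queue, function names, and depth parameter are all hypothetical) mimics a decoupled look-ahead design in software: a look-ahead thread runs a distilled "skeleton" of the program ahead of the main thread, touching addresses and resolving branches early so the main pipeline can consume ready-made hints instead of stalling.

```python
import threading
import queue

# Hypothetical software sketch of a decoupled look-ahead (DLA) pipeline.
# A look-ahead thread runs a distilled "skeleton" of the program ahead of
# the main thread, skipping correctness-critical work; it only resolves
# branches and touches addresses so its outcomes can serve as hints.

LOOKAHEAD_DEPTH = 64                                 # how far ahead the skeleton may run
branch_hints = queue.Queue(maxsize=LOOKAHEAD_DEPTH)  # bounded hint queue to the main pipeline

def skeleton(data):
    """Distilled program: do just enough work to resolve each branch,
    then publish the outcome as a hint."""
    for i, x in enumerate(data):
        taken = x > 0                  # resolve the branch early
        _ = data[(i * 7) % len(data)]  # touch a dependent address (stands in for a prefetch)
        branch_hints.put((i, taken))   # hand the outcome to the main thread
    branch_hints.put(None)             # sentinel: look-ahead is done

def main_pipeline(data):
    """Real computation: consumes hints instead of re-resolving branches.
    Hints are advisory only; correctness never depends on the skeleton."""
    total = 0
    while (hint := branch_hints.get()) is not None:
        i, taken = hint
        if taken:                      # branch direction already resolved ahead of time
            total += data[i]
    return total

data = [3, -1, 4, -1, 5, -9, 2, 6]
lookahead = threading.Thread(target=skeleton, args=(data,))
lookahead.start()
print(main_pipeline(data))             # sums the positive elements: 20
lookahead.join()
```

The key property this toy example illustrates is that the skeleton's hints are purely advisory: the main thread's result never depends on them, which is what gives the look-ahead agent its freedom from correctness constraints.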
