Predicting Memory Compiler Performance Outputs using Feed-Forward Neural Networks

03/05/2020
by Felix Last, et al.

Typical semiconductor chips include thousands of mostly small memories. As memories contribute an estimated 25% to the power, performance and area (PPA) of a chip, memories must be designed carefully to meet the system's requirements. Memory arrays are highly uniform and can be described by approximately 10 parameters, depending mostly on the complexity of the periphery. Thus, to improve PPA utilization, memories are typically generated by memory compilers. A key task in the design flow of a chip is to find optimal memory compiler parametrizations which on the one hand fulfill system requirements while on the other hand optimize PPA. Although most compiler vendors also provide optimizers for this task, these are often slow or inaccurate. To enable efficient optimization in spite of long compiler run times, we propose training fully connected feed-forward neural networks to predict PPA outputs given a memory compiler parametrization. Using an exhaustive search-based optimizer framework which obtains neural network predictions, PPA-optimal parametrizations are found within seconds after chip designers have specified their requirements. Average model prediction errors of less than 3% and the use of the optimizer in successful, large volume chip design projects illustrate the effectiveness of the approach.
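The exhaustive-search idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration: `predict_ppa` stands in for the trained feed-forward network (here an invented analytic function, not the paper's model), the parameter names (`words`, `bits`, `mux`) and the requirement (`max_access_time`) are assumptions for the sake of the example, and the real parameter space has roughly 10 dimensions rather than 3.

```python
from itertools import product

# Hypothetical stand-in for the trained feed-forward network: maps a
# memory compiler parametrization to predicted (power, access time, area).
# The formulas are illustrative only, not taken from the paper.
def predict_ppa(params):
    words, bits, mux = params
    power = 0.01 * words * bits / mux
    access_time = 1.0 + 0.002 * words / mux
    area = words * bits * (1.0 + 0.1 * mux)
    return power, access_time, area

# Exhaustive search over the (small, discrete) parameter space: keep
# parametrizations that meet the designer's requirement, then pick the
# one with minimum predicted area (ties broken by power).
def optimize(word_opts, bit_opts, mux_opts, max_access_time):
    feasible = []
    for params in product(word_opts, bit_opts, mux_opts):
        power, t_access, area = predict_ppa(params)
        if t_access <= max_access_time:
            feasible.append((area, power, params))
    return min(feasible) if feasible else None

best = optimize(word_opts=[512, 1024, 2048],
                bit_opts=[16, 32],
                mux_opts=[4, 8, 16],
                max_access_time=1.3)
print(best)
```

Because the network evaluates in microseconds rather than the minutes a compiler run takes, enumerating the full grid of candidate parametrizations stays well within the "seconds" budget the abstract claims.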
