Transformer Network-based Reinforcement Learning Method for Power Distribution Network (PDN) Optimization of High Bandwidth Memory (HBM)

03/29/2022
by HyunWook Park, et al.

In this article, we propose, for the first time, a transformer network-based reinforcement learning (RL) method for power distribution network (PDN) optimization of high bandwidth memory (HBM). The proposed method provides an optimal decoupling capacitor (decap) design that maximizes the reduction of PDN self- and transfer impedance seen at multiple ports. An attention-based transformer network is implemented to directly parameterize the decap optimization policy. Optimality is significantly improved because the attention mechanism has the expressive power to explore the massive combinatorial space of decap assignments, and it captures the sequential relationships between assignments. The computing time for optimization is dramatically reduced because the network is reusable across positions of probing ports and decap assignment candidates: a context embedding process captures meta-features, including probing port positions. In addition, the network is trained on randomly generated data sets, so the trained network can solve new decap optimization problems without additional training. Training time and data cost are greatly reduced by the scalability of the network: thanks to its shared-weight property, it can adapt to larger problems without additional training. For verification, we compare the results with a conventional genetic algorithm (GA), random search (RS), and all previous RL-based methods. The proposed method outperforms these in all of the following aspects: optimality, computing time, and data efficiency.
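To illustrate the core idea of sequential, attention-based decap assignment, the toy sketch below decodes a decap placement one port at a time: a context query attends over candidate-port embeddings, already-assigned ports are masked out, and the query is updated after each pick. This is only a minimal illustration under assumed names (`select_decap_sequence`, the simple additive query update, and the random embeddings are all hypothetical); it uses untrained, hand-rolled attention and is not the authors' trained transformer network.

```python
import math
import random


def softmax(xs):
    # numerically stable softmax; masked entries (-inf) get probability 0
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def attention_scores(query, keys):
    # scaled dot-product attention score of one query against each key
    d = len(query)
    return [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
            for key in keys]


def select_decap_sequence(port_embeddings, context, num_decaps):
    """Greedily pick decap ports one at a time, masking used ports,
    mimicking the sequential decoding of an attention-based policy."""
    chosen = []
    query = list(context)
    for _ in range(num_decaps):
        scores = attention_scores(query, port_embeddings)
        for i in chosen:                      # mask already-assigned ports
            scores[i] = float("-inf")
        probs = softmax(scores)
        best = max(range(len(probs)), key=lambda i: probs[i])
        chosen.append(best)
        # fold the chosen port's embedding into the query (toy update rule)
        query = [q + e for q, e in zip(query, port_embeddings[best])]
    return chosen


if __name__ == "__main__":
    random.seed(0)
    # 10 candidate ports with random 4-dim embeddings (placeholder features)
    ports = [[random.random() for _ in range(4)] for _ in range(10)]
    context = [0.1, 0.1, 0.1, 0.1]            # stand-in for probing-port context
    print(select_decap_sequence(ports, context, 3))
```

In the actual method, the embeddings and attention weights are learned with RL so that the selected sequence maximizes the impedance reduction; here the masking step is what guarantees each port is assigned at most once.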


