Abstract
Macro placement plays a crucial role in physical design, directly influencing power, performance, and area. Although existing optimization-based and learning-based approaches have made progress, they still face significant challenges in exploration efficiency and optimization capability. In this paper, we introduce an enhanced reinforcement learning (RL) framework integrated with Monte Carlo Tree Search (MCTS). A dynamic node-selection strategy optimizes frontier nodes in MCTS for macro placement, while a curiosity-driven exploration mechanism generates intrinsic rewards that improve the efficiency of exploring diverse placement solutions. In addition, prioritized experience replay focuses learning on key placement states, further improving optimization performance. Experimental results on the ISPD2005 benchmark show that our method achieves shorter placement wirelength than recent state-of-the-art methods.
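The abstract lists prioritized experience replay among the framework's components. As a rough illustration of that general mechanism only (this is a generic sketch of proportional prioritized replay, not the paper's implementation; the class and field names are hypothetical), a minimal buffer might look like:

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities bias sampling (0 = uniform)
        self.buffer = []          # stored transitions, e.g. (state, action, reward)
        self.priorities = []      # one priority per stored transition
        self.pos = 0              # next slot to overwrite once full

    def push(self, transition, priority=1.0):
        # Append until capacity, then overwrite oldest entries in a ring.
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sampling probability proportional to priority ** alpha.
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idxs], idxs

    def update_priorities(self, idxs, new_priorities):
        # Called after a learning step to refresh priorities of sampled items.
        for i, p in zip(idxs, new_priorities):
            self.priorities[i] = p
```

In a placement setting, the priority of a transition would typically reflect how surprising or valuable the corresponding placement state was (for instance, a TD error or wirelength improvement), so that key states are revisited more often during training.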
| Original language | English |
|---|---|
| Title of host publication | GLSVLSI 2025 - Proceedings of the Great Lakes Symposium on VLSI 2025 |
| Publisher | Association for Computing Machinery |
| Pages | 764-769 |
| Number of pages | 6 |
| ISBN (Electronic) | 9798400714962 |
| DOIs | |
| State | Published - 29 Jun 2025 |
| Event | 35th Edition of the Great Lakes Symposium on VLSI 2025, GLSVLSI 2025 - New Orleans, United States. Duration: 30 Jun 2025 → 2 Jul 2025 |
Publication series
| Name | Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI |
|---|
Conference
| Conference | 35th Edition of the Great Lakes Symposium on VLSI 2025, GLSVLSI 2025 |
|---|---|
| Country/Territory | United States |
| City | New Orleans |
| Period | 30/06/25 → 02/07/25 |
Bibliographical note
Publisher Copyright: © 2025 Copyright held by the owner/author(s).
Keywords
- Macro Placement
- Monte Carlo Tree Search
- Reinforcement Learning