A distributional perspective on value function factorization methods for multi-agent reinforcement learning

Wei-Fang Sun, Cheng-Kuang Lee, Chun-Yi Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Distributional reinforcement learning (RL) has proven beneficial in the single-agent domain. However, distributional RL methods are not directly compatible with the value function factorization methods used in multi-agent reinforcement learning. This work provides a distributional perspective on value function factorization, offering a way to bridge the gap between distributional RL and value function factorization methods.
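To make the gap concrete, the following is a minimal illustrative sketch, not the authors' method: it combines per-agent quantile value estimates with a VDN-style additive factorization by summing quantiles position-wise (a comonotonicity assumption introduced here purely for illustration; the function name and shapes are hypothetical).

```python
import numpy as np

def factorized_quantiles(agent_quantiles):
    """Combine per-agent quantile estimates into joint quantiles.

    agent_quantiles: shape (n_agents, n_quantiles), each row sorted
    ascending. Position-wise summation (a comonotonic assumption,
    assumed here for illustration) yields quantile estimates of the
    additively factorized joint value.
    """
    q = np.asarray(agent_quantiles, dtype=float)
    return q.sum(axis=0)

# Two agents, four quantile estimates each.
agents = [[0.0, 1.0, 2.0, 3.0],
          [1.0, 1.5, 2.0, 2.5]]
joint = factorized_quantiles(agents)
print(joint)  # [1.  2.5 4.  5.5]
```

The sketch highlights why the combination is nontrivial: summing quantiles is only valid for the joint distribution under restrictive dependence assumptions, which is precisely the kind of mismatch a distributional perspective on factorization must address.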

Original language: English
Title of host publication: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 1659-1661
Number of pages: 3
ISBN (Electronic): 9781713832621
State: Published - 2021
Externally published: Yes
Event: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021 - Virtual, Online
Duration: 03/05/2021 to 07/05/2021

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 3
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Conference

Conference: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
City: Virtual, Online
Period: 03/05/21 to 07/05/21

Bibliographical note

Publisher Copyright:
© 2021 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

Keywords

  • Distributional RL
  • Multi-Agent RL
  • Reinforcement Learning
