Automatic complexity reduction in reinforcement learning

Chung Cheng Chiu*, Von Wun Soo

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

Abstract

High dimensionality of the state representation is a major obstacle to scaling up reinforcement learning (RL). This work derives knowledge for complexity reduction from partial solutions and provides algorithms for automated dimension reduction in RL. We propose a cascading decomposition algorithm, based on spectral analysis of a normalized graph Laplacian, that decomposes a problem into several subproblems, and we then conduct parameter relevance analysis on each subproblem to perform dynamic state abstraction. Eliminating irrelevant parameters projects the original state space into a lower-dimensional one in which some subtasks are mapped onto the same shared subtasks. The framework can identify irrelevant parameters from performed action sequences and thus alleviates the problem of high dimensionality during learning. We evaluate the framework experimentally and show that the dimension reduction approach can indeed make some otherwise infeasible problems learnable.
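The paper's cascading decomposition algorithm is not reproduced in this record, but a minimal sketch of the spectral step the abstract describes is given below, assuming the state-transition structure is available as an undirected adjacency matrix. The function names (normalized_laplacian, spectral_bipartition) and the toy graph are illustrative assumptions, not taken from the paper.

    import numpy as np

    def normalized_laplacian(adjacency):
        # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2} (assumed form).
        degrees = adjacency.sum(axis=1)
        inv_sqrt = np.zeros_like(degrees)
        nonzero = degrees > 0
        inv_sqrt[nonzero] = 1.0 / np.sqrt(degrees[nonzero])
        d_inv_sqrt = np.diag(inv_sqrt)
        return np.eye(adjacency.shape[0]) - d_inv_sqrt @ adjacency @ d_inv_sqrt

    def spectral_bipartition(adjacency):
        # Split states into two candidate subproblems using the Fiedler vector,
        # i.e. the eigenvector of the second-smallest Laplacian eigenvalue.
        laplacian = normalized_laplacian(adjacency)
        _, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues returned in ascending order
        fiedler = eigvecs[:, 1]
        return fiedler >= 0                      # True/False marks the two partitions

    if __name__ == "__main__":
        # Toy state-transition graph: two 3-state clusters joined by a single edge.
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
        print(spectral_bipartition(A))

Applying this bipartition recursively on each resulting subgraph, in the spirit of the cascading decomposition the abstract describes, yields progressively smaller subtasks on which parameter relevance analysis could then be performed.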

Original language: English
Pages (from-to): 1-25
Number of pages: 25
Journal: Computational Intelligence
Volume: 26
Issue number: 1
DOIs
State: Published - February 2010
Externally published: Yes

Keywords

  • Complexity reduction
  • Reinforcement learning
  • Spectral graph theory
