SOM-based fuzzy systems for Q-learning in continuous state and action space

Mu Chun Su, Lu Yu Chen, De Yuan Huang

Research output: Contribution to journal › Review article › peer-review

Abstract

Q-learning is a popular approach to reinforcement learning. It is widely applied to problems with discrete states and actions and is usually implemented with a look-up table in which each entry corresponds to a state-action pair. The look-up table implementation fails in problems with continuous state and action spaces because an exhaustive enumeration of all state-action pairs is impossible. In this paper, we propose a SOM-based fuzzy system to implement Q-learning for problems with continuous state and action spaces. Simulations of robot navigation demonstrate the effectiveness of the proposed approach. To accelerate learning, we also propose a hybrid approach that integrates the ideas of hierarchical learning and progressive learning: a complex task is decomposed into simple elementary tasks, and a simple coordination mechanism then combines the learned elementary skills to achieve the final goal.
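The core idea the abstract describes, as I read it, pairs a SOM quantization of the continuous state space with fuzzy blending of per-node Q-values. Below is a minimal Python sketch of that combination; the Gaussian membership function, the class name SOMFuzzyQ, and all hyperparameters are illustrative assumptions of mine, not details taken from the paper, whose actual fuzzy system and continuous-action handling will differ.

```python
import numpy as np

class SOMFuzzyQ:
    """Sketch: SOM nodes quantize a continuous state space; Gaussian
    fuzzy memberships blend per-node Q-values over discrete actions."""

    def __init__(self, n_nodes, state_dim, n_actions,
                 alpha=0.1, gamma=0.95, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # SOM prototype vectors; in practice these would be trained
        # on observed states, here they are randomly initialized.
        self.prototypes = rng.uniform(-1.0, 1.0, (n_nodes, state_dim))
        self.q = np.zeros((n_nodes, n_actions))  # per-node Q-table
        self.alpha, self.gamma, self.sigma = alpha, gamma, sigma

    def memberships(self, state):
        # Gaussian fuzzy membership of the state in each SOM node,
        # normalized so the degrees sum to one.
        d2 = np.sum((self.prototypes - state) ** 2, axis=1)
        mu = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return mu / mu.sum()

    def q_values(self, state):
        # Q(s, .) is the membership-weighted blend of node Q-values,
        # giving a smooth function over the continuous state space.
        return self.memberships(state) @ self.q

    def update(self, state, action, reward, next_state):
        # Standard Q-learning target; the TD error is distributed
        # back to the nodes in proportion to their memberships.
        mu = self.memberships(state)
        target = reward + self.gamma * np.max(self.q_values(next_state))
        td_error = target - mu @ self.q[:, action]
        self.q[:, action] += self.alpha * td_error * mu

# Usage sketch: greedy action selection and one learning step.
agent = SOMFuzzyQ(n_nodes=64, state_dim=2, n_actions=4)
s = np.array([0.2, -0.5])
a = int(np.argmax(agent.q_values(s)))
agent.update(s, a, reward=1.0, next_state=np.array([0.25, -0.45]))
```

For continuous actions, the same memberships could blend per-node action prototypes instead of selecting from a fixed discrete set; the sketch keeps a discrete action set for brevity.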

Original language: English
Pages (from-to): 2772-2777
Number of pages: 6
Journal: WSEAS Transactions on Computers
Volume: 5
Issue number: 11
State: Published - Nov 2006

Keywords

  • Actor-critic learning
  • Q-learning
  • Reinforcement learning
  • Robot navigation
  • Task decomposition
