SOM-based fuzzy systems for Q-learning in continuous state and action space

Mu Chun Su, Lu Yu Chen, De Yuan Huang

Research output: Contribution to journal › Review article › peer-review

Abstract

Q-learning is a popular approach to reinforcement learning. It is widely applied to problems with discrete states and actions, and it is usually implemented as a look-up table in which each entry corresponds to a state-action pair. The look-up table implementation of Q-learning fails in problems with continuous state and action spaces because an exhaustive enumeration of all state-action pairs is impossible. In this paper, we propose to use a SOM-based fuzzy system to implement Q-learning for problems with continuous state and action spaces. Simulations of robot navigation are used to demonstrate the effectiveness of the proposed approach. To accelerate the learning procedure, we also propose a hybrid approach that integrates the ideas of hierarchical learning and progressive learning to decompose a complex task into simple elementary tasks, and then uses a simple coordination mechanism to combine the elementary skills and achieve the final goal.
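The abstract does not include code; as a rough illustration of the contrast it draws, the sketch below (assumed names, dimensions, and learning parameters; not the authors' implementation) shows the standard tabular Q-learning update next to a simplified SOM-style generalisation in which each unit stores a continuous state prototype and a Q-value that stand in for a table cell.

```python
import numpy as np

# Tabular Q-learning: one entry per (state, action) pair.
# Feasible only when states and actions are discrete and enumerable.
n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def tabular_update(s, a, r, s_next):
    """Standard Q-learning backup for a discrete state-action pair."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Simplified SOM-style generalisation for continuous states (illustrative only):
# each unit holds a state prototype, a continuous action, and a Q-value;
# the winning (closest) unit plays the role of a table cell.
class SomQUnit:
    def __init__(self, dim_state, dim_action, rng):
        self.w = rng.uniform(-1, 1, dim_state)   # state prototype
        self.a = rng.uniform(-1, 1, dim_action)  # stored continuous action
        self.q = 0.0                             # estimated Q-value

def winner(units, s):
    """Return the unit whose state prototype is closest to s."""
    return min(units, key=lambda u: np.linalg.norm(u.w - s))

def som_q_update(units, s, r, s_next):
    """Update the winning unit's Q-value toward the bootstrapped target."""
    u = winner(units, s)
    target = r + gamma * winner(units, s_next).q
    u.q += alpha * (target - u.q)
    u.w += alpha * (s - u.w)  # move the prototype toward the visited state
```

In the paper's setting the units would form part of a SOM-based fuzzy system rather than a bare winner-take-all map; the sketch only illustrates how continuous states can be handled by prototypes instead of an exhaustive table.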

Original language: English
Pages (from - to): 2772-2777
Number of pages: 6
Journal: WSEAS Transactions on Computers
Volume: 5
Issue number: 11
Publication status: Published - Nov. 2006
