Adaptive submodular inverse reinforcement learning for spatial search and map exploration

Ji Jie Wu, Kuo Shih Tseng

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Finding optimal paths for spatial search and map exploration problems is NP-hard. Since spatial search and environmental exploration are central human activities, learning human behavior from data is one way to solve these problems. Utilizing the adaptive submodularity of the two problems, this research proposes an adaptive submodular inverse reinforcement learning (ASIRL) algorithm to learn human behavior. The ASIRL approach learns the reward functions in the Fourier domain and then recovers them in the spatial domain. Near-optimal paths can then be computed from the learned reward functions. The experiments demonstrate that ASIRL outperforms state-of-the-art approaches (e.g., REWARDAGG and QVALAGG).
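The sketch below is a minimal, hypothetical illustration of the pipeline the abstract describes, not the authors' implementation: a reward map is fit with a small set of Fourier-type (cosine) basis coefficients from demonstrated search locations, recovered in the spatial domain, and then a greedy path is planned on the learned reward, the style of policy for which adaptive submodularity yields near-optimality guarantees. All function and variable names are illustrative assumptions.

```python
import numpy as np

def learn_reward_fourier(demo_cells, grid_shape, n_freq=4, lam=0.1):
    """Hypothetical sketch: fit low-frequency cosine (Fourier-type) coefficients
    of a reward map from cells visited in human demonstrations, via
    ridge-regularized least squares."""
    H, W = grid_shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    basis = [np.cos(np.pi * u * ys / H) * np.cos(np.pi * v * xs / W)
             for u in range(n_freq) for v in range(n_freq)]
    Phi = np.stack([b.ravel() for b in basis], axis=1)          # (H*W, K)
    y = np.zeros(H * W)                                          # 1 at visited cells
    y[[r * W + c for r, c in demo_cells]] = 1.0
    coeffs = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return coeffs, Phi

def recover_spatial_reward(coeffs, Phi, grid_shape):
    """Recover the reward map in the spatial domain from the learned coefficients."""
    return (Phi @ coeffs).reshape(grid_shape)

def greedy_path(reward, start, budget):
    """Greedy next-cell selection on the learned reward; adaptive submodularity
    is what justifies this greedy style of planning with near-optimal guarantees."""
    H, W = reward.shape
    path, visited = [start], {start}
    r, c = start
    for _ in range(budget):
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < H and 0 <= c + dc < W and (r + dr, c + dc) not in visited]
        if not nbrs:
            break
        r, c = max(nbrs, key=lambda p: reward[p])   # pick the highest-reward neighbor
        path.append((r, c))
        visited.add((r, c))
    return path

# Toy usage on an 8x8 grid with a few demonstrated cells (illustrative only).
coeffs, Phi = learn_reward_fourier([(2, 3), (2, 4), (3, 4)], (8, 8))
reward = recover_spatial_reward(coeffs, Phi, (8, 8))
print(greedy_path(reward, (0, 0), budget=10))
```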

Original language: English
Pages (from-to): 321-347
Number of pages: 27
Journal: Autonomous Robots
Volume: 46
Issue number: 2
DOIs
Publication status: Published - Feb 2022
