Abstract
Finding optimal paths for spatial search and map exploration problems is NP-hard. Since spatial search and environmental exploration are central human activities, learning human behavior from data is one way to solve these problems. Exploiting the adaptive submodularity of the two problems, this research proposes an adaptive submodular inverse reinforcement learning (ASIRL) algorithm to learn human behavior. The ASIRL approach learns reward functions in the Fourier domain and then recovers them in the spatial domain; a near-optimal path can then be computed from the learned reward functions. Experiments demonstrate that ASIRL outperforms state-of-the-art approaches (e.g., REWARDAGG and QVALAGG).
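The abstract describes two steps: recovering a spatial reward map from Fourier-domain coefficients, and planning a path greedily, which is near-optimal when the objective is adaptive submodular. The record contains no code; the Python sketch below only illustrates that general pipeline under assumed names (`recover_reward_map`, `greedy_path`, and `fourier_coeffs` are hypothetical and are not the published ASIRL implementation).

```python
import numpy as np

def recover_reward_map(fourier_coeffs):
    """Map learned Fourier-domain coefficients back to a spatial reward grid.

    `fourier_coeffs` stands in for whatever coefficients an IRL step produces
    in the Fourier domain; the inverse 2-D FFT recovers a spatial reward map.
    """
    return np.real(np.fft.ifft2(fourier_coeffs))

def greedy_path(reward_map, start, budget):
    """Extend a path by repeatedly moving to the 4-neighbour with the highest
    remaining reward. Greedy selection of this kind carries near-optimality
    guarantees when the underlying objective is adaptive submodular."""
    h, w = reward_map.shape
    remaining = reward_map.copy()
    path = [start]
    r, c = start
    for _ in range(budget):
        remaining[r, c] = -np.inf  # do not revisit the current cell
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < h and 0 <= c + dc < w]
        if not neighbours:
            break
        r, c = max(neighbours, key=lambda rc: remaining[rc])
        path.append((r, c))
    return path

# Toy usage on a random 16x16 grid of coefficients.
coeffs = np.fft.fft2(np.random.rand(16, 16))
rewards = recover_reward_map(coeffs)
print(greedy_path(rewards, start=(0, 0), budget=10))
```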
Original language | English |
---|---|
Pages (from-to) | 321-347 |
Number of pages | 27 |
Journal | Autonomous Robots |
Volume | 46 |
Issue number | 2 |
DOIs | |
Publication status | Published - Feb 2022 |