Adaptive submodular inverse reinforcement learning for spatial search and map exploration
Abstract
Finding optimal paths for spatial search and map exploration problems is NP-hard. Since spatial search and environmental exploration are central human activities, learning human behavior from data offers a way to solve these problems. Exploiting the adaptive submodularity of the two problems, this research proposes an adaptive submodular inverse reinforcement learning (ASIRL) algorithm to learn human behavior. ASIRL learns the reward functions in the Fourier domain and then recovers them in the spatial domain. A near-optimal path can then be computed from the learned reward functions. Experiments demonstrate that ASIRL outperforms state-of-the-art approaches (e.g., REWARDAGG and QVALAGG).
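The pipeline described in the abstract can be sketched in miniature. This is not the authors' implementation: the grid size, the DCT basis, the orthogonal-matching-pursuit recovery, and the neighbour-coverage objective below are all illustrative assumptions. It only shows the shape of the idea: a reward that is sparse in a frequency-domain basis is recovered from a few linear measurements (compressed sensing), and a greedy rule, which is near-optimal for adaptive submodular objectives, then selects cells to visit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 64, 3, 40           # grid cells, frequency-domain sparsity, measurements

# Orthonormal DCT-II basis (a stand-in for the Fourier basis): r = B @ theta.
i, j = np.arange(n), np.arange(n)
B = np.cos(np.pi * np.outer(2 * i + 1, j) / (2 * n))
B /= np.linalg.norm(B, axis=0)

# Ground-truth reward: k nonzero frequency-domain coefficients.
theta = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
theta[support] = rng.uniform(1.0, 2.0, size=k)
r = B @ theta                 # spatial-domain reward

# Random linear measurements, standing in for information from demonstrations.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ r
A = Phi @ B                   # sensing matrix seen in the frequency domain

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse coefficient vector."""
    residual, idx = y.copy(), []
    for _ in range(k):
        best = int(np.argmax(np.abs(A.T @ residual)))
        if best not in idx:
            idx.append(best)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

theta_hat = omp(A, y, k)
reward_hat = B @ theta_hat    # reward recovered in the spatial domain

# Greedy selection: visiting a cell covers it and its two neighbours; the
# covered-reward objective is submodular, so greedy is near-optimal.
budget, covered, path = 5, np.zeros(n, dtype=bool), []
for _ in range(budget):
    gains = [sum(reward_hat[c] for c in (cell - 1, cell, cell + 1)
                 if 0 <= c < n and not covered[c]) for cell in range(n)]
    best = int(np.argmax(gains))
    path.append(best)
    covered[max(best - 1, 0):best + 2] = True
```

With enough random measurements relative to the sparsity, the recovery step reproduces the reward almost exactly, and the greedy loop returns a budget-length sequence of high-gain cells.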
Original language | English |
---|---|
Pages (from-to) | 321-347 |
Number of pages | 27 |
Journal | Autonomous Robots |
Volume | 46 |
Issue number | 2 |
DOIs | |
State | Published - Feb 2022 |
Keywords
- Adaptive submodularity
- Compressed sensing
- Inverse reinforcement learning
- Map exploration
- Spatial search
Projects
- Deep Inverse Reinforcement Learning for Informative Path Planning (3/3)
  1/08/21 → 31/07/22
  Project: Research (Finished)