Monocular vision-based obstacle detection and avoidance for a multicopter

Research output: Contribution to journal › Article › peer-review


Abstract

This article presents a monocular vision-based algorithm for detecting obstacles and identifying obstacle-aware regions, developed for collision avoidance on a multicopter. The first step of the algorithm predicts a disparity image from a single-view image using a deep encoder-decoder network. Every pixel in this disparity prediction is then categorized into one of three classes (obstacle, road, or obstacle-free) by combining V-disparity analysis with a fuzzy inference system. For pixels belonging to obstacles, obstacle-aware regions are generated within the field of visual perception. To accommodate the safety margins of a multicopter, intermediate waypoints are then added to obtain a new flyable path that passes safely through an unknown environment. Experimental results verified the effectiveness of obstacle detection and obstacle-aware region identification. The accuracies of disparity prediction and monocular depth estimation were quantitatively compared to support the feasibility of monocular vision for obstacle avoidance. Furthermore, the entire algorithm was tested successfully on a robotic platform, autonomously flying a hexacopter in an outdoor space containing obstacles. In conclusion, the proposed monocular algorithm performs well for obstacle detection and depth estimation and is a potential alternative to a binocular solution.
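To make the pipeline concrete, the sketch below reconstructs the middle stage of the algorithm in minimal form. It is an illustration, not the authors' implementation: `predict_disparity` is a hypothetical stand-in for their encoder-decoder network, the ground line is assumed to have already been fitted in V-disparity space (e.g., by a Hough transform), and hard tolerances replace the paper's fuzzy inference system.

```python
# Illustrative sketch of the abstract's pipeline:
# (1) single-image disparity prediction, (2) V-disparity pixel
# classification, (3) a crude stand-in for obstacle-aware waypoint steering.
# All names and thresholds here are assumptions, not the paper's values.
import numpy as np

def predict_disparity(image: np.ndarray) -> np.ndarray:
    """Placeholder for the deep encoder-decoder disparity network.
    Should return a per-pixel disparity map matching `image` in height/width."""
    raise NotImplementedError("load a trained monocular disparity model here")

def v_disparity(disp: np.ndarray, max_d: int = 128) -> np.ndarray:
    """Accumulate a V-disparity histogram: one disparity histogram per image row.
    A flat ground surface shows up as a slanted line; vertical obstacles show
    up as near-vertical line segments."""
    h, _ = disp.shape
    vd = np.zeros((h, max_d), dtype=np.int32)
    for row in range(h):
        d = np.clip(disp[row].astype(int), 0, max_d - 1)
        vd[row] += np.bincount(d, minlength=max_d)
    return vd

def classify_pixels(disp, road_line, tol=2.0, min_obstacle_d=8):
    """Label each pixel 0 = obstacle-free, 1 = road, 2 = obstacle.
    `road_line` = (slope, intercept) of the ground correlation line fitted in
    V-disparity space. The paper uses a fuzzy inference system here; this
    sketch substitutes hard thresholds for brevity."""
    h, w = disp.shape
    rows = np.arange(h).reshape(-1, 1)
    expected_road = road_line[0] * rows + road_line[1]  # ground disparity per row
    labels = np.zeros((h, w), dtype=np.uint8)
    labels[np.abs(disp - expected_road) <= tol] = 1     # near the ground line: road
    # disparity well above the ground line means the pixel is closer than the
    # ground plane at that row, i.e., it belongs to an obstacle
    labels[(disp - expected_road > tol) & (disp >= min_obstacle_d)] = 2
    return labels

def free_columns(labels: np.ndarray) -> np.ndarray:
    """Return image columns containing no obstacle pixels: candidate headings
    for an intermediate waypoint (a 2-D stand-in for the paper's 3-D
    obstacle-aware regions with safety margins)."""
    return np.where(~(labels == 2).any(axis=0))[0]
```

In the paper itself, class boundaries come from the fuzzy inference system rather than fixed tolerances, and waypoints are inserted along the flight path with explicit safety margins; the column scan above only indicates where such a waypoint heading could be steered.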

Original language: English
Article number: 8903271
Pages (from-to): 167869-167883
Number of pages: 15
Journal: IEEE Access
Volume: 7
DOIs
State: Published - 2019

Keywords

  • Collision avoidance
  • machine learning
  • monocular depth estimation
  • multicopter
  • obstacle detection
  • robot vision systems
  • unmanned aerial vehicles
