Outdoor walking guide for the visually-impaired people based on semantic segmentation and depth map

I. Hsuan Hsieh, Hsiao Chu Cheng, Hao Hsiang Ke, Hsiang Chieh Chen, Wen June Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Scopus citations

Abstract

In this study, we propose a wearable guiding system for leading visually-impaired people when walking outdoors. The system comprises an embedded computer (the Nvidia Jetson AGX Xavier) and an RGB-D binocular depth camera (the Stereolabs ZED2). Using a deep-learning image segmentation model together with the depth map obtained from the ZED2, the image in front of the user is divided into seven divisions, and each division is assigned a walkability confidence computed by our specific methods. Based on these confidences, the most suitable walking direction is selected and voice prompts are played to lead the user forward on the sidewalk, or along a crosswalk to cross the road safely. An experiment is performed to verify the effectiveness of the proposed system.
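The abstract does not disclose the exact scoring formula, but the pipeline it describes (split the frame into seven vertical divisions, score each one for walkability from the segmentation mask and depth map, then pick the best direction) can be sketched as follows. The class IDs, the obstacle clearance threshold, and the score (walkable-pixel ratio discounted by nearby obstacles) are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

# Assumed segmentation class IDs for walkable surfaces (e.g. sidewalk, crosswalk).
WALKABLE_CLASSES = [0, 1]

def walkability_confidence(seg, depth, n_divisions=7, min_clearance=1.5):
    """Split the frame into vertical divisions and score each one.

    seg   : (H, W) int array of per-pixel class IDs from the segmentation model
    depth : (H, W) float array of metric depth (metres) from the depth camera
    Returns an array of n_divisions confidence scores in [0, 1].
    """
    _, w = seg.shape
    scores = np.zeros(n_divisions)
    for i, cols in enumerate(np.array_split(np.arange(w), n_divisions)):
        region_seg = seg[:, cols]
        region_depth = depth[:, cols]
        # Fraction of pixels belonging to a walkable class.
        walkable = np.isin(region_seg, WALKABLE_CLASSES).mean()
        # Discount divisions with nearby obstacles (depth below clearance).
        clear = (region_depth > min_clearance).mean()
        scores[i] = walkable * clear
    return scores

def pick_direction(scores):
    """Return the index of the most walkable division (0 = far left)."""
    return int(np.argmax(scores))
```

In practice the chosen division index would then be mapped to a spoken prompt (e.g. "veer slightly left") played to the user.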

Original language: English
Title of host publication: Proceedings - 2020 International Conference on Pervasive Artificial Intelligence, ICPAI 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 144-147
Number of pages: 4
ISBN (Electronic): 9781665404839
DOIs
State: Published - Dec 2020
Event: 1st International Conference on Pervasive Artificial Intelligence, ICPAI 2020 - Taipei, Taiwan
Duration: 3 Dec 2020 - 5 Dec 2020

Publication series

Name: Proceedings - 2020 International Conference on Pervasive Artificial Intelligence, ICPAI 2020

Conference

Conference: 1st International Conference on Pervasive Artificial Intelligence, ICPAI 2020
Country/Territory: Taiwan
City: Taipei
Period: 3/12/20 - 5/12/20

Keywords

  • deep learning
  • depth map
  • obstacle avoidance
  • semantic segmentation
  • visually-impaired people
  • wearable device
