Abstract
In this study, we propose an assistive system for helping visually impaired people walk outdoors. The system comprises an embedded computer, the Jetson AGX Xavier (manufactured by Nvidia, Santa Clara, CA, USA), and a binocular depth camera, the ZED 2 (manufactured by Stereolabs, San Francisco, CA, USA). Based on the convolutional neural network Fast-SCNN and the depth map obtained from the ZED 2, the image of the environment in front of the user is split into seven equal divisions. A walkability confidence value is computed for each division, and a voice prompt guides the user toward the most appropriate direction, so that the visually impaired user can follow a safe path on the sidewalk, avoid obstacles, and cross the road safely on the crosswalk. Furthermore, obstacles in front of the user are identified with the YOLOv5s network proposed by Jocher et al. Finally, we provided the proposed assistive system to a visually impaired person and conducted experiments around an MRT station in Taiwan. The visually impaired person indicated that the proposed system helped him feel safer when walking outdoors. The experiments also verified that the system could effectively guide him to walk safely on sidewalks and crosswalks.
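The abstract describes the seven-division guidance step but not the exact scoring formula. The following is a minimal Python sketch of that idea, assuming a hypothetical confidence defined as the fraction of walkable pixels in each vertical division weighted by the division's mean depth clearance; the function name `choose_direction` and the `max_range_m` parameter are illustrative, not taken from the paper.

```python
import numpy as np

def choose_direction(walkable_mask, depth, num_divisions=7, max_range_m=10.0):
    """Score each vertical division of the frame and pick the safest one.

    walkable_mask: H x W boolean array, True where the semantic segmentation
        (Fast-SCNN in the paper) labels a pixel as walkable.
    depth: H x W array of metric depths from the stereo camera (ZED 2).
    Returns (best_index, scores); index 0 is the leftmost division and
    num_divisions // 2 is straight ahead.
    """
    h, w = walkable_mask.shape
    scores = []
    for i in range(num_divisions):
        lo, hi = i * w // num_divisions, (i + 1) * w // num_divisions
        mask_col = walkable_mask[:, lo:hi]
        depth_col = np.nan_to_num(depth[:, lo:hi], nan=0.0)
        # Hypothetical confidence: fraction of walkable pixels, scaled by the
        # mean clearance (nearer obstacles pull the score down).
        walkable_ratio = mask_col.mean()
        clearance = np.clip(depth_col, 0.0, max_range_m).mean() / max_range_m
        scores.append(walkable_ratio * clearance)
    return int(np.argmax(scores)), scores


# Example: fake a frame where only the center-right divisions are walkable.
if __name__ == "__main__":
    mask = np.zeros((480, 640), dtype=bool)
    mask[:, 320:560] = True                  # walkable band
    depth = np.full((480, 640), 8.0)         # open space about 8 m away
    best, scores = choose_direction(mask, depth)
    print(best, [round(s, 2) for s in scores])
```

In a full pipeline, the chosen index would presumably be mapped to a voice prompt (for example, "veer left", "straight ahead", "veer right"); the paper itself plays such prompts to the user.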
| Original language | English |
|---|---|
| Article number | 10026 |
| Journal | Applied Sciences (Switzerland) |
| Volume | 11 |
| Issue number | 21 |
| DOIs | |
| State | Published - 1 Nov 2021 |
Keywords
- Deep learning
- Depth map
- Obstacle avoidance
- Semantic segmentation
- Visually impaired people
- Wearable device