Object detection and recognition is a well-studied topic in computer vision that still faces many open problems. One of the main contributions of this work is a method for guiding blind people through outdoor environments with the assistance of a ZED stereo camera, a camera that can compute depth information. In this paper, we propose a deep attention network that automatically detects and recognizes objects. The objects are not limited to people and cars; they also include convenience stores and traffic lights, so that blind users can cross roads and make purchases in stores. Since public datasets for this task are limited, we also build a novel dataset from images captured by the ZED stereo camera and collected from Google Street View. When tested on images of different resolutions, our method achieves an accuracy of about 81%, outperforming the vanilla YOLO v3.
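The abstract notes that the ZED stereo camera supplies depth information. As a minimal sketch of the underlying idea (not the paper's actual pipeline or the ZED SDK), depth can be recovered from a disparity map via the standard pinhole stereo relation depth = f * B / d, where f is the focal length in pixels, B the stereo baseline in meters, and d the disparity in pixels. The function name and parameter values below are illustrative assumptions:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity map to metric depth: depth = f * B / d.

    Pixels with non-positive disparity have no valid stereo match
    and are returned as NaN.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = np.where(d > 0, focal_px * baseline_m / d, np.nan)
    return depth

# Example: with an assumed 700 px focal length and 0.12 m baseline,
# a 35 px disparity corresponds to 700 * 0.12 / 35 = 2.4 m.
depths = disparity_to_depth(np.array([35.0, 0.0, 70.0]), 700.0, 0.12)
```

A vision-assistance system would combine such per-pixel depth with the detector's bounding boxes to report how far away a detected object (e.g., a traffic light) is from the user.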