Abstract
Semantic segmentation of street view images is an important step in scene understanding for autonomous vehicle systems. Recent works have made significant progress in pixel-level labeling using the Fully Convolutional Network (FCN) framework and local multi-scale context information. Rich global context information is also essential to the segmentation process. However, a systematic way to utilize both global and local contextual information in a single network has not been fully investigated. In this paper, we propose a global-and-local network architecture (GLNet) which incorporates global spatial information and dense local multi-scale context information to model the relationships between objects in a scene, thus reducing segmentation errors. A channel attention module is designed to further refine the segmentation results using low-level features from the feature map. Experimental results demonstrate that our proposed GLNet achieves 80.8% accuracy on the Cityscapes test set, comparing favorably with existing state-of-the-art methods.
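The channel attention idea mentioned in the abstract can be illustrated with a minimal squeeze-and-excitation-style sketch: pool each channel to a scalar, pass the pooled vector through a small bottleneck, and rescale the channels by the resulting gates. This is a hypothetical NumPy illustration with random weights standing in for learned parameters, not the paper's exact module.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """Sketch of channel attention over a feature map of shape (C, H, W).

    Hypothetical illustration: the bottleneck weights here are random
    stand-ins for what would be learned parameters in a real network.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feat.shape[0]
    # Squeeze: global average pooling collapses each channel to a scalar.
    squeezed = feat.mean(axis=(1, 2))                    # shape (C,)
    # Excite: a two-layer bottleneck produces one gate per channel.
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)              # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))          # sigmoid, in (0, 1)
    # Reweight: scale every channel of the feature map by its gate.
    return feat * gate[:, None, None]

feat = np.ones((8, 4, 4))
out = channel_attention(feat)
print(out.shape)  # (8, 4, 4)
```

In a segmentation network, such gates let low-level feature channels that correlate with object boundaries be emphasized before the final prediction, which is the refinement role the abstract assigns to the module.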
Original language | English |
---|---|
Article number | 2907 |
Journal | Sensors (Switzerland) |
Volume | 20 |
Issue number | 10 |
DOIs | |
Publication status | Published - 2 May 2020 |
Fingerprint
Dive into the research topics of 'Global-and-local context network for semantic segmentation of street view images'. Together they form a unique fingerprint.