Global-and-local context network for semantic segmentation of street view images

Chih Yang Lin, Yi Cheng Chiu, Hui Fuang Ng, Timothy K. Shih, Kuan Hung Lin

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

Semantic segmentation of street view images is an important step in scene understanding for autonomous vehicle systems. Recent works have made significant progress in pixel-level labeling using the Fully Convolutional Network (FCN) framework and local multi-scale context information. Rich global context information is also essential in the segmentation process. However, a systematic way to utilize both global and local contextual information in a single network has not been fully investigated. In this paper, we propose a global-and-local network architecture (GLNet) which incorporates global spatial information and dense local multi-scale context information to model the relationships between objects in a scene, thus reducing segmentation errors. A channel attention module is designed to further refine the segmentation results using low-level features from the feature map. Experimental results demonstrate that our proposed GLNet achieves 80.8% test accuracy on the Cityscapes test dataset, comparing favorably with existing state-of-the-art methods.
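The abstract mentions a channel attention module that reweights features channel-wise. The paper's exact module design is not given here, so the following is only a minimal NumPy sketch of a generic squeeze-and-excitation-style channel attention gate; the function name, weight shapes, and reduction ratio are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Generic squeeze-and-excitation style channel attention (illustrative).

    features: (C, H, W) feature map
    w1: (C//r, C) bottleneck reduction weights
    w2: (C, C//r) expansion weights
    Returns the feature map reweighted per channel, same shape.
    """
    # Squeeze: global average pooling gives one descriptor per channel
    squeezed = features.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate in (0, 1)
    hidden = np.maximum(w1 @ squeezed, 0.0)          # shape (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # shape (C,)
    # Reweight each channel of the feature map by its gate value
    return features * gate[:, None, None]

# Toy usage: 8 channels, reduction ratio r = 2 (hypothetical sizes)
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((4, 8)) * 0.1
w2 = rng.standard_normal((8, 4)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

The sigmoid gate suppresses uninformative channels and emphasizes informative ones, which is the general mechanism such refinement modules rely on.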

Original language: English
Article number: 2907
Journal: Sensors (Switzerland)
Volume: 20
Issue number: 10
Publication status: Published - 2 May 2020

