Encoder-Recurrent Decoder Network for Single Image Dehazing

An Dang, Toan H. Vu, Jia Ching Wang

Research output: Chapter in book/report/conference proceeding › Conference contribution › Peer-reviewed

2 citations (Scopus)

Abstract

This paper develops a deep learning model, called the Encoder-Recurrent Decoder Network (ERDN), which recovers a clear image from a degraded hazy image without using the atmospheric scattering model. The proposed model consists of two key components: an encoder and a decoder. The encoder is built from a residual efficient spatial pyramid (rESP) module so that it can effectively process hazy images at any resolution and extract relevant features at multiple contextual levels. The decoder contains a recurrent module that sequentially aggregates the encoded features from high levels to low levels to generate haze-free images. The network is trained end-to-end on pairs of hazy and clear images. Experimental results on the RESIDE-Standard dataset demonstrate that the proposed model achieves competitive dehazing performance compared with state-of-the-art methods in terms of PSNR and SSIM.
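For readers who want a concrete picture of the data flow described above, the following is a minimal PyTorch sketch of an encoder-recurrent-decoder layout for dehazing. Every name here (SimpleESPBlock, ERDNSketch), the number of levels, and all layer choices are illustrative assumptions made for this sketch; they are not the authors' implementation of ERDN or of the rESP module.

# Minimal PyTorch sketch of an encoder-recurrent-decoder layout for dehazing.
# Module names (SimpleESPBlock, ERDNSketch) and all hyper-parameters are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleESPBlock(nn.Module):
    """Toy stand-in for a residual spatial-pyramid block: parallel dilated
    convolutions fused by a 1x1 convolution and added back to the input."""

    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return x + F.relu(self.fuse(feats))  # residual connection


class ERDNSketch(nn.Module):
    """Encoder extracts features at several contextual levels; a shared
    recurrent update aggregates them from the highest (coarsest) level down
    to the lowest, then predicts the haze-free image."""

    def __init__(self, channels=32, levels=3):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.enc = nn.ModuleList(SimpleESPBlock(channels) for _ in range(levels))
        # One ConvGRU-like update cell shared across decoding steps.
        self.update = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, hazy):
        x = F.relu(self.stem(hazy))
        feats = []
        for block in self.enc:
            x = block(x)
            feats.append(x)
            x = F.avg_pool2d(x, 2)  # move one contextual level up
        # Decode recurrently from high (coarse) to low (fine) levels.
        h = torch.zeros_like(feats[-1])
        for f in reversed(feats):
            h = F.interpolate(h, size=f.shape[-2:], mode="bilinear",
                              align_corners=False)
            h = torch.tanh(self.update(torch.cat([h, f], dim=1)))
        return torch.sigmoid(self.out(h))  # haze-free estimate in [0, 1]


if __name__ == "__main__":
    model = ERDNSketch()
    dehazed = model(torch.rand(1, 3, 128, 128))
    print(dehazed.shape)  # torch.Size([1, 3, 128, 128])

The sketch mirrors only the high-level idea: the encoder extracts features once at several contextual levels, and a single shared recurrent update folds them into one hidden state from the coarsest level down to the finest before the haze-free image is predicted.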

Original language: English
Host publication title: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4432-4436
Number of pages: 5
ISBN (electronic): 9781509066315
DOIs
Publication status: Published - May 2020
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain
Duration: 4 May 2020 - 8 May 2020

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
2020-May
ISSN (print): 1520-6149

Conference

Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Country/Territory: Spain
City: Barcelona
Period: 4/05/20 - 8/05/20
