Improving Compressed Video Using Single Lightweight Model with Temporal Fusion Module †

Tien Ying Kuo, Yu Jen Wei, Po Chyi Su, Chang Hao Chao

Research output: Contribution to journal › Article › peer-review

Abstract

Video compression algorithms are commonly used to reduce the number of bits required to represent a video at a high compression ratio. However, compression can discard content details and introduce visual artifacts that degrade the overall quality of the video. We propose a learning-based restoration method that handles varying degrees of compression artifacts with a single model by predicting the difference between the original and compressed video frames to restore video quality. To achieve this, we adopted a recursive neural network with dilated convolution, which enlarges the model's receptive field while keeping the parameter count low, making it suitable for deployment on a variety of hardware devices. We also designed a temporal fusion module and integrated the color channels into the objective function, enabling the model to exploit temporal correlation and repair chromaticity artifacts. Despite handling color channels, and unlike other methods that must train a separate model for each quantization parameter (QP), our lightweight model uses only about 269 k parameters, roughly one-twelfth of the parameters used by other methods. Applied to the HEVC test model (HM), our model improves compressed video quality by an average of 0.18 dB in BD-PSNR and −5.06% in BD-BR.
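The abstract's claim that dilated convolution enlarges the receptive field without adding parameters can be illustrated with the standard stride-1 receptive-field formula. This is only a sketch: the kernel size and dilation rates below are hypothetical examples, not the configuration used in the paper.

```python
def receptive_field(kernel_size, dilations):
    """1-D receptive field of a stack of stride-1 dilated convolutions.

    Each layer with dilation d and kernel size k extends the receptive
    field by d * (k - 1) samples.
    """
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf

# Four ordinary 3x3 layers vs. four 3x3 layers with growing dilation:
# identical depth and parameter count, very different receptive field.
plain = receptive_field(3, [1, 1, 1, 1])
dilated = receptive_field(3, [1, 2, 4, 8])

print(plain, dilated)  # → 9 31
```

The exponentially growing dilation schedule shown here is a common choice for covering large spatial contexts cheaply; the paper's lightweight 269 k-parameter budget relies on this kind of trade-off.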

Original language: English
Article number: 4511
Journal: Sensors (Switzerland)
Volume: 23
Issue number: 9
DOIs
Publication status: Published - May 2023
