Improving Compressed Video Using Single Lightweight Model with Temporal Fusion Module †

Tien-Ying Kuo, Yu-Jen Wei, Po-Chyi Su, Chang-Hao Chao

Research output: Contribution to journal › Article › peer-review

Abstract

Video compression algorithms reduce the number of bits required to represent a video, but at high compression ratios this can discard content detail and introduce visual artifacts that degrade overall quality. We propose a learning-based restoration method that handles varying degrees of compression artifacts with a single model by predicting the difference between the original and compressed video frames and using it to restore video quality. To achieve this, we adopt a recursive neural network with dilated convolution, which enlarges the model's receptive field while keeping the parameter count low, making it suitable for deployment on a variety of hardware devices. We also design a temporal fusion module and integrate the color channels into the objective function, enabling the model to exploit temporal correlation and repair chromaticity artifacts. Despite handling color channels, and unlike other methods that must train a separate model for each quantization parameter (QP), our lightweight model uses only about 269 k parameters, roughly one-twelfth of the parameters required by comparable methods. Applied to the HEVC test model (HM), our model improves compressed video quality by an average of 0.18 dB in BD-PSNR and −5.06% in BD-BR.
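The abstract only outlines the approach at a high level. As a rough illustration, the following PyTorch sketch shows how the three ideas it names could fit together: residual prediction (restored frame = compressed frame + predicted difference), a weight-shared dilated-convolution block applied recursively, and a simple three-frame temporal fusion step. All layer widths, the fusion design, and the number of recursions here are assumptions for illustration; this is not the authors' 269 k-parameter architecture.

    # Minimal sketch of the abstract's ideas, not the published model.
    # Layer sizes, recursion count, and fusion design are assumptions.
    import torch
    import torch.nn as nn


    class TemporalFusion(nn.Module):
        """Fuse the previous, current, and next frames into one feature map."""

        def __init__(self, channels: int = 32):
            super().__init__()
            # 3 frames x 3 color channels stacked along the channel axis.
            self.fuse = nn.Conv2d(9, channels, kernel_size=3, padding=1)

        def forward(self, prev, curr, nxt):
            return torch.relu(self.fuse(torch.cat([prev, curr, nxt], dim=1)))


    class RecursiveDilatedRestorer(nn.Module):
        """Predict the frame residual with one weight-shared dilated block."""

        def __init__(self, channels: int = 32, recursions: int = 4):
            super().__init__()
            self.recursions = recursions
            self.fusion = TemporalFusion(channels)
            # Dilated convolutions enlarge the receptive field; applying the
            # same block recursively reuses its weights, so the parameter
            # count stays small regardless of the effective depth.
            self.block = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            )
            self.to_residual = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

        def forward(self, prev, curr, nxt):
            feat = self.fusion(prev, curr, nxt)
            for _ in range(self.recursions):  # shared weights on every pass
                feat = torch.relu(self.block(feat) + feat)
            # Restored frame = compressed frame + predicted residual, so a
            # single model can adapt to varying artifact strengths (QPs).
            return curr + self.to_residual(feat)


    if __name__ == "__main__":
        model = RecursiveDilatedRestorer()
        frames = [torch.rand(1, 3, 64, 64) for _ in range(3)]  # e.g., YUV
        restored = model(*frames)
        print(restored.shape)  # torch.Size([1, 3, 64, 64])
        print(sum(p.numel() for p in model.parameters()))  # parameter count

Two design points carry over from the abstract: weight sharing across recursions decouples the receptive field from the parameter count, and predicting a residual rather than the full frame lets one model cover a range of compression levels instead of training one model per QP.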

Original language: English
Article number: 4511
Journal: Sensors (Switzerland)
Volume: 23
Issue number: 9
DOIs
State: Published - May 2023

Keywords

  • compression artifacts removal
  • deep learning
  • video coding
