Motion perception based adaptive quantization for video coding

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A visual measure for the purpose of video compression is proposed in this paper. The novelty of the proposed scheme lies in combining three human perception models: a motion attention model, an eye-movement-based spatiotemporal visual sensitivity function, and a visual masking model. With the aid of the spatiotemporal visual sensitivity function, the visual sensitivities to DCT coefficients in less attended macroblocks are evaluated. Spatiotemporal distortion masking measures at the macroblock level are then estimated from the visual masking thresholds of the DCT coefficients with low sensitivities. Accordingly, macroblocks that can hide more distortion are assigned larger quantization parameters. Experiments conducted on an H.264 codec demonstrate that the scheme effectively improves coding efficiency without degrading picture quality.
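The following is a minimal sketch, not the authors' implementation, of the general idea the abstract describes: mapping a per-macroblock spatiotemporal masking measure to a quantization parameter (QP) offset, so that strongly masked macroblocks receive larger QPs. The function name, the masking values, and the offset range are illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): assign larger QPs to
# macroblocks whose estimated masking measure suggests they can hide
# more distortion. Inputs here are hypothetical.

def adaptive_qp(base_qp, masking, qp_range=6, max_qp=51):
    """Map per-macroblock masking measures to adjusted QP values.

    base_qp  -- frame-level QP chosen by the rate controller
    masking  -- per-macroblock masking measures (higher = more distortion
                can be hidden), e.g. derived from masking thresholds of
                low-sensitivity DCT coefficients
    qp_range -- maximum QP increase allowed for strongly masked blocks
    max_qp   -- upper QP limit in H.264
    """
    lo, hi = min(masking), max(masking)
    span = (hi - lo) or 1.0
    qps = []
    for m in masking:
        # Normalize the masking measure to [0, 1] and scale it to a QP offset.
        offset = round(qp_range * (m - lo) / span)
        qps.append(min(base_qp + offset, max_qp))
    return qps


if __name__ == "__main__":
    # Toy example: four macroblocks with increasing masking measures.
    print(adaptive_qp(base_qp=28, masking=[0.1, 0.4, 0.7, 0.9]))
```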

Original language: English
Title of host publication: Advances in Multimedia Information Processing - PCM 2005 - 6th Pacific Rim Conference on Multimedia, Proceedings
Pages: 132-143
Number of pages: 12
DOIs
State: Published - 2005
Event: 6th Pacific Rim Conference on Multimedia - Advances in Multimedia Information Processing - PCM 2005 - Jeju Island, Korea, Republic of
Duration: 13 Nov 2005 - 16 Nov 2005

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3767 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 6th Pacific Rim Conference on Multimedia - Advances in Multimedia Information Processing - PCM 2005
Country/Territory: Korea, Republic of
City: Jeju Island
Period: 13/11/05 - 16/11/05
