Automatic dynamic texture transformation based on a new motion coherence metric

Kanoksak Wattanachote, Timothy K. Shih

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Changing the appearance of a dynamic texture can create new looks in both the motion and the color of a video. Dynamic textures with sophisticated shape and motion are difficult to represent with physical models and difficult to predict, especially when transforming them into a new motion texture. We propose a dynamic texture transformation algorithm for video sequences based on the motion coherence of patches, and we have successfully applied the technique to many special-effect videos using an interactive tool we developed. In this paper, we address 3-D patch creation, motion coherence analysis, and patch matching for dynamic texture transformation. The main contribution is twofold. First, we propose a new metric for evaluating motion coherence, validated by extensive tests showing that it agrees closely with human visual perception. Second, the proposed algorithm automates the transformation: users only segment the textures on the first frame, with an optional threshold to identify the texture area, and the rest of the process is completed automatically. The experimental results show that the motion coherence index effectively identifies coherent motion regions for patch matching and transformation. The experimental results, test data, source code, and system demonstration videos are posted at http://video.minelab.tw/DTT/index.html.
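
The abstract outlines a pipeline of dense motion estimation, 3-D (spatio-temporal) patch creation, and per-patch coherence scoring, but it does not reproduce the paper's actual metric. The Python sketch below is therefore illustrative only: the Farneback optical-flow estimator, the 4×16×16 patch size, and the mean-cosine-similarity coherence score are assumptions standing in for the authors' method, chosen to show the general shape of such an analysis.

```python
# Illustrative sketch only -- NOT the paper's metric. It estimates dense optical
# flow between consecutive frames, partitions the flow volume into fixed-size
# 3-D patches, and scores each patch by how well its flow vectors agree in
# direction with the patch's mean motion.
import cv2
import numpy as np

def dense_flow(frames):
    """Farneback optical flow between consecutive grayscale uint8 frames."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flows.append(cv2.calcOpticalFlowFarneback(
            prev, nxt, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0))
    return np.stack(flows)  # shape: (T-1, H, W, 2)

def patch_coherence(flow_patch, eps=1e-8):
    """Mean cosine similarity between each flow vector and the patch mean.

    Returns a value in [-1, 1]; values near 1 indicate coherent motion.
    """
    v = flow_patch.reshape(-1, 2)
    mean = v.mean(axis=0)
    num = v @ mean
    den = np.linalg.norm(v, axis=1) * np.linalg.norm(mean) + eps
    return float((num / den).mean())

def coherence_map(frames, patch=(4, 16, 16)):
    """Score every non-overlapping 3-D patch (t, y, x) of the flow volume."""
    flows = dense_flow(frames)
    pt, py, px = patch
    T, H, W, _ = flows.shape
    scores = {}
    for t in range(0, T - pt + 1, pt):
        for y in range(0, H - py + 1, py):
            for x in range(0, W - px + 1, px):
                scores[(t, y, x)] = patch_coherence(
                    flows[t:t + pt, y:y + py, x:x + px])
    return scores
```

In a transformation pipeline of this kind, patches scoring above a coherence threshold would be candidates for matching against a target motion texture; the paper's own metric, patch-matching strategy, and thresholds are documented in the article and its released source code at the URL above.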

Original language: English
Article number: 7214258
Pages (from-to): 1805-1820
Number of pages: 16
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 26
Issue number: 10
DOIs
State: Published - Oct 2016

Keywords

  • Dynamic texture transformation
  • Motion coherence analysis
  • Motion template matching
  • Special effects
  • Video editing
