News video story segmentation using fusion of multi-level multi-modal features in TRECVID 2003

W. Hsu, L. Kennedy, C. W. Huang, S. F. Chang, C. Y. Lin, G. Iyengar

Research output: Contribution to journal › Conference article › peer-review

25 Citations (Scopus)

Abstract

In this paper, we present our new results in news video story segmentation and classification in the context of the TRECVID video retrieval benchmarking event 2003. We applied and extended the Maximum Entropy statistical model to effectively fuse diverse features from multiple levels and modalities, including visual, audio, and text. We have included various features such as motion, face, music/speech types, prosody, and high-level text segmentation information. The statistical fusion model is used to automatically discover relevant features contributing to the detection of story boundaries. One novel aspect of our method is the use of a feature wrapper to address different types of features - asynchronous, discrete, continuous and delta ones. We also developed several novel features related to prosody. Using the large news video set from the TRECVID 2003 benchmark, we demonstrate satisfactory performance (F1 measure up to 0.76) and more importantly observe an interesting opportunity for further improvement.
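For the binary boundary/non-boundary decision described above, the Maximum Entropy model reduces to logistic regression over the fused feature vector. The following is a minimal sketch of that idea, not the authors' implementation: the feature names (motion change, face presence, music-to-speech transition, pause length) are illustrative stand-ins for the paper's multi-modal features, and the toy data and training settings are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_maxent(X, y, lr=0.1, epochs=500):
    """Fit P(boundary | x) = sigmoid(w.x + b) by gradient ascent on the
    log-likelihood -- the two-class Maximum Entropy / logistic model."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)        # current boundary probabilities
        w += lr * (X.T @ (y - p) / n) # average log-likelihood gradient
        b += lr * np.mean(y - p)
    return w, b

# Toy fused features per candidate point:
# [motion_change, face_present, music_to_speech, pause_length]
X = np.array([
    [0.9, 1, 1, 0.8],   # strong boundary cues
    [0.8, 0, 1, 0.9],
    [0.1, 1, 0, 0.1],   # weak cues, within-story
    [0.2, 0, 0, 0.0],
], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)  # 1 = story boundary

w, b = train_maxent(X, y)
scores = sigmoid(X @ w + b)
print(np.round(scores, 2))
```

Detected boundaries would then be scored against a reference segmentation with the F1 measure, F1 = 2PR / (P + R), the metric reported in the abstract.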

Original language: English
Pages (from-to): III645-III648
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 3
Publication status: Published - 2004
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - Montreal, Que., Canada
Duration: 17 May 2004 - 21 May 2004
