Semi-automatic 2D to 3D video conversion based on relative velocity estimation

Tsung Han Tsai, Chen Shuo Fan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose a semi-automatic method for 2D-to-3D conversion that requires low computational resources and generates precise depth maps. First, user-defined points on the background scene of the first frame of the 2D video sequence are used to extract the vanishing line. Then, the background depth map is determined by vanishing line tracking, and the moving objects are assigned depth values according to their positions in the background. Finally, the motion-based depth map and the geometry-based depth map are integrated into one depth map by a depth fusion algorithm. With this depth map and the original 2D video, a 3D video is constructed.
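The sketch below illustrates the general idea of a geometry-based background depth map derived from a vanishing line, combined with a motion-based depth map via a simple fusion rule. It is not the authors' implementation: the linear depth gradient, the function names, and the mask-based fusion rule are all illustrative assumptions.

```python
# Illustrative sketch only: geometry-based depth from a vanishing line plus a
# simple depth fusion step. The depth model and fusion rule are assumptions,
# not the method described in the paper.
import numpy as np

def geometry_depth(height, width, vanish_row):
    """Assign larger depth (farther) to rows at or above the vanishing line.

    vanish_row: image row where the (user-assisted) vanishing line lies.
    Rows at the vanishing line get depth 255 (far); the bottom row gets 0 (near).
    """
    rows = np.arange(height, dtype=np.float32)
    depth = np.where(
        rows <= vanish_row,
        255.0,  # above the vanishing line: treat as far
        255.0 * (height - 1 - rows) / max(height - 1 - vanish_row, 1),
    )
    return np.tile(depth[:, None], (1, width)).astype(np.uint8)

def fuse_depth(geo_depth, motion_depth, object_mask):
    """One plausible fusion rule: motion-based depth inside moving-object
    regions, geometry-based depth elsewhere."""
    return np.where(object_mask, motion_depth, geo_depth)

# Example: 480x640 frame, vanishing line at row 200, placeholder object mask.
geo = geometry_depth(480, 640, vanish_row=200)
motion = np.full((480, 640), 180, dtype=np.uint8)  # placeholder motion-based depth
mask = np.zeros((480, 640), dtype=bool)
mask[300:400, 250:350] = True                       # placeholder moving object
fused = fuse_depth(geo, motion, mask)
```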

Original language: English
Title of host publication: Proceedings - 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, MobileCloud 2014
Publisher: IEEE Computer Society
Pages: 248-249
Number of pages: 2
ISBN (Print): 9781479925049
DOIs
State: Published - 2014
Event: 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, MobileCloud 2014 - Oxford, United Kingdom
Duration: 7 Apr 2014 - 10 Apr 2014

Publication series

Name: Proceedings - 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, MobileCloud 2014

Conference

Conference: 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, MobileCloud 2014
Country/Territory: United Kingdom
City: Oxford
Period: 7/04/14 - 10/04/14
