Real-time 3D human objects rendering based on multiple camera details

W. G.C.W. Kumara, Shwu Huey Yen, Hui Huang Hsu, Timothy K. Shih, Wei Chun Chang, Enkhtogtokh Togootogtokh

Research output: Contribution to journal › Article › peer-review



3D model construction techniques using RGB-D information have gained great attention from researchers around the world in recent decades. The RGB-D sensor Microsoft Kinect is widely used in many research fields, such as computer vision, computer graphics, and human-computer interaction, because it provides both color and depth information. This paper presents our research findings on calibrating information from several Kinects in order to construct a 3D model of a human subject and to render the texture captured by the RGB camera. We used multiple Kinect sensors interconnected in a network. High-bit-rate streams captured at each Kinect are first sent to a centralized PC for processing; this can even be extended to a remote PC on the Internet. The main contributions of this work include calibration of the multiple Kinects, proper alignment of the point clouds generated from the multiple Kinects, and generation of the 3D shape of the human subject. Experimental results demonstrate that the proposed method produces a better 3D model of the captured human subject.
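The alignment step described in the abstract amounts to estimating a rigid transform (rotation plus translation) that maps each Kinect's point cloud into a common reference frame. The paper's own calibration procedure is not reproduced here; as a minimal illustrative sketch, the classic Kabsch/SVD solution recovers such a transform from corresponding 3D points (e.g. calibration-target corners seen by two depth cameras). All names and the synthetic data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t with dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. matched
    calibration-target corners observed by two depth cameras.
    Uses the Kabsch/SVD method on the cross-covariance matrix.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known 30-degree rotation and translation.
rng = np.random.default_rng(0)
src = rng.standard_normal((100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true

R, t = rigid_align(src, dst)
print(np.allclose(src @ R.T + t, dst, atol=1e-8))
```

In a multi-camera setup this would be run once per sensor pair during calibration; real depth data is noisy, so in practice the closed-form estimate is typically refined with ICP over the full point clouds.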

Original language: English
Pages (from-to): 11687-11713
Number of pages: 27
Journal: Multimedia Tools and Applications
Issue number: 9
State: Published - 1 May 2017


  • 3D model
  • Kinect
  • Point cloud
  • Registration
  • Virtual reality
