RGB-D cameras, which capture both RGB images and per-pixel depth information, have recently become a popular indoor mapping tool in the field of computer vision. One of the mainstream solutions for indoor mapping and modeling is to create a 3D point cloud from multiple images. However, the major drawback of image-based approaches is the lack of points extracted in featureless areas. Integrating RGB-D sensors with cameras can fill these voids in featureless areas and create a uniformly distributed point cloud of indoor environments. In this research, a hardware system consisting of Kinects and digital single-lens reflex (DSLR) cameras is assembled, and a data processing procedure for integrating these two kinds of devices to generate a 3D point cloud is established. Because there is interference between Kinects, their fields of view cannot overlap with one another. Thus, DSLRs are used to bridge the Kinects and provide a more accurate ray intersection condition, taking advantage of the higher resolution and image quality of the DSLR cameras. Bundle adjustment is used to resolve the exterior orientation (EO) of all RGB images acquired by the Kinects and DSLRs. The EO of each Kinect frame is then used as an initial value to transform the per-frame point clouds into a common coordinate system. The results show that the hardware design and the data processing procedure can generate dense and fully colored point clouds of indoor environments, even in featureless areas.
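The core merging step described above can be illustrated with a minimal sketch: given an exterior orientation (a rotation R and translation t, here assumed to map frame coordinates to world coordinates) for each Kinect frame, every per-frame point cloud is rigidly transformed into one common coordinate system and stacked. The function and data below are hypothetical, for illustration only, and do not reproduce the paper's actual pipeline.

```python
import numpy as np

def merge_frames(frames, eos):
    """Transform per-frame point clouds into a common world frame.

    frames: list of (N_i, 3) arrays, points in each Kinect frame.
    eos:    list of (R, t) exterior-orientation pairs, where R is a
            3x3 rotation matrix and t a 3-vector, assumed to map
            frame coordinates to world coordinates.
    Returns one (sum N_i, 3) merged point cloud.
    """
    merged = []
    for pts, (R, t) in zip(frames, eos):
        # world = R @ p + t, applied to all points at once
        merged.append(pts @ R.T + t)
    return np.vstack(merged)

# Toy example: two single-point frames; the second frame's EO is a
# pure 1 m translation along x.
f1 = np.array([[0.0, 0.0, 1.0]])
f2 = np.array([[0.0, 0.0, 1.0]])
eo1 = (np.eye(3), np.zeros(3))
eo2 = (np.eye(3), np.array([1.0, 0.0, 0.0]))
cloud = merge_frames([f1, f2], [eo1, eo2])
print(cloud.shape)  # (2, 3)
```

In practice the EO from bundle adjustment would only seed such a transformation, with refinement (e.g. ICP-style registration) applied afterward.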
State: Published - 2014
Event: 35th Asian Conference on Remote Sensing 2014: Sensing for Reintegration of Societies, ACRS 2014
City: Nay Pyi Taw
Duration: 27 Oct 2014 → 31 Oct 2014
- SfM (structure-from-motion) reconstruction