
3-D reconstruction using the Kinect sensor and its application to a visualization system

Conference paper

DOI: 10.1109/ICSMC.2012.6378311
Authors: Lim Yong-Wan / Lee Hyuk-Zae / Yang Na-Eun / Park Rae-Hong

Extracted Abstract:

These days, a large number of human-computer interactive systems have been developed. This paper proposes a visualization system for a reconstructed three-dimensional (3-D) scene, based on user-interactive control using the Kinect sensor and a webcam. First, we acquire depth and color images from a number of viewpoints, from which 3-D reconstruction of a scene is performed. We pre-process the acquired data to improve the 3-D reconstruction result. Then, we reconstruct the 3-D scene for the visualization system, which uses face detection and tracking with a webcam. Experimental results show that our system gives a visualized 3-D scene reconstruction using only the Kinect sensor and a webcam. It can be applied to human-interaction display systems at museums, amusement parks, and so on.

Keywords: 3-D reconstruction; Kinect sensor; visualization system; face detection
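The abstract gives no implementation details for the face-detection-driven viewpoint control, so the following is only a minimal illustrative sketch, not the authors' method: it uses OpenCV's bundled Haar cascade to detect a face in webcam frames and maps the face's horizontal position to a hypothetical viewing angle. The face_to_view_angle helper, the cascade choice, and the angle mapping are assumptions made for illustration.

import cv2

# Illustrative sketch only (not the paper's implementation): detect a face in
# webcam frames and derive a hypothetical horizontal viewing angle that a
# rendering component for the reconstructed 3-D scene could consume.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_to_view_angle(frame, max_angle_deg=30.0):
    """Return an illustrative horizontal view angle from the face position."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found; a real system would keep the last viewpoint
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # track the largest face
    face_center_x = x + w / 2.0
    # Normalize to [-1, 1]: -1 = left edge of the frame, +1 = right edge.
    offset = (face_center_x / frame.shape[1]) * 2.0 - 1.0
    return offset * max_angle_deg

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    angle = face_to_view_angle(frame)
    if angle is not None:
        # A full system would re-render the reconstructed scene from this
        # viewpoint; the sketch only reports the estimated angle.
        print(f"view angle: {angle:+.1f} deg")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()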

Level 1: Include/Exclude

  • Papers must discuss situated information visualization* (as defined by Willett et al.) in the application domain of cultural heritage (CH).
    *A situated data representation is a data representation whose physical presentation is located close to the data’s physical referent(s).
    *A situated visualization is a situated data representation for which the presentation is purely visual – and is typically displayed on a screen.
  • Representation must include abstract data (e.g., metadata).
  • Papers focused solely on digital reconstruction without information visualization aspects are excluded.
  • Posters and workshop papers are excluded to focus on mature research contributions.