
The KinectVision360 project integrates multiple Microsoft Kinect sensors to investigate whether this low-cost human-interactive device can address modern tracking problems typically handled by high-cost systems. The project provides human face and skeletal tracking over an enlarged field of vision. We combine three first-generation Kinects with a custom-built computer whose components can process large amounts of sensor data in real time. Figure 1 gives an overview of the processes the program follows to execute and visualize the tracking data. The “FaceTracking” class contains the algorithms and logic to retrieve skeletal information from the infrared depth sensors through the API. Body position is recovered in a two-stage process: the system first computes a depth map using structured light, then infers the body position using machine learning; Microsoft trained this system on over a million samples using a random decision forest. We enhance the tracking by rejecting poorly constructed skeletons and faces. The class also transfers data between the sensors to allow them to communicate. Hardware limitations restrict how much data we can process, so we compensate for code complexity and computer performance by limiting the system to three sensors. Overall, the system analyzes real-time data and visualizes it as it is recorded across multiple integrated sensors.
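The rejection step described above can be sketched as a simple quality gate on the per-joint tracking state that the Kinect SDK reports for each skeleton. This is a minimal illustrative sketch, not the project's actual code; the joint-state values, the `is_reliable_skeleton` helper, and the 0.5 acceptance threshold are assumptions for demonstration.

```python
# Hypothetical sketch of rejecting poorly constructed skeletons.
# Kinect v1 reports 20 joints per skeleton, each with a tracking state;
# here we mimic those states with simple integer constants (an assumption).
TRACKED, INFERRED, NOT_TRACKED = 2, 1, 0

def is_reliable_skeleton(joint_states, min_tracked_ratio=0.5):
    """Accept a skeleton only if enough of its joints were directly
    tracked by the depth sensor, rather than inferred or lost."""
    if not joint_states:
        return False
    tracked = sum(1 for state in joint_states if state == TRACKED)
    return tracked / len(joint_states) >= min_tracked_ratio

# Example: two 20-joint skeletons, one well tracked, one mostly inferred.
good_skeleton = [TRACKED] * 15 + [INFERRED] * 5
poor_skeleton = [INFERRED] * 12 + [NOT_TRACKED] * 8

print(is_reliable_skeleton(good_skeleton))  # True
print(is_reliable_skeleton(poor_skeleton))  # False
```

A gate like this lets the downstream visualization skip frames where the sensor lost sight of the subject, at the cost of briefly dropping the track during occlusions.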


Keywords

Electrical and computer engineering, Microsoft Kinect, Face Tracking System, Multi-sensor Implementation, Data Processing


Disciplines

Electrical and Computer Engineering | Engineering

Faculty Advisor/Mentor

Carl Elks

Yuichi Motai

VCU Capstone Design Expo Posters


© The Author(s)

Date of Submission

August 2016

KinectVision360: A Real-time Human Tracking System