Book Description
The goal of this work is to tackle the problem of gestural human-computer interfaces in their most natural form, i.e. without markers or invasive devices. To that end, a complete system is proposed that classifies and tracks, in real time, enough human features to enable novel forms of gestural man-machine interaction. The algorithm consists of an intra-image phase and an inter-image phase. The first phase applies several mathematical morphology tools to analyze the user's silhouette and robustly extract the head, hands and feet. The second phase operates in a Bayesian framework across consecutive images to classify and track the previously extracted features. Thanks to its low computational complexity, the system runs in real time on a standard personal computer, with an average error rate between 2% and 7% in realistic situations, depending on the context and segmentation quality.
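To give a flavor of the intra-image phase, the following is a minimal sketch, not the book's exact pipeline: assuming a binary silhouette mask as input, a morphological opening with a large structuring element removes thin protrusions (arms, legs, neck), and the top-hat residue left after subtracting the opened image yields candidate head, hand and foot regions. The kernel size, area threshold and OpenCV-based implementation are illustrative assumptions.

```python
import cv2
import numpy as np

def extremity_candidates(silhouette: np.ndarray, kernel_size: int = 31,
                         min_area: int = 200) -> list[tuple[int, int]]:
    """Return centroids (x, y) of candidate extremity regions in a binary silhouette.

    Hypothetical illustration of a morphology-based extraction step; parameters
    are assumptions, not values taken from the book.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Opening keeps only the bulky torso; the residue keeps thin protrusions.
    opened = cv2.morphologyEx(silhouette, cv2.MORPH_OPEN, kernel)
    residue = cv2.subtract(silhouette, opened)
    # Label the residue and keep sufficiently large blobs as extremity candidates.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(residue)
    return [tuple(map(int, centroids[i]))
            for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```

Such candidate centroids could then be passed, frame by frame, to the inter-image Bayesian classification and tracking stage described above.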