Book Description
For decades, the search for operating methods that are easy, intuitive, and efficient has never stopped, and it will certainly continue. Current technologies allow us to interact with computers without a mouse and keyboard, using alternatives like gestures (visual input) or voice commands (audio input). As we make communication between humans and computers easier and more like communication among human beings, ambiguity inevitably arises from the complexity of the input environments. This ambiguity exists in the latest interactive technologies, such as VR/AR (virtual reality/augmented reality) and drone interaction.

VR/AR equipment is now quite accessible to people who want to create immersive interactive environments. However, since conventional input methods like the mouse and keyboard are not part of the interaction these devices offer, alternatives that are just as efficient and comfortable are required. A similar situation arises in drone interaction. At present, the most common drone scenario is shooting fly-by footage of events or sports, during which users can rarely spare their hands because they are busy flying the drone. For devices designed for VR/AR or drone users, gestures are an ideal interaction candidate because of their hardware compatibility and their resemblance to how people interact with the real world.

A variety of gestural sensors are now available for gestural interfaces, and they offer impressive capabilities for tracking user movements. However, there is still much room for improvement in precision. Although gestural interfaces may feel more intuitive than traditional methods like the keyboard and mouse, they are also more ambiguous. Considering that the Microsoft HoloLens still ships with a clicker, and that the HTC Vive works best with its controllers, it is clear that, due to the limitations of current sensor technologies and our understanding of human kinematics, existing gestural interface designs are far from being as effective and pervasive as the conventional mouse and keyboard. Gestural interfaces cannot yet replace all interactions.

Since the primary purpose of developing gestural human-computer interaction is to make interactions as intuitive as possible, human factors naturally play a crucial role in the process. Hence, we propose an ideal form of human-computer interaction: Mutual Adaptation. The Mutual Adaptation technique comes in two parts: Human Adaptation and Machine Adaptation. Human Adaptation means designing human-computer interactions that help humans adapt to the machine, while Machine Adaptation goes the other way, requiring the machine or device to adapt to human operation. Although the latter is far more prevalent than the former, we believe the former holds great potential and is worth exploring further. In this work, we apply these concepts to the design of gestural human-computer interaction and evaluate their effectiveness across various types of applications. We built our interaction prototypes around Foldit, an online protein-folding game. Several user studies were conducted to demonstrate the effectiveness of this approach.