The Handbook of Multimodal-Multisensor Interfaces, Volume 3


Book Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces. This three-volume handbook is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This third volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotic, manufacturing, machine translation, banking, communications, and others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development. For insights into the future, emerging multimodal-multisensor technology trends are highlighted in medicine, robotics, interaction with smart spaces, and similar areas. Finally, this volume discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them. The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic and on how they believe multimodal-multisensor interfaces need to be equipped to most effectively advance human performance during the next decade.




Multimodal User Interfaces


Book Description

This relationship indicates how multimodal medical image processing can be unified to a large extent, e.g., multi-channel segmentation and image registration, and how information-theoretic registration can be extended to features other than image intensities. The framework is not at all restricted to medical images, though, and this is illustrated by applying it to multimedia sequences as well. In Chapter 4, the main results from the developments in plastic UIs and multimodal UIs are brought together using a theoretical and conceptual perspective as a unifying approach. It aims to define models useful for supporting UI plasticity by relying on multimodality, to introduce and discuss basic principles that can drive the development of such UIs, and to describe some techniques as proof-of-concept of the aforementioned models and principles. In Chapter 4, the authors also introduce running examples that serve as illustration throughout the discussion of the use of multimodality to support plasticity.
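As a minimal illustration of the information-theoretic registration mentioned above (a sketch of the general technique, not code from the book), the following Python function estimates the mutual information between two images from their joint intensity histogram; intensity-based registration methods search over spatial transformations for the pose that maximizes this score. The function name and the choice of 32 histogram bins are illustrative assumptions.

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Mutual information between two equally sized images,
        estimated from their joint intensity histogram."""
        # Joint histogram of corresponding pixel intensities
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()      # joint probability p(x, y)
        px = pxy.sum(axis=1)           # marginal p(x)
        py = pxy.sum(axis=0)           # marginal p(y)
        # MI = sum over p(x,y) > 0 of p(x,y) * log(p(x,y) / (p(x) * p(y)))
        outer = px[:, None] * py[None, :]
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))

Because the score depends only on the joint statistics of the two inputs, the same computation applies whether the "features" are raw intensities or other per-pixel channels, which is one sense in which such a framework unifies multi-channel problems.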




Multimodal Signal Processing


Book Description

This book offers a comprehensive synthesis of recent advances in multimodal signal processing applications for human interaction analysis and meeting support technology. With directly applicable methods and metrics, along with benchmark results, it is an ideal guide for those interested in multimodal signal processing, its component disciplines, and its application to human interaction analysis.




Interactive Multimodal Information Management


Book Description

In the past twenty years, computers and networks have gained a prominent role in supporting human communications. This book presents recent research in multimodal information processing, which demonstrates that computers can achieve more than telephone calls or videoconferencing can. The book offers a snapshot of current capabilities for the analysis of human communications in several modalities – audio, speech, language, images, video, and documents – and for accessing this information interactively. The book has a clear application goal: the capture, automatic analysis, storage, and retrieval of multimodal signals from human interaction in meetings. This goal provides a controlled experimental framework and helps generate the shared data required for methods based on machine learning. It has shaped the vision of the contributors to the book and of many other researchers cited in it. It has also received significant long-term support through a series of projects, including the Swiss National Center of Competence in Research (NCCR) in Interactive Multimodal Information Management (IM2), to which the contributors to the book have been connected.