Advances in Multimodal Interfaces - ICMI 2000


Book Description

Multimodal interfaces represent an emerging interdisciplinary research direction and have become one of the frontiers of computer science. Multimodal interfaces aim at efficient, convenient, and natural interaction and communication between computers (in their broadest sense) and human users. They will ultimately enable users to interact with computers using their everyday skills. These proceedings include the papers accepted for presentation at the Third International Conference on Multimodal Interfaces (ICMI 2000), held in Beijing, China, on 14-16 October 2000. The papers were selected from 172 contributions submitted worldwide. Each paper was allocated for review to three members of the Program Committee, which consisted of more than 40 leading researchers in the field. The final selection of 38 oral papers and 48 poster papers was based on the reviewers' comments and the desire for a balance of topics. The decision to hold a single-track conference led to a competitive selection process, and it is very likely that some good submissions could not be included in this volume. The papers collected here cover a wide range of topics, such as affective and perceptual computing, interfaces for wearable and mobile computing, gestures and sign languages, face and facial expression analysis, multilingual interfaces, virtual and augmented reality, speech and handwriting, multimodal integration, and application systems. They represent some of the latest progress in multimodal interface research.







Multimodal Interface for Human-machine Communication


Book Description

With the advance of speech, image, and video technology, human-computer interaction (HCI) will reach a new phase. In recent years, HCI has been extended to human-machine communication (HMC) and the perceptual user interface (PUI). The ultimate goal of HMC is communication between humans and machines that resembles human-to-human communication. Moreover, the machine can support human-to-human communication (e.g. an interface for the disabled). For this reason, various aspects of human communication must be considered in HMC. The HMC interface, called a multimodal interface, includes different types of input methods, such as natural language, gestures, faces, and handwritten characters. The nine papers in this book have been selected from the 92 high-quality papers constituting the proceedings of the 2nd International Conference on Multimodal Interfaces (ICMI '99), which was held in Hong Kong in 1999. The papers cover a wide spectrum of the multimodal interface field.




The Handbook of Multimodal-Multisensor Interfaces, Volume 1


Book Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface design that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also offers an in-depth look at the most common multimodal-multisensor combinations, for example touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is support for mobility and individual differences among users. The chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and on how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.




The Paradigm Shift to Multimodality in Contemporary Computer Interfaces


Book Description

During the last decade, cell phones with multimodal interfaces based on combined new media have become the dominant computer interface worldwide. Multimodal interfaces support mobility and expand the expressive power of human input to computers. They have shifted the fulcrum of human-computer interaction much closer to the human. This book explains the foundations of human-centered multimodal interaction and interface design, based on the cognitive and neurosciences, as well as the major benefits of multimodal interfaces for human cognition and performance. It describes the data-intensive methodologies used to envision, prototype, and evaluate new multimodal interfaces. From a system-development viewpoint, the book outlines major approaches to multimodal signal processing, fusion, and architectures, as well as techniques for robustly interpreting users' meaning. Multimodal interfaces have been commercialized extensively for field and mobile applications during the last decade. Research is also growing rapidly in areas such as multimodal data analytics, affect recognition, accessible interfaces, embedded and robotic interfaces, machine learning, and new hybrid processing approaches. The expansion of multimodal interfaces is part of the long-term evolution of more expressively powerful input to computers, a trend that will substantially improve support for human cognition and performance.




Human Machine Interaction


Book Description

Human Machine Interaction, or more commonly Human Computer Interaction, is the study of interaction between people and computers. It is an interdisciplinary field that connects computer science with many other disciplines, such as psychology, sociology, and the arts. The present volume documents the results of the MMI research program on Human Machine Interaction, comprising eight projects (selected from a total of 80 proposals) funded by the Hasler Foundation between 2005 and 2008. These projects were also partially funded by the associated universities and other third parties, such as the Swiss National Science Foundation. This state-of-the-art survey begins with three chapters giving overviews of the domains of multimodal user interfaces, interactive visualization, and mixed reality. These are followed by eight chapters presenting the results of the projects, grouped according to the three aforementioned themes.




Multimodal User Interfaces


Book Description

This relationship indicates how multimodal medical image processing can be unified to a large extent, e.g. multi-channel segmentation and image registration, and how information-theoretic registration can be extended to features other than image intensities. The framework is not at all restricted to medical images, though, and this is illustrated by applying it to multimedia sequences as well. In Chapter 4, the main results from developments in plastic UIs and multimodal UIs are brought together from a theoretical and conceptual perspective as a unifying approach. The chapter aims at defining models useful to support UI plasticity by relying on multimodality, at introducing and discussing basic principles that can drive the development of such UIs, and at describing some techniques as proof-of-concept of the aforementioned models and principles. The authors introduce running examples that serve as illustration throughout the discussion of the use of multimodality to support plasticity.




Multimodal Human Computer Interaction and Pervasive Services


Book Description

"This book provides concepts, methodologies, and applications used to design and develop multimodal systems"--Provided by publisher.




Innovative and Creative Developments in Multimodal Interaction Systems


Book Description

This book contains the outcome of the 9th IFIP WG 5.5 International Summer Workshop on Multimodal Interfaces, eNTERFACE 2013, held in Lisbon, Portugal, in July/August 2013. The nine papers included in this book represent the results of a four-week workshop in which senior and junior researchers worked together on projects tackling new trends in human-machine interaction (HMI). The papers are organized in two topical sections. The first presents proposals addressing fundamental issues in multimodal interaction, namely telepresence, speech synthesis, and interactive modeling. The second is a set of development examples in key areas of HMI application, namely education, entertainment, and assistive technologies.