The Handbook of Multimodal-Multisensor Interfaces, Volume 1


Book Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection, written by international experts and pioneers in the field, provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also offers an in-depth look at the most common multimodal-multisensor combinations, for example touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. The chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic: how multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.




Multimodal Human Computer Interaction and Pervasive Services


Book Description

"This book provides concepts, methodologies, and applications used to design and develop multimodal systems"--Provided by publisher.




Mobile Computing Principles


Book Description

Written to address technical concerns that mobile developers face regardless of the platform (J2ME, WAP, Windows CE, etc.), this 2005 book explores the differences between mobile and stationary applications and the architectural and software development concepts needed to build a mobile application. Using UML as a tool, Reza B'far guides the developer through the development process, showing how to document the design and implementation of the application. He focuses on general concepts, while using platforms as examples or as possible tools. After introducing UML, XML and derivative tools necessary for developing mobile software applications, B'far shows how to build user interfaces for mobile applications. He covers location sensitivity, wireless connectivity, mobile agents, data synchronization, security, and push-based technologies, and finally homes in on the practical issues of mobile application development including the development cycle for mobile applications, testing mobile applications, architectural concerns, and a case study.




Multimodal Interface for Human-machine Communication


Book Description

With the advance of speech, image, and video technology, human-computer interaction (HCI) will reach a new phase. In recent years, HCI has been extended to human-machine communication (HMC) and the perceptual user interface (PUI). The final goal in HMC is for communication between humans and machines to be as natural as human-to-human communication. Moreover, the machine can support human-to-human communication (e.g. an interface for the disabled). For this reason, various aspects of human communication must be considered in HMC. The HMC interface, called a multimodal interface, includes different types of input methods, such as natural language, gestures, faces, and handwritten characters. The nine papers in this book have been selected from the 92 high-quality papers constituting the proceedings of the 2nd International Conference on Multimodal Interface (ICMI '99), which was held in Hong Kong in 1999. The papers cover a wide spectrum of the multimodal interface.




Multimodality in Mobile Computing and Mobile Devices: Methods for Adaptable Usability


Book Description

"This book offers a variety of perspectives on multimodal user interface design, describes a variety of novel multimodal applications and provides several experience reports with experimental and industry-adopted mobile multimodal applications"--Provided by publisher.




Encyclopedia of Multimedia


Book Description

This second edition provides easy access to important concepts, issues and technology trends in the field of multimedia technologies, systems, techniques, and applications. Over 1,100 heavily-illustrated pages — including 80 new entries — present concise overviews of all aspects of software, systems, web tools and hardware that enable video, audio and developing media to be shared and delivered electronically.




Human-Computer Interaction. User Interface Design, Development and Multimodality


Book Description

The two-volume set LNCS 10271 and 10272 constitutes the refereed proceedings of the 19th International Conference on Human-Computer Interaction, HCII 2017, held in Vancouver, BC, Canada, in July 2017. The total of 1228 papers presented at the 15 colocated HCII 2017 conferences was carefully reviewed and selected from 4340 submissions. The papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. They cover the entire field of Human-Computer Interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas. The papers included in this volume cover the following topics: HCI theory and education; HCI, innovation and technology acceptance; interaction design and evaluation methods; user interface development; methods, tools, and architectures; multimodal interaction; and emotions in HCI.




Multimodal User Interfaces


Book Description

This relationship indicates how multimodal medical image processing can be unified to a large extent, e.g. multi-channel segmentation and image registration, and how information-theoretic registration can be extended to features other than image intensities. The framework is not at all restricted to medical images, however, as is illustrated by applying it to multimedia sequences as well. In Chapter 4, the main results from the developments in plastic UIs and multimodal UIs are brought together from a theoretical and conceptual perspective as a unifying approach. The chapter aims to define models useful for supporting UI plasticity by relying on multimodality, to introduce and discuss basic principles that can drive the development of such UIs, and to describe some techniques as proof of concept of the aforementioned models and principles. The authors also introduce running examples that serve as illustrations throughout the discussion of the use of multimodality to support plasticity.




Human Machine Interaction


Book Description

Human Machine Interaction, or more commonly Human Computer Interaction, is the study of interaction between people and computers. It is an interdisciplinary field, connecting computer science with many other disciplines such as psychology, sociology and the arts. The present volume documents the results of the MMI research program on Human Machine Interaction involving 8 projects (selected from a total of 80 proposals) funded by the Hasler Foundation between 2005 and 2008. These projects were also partially funded by the associated universities and other third parties such as the Swiss National Science Foundation. This state-of-the-art survey begins with three chapters giving overviews of the domains of multimodal user interfaces, interactive visualization, and mixed reality. These are followed by eight chapters presenting the results of the projects, grouped according to the three aforementioned themes.




Designing Across Senses


Book Description

Today we have the ability to connect speech, touch, haptic, and gestural interfaces into products that engage several human senses at once. This practical book explores examples from current designers and devices to describe how these products blend multiple interface modes into a cohesive user experience. Authors Christine Park and John Alderman explain the basic principles behind multimodal interaction and introduce the tools you need to root your design in the ways our senses shape experience. The book also includes guides on process, design, and deliverables to help your team get started. The book covers several topics within multimodal design, including:

New Human Factors: learn how human sensory abilities allow us to interact with technology and the physical world

New Technologies: explore some of the technologies that enable multimodal interactions, products, and capabilities

Multimodal Products: examine different categories of products and learn how they deliver sensory-rich experiences

Multimodal Design: learn processes and methodologies for multimodal product design, development, and release