Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot


Book Description

The recent advent of consumer-grade Brain-Computer Interfaces (BCIs) provides a revolutionary new and accessible way to control computers. BCIs translate cognitive electroencephalography (EEG) signals into computer or robotic commands using specially built headsets. Capable of enhancing traditional interfaces that require interaction with a keyboard, mouse, or touchscreen, BCI systems present tremendous opportunities to benefit various fields. Movement-restricted users can especially benefit from these interfaces. In this thesis, we present a new way to interface a consumer-grade BCI solution to a mobile robot. A Red-Green-Blue-Depth (RGBD) camera is used to enhance the navigation of the robot, with cognitive signals serving as commands. We introduce an interface presenting three methods of robot control: 1) a fully manual mode, where a cognitive signal is interpreted as a command; 2) a control-flow manual mode, which reduces the likelihood of false-positive commands; and 3) an automatic mode assisted by a remote RGBD camera. We study the application of this work by navigating the mobile robot on a planar surface using the different control methods while measuring the accuracy and usability of the system. Finally, we assess the newly designed interface's role in the design of future generations of BCI solutions.
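To make the three control modes concrete, the sketch below shows one plausible way such logic could be organized. It is a minimal illustration under our own assumptions: the class names, the confidence threshold, and the repeat count are hypothetical and do not reproduce the thesis's actual API.

```python
# Illustrative sketch of the three robot-control modes described above.
# All identifiers and parameters are hypothetical.
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    MANUAL = auto()        # every recognized thought becomes a command
    CONTROL_FLOW = auto()  # commands are gated to cut false positives
    AUTO_RGBD = auto()     # thought selects a goal; an RGBD-fed planner drives

class Controller:
    def __init__(self, mode: Mode, threshold: float = 0.7, repeats: int = 2):
        self.mode = mode
        self.threshold = threshold  # minimum classifier confidence to accept
        self.repeats = repeats      # consecutive detections required to fire
        self._streak_label: Optional[str] = None
        self._streak_len = 0

    def on_cognitive_event(self, label: str, confidence: float) -> Optional[str]:
        """Map one classified EEG event to a robot command, or None."""
        if self.mode is Mode.MANUAL:
            # Fully manual: act on every recognized signal immediately.
            return label
        if self.mode is Mode.CONTROL_FLOW:
            # Require the same confident label several times in a row,
            # trading latency for fewer false-positive commands.
            if confidence < self.threshold or label != self._streak_label:
                self._streak_label = label if confidence >= self.threshold else None
                self._streak_len = 1 if self._streak_label else 0
                return None
            self._streak_len += 1
            if self._streak_len >= self.repeats:
                self._streak_len = 0
                return label
            return None
        # AUTO_RGBD: the signal only picks a target; path planning is
        # delegated to a navigation stack fed by the remote RGBD camera.
        return f"goto:{label}"
```

With repeats = 2, a single spurious detection never reaches the robot, which is the trade-off the control-flow mode is built around.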




The Paradigm Shift to Multimodality in Contemporary Computer Interfaces


Book Description

During the last decade, cell phones with multimodal interfaces based on combined new media have become the dominant computer interface worldwide. Multimodal interfaces support mobility and expand the expressive power of human input to computers. They have shifted the fulcrum of human-computer interaction much closer to the human. This book explains the foundation of human-centered multimodal interaction and interface design, based on the cognitive and neurosciences, as well as the major benefits of multimodal interfaces for human cognition and performance. It describes the data-intensive methodologies used to envision, prototype, and evaluate new multimodal interfaces. From a system development viewpoint, this book outlines major approaches for multimodal signal processing, fusion, architectures, and techniques for robustly interpreting users' meaning. Multimodal interfaces have been commercialized extensively for field and mobile applications during the last decade. Research is also growing rapidly in areas like multimodal data analytics, affect recognition, accessible interfaces, embedded and robotic interfaces, machine learning and new hybrid processing approaches, and similar topics. The expansion of multimodal interfaces is part of the long-term evolution of more expressively powerful input to computers, a trend that will substantially improve support for human cognition and performance. Table of Contents: Preface: Intended Audience and Teaching with this Book / Acknowledgments / Introduction / Definition and Types of Multimodal Interfaces / History of Paradigm Shift from Graphical to Multimodal Interfaces / Aims and Advantages of Multimodal Interfaces / Evolutionary, Neuroscience, and Cognitive Foundations of Multimodal Interfaces / Theoretical Foundations of Multimodal Interfaces / Human-Centered Design of Multimodal Interfaces / Multimodal Signal Processing, Fusion, and Architectures / Multimodal Language, Semantic Processing, and Multimodal Integration / Commercialization of Multimodal Interfaces / Emerging Multimodal Research Areas and Applications / Beyond Multimodality: Designing More Expressively Powerful Interfaces / Conclusions and Future Directions / Bibliography / Author Biographies







Multimodal User Interfaces


Book Description

This relationship indicates how multimodal medical image processing can be unified to a large extent, e.g., multi-channel segmentation and image registration, and how information-theoretic registration can be extended to features other than image intensities. The framework is not restricted to medical images, though, and this is illustrated by applying it to multimedia sequences as well. In Chapter 4, the main results from the developments in plastic UIs and multimodal UIs are brought together using a theoretical and conceptual perspective as a unifying approach. It is aimed at defining models useful to support UI plasticity by relying on multimodality, at introducing and discussing basic principles that can drive the development of such UIs, and at describing some techniques as proof-of-concept of the aforementioned models and principles. In Chapter 4, the authors introduce running examples that serve as illustration throughout the discussion of the use of multimodality to support plasticity.
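As a concrete illustration of the information-theoretic registration the description alludes to, the sketch below estimates the mutual information between two aligned images from their joint intensity histogram; a registration loop would search over spatial transformations that maximize this score. This is a generic NumPy sketch under our own assumptions, not code from the book, and the bin count is an arbitrary choice.

```python
# Generic mutual-information similarity score, the quantity at the heart
# of information-theoretic image registration. Illustrative only.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """MI(A; B) = H(A) + H(B) - H(A, B), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint distribution of paired intensities
    px = pxy.sum(axis=1)               # marginal of image A
    py = pxy.sum(axis=0)               # marginal of image B

    def entropy(p: np.ndarray) -> float:
        p = p[p > 0]                   # treat 0 * log(0) as 0
        return float(-np.sum(p * np.log(p)))

    return entropy(px) + entropy(py) - entropy(pxy.ravel())
```

Swapping raw intensities for other per-pixel features in the histogram is the kind of generalization beyond image intensities that the chapter describes.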




Multimodal Usability


Book Description

This preface tells the story of how Multimodal Usability responds to a special challenge. Chapter 1 describes the goals and structure of this book. The idea of describing how to make multimodal computer systems usable arose in the European Network of Excellence SIMILAR – "Taskforce for creating human-machine interfaces SIMILAR to human-human communication", 2003–2007, www.similar.cc. SIMILAR brought together people from multimodal signal processing and usability with the aim of creating enabling technologies for new kinds of multimodal systems and demonstrating results in research prototypes. Most of our colleagues in the network were, in fact, busy extracting features and figuring out how to demonstrate progress in working interactive systems, while claiming not to have too much of a notion of usability in system development and evaluation. It was proposed that the authors support the usability of the many multimodal prototypes underway by researching and presenting a methodology for building usable multimodal systems. We accepted the challenge, first and foremost, no doubt, because the formidable team spirit in SIMILAR could make people accept outrageous things. Second, having worked for nearly two decades on making multimodal systems usable, we were curious – curious at the opportunity to try to understand what happens to traditional usability work, that is, work in human–computer interaction centred around traditional graphical user interfaces (GUIs), when systems become as multimodal and as advanced in other ways as those we build in research today.




The Handbook of Multimodal-Multisensor Interfaces, Volume 1


Book Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smart phones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also offers an in-depth look at the most common multimodal-multisensor combinations, for example touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. These handbook chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.




Mixed Reality and Human-Robot Interaction


Book Description

MR technologies play an increasing role in many aspects of human-robot interaction. The visual combination of digital content with real working spaces creates a simulated environment designed to enhance these aspects. This book presents and discusses fundamental scientific issues, technical implementations, lab testing, and industrial applications and case studies of Mixed Reality in Human-Robot Interaction. It is a reference book that not only acts as a meta-book for the field, defining and framing the use of Mixed Reality in Human-Robot Interaction, but also addresses upcoming trends and emerging directions. The volume offers a comprehensive reference to the state of the art in MR for Human-Robot Interaction, with an excellent mix of contributions from leading researchers and experts in multiple disciplines from academia and industry. All authors are experts and/or top researchers in their respective areas, and each chapter has been rigorously reviewed for intellectual content by the editorial team to ensure high quality. This book provides up-to-date insight into current research topics in this field, as well as the latest technological advancements and the best working examples.




Human-in-the-loop Learning and Control for Robot Teleoperation


Book Description

Human-in-the-loop Learning and Control for Robot Teleoperation presents recent research progress on teleoperation and robots, including human-robot interaction and learning and control for teleoperation, with many extensions on intelligent learning techniques. The book integrates cutting-edge research on learning and control algorithms for robot teleoperation, neural motor learning control, wave variable enhancement, EMG-based teleoperation control, and other key aspects of robot technology, presenting implementation tactics, suitable application examples, and illustrative interpretations. Robots have been used in various industrial processes to reduce labor costs and improve work efficiency. However, most robots are designed only for repetitive, fixed tasks, leaving a gap between what they deliver and the manufacturing outcomes humans desire. The book:

- Introduces research progress and technical contributions on teleoperation robots, including intelligent human-robot interaction and learning and control algorithms for teleoperation
- Presents control strategies and learning algorithms for a teleoperation framework that enhance human-robot shared control, bi-directional perception, and the intelligence of the teleoperation system
- Discusses several control and learning methods, describes their working implementation, and shows how these methods can be applied to a specific, practical teleoperation system
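Of the techniques listed, wave variables are the most self-contained to illustrate. The sketch below shows the classic wave-variable transform (in the style of Niemeyer and Slotine) that such enhancements build on: velocity and force are mixed into wave signals whose exchange remains passive under constant communication delay. This is a generic sketch with our own choice of names and parameters, not the book's implementation.

```python
# Classic wave-variable transform for bilateral teleoperation.
# B is the wave impedance, a design/tuning parameter; names are illustrative.
import math

B = 1.0  # wave impedance

def encode_wave(velocity: float, force: float) -> float:
    """Forward wave, master to slave: u = (B*xdot + F) / sqrt(2B)."""
    return (B * velocity + force) / math.sqrt(2 * B)

def decode_velocity(u: float, force: float) -> float:
    """Velocity command recovered at the slave: xdot = (sqrt(2B)*u - F) / B."""
    return (math.sqrt(2 * B) * u - force) / B

def return_wave(velocity: float, force: float) -> float:
    """Backward wave, slave to master: v = (B*xdot - F) / sqrt(2B)."""
    return (B * velocity - force) / math.sqrt(2 * B)

def decode_force(v: float, velocity: float) -> float:
    """Force reflected to the master: F = B*xdot - sqrt(2B)*v."""
    return B * velocity - math.sqrt(2 * B) * v
```

Because the power each wave carries is proportional to its square, a delayed channel cannot generate energy, which is the passivity property that wave-variable enhancements refine.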




The Handbook of Multimodal-Multisensor Interfaces, Volume 3


Book Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces. This three-volume handbook is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This third volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotic, manufacturing, machine translation, banking, communications, and others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development. For insights into the future, emerging multimodal-multisensor technology trends are highlighted in medicine, robotics, interaction with smart spaces, and similar areas. Finally, this volume discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them. The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces need to be equipped to most effectively advance human performance during the next decade.




Mobile Robot Systems: Advanced Designing and Development


Book Description

The aim of this book is to encompass the progress of mobile robotics and associated technologies applied to the design and development of multi-robot systems. The design of a control system is a complicated matter that requires information technologies to integrate the robots into a single network. The human-robot interface becomes a challenging task, particularly when we try to employ smart methodologies for brain-signal processing. Several advancements in path planning and navigation, including parallel programming, are presented. Electrophysiological signals can be used to control devices such as cars, video games, and wheelchairs. Training mobile-robot operators is an extremely challenging task due to the various factors involved in executing distinct tasks. The book will appeal to a broad range of readers, including veteran researchers and scientists.