Development of a Multimodal Human-Computer Interface for the Control of a Mobile Robot


Book Description

The recent advent of consumer-grade Brain-Computer Interfaces (BCIs) provides a revolutionary and accessible new way to control computers. BCIs translate cognitive electroencephalography (EEG) signals into computer or robotic commands using specially built headsets. Capable of enhancing traditional interfaces that require interaction with a keyboard, mouse, or touchscreen, BCI systems present tremendous opportunities to benefit various fields. Movement-restricted users can especially benefit from these interfaces. In this thesis, we present a new way to interface a consumer-grade BCI solution to a mobile robot. A Red-Green-Blue-Depth (RGBD) camera is used to enhance the navigation of the robot with cognitive thoughts as commands. We introduce an interface presenting three methods of robot control: 1) a fully manual mode, where a cognitive signal is interpreted as a command; 2) a control-flow manual mode, reducing the likelihood of false-positive commands; and 3) an automatic mode assisted by a remote RGBD camera. We study the application of this work by navigating the mobile robot on a planar surface using the different control methods while measuring the accuracy and usability of the system. Finally, we assess the newly designed interface's role in the design of future generations of BCI solutions.
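For readers who want a concrete picture of how three such control methods could be wired together, here is a minimal Python sketch. The mode names, confidence threshold, and command strings are illustrative assumptions, not the thesis implementation.

    # Hypothetical sketch of the three control modes; names and thresholds
    # are assumptions for illustration, not the thesis code.
    from enum import Enum
    from typing import Optional

    class ControlMode(Enum):
        MANUAL = "manual"              # every detected cognitive signal becomes a command
        CONTROL_FLOW = "control_flow"  # commands gated by a confirmation step
        AUTOMATIC = "automatic"        # remote RGBD camera handles path planning

    def interpret(signal_confidence: float, mode: ControlMode,
                  confirmed: bool = False) -> Optional[str]:
        """Map one classified cognitive signal to a robot command, per control mode."""
        THRESHOLD = 0.7  # assumed classifier-confidence cutoff
        if signal_confidence < THRESHOLD:
            return None  # ignore weak detections in every mode
        if mode is ControlMode.MANUAL:
            return "move_forward"
        if mode is ControlMode.CONTROL_FLOW:
            # require a second, explicit signal before acting, cutting false positives
            return "move_forward" if confirmed else "await_confirmation"
        return "navigate_to_goal"  # AUTOMATIC: delegate obstacle avoidance to the RGBD pipeline

A gating step like the CONTROL_FLOW branch trades command latency for reliability, which is the stated purpose of the second mode.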




The Paradigm Shift to Multimodality in Contemporary Computer Interfaces


Book Description

During the last decade, cell phones with multimodal interfaces based on combined new media have become the dominant computer interface worldwide. Multimodal interfaces support mobility and expand the expressive power of human input to computers. They have shifted the fulcrum of human-computer interaction much closer to the human. This book explains the foundation of human-centered multimodal interaction and interface design, based on the cognitive and neurosciences, as well as the major benefits of multimodal interfaces for human cognition and performance. It describes the data-intensive methodologies used to envision, prototype, and evaluate new multimodal interfaces. From a system development viewpoint, this book outlines major approaches for multimodal signal processing, fusion, architectures, and techniques for robustly interpreting users' meaning. Multimodal interfaces have been commercialized extensively for field and mobile applications during the last decade. Research is also growing rapidly in areas like multimodal data analytics, affect recognition, accessible interfaces, embedded and robotic interfaces, machine learning and new hybrid processing approaches, and similar topics. The expansion of multimodal interfaces is part of the long-term evolution of more expressively powerful input to computers, a trend that will substantially improve support for human cognition and performance. Table of Contents: Preface: Intended Audience and Teaching with this Book / Acknowledgments / Introduction / Definition and Type of Multimodal Interface / History of Paradigm Shift from Graphical to Multimodal Interfaces / Aims and Advantages of Multimodal Interfaces / Evolutionary, Neuroscience, and Cognitive Foundations of Multimodal Interfaces / Theoretical Foundations of Multimodal Interfaces / Human-Centered Design of Multimodal Interfaces / Multimodal Signal Processing, Fusion, and Architectures / Multimodal Language, Semantic Processing, and Multimodal Integration / Commercialization of Multimodal Interfaces / Emerging Multimodal Research Areas and Applications / Beyond Multimodality: Designing More Expressively Powerful Interfaces / Conclusions and Future Directions / Bibliography / Author Biographies




Human-Robot Interaction Strategies for Walker-Assisted Locomotion


Book Description

This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.







Handbook of Research on User Interface Design and Evaluation for Mobile Technology


Book Description

"This book compiles authoritative research from scholars worldwide, covering the issues surrounding the influx of information technology to the office environment, from choice and effective use of technologies to necessary participants in the virtual workplace"--Provided by publisher.




The Handbook of Multimodal-Multisensor Interfaces, Volume 1


Book Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also takes an in-depth look at the most common multimodal-multisensor combinations, for example, touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process either gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. These handbook chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.




The Handbook of Multimodal-Multisensor Interfaces, Volume 3


Book Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces. This three-volume handbook is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This third volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotic, manufacturing, machine translation, banking, communications, and others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development. For insights into the future, emerging multimodal-multisensor technology trends are highlighted in medicine, robotics, interaction with smart spaces, and similar areas. Finally, this volume discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them. The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces need to be equipped to most effectively advance human performance during the next decade.




Introduction to EEG- and Speech-Based Emotion Recognition


Book Description

Introduction to EEG- and Speech-Based Emotion Recognition examines the background, methods, and utility of using electroencephalograms (EEGs) to detect and recognize different emotions. By incorporating these methods in a brain-computer interface (BCI), we can achieve more natural, efficient communication between humans and computers. This book discusses how emotional states can be recognized in EEG signals, and how this is useful for BCI applications. EEG and speech processing methods are explored, as are the technological basics of how to operate and record EEGs. Finally, the authors include information on EEG-based emotion recognition, classification, and a proposed EEG/speech fusion method for how to most accurately detect emotional states in EEG recordings. The book:

- Provides detailed insight into the science of emotion and the brain signals underlying this phenomenon
- Examines emotions as a multimodal entity, utilizing a bimodal emotion recognition system of EEG and speech data
- Details the implementation of techniques used for acquiring as well as analyzing EEG and speech signals for emotion recognition
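As a rough illustration of the bimodal fusion idea described above, the following Python sketch concatenates per-trial EEG and speech feature vectors after normalizing each modality. The feature counts and the z-scoring choice are assumptions for illustration; the book's proposed fusion method may differ.

    # A minimal feature-level fusion sketch; dimensions and normalization
    # are illustrative assumptions, not the book's exact method.
    import numpy as np

    def fuse_features(eeg_features: np.ndarray, speech_features: np.ndarray) -> np.ndarray:
        """Feature-level fusion: z-score each modality, then concatenate per trial."""
        def zscore(x: np.ndarray) -> np.ndarray:
            return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)  # avoid divide-by-zero
        return np.hstack([zscore(eeg_features), zscore(speech_features)])

    # Usage: 100 trials with 32 EEG band-power features and 13 MFCC speech features each.
    fused = fuse_features(np.random.randn(100, 32), np.random.randn(100, 13))
    assert fused.shape == (100, 45)  # one fused vector per trial, ready for any classifier

Normalizing before concatenation keeps one modality's larger numeric range from dominating whatever classifier is trained on the fused vectors.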




Multimodal Interfaces


Book Description

The growing emphasis on multimodal interface design is fundamentally inspired by the aim to support natural, easy to learn and use, flexible, efficient, and powerfully expressive means of human-computer interaction. Most of the articles in this special issue present work in support of challenging applications such as algebra instruction, data summaries, and interaction with complex spatial displays. A collection of emerging research ideas on next-generation multimodal interfaces, it also addresses multimodal interface design for portable devices to be used in natural field settings. Additionally, it describes implemented systems that make computing accessible to the visually impaired.




Multimodal Usability


Book Description

This preface tells the story of how Multimodal Usability responds to a special challenge. Chapter 1 describes the goals and structure of this book. The idea of describing how to make multimodal computer systems usable arose in the European Network of Excellence SIMILAR – "Taskforce for creating human-machine interfaces SIMILAR to human-human communication", 2003–2007, www.similar.cc. SIMILAR brought together people from multimodal signal processing and usability with the aim of creating enabling technologies for new kinds of multimodal systems and demonstrating results in research prototypes. Most of our colleagues in the network were, in fact, busy extracting features and figuring out how to demonstrate progress in working interactive systems, while claiming not to have too much of a notion of usability in system development and evaluation. It was proposed that the authors support the usability of the many multimodal prototypes underway by researching and presenting a methodology for building usable multimodal systems. We accepted the challenge, first and foremost, no doubt, because the formidable team spirit in SIMILAR could make people accept outrageous things. Second, having worked for nearly two decades on making multimodal systems usable, we were curious – curious at the opportunity to try to understand what happens to traditional usability work, that is, work in human–computer interaction centred around traditional graphical user interfaces (GUIs), when systems become as multimodal and as advanced in other ways as those we build in research today.