Intent Recognition for Human-Machine Interactions


Book Description

Natural interaction is one of the hottest research issues in human-computer interaction. At present, there is an urgent need for intelligent devices (service robots, virtual humans, etc.) to be able to understand intentions in an interactive dialogue. Focusing on human-computer dialogue understanding based on deep learning methods, the book systematically introduces readers to intention recognition, unknown intention detection, and new intention discovery in human-computer dialogue. This book is the first to present interactive dialogue intention analysis in the context of natural interaction. In addition to helping readers master the key technologies and concepts of human-machine dialogue intention analysis and catch up on the latest advances, it includes valuable references for further research.




Plan, Activity, and Intent Recognition


Book Description

Plan recognition, activity recognition, and intent recognition combine and unify techniques from user modeling, machine vision, intelligent user interfaces, human/computer interaction, autonomous and multi-agent systems, natural language understanding, and machine learning. Plan, Activity, and Intent Recognition explains the crucial role of these techniques in a wide variety of applications, including personal agent assistants; computer and network security; opponent modeling in games and simulation systems; coordination in robots and software agents; web e-commerce and collaborative filtering; dialog modeling; video surveillance; and smart homes. In this book, follow the history of this research area and witness exciting new developments in the field made possible by improved sensors, increased computational power, and new application areas. The book combines basic theory on algorithms for plan/activity recognition with results from recent workshops and seminars, explains how to interpret and recognize plans and activities from sensor data, and provides valuable background knowledge, assembling key concepts into one guide for researchers and students studying these disciplines.




Virtual, Augmented and Mixed Reality: Designing and Developing Augmented and Virtual Environments


Book Description

The two-volume set LNCS 8525-8526 constitutes the refereed proceedings of the 6th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2014, held as part of the 16th International Conference on Human-Computer Interaction, HCI 2014, in Heraklion, Crete, Greece, in June 2014, jointly with 13 other thematically similar conferences. The total of 1476 papers and 220 posters presented at the HCII 2014 conferences were carefully reviewed and selected from 4766 submissions. These papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. The papers thoroughly cover the entire field of human-computer interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas. The total of 82 contributions included in the VAMR proceedings were carefully reviewed and selected for inclusion in this two-volume set. The 39 papers included in this volume are organized in the following topical sections: interaction devices, displays and techniques in VAMR; designing virtual and augmented environments; avatars and virtual characters; developing virtual and augmented environments.




Spoken Language Understanding


Book Description

Spoken language understanding (SLU) is an emerging field at the intersection of speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using differing tasks and approaches to better understand and utilize such communications. This book covers the state-of-the-art approaches for the most popular SLU tasks, with chapters written by well-known researchers in the respective fields. Key features: it presents a fully integrated view of the two distinct disciplines of speech processing and language processing for SLU tasks; it defines what is possible today for SLU as an enabling technology for enterprise (e.g., customer care centers or company meetings) and consumer (e.g., entertainment, mobile, car, robot, or smart environments) applications and outlines the key research areas; and it provides a unique source of distilled information on methods for computer modeling of semantic information in human/machine and human/human conversations. This book can be successfully used for graduate courses in electronics engineering, computer science or computational linguistics. Moreover, technologists interested in processing spoken communications will find it a useful source of collated information on the topic, drawn from the two distinct disciplines of speech processing and language processing under the new area of SLU.




Autonomous Robot Vehicles


Book Description

Autonomous robot vehicles are vehicles capable of intelligent motion and action without requiring either a guide or teleoperator control. The recent surge of interest in this subject will grow even further as their potential applications increase. Autonomous vehicles are currently being studied for use as reconnaissance/exploratory vehicles for planetary exploration and for undersea, land, and air environments, remote repair and maintenance, material handling systems for offices and factories, and even intelligent wheelchairs for the disabled. This reference is the first to deal directly with the unique and fundamental problems and recent progress associated with autonomous vehicles. The editors have assembled and combined significant material from a multitude of sources and, in effect, now conveniently provide a coherent organization to a previously scattered and ill-defined field.




Human-robot Interaction


Book Description

This book presents a unified treatment of HRI-related issues, identifies key themes, and discusses challenge problems that are likely to shape the field in the near future. The survey includes research results from a cross section of universities, government efforts, industry labs, and countries that contribute to HRI.




Advanced Driver Intention Inference


Book Description

Advanced Driver Intention Inference: Theory and Design describes one of the most important functions for future ADAS, namely, driver intention inference. The book contains state-of-the-art knowledge on the construction of driver intention inference systems, providing a better understanding of how the human driver's intention mechanism can contribute to a more naturalistic on-board decision system for automated vehicles. It features examples of using machine learning/deep learning to build industry products, depicts future trends for driver behavior detection and driver intention inference, and discusses traffic context perception techniques, such as Lidar and GPS, that support the prediction of driver intentions.




Eye Gaze-based Approaches to Recognize Human Intent for Shared Autonomy Control of Robot Manipulators


Book Description

Robots capable of robust, real-time recognition of human intent during manipulation tasks could be used to enhance human-robot collaboration for innumerable applications. Eye gaze-based control interfaces offer a non-invasive way to infer intent and reduce the cognitive burden on operators of complex robots. Eye gaze is traditionally used for "gaze triggering" (GT), in which staring at an object, or sequence of objects, triggers pre-programmed robotic movements. Our long-term objective is to leverage eye gaze as an intuitive way to infer human intent, advance action recognition for shared autonomy control, and enable seamless human-robot collaboration not yet possible with state-of-the-art gaze-based methods. In Study #1, we identified features from 3D gaze behavior for use by machine learning classifiers for action recognition. We investigated gaze behavior and gaze-object interactions as participants performed the bimanual activity of preparing a powdered drink. We generated 3D gaze saliency maps and used characteristic gaze object sequences to demonstrate an action recognition algorithm. In Study #2, we introduced a classifier for recognizing action primitives, which we defined as triplets comprising a verb, "target object," and "hand object." Using novel 3D gaze-related features, a recurrent neural network was trained to recognize a verb and target object. The gaze object angle and its rate of change enabled accurate recognition and a reduction in the observational latency of the classifier. Using a non-specific approach to indexing objects, we demonstrated modest generalizability of the classifier across activities. In Study #3, we introduced a neural network-based "action prediction" (AP) mode into a shared autonomy framework capable of 3D gaze reconstruction, real-time intent recognition, object localization, obstacle avoidance, and dynamic trajectory planning. Upon extracting gaze-related features, the AP model recognized, and often predicted, the operator's intended action primitives. The AP control mode, often preferred over a state-of-the-art GT mode, enabled more seamless human-robot collaboration. In summary, we developed machine learning-based action recognition methods using novel 3D gaze-related features to enhance the shared autonomy control of robot manipulators. Our methods can serve as a foundation for further enhancement with complementary sensory feedback such as computer vision and tactile sensing.
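To make the Study #2 idea more concrete, the following is a minimal, hypothetical sketch (not the dissertation's actual code) of a recurrent classifier that maps a sequence of 3D gaze-related features, such as the gaze object angle and its rate of change, to an action primitive expressed as a (verb, target object) pair. It uses PyTorch, and the class name, feature dimensions, and vocabulary sizes are illustrative assumptions.

    # Hypothetical sketch: recurrent action-primitive classifier over gaze features.
    # Feature layout, sizes, and names are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class GazeActionPrimitiveClassifier(nn.Module):
        def __init__(self, n_features=4, hidden_size=64, n_verbs=8, n_objects=12):
            super().__init__()
            # GRU consumes one gaze feature vector per time step: (batch, time, features)
            self.rnn = nn.GRU(n_features, hidden_size, batch_first=True)
            # Two output heads: one for the verb, one for the target object
            self.verb_head = nn.Linear(hidden_size, n_verbs)
            self.object_head = nn.Linear(hidden_size, n_objects)

        def forward(self, gaze_seq):
            # gaze_seq: (batch, time, n_features), e.g. [gaze-object angle, its rate of change, ...]
            _, h = self.rnn(gaze_seq)   # last hidden state: (1, batch, hidden_size)
            h = h.squeeze(0)
            return self.verb_head(h), self.object_head(h)

    # Example usage with stand-in data: a batch of 2 sequences of 30 gaze samples each.
    model = GazeActionPrimitiveClassifier()
    features = torch.randn(2, 30, 4)
    verb_logits, object_logits = model(features)
    print(verb_logits.shape, object_logits.shape)  # torch.Size([2, 8]) torch.Size([2, 12])

In practice, such a model would be trained on labeled gaze sequences, and classifying from the evolving hidden state at each time step (rather than only at the end of the sequence) is one way to reduce observational latency, in the spirit of the early recognition described above.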




Research Methods in Human-Computer Interaction


Book Description

Research Methods in Human-Computer Interaction is a comprehensive guide to performing research in HCI and is essential reading for those using both quantitative and qualitative methods. Since the first edition was published in 2009, the book has been adopted for use at leading universities around the world, including Harvard University, Carnegie Mellon University, the University of Washington, the University of Toronto, HiOA (Norway), KTH (Sweden), Tel Aviv University (Israel), and many others. Chapters cover a broad range of topics relevant to the collection and analysis of HCI data, going beyond experimental design and surveys to cover ethnography, diaries, physiological measurements, case studies, crowdsourcing, and other essential elements in the well-informed HCI researcher's toolkit. Continual technological evolution has led to an explosion of new techniques and a need for this updated 2nd edition, which reflects the most recent research in the field and newer trends in research methodology. This revision of Research Methods in HCI contains updates throughout, including more detail on statistical tests, coding qualitative data, and data collection via mobile devices and sensors. Other new material covers performing research with children, older adults, and people with cognitive impairments. The book is a comprehensive and updated guide to the latest research methodologies and approaches, now available in EPUB3 format (choose any of the ePub or Mobi formats after purchase of the eBook). It offers expanded discussions of online datasets, crowdsourcing, statistical tests, coding qualitative data, laws and regulations relating to the use of human participants, and data collection via mobile devices and sensors. It also includes new material on performing research with children, older adults, and people with cognitive impairments, two new case studies from Google and Yahoo!, and techniques for expanding the influence of your research to reach non-researcher audiences, including software developers and policymakers.