Introduction to Symbolic Plan and Goal Recognition


Book Description

Plan recognition, activity recognition, and goal recognition all involve making inferences about other actors based on observations of their interactions with the environment and other agents. This synergistic area of research combines and builds on techniques from a wide range of areas, including user modeling, machine vision, automated planning, intelligent user interfaces, human-computer interaction, autonomous and multi-agent systems, natural language understanding, and machine learning. It plays a crucial role in a wide variety of applications, including assistive technology, software assistants, computer and network security, human-robot collaboration, natural language processing, video games, and many more. This breadth of applications and disciplines has produced a wealth of ideas, models, tools, and results in the recognition literature. However, it has also contributed to fragmentation in the field, with researchers publishing relevant results across a wide spectrum of journals and conferences. This book seeks to address this fragmentation by providing a high-level introduction to and historical overview of the plan and goal recognition literature. It describes the core elements that comprise these recognition problems and offers practical advice for modeling them. In particular, we define and distinguish the different recognition tasks. We formalize the major approaches to modeling these problems using a single motivating example. Finally, we describe a number of state-of-the-art systems and their extensions, future challenges, and some potential applications.
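
To make the recognition task concrete, here is a minimal sketch (not taken from the book) of one widely used formulation, Bayesian goal recognition over a set of candidate goals: each goal is scored by how much the observed actions raise the cheapest cost of achieving it. The goals, cost numbers, and rationality parameter below are all hypothetical.

```python
import math

# A minimal sketch of Bayesian goal recognition in the style of
# "plan recognition as planning": a goal's posterior is scored by how
# much the observations raise the cost of reaching it. All numbers
# below are hypothetical.

def goal_posterior(costs_with_obs, costs_without_obs, priors, beta=1.0):
    """Return P(goal | observations) for each candidate goal.

    costs_with_obs[g]    -- cheapest plan for g that embeds the observations
    costs_without_obs[g] -- cheapest plan for g, ignoring the observations
    priors[g]            -- prior probability of goal g
    beta                 -- rationality parameter (higher = more rational actor)
    """
    scores = {}
    for g in priors:
        delta = costs_with_obs[g] - costs_without_obs[g]  # extra cost imposed by O
        scores[g] = math.exp(-beta * delta) * priors[g]
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

# Hypothetical example: two candidate goals; the observed action prefix is
# consistent with a cheap plan for "kitchen" but forces a detour for "garage".
print(goal_posterior(
    costs_with_obs={"kitchen": 4, "garage": 9},
    costs_without_obs={"kitchen": 4, "garage": 5},
    priors={"kitchen": 0.5, "garage": 0.5},
))  # -> kitchen is far more probable
```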




Explainable Human-AI Interaction


Book Description

From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans, swinging between their augmentation and their replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human-AI interaction is that the AI systems' behavior be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world and take into account the mental model of the human in the loop. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former guides the agent in anticipating and managing the needs, desires, and attention of the humans in the loop; the latter allows it to act in ways that are interpretable to humans (by conforming to their mental models of it) and to be ready to provide customized explanations when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models either to conform to human expectations or to change those expectations through explanatory communication. While the focus of the book is on cooperative scenarios, it also covers how the same mental models can be used for obfuscation and deception. The book also describes several real-world application systems for collaborative decision-making that are based on the framework and techniques developed here. Although primarily driven by the authors' own research in these areas, every chapter provides ample connections to relevant research from the wider literature. The technical topics covered in the book are self-contained and accessible to readers with a basic background in AI.
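
The core device of explanation as model reconciliation can be sketched in a few lines: the agent compares its own model against its estimate of the human's mental model and communicates a smallest set of differences that makes its plan check out on the human's side. The facts, models, and validity test below are hypothetical stand-ins, not the authors' formalism.

```python
from itertools import combinations

# A toy sketch of "model reconciliation" explanations: the agent holds its
# own model and an approximation of the human's mental model of it, and
# searches for a smallest set of model differences to communicate so that
# its plan also looks valid in the human's updated model. The facts and
# the validity check below are hypothetical stand-ins.

agent_model = {"door_unlocked", "has_key", "corridor_clear"}
human_model = {"corridor_clear"}  # the human doesn't know about the key

def plan_valid(model):
    # Hypothetical check: the plan "go through the door" needs both facts.
    return {"door_unlocked", "has_key"} <= model

def minimal_explanation(agent_model, human_model, valid):
    """Smallest set of facts to add to the human's model to make the plan valid."""
    candidates = agent_model - human_model
    for size in range(len(candidates) + 1):
        for subset in combinations(candidates, size):
            if valid(human_model | set(subset)):
                return set(subset)
    return None  # no reconciliation possible from these differences alone

print(minimal_explanation(agent_model, human_model, plan_valid))
# -> {'door_unlocked', 'has_key'}: communicate exactly these two facts
```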




Transfer Learning for Multiagent Reinforcement Learning Systems


Book Description

Learning to solve sequential decision-making tasks is difficult. Humans spend years exploring the environment, essentially at random, before they are able to reason, solve difficult tasks, and collaborate with other humans towards a common goal. Artificially intelligent agents are like humans in this respect. Reinforcement Learning (RL) is a well-known technique for training autonomous agents through interactions with the environment. Unfortunately, the learning process has high sample complexity: inferring an effective policy requires many interactions, especially when multiple agents are acting in the environment simultaneously. However, previous knowledge can be leveraged to accelerate learning and enable solving harder tasks. In the same way humans build skills and reuse them by relating different tasks, RL agents might reuse knowledge from previously solved tasks and from the exchange of knowledge with other agents in the environment. In fact, virtually all of the most challenging tasks currently solved by RL rely on embedded knowledge-reuse techniques such as Imitation Learning, Learning from Demonstration, and Curriculum Learning. This book surveys the literature on knowledge reuse in multiagent RL. The authors define a unifying taxonomy of state-of-the-art solutions for reusing knowledge, providing a comprehensive discussion of recent progress in the area. Readers will find a detailed discussion of the many ways in which knowledge can be reused in multiagent sequential decision-making tasks, as well as the scenarios in which each approach is most efficient. The authors also give their view of the area's current low-hanging fruit, as well as the still-open big questions that could result in breakthrough developments. Finally, the book provides resources for researchers who intend to join this area or leverage its techniques, including a list of conferences, journals, and implementation tools. This book will be useful to a wide audience and will hopefully promote new dialogues across communities and novel developments in the area.
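
As one concrete flavor of inter-agent knowledge reuse from this literature, here is a minimal sketch of budgeted teacher-student action advising: a trained teacher supplies its greedy action in states it judges important, and the student acts on its own otherwise. The Q-tables, importance heuristic, and threshold below are hypothetical placeholders.

```python
import random

# A minimal sketch of budgeted teacher-student action advising, one simple
# knowledge-reuse mechanism in multiagent RL. The Q-tables and the
# importance rule are hypothetical placeholders.

def state_importance(q_teacher, state):
    # A common heuristic: gap between best and worst action values.
    values = q_teacher[state]
    return max(values) - min(values)

def choose_action(q_student, q_teacher, state, budget, threshold=0.5, eps=0.1):
    """Student picks an action, consuming advice budget when the teacher helps."""
    if budget > 0 and state_importance(q_teacher, state) > threshold:
        budget -= 1  # spend one unit of the limited advice budget
        action = max(range(len(q_teacher[state])), key=lambda a: q_teacher[state][a])
    elif random.random() < eps:
        action = random.randrange(len(q_student[state]))  # student explores
    else:
        action = max(range(len(q_student[state])), key=lambda a: q_student[state][a])
    return action, budget

# Hypothetical 2-state, 2-action Q-tables: state 0 matters, state 1 barely does.
q_teacher = {0: [1.0, -1.0], 1: [0.1, 0.2]}
q_student = {0: [0.0, 0.0], 1: [0.0, 0.0]}
action, budget = choose_action(q_student, q_teacher, state=0, budget=5)
print(action, budget)  # -> 0 4: the teacher advises action 0 in the important state
```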




Network Embedding


Book Description

This book introduces network embedding (NE) methods, which map the nodes of a network into low-dimensional vector representations, covering models for networks ranging from simple homogeneous graphs to heterogeneous graphs. Further, the book introduces different applications of NE, such as recommendation and information diffusion prediction. Finally, it summarizes the methods and applications and looks forward to future directions.
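
As a concrete illustration of the basic idea (a sketch, not an excerpt from the book), the DeepWalk-style snippet below embeds nodes by sampling truncated random walks and feeding them to skip-gram. It assumes the gensim library is installed; the toy graph and hyperparameters are made up.

```python
import random
from gensim.models import Word2Vec  # assumes gensim >= 4 is installed

# A minimal DeepWalk-style sketch of network embedding: sample truncated
# random walks over the graph, then treat the walks as "sentences" for
# skip-gram (Word2Vec) to learn a low-dimensional vector per node.

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

def random_walks(graph, walks_per_node=10, walk_len=8):
    walks = []
    for _ in range(walks_per_node):
        for node in graph:
            walk = [node]
            for _ in range(walk_len - 1):
                walk.append(random.choice(graph[walk[-1]]))
            walks.append([str(n) for n in walk])  # Word2Vec expects string tokens
    return walks

model = Word2Vec(random_walks(graph), vector_size=16, window=3,
                 min_count=0, sg=1, epochs=5)
print(model.wv["0"][:4])              # first few dimensions of node 0's vector
print(model.wv.similarity("0", "1"))  # structurally close nodes end up similar
```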




Positive Unlabeled Learning


Book Description

Machine learning and artificial intelligence (AI) are powerful tools that create predictive models, extract information, and help make complex decisions. They do this by examining an enormous quantity of labeled training data to find patterns too complex for human observation. However, in many real-world applications, well-labeled data can be difficult, expensive, or even impossible to obtain. In some cases, such as when identifying rare objects like new archeological sites or secret enemy military facilities in satellite images, acquiring labels could require months of work by trained human observers, at considerable expense. Other times, as when attempting to predict disease infection during a pandemic such as COVID-19, reliable true labels may be nearly impossible to obtain early on due to a lack of testing equipment or other factors. In that scenario, identifying even a small amount of truly negative data may be impossible due to the high false negative rate of available tests. In such problems, it is possible to label a small subset of data as belonging to the class of interest, though it is impractical to manually label all data not of interest. We are left with a small set of positive labeled data and a large set of unknown and unlabeled data. Readers will explore this positive and unlabeled learning (PU learning) problem in depth. The book rigorously defines the PU learning problem, discusses several assumptions that are frequently made about it and their implications, and considers how to evaluate solutions before describing several of the most popular algorithms for solving it. It explores several uses for PU learning, including applications in the biological/medical, business, security, and signal processing domains. The book also provides high-level summaries of several related learning problems, such as one-class classification, anomaly detection, and noisy learning, and their relation to PU learning.
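
One of the most popular PU-learning algorithms the book covers can be sketched briefly: Elkan and Noto's label-frequency adjustment, which holds under the "selected completely at random" (SCAR) assumption. The scikit-learn-based sketch below uses synthetic data that is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A minimal sketch of the Elkan & Noto (2008) PU-learning approach under
# SCAR: train a classifier to separate labeled positives from unlabeled
# examples, estimate the label frequency c = P(labeled | positive) on
# known positives, then rescale scores to recover P(y=1 | x).
# (Properly, c is estimated on a held-out positive set; reusing the
# training positives here is a simplification.)

rng = np.random.default_rng(0)
pos = rng.normal(loc=2.0, size=(500, 2))    # true positives
neg = rng.normal(loc=-2.0, size=(500, 2))   # true negatives
labeled = pos[:100]                         # the small labeled-positive set
unlabeled = np.vstack([pos[100:], neg])     # everything else is unlabeled

X = np.vstack([labeled, unlabeled])
s = np.concatenate([np.ones(len(labeled)), np.zeros(len(unlabeled))])

clf = LogisticRegression().fit(X, s)        # "non-traditional" classifier g(x) = P(s=1|x)
c = clf.predict_proba(labeled)[:, 1].mean() # estimated label frequency
posterior = np.clip(clf.predict_proba(unlabeled)[:, 1] / c, 0.0, 1.0)

print(f"estimated label frequency c = {c:.2f}")
print("recovered positives among unlabeled:",
      int((posterior > 0.5).sum()), "of", len(pos) - 100)
```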




Applying Reinforcement Learning on Real-World Data with Practical Examples in Python


Book Description

Reinforcement learning is a powerful tool in artificial intelligence in which virtual or physical agents learn to optimize their decision making to achieve long-term goals. In some cases, this machine learning approach can save programmers time, outperform existing controllers, reach super-human performance, and continually adapt to changing conditions. This book argues that these successes show reinforcement learning can be adopted successfully in many different situations, including robot control, stock trading, supply chain optimization, and plant control. However, reinforcement learning has traditionally been limited to applications in virtual environments or simulations in which the setup is already provided and experimentation can be repeated for an almost limitless number of attempts risk-free. In many real-life tasks, applying reinforcement learning is not so simple, because (1) data is not in the form reinforcement learning requires, (2) data is scarce, and (3) automation has limitations in the real world. Therefore, this book is written to help academics, domain specialists, and data enthusiasts alike understand the basic principles of applying reinforcement learning to real-world problems. This is achieved by focusing on the process of taking practical examples and modeling standard data into the form required to apply basic agents. To help readers gain a deep and grounded understanding of the approaches, the book works through hand-calculated examples in full and then shows how the same results can be achieved in a more automated manner with code. For decision makers who are interested in reinforcement learning as a solution but are not technically proficient, we include simple, non-technical examples in the introduction and case studies sections. These provide context for what reinforcement learning offers, as well as the challenges and risks associated with applying it in practice. Specifically, the book illustrates the differences between reinforcement learning and other machine learning approaches, as well as how well-known companies have found success applying the approach to their problems.
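
In that spirit, here is a minimal tabular Q-learning agent of the "basic agent" kind the book applies. The corridor environment and hyperparameters are illustrative inventions, not an example from the book.

```python
import random

# A minimal tabular Q-learning sketch: a 1-D corridor of 6 cells where the
# agent starts at cell 0 and is rewarded for reaching cell 5. The environment
# and hyperparameters are toy stand-ins.

N_STATES, ACTIONS = 6, [-1, +1]          # move left / move right
alpha, gamma, eps = 0.1, 0.9, 0.1        # step size, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        next_state, reward, done = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Q-learning update: move Q(s,a) toward the bootstrapped target.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# Greedy policy for the non-terminal cells: always +1 (move right).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```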




Plan, Activity, and Intent Recognition


Book Description

Plan recognition, activity recognition, and intent recognition combine and unify techniques from user modeling, machine vision, intelligent user interfaces, human-computer interaction, autonomous and multi-agent systems, natural language understanding, and machine learning. Plan, Activity, and Intent Recognition explains the crucial role of these techniques in a wide variety of applications, including:

- personal agent assistants
- computer and network security
- opponent modeling in games and simulation systems
- coordination in robots and software agents
- web e-commerce and collaborative filtering
- dialog modeling
- video surveillance
- smart homes

In this book, follow the history of this research area and witness exciting new developments in the field made possible by improved sensors, increased computational power, and new application areas. The book:

- Combines basic theory on algorithms for plan/activity recognition with results from recent workshops and seminars
- Explains how to interpret and recognize plans and activities from sensor data
- Provides valuable background knowledge and assembles key concepts into one guide for researchers or students studying these disciplines
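
As one concrete example of recognizing activities from sensor data (a sketch of a classic technique, not code from the book), the snippet below decodes a most-likely activity sequence from discretized sensor readings using a hidden Markov model and the Viterbi algorithm. Every state, observation, and probability in it is hypothetical.

```python
# A toy HMM for activity recognition: hidden states are activities,
# observations are discretized sensor readings. All probabilities are
# hypothetical.

states = ["sleeping", "cooking"]
start = {"sleeping": 0.6, "cooking": 0.4}
trans = {"sleeping": {"sleeping": 0.8, "cooking": 0.2},
         "cooking": {"sleeping": 0.3, "cooking": 0.7}}
emit = {"sleeping": {"quiet": 0.9, "stove_on": 0.1},
        "cooking": {"quiet": 0.2, "stove_on": 0.8}}

def viterbi(obs):
    """Most likely activity sequence given the observed sensor readings."""
    # path[s] = (best activity sequence ending in s, its probability)
    path = {s: ([s], start[s] * emit[s][obs[0]]) for s in states}
    for o in obs[1:]:
        path = {s: max(((p + [s], prob * trans[prev][s] * emit[s][o])
                        for prev, (p, prob) in path.items()),
                       key=lambda t: t[1])
                for s in states}
    return max(path.values(), key=lambda t: t[1])

seq, prob = viterbi(["quiet", "stove_on", "stove_on"])
print(seq, f"p={prob:.4f}")  # -> ['sleeping', 'cooking', 'cooking']
```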







Reinforcement Learning, second edition


Book Description

The significantly expanded and updated new edition of a widely used text on reinforcement learning. Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded, presenting new topics and updating coverage of others. Like the first edition, it focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case, for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
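
To give a flavor of one of the newly added algorithms, here is a small sketch of UCB action selection for a k-armed bandit, in the spirit of the book's bandit chapter. The arm payoffs and exploration constant are made-up illustrations.

```python
import math, random

# UCB action selection for a k-armed bandit: pick the action with the
# highest value estimate plus an exploration bonus that shrinks as the
# action is tried more often. The bandit's true means are invented.

true_means = [0.2, 0.5, 0.8]   # hypothetical arm payoffs
Q = [0.0] * len(true_means)    # value estimates
N = [0] * len(true_means)      # pull counts
c = 2.0                        # exploration strength

for t in range(1, 1001):
    # UCB rule: untried arms first, then argmax of Q[a] + c*sqrt(ln t / N[a]).
    if 0 in N:
        a = N.index(0)
    else:
        a = max(range(len(Q)), key=lambda a: Q[a] + c * math.sqrt(math.log(t) / N[a]))
    reward = random.gauss(true_means[a], 1.0)  # noisy payoff
    N[a] += 1
    Q[a] += (reward - Q[a]) / N[a]             # incremental sample average

print([round(q, 2) for q in Q], N)             # arm 2 should dominate the pulls
```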




A Concise Introduction to Models and Methods for Automated Planning


Book Description

Planning is the model-based approach to autonomous behavior, where the agent's behavior is derived automatically from a model of its actions, sensors, and goals. The main challenges in planning are computational, as all models, whether featuring uncertainty and feedback or not, are intractable in the worst case when represented in compact form. In this book, we look at a variety of models used in AI planning and at the methods that have been developed for solving them. The goal is to provide a modern and coherent view of planning that is precise, concise, and mostly self-contained, without being shallow. For this, we make no attempt at covering the whole variety of planning approaches, ideas, and applications, and focus on the essentials. The target audience of the book is students and researchers interested in autonomous behavior and planning from an AI, engineering, or cognitive science perspective. Table of Contents: Preface / Planning and Autonomous Behavior / Classical Planning: Full Information and Deterministic Actions / Classical Planning: Variations and Extensions / Beyond Classical Planning: Transformations / Planning with Sensing: Logical Models / MDP Planning: Stochastic Actions and Full Feedback / POMDP Planning: Stochastic Actions and Partial Feedback / Discussion / Bibliography / Author's Biography
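
To ground the classical model the book begins with, here is a tiny sketch under simplified assumptions (not the book's code): states are sets of atoms, each action has precondition, add, and delete lists, and a plan is a shortest action sequence from the initial state to a state satisfying the goal. The domain is an invented illustration.

```python
from collections import deque

# A tiny classical planner: states are sets of atoms; each action maps to
# (preconditions, add list, delete list); breadth-first search returns a
# shortest plan. The domain below is a hypothetical illustration.

actions = {
    "pick_up_key": ({"at_door", "key_on_floor"}, {"has_key"},   {"key_on_floor"}),
    "unlock":      ({"at_door", "has_key"},      {"door_open"}, set()),
    "enter":       ({"at_door", "door_open"},    {"inside"},    {"at_door"}),
}

def bfs_plan(init, goal):
    """Return a shortest action sequence from init to a state satisfying goal."""
    frontier, seen = deque([(frozenset(init), [])]), {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, add, dele) in actions.items():
            if pre <= state:  # action is applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None  # goal unreachable

print(bfs_plan({"at_door", "key_on_floor"}, {"inside"}))
# -> ['pick_up_key', 'unlock', 'enter']
```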