Dynamic Programming and Bayesian Inference


Book Description

Dynamic programming and Bayesian inference have both been developed intensively and extensively in recent years. Because of these developments, interest in dynamic programming and Bayesian inference and their applications has greatly increased at all mathematical levels. The purpose of this book is to present a selection of applications of Bayesian inference and dynamic programming.







Robustness


Book Description

The standard theory of decision making under uncertainty advises the decision maker to form a statistical model linking outcomes to decisions and then to choose the optimal distribution of outcomes. This assumes that the decision maker trusts the model completely. But what should a decision maker do if the model cannot be trusted? Lars Hansen and Thomas Sargent, two leading macroeconomists, push the field forward as they set about answering this question. They adapt robust control techniques and apply them to economics. By using this theory to let decision makers acknowledge misspecification in economic modeling, the authors develop applications to a variety of problems in dynamic macroeconomics. Technical, rigorous, and self-contained, this book will be useful for macroeconomists who seek to improve the robustness of decision-making processes.
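As a rough illustration of the robustness idea only (not Hansen and Sargent's formal framework), the sketch below contrasts a rule that trusts a single benchmark model with a max-min rule that guards against a small set of alternative models; the actions, payoffs, and candidate models are invented for illustration.

    # Toy illustration of robust (max-min) decision making, not the authors'
    # formal framework. Payoffs and candidate models are invented.
    actions = ["safe", "risky"]
    states = ["good", "bad"]

    # Payoff of each action in each state of the world.
    payoff = {
        ("safe", "good"): 1.0, ("safe", "bad"): 1.0,
        ("risky", "good"): 3.0, ("risky", "bad"): -2.0,
    }

    # A benchmark model and two nearby alternatives the decision maker
    # cannot rule out (probabilities of "good" vs "bad").
    models = {
        "benchmark": {"good": 0.7, "bad": 0.3},
        "pessimistic": {"good": 0.5, "bad": 0.5},
        "optimistic": {"good": 0.8, "bad": 0.2},
    }

    def expected_payoff(action, model):
        return sum(model[s] * payoff[(action, s)] for s in states)

    # Standard rule: trust the benchmark model completely.
    best_trusting = max(actions, key=lambda a: expected_payoff(a, models["benchmark"]))

    # Robust rule: maximize the worst-case expected payoff over all candidate models.
    best_robust = max(actions, key=lambda a: min(expected_payoff(a, m) for m in models.values()))

    print("trusting the benchmark:", best_trusting)      # -> risky
    print("robust to misspecification:", best_robust)    # -> safe

The point of the toy example is only that acknowledging possible misspecification can reverse the recommended decision.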




Algorithms for Reinforcement Learning


Book Description

Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long-term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms' merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research and control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, note a large number of state-of-the-art algorithms, and then discuss their theoretical properties and limitations. Table of Contents: Markov Decision Processes / Value Prediction Problems / Control / For Further Exploration
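To make the dynamic-programming connection concrete, here is a minimal value-iteration sketch for a toy Markov decision process; the two-state MDP, its rewards, and the discount factor are invented for illustration and are not taken from the book.

    # Minimal value iteration on a made-up two-state, two-action MDP.
    # Illustrates the dynamic-programming backbone behind many RL algorithms.
    states = [0, 1]
    actions = ["stay", "move"]
    gamma = 0.9  # discount factor

    # transition[(s, a)] = list of (next_state, probability); reward[(s, a)] = immediate reward
    transition = {
        (0, "stay"): [(0, 1.0)],
        (0, "move"): [(1, 0.8), (0, 0.2)],
        (1, "stay"): [(1, 1.0)],
        (1, "move"): [(0, 0.8), (1, 0.2)],
    }
    reward = {(0, "stay"): 0.0, (0, "move"): 0.0, (1, "stay"): 1.0, (1, "move"): 0.0}

    V = {s: 0.0 for s in states}
    for _ in range(200):  # repeated Bellman optimality backups until (approximate) convergence
        V = {
            s: max(
                reward[(s, a)] + gamma * sum(p * V[s2] for s2, p in transition[(s, a)])
                for a in actions
            )
            for s in states
        }

    # Greedy policy with respect to the converged values: move toward state 1, then stay.
    policy = {
        s: max(
            actions,
            key=lambda a: reward[(s, a)] + gamma * sum(p * V[s2] for s2, p in transition[(s, a)]),
        )
        for s in states
    }
    print(V)
    print(policy)

Reinforcement-learning algorithms in the book's sense perform backups like these from sampled experience rather than from a known transition model.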




Decision Making Under Uncertainty


Book Description

An introduction to decision making under uncertainty from a computational perspective, covering both theory and applications ranging from speech recognition to airborne collision avoidance. Many important problems involve decision making under uncertainty—that is, choosing actions based on often imperfect observations, with unknown outcomes. Designers of automated decision support systems must take into account the various sources of uncertainty while balancing the multiple objectives of the system. This book provides an introduction to the challenges of decision making under uncertainty from a computational perspective. It presents both the theory behind decision making models and algorithms and a collection of example applications that range from speech recognition to aircraft collision avoidance. Focusing on two methods for designing decision agents, planning and reinforcement learning, the book covers probabilistic models, introducing Bayesian networks as a graphical model that captures probabilistic relationships between variables; utility theory as a framework for understanding optimal decision making under uncertainty; Markov decision processes as a method for modeling sequential problems; model uncertainty; state uncertainty; and cooperative decision making involving multiple interacting agents. A series of applications shows how the theoretical concepts can be applied to systems for attribute-based person search, speech applications, collision avoidance, and unmanned aircraft persistent surveillance. Decision Making Under Uncertainty unifies research from different communities using consistent notation, and is accessible to students and researchers across engineering disciplines who have some prior exposure to probability theory and calculus. It can be used as a text for advanced undergraduate and graduate students in fields including computer science, aerospace and electrical engineering, and management science. It will also be a valuable professional reference for researchers in a variety of disciplines.
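As a small illustration of the utility-theory viewpoint described above (a sketch with invented numbers, not an example from the book), the snippet below picks the action with the highest expected utility given an uncertain belief about the world:

    # Toy maximum-expected-utility decision: act on an imperfect observation.
    # The belief and the utilities below are invented for illustration.
    def expected_utility(action, p_rain, utility):
        return p_rain * utility[(action, "rain")] + (1 - p_rain) * utility[(action, "dry")]

    utility = {
        ("umbrella", "rain"): 0.0,  ("umbrella", "dry"): -1.0,    # carrying it is a mild nuisance
        ("no umbrella", "rain"): -10.0, ("no umbrella", "dry"): 0.0,
    }

    p_rain = 0.3  # belief derived from an imperfect observation, e.g. a forecast
    best = max(["umbrella", "no umbrella"], key=lambda a: expected_utility(a, p_rain, utility))
    print(best)  # "umbrella": the small carrying cost beats the risk of getting soaked

The book's planning and reinforcement-learning methods extend this one-shot calculation to sequential problems with state and model uncertainty.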




Bayesian Networks


Book Description

Bayesian networks, the result of the convergence of artificial intelligence with statistics, are growing in popularity. Their versatility and modelling power are now employed across a variety of fields for the purposes of analysis, simulation, prediction and diagnosis. This book provides a general introduction to Bayesian networks, defining and illustrating the basic concepts with pedagogical examples and twenty real-life case studies drawn from a range of fields including medicine, computing, natural sciences and engineering. Designed to help analysts, engineers, scientists and professionals taking part in complex decision processes to successfully implement Bayesian networks, this book equips readers with proven methods to generate, calibrate, evaluate and validate Bayesian networks. The book:

- Provides the tools to overcome common practical challenges such as the treatment of missing input data, interaction with experts and decision makers, and determination of the optimal granularity and size of the model.
- Highlights the strengths of Bayesian networks whilst also presenting a discussion of their limitations.
- Compares Bayesian networks with other modelling techniques such as neural networks, fuzzy logic and fault trees.
- Describes, for ease of comparison, the main features of the major Bayesian network software packages (Netica, Hugin, Elvira and Discoverer) from the point of view of the user.
- Offers a historical perspective on the subject and analyses future directions for research.

Written by leading experts with practical experience of applying Bayesian networks in finance, banking, medicine, robotics, civil engineering, geology, geography, genetics, forensic science, ecology, and industry, the book has much to offer both practitioners and researchers involved in statistical analysis or modelling in any of these fields.
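For readers new to the formalism, here is a minimal sketch of inference in a three-node Bayesian network, done by brute-force enumeration in Python rather than with any of the packages named above; the network and its conditional probability tables are invented:

    # A three-node Bayesian network (Rain -> Sprinkler, Rain & Sprinkler -> WetGrass),
    # with invented conditional probability tables, queried by exhaustive enumeration.
    from itertools import product

    p_rain = {True: 0.2, False: 0.8}
    p_sprinkler_given_rain = {True: {True: 0.01, False: 0.99}, False: {True: 0.4, False: 0.6}}
    p_wet_given = {  # keyed by (sprinkler, rain)
        (True, True): 0.99, (True, False): 0.9,
        (False, True): 0.8, (False, False): 0.0,
    }

    def joint(rain, sprinkler, wet):
        """P(rain, sprinkler, wet), factorised along the network structure."""
        p_wet = p_wet_given[(sprinkler, rain)]
        return (p_rain[rain]
                * p_sprinkler_given_rain[rain][sprinkler]
                * (p_wet if wet else 1 - p_wet))

    # Diagnosis: probability that it rained, given that the grass is observed to be wet.
    numerator = sum(joint(True, s, True) for s in (True, False))
    evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print("P(Rain | WetGrass) =", numerator / evidence)

Real applications replace this brute-force sum with the efficient inference algorithms implemented in the software packages the book compares.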




Bayesian Theory


Book Description

This highly acclaimed text, now available in paperback, provides a thorough account of key concepts and theoretical results, with particular emphasis on viewing statistical inference as a special case of decision theory. Information-theoretic concepts play a central role in the development of the theory, which provides, in particular, a detailed discussion of the problem of specification of so-called 'prior ignorance'. The work is written from the authors' committed Bayesian perspective, but an overview of non-Bayesian theories is also provided, and each chapter contains a wide-ranging critical re-examination of controversial issues. The level of mathematics used is such that most material is accessible to readers with knowledge of advanced calculus. In particular, no knowledge of abstract measure theory is assumed, and the emphasis throughout is on statistical concepts rather than rigorous mathematics. The book will be an ideal source for all students and researchers in statistics, mathematics, decision analysis, economic and business studies, and all branches of science and engineering, who wish to further their understanding of Bayesian statistics.




Statistical Rethinking


Book Description

Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers’ knowledge of and confidence in statistical modeling. Reflecting the need for even minor programming in today’s model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. Web resource: the book is accompanied by an R package (rethinking) that is available on the author’s website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
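As a taste of the kind of step-by-step calculation the book favours, here is a grid approximation of a simple Bayesian posterior; it is written in Python for self-containedness (the book itself works in R with its rethinking package), and the data are invented:

    # Grid approximation of the posterior for a binomial proportion,
    # done step by step rather than with a packaged fitting function.
    import numpy as np

    k, n = 6, 9                       # toy data: 6 "successes" in 9 trials
    grid = np.linspace(0, 1, 1000)    # candidate values of the proportion p
    prior = np.ones_like(grid)        # flat prior over p
    likelihood = grid**k * (1 - grid)**(n - k)   # binomial likelihood (up to a constant)
    unnormalized = prior * likelihood
    posterior = unnormalized / unnormalized.sum()

    print("posterior mean:", (grid * posterior).sum())   # about 0.64
    print("posterior mode:", grid[posterior.argmax()])   # about 0.67

Swapping in a different prior or likelihood and re-running the same few lines is exactly the kind of hands-on experimentation the book encourages.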




Foundations of Probabilistic Programming


Book Description

This book provides an overview of the theoretical underpinnings of modern probabilistic programming and presents applications in areas such as machine learning, security, and approximate computing. Comprehensive survey chapters make the material accessible to graduate students and non-experts. This title is also available as Open Access on Cambridge Core.
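For readers unfamiliar with the term, the sketch below shows the flavour of a probabilistic program: a generative model written as ordinary code, with inference by naive rejection sampling. The example is invented and is not drawn from the book:

    # A minimal "probabilistic program": a generative model expressed as ordinary
    # code, conditioned on an observation via rejection sampling. Invented example.
    import random

    def model():
        """Flip two fair coins; observe that at least one came up heads."""
        a = random.random() < 0.5
        b = random.random() < 0.5
        if not (a or b):
            return None          # reject runs that contradict the observation
        return a and b           # query: did both coins come up heads?

    samples = [model() for _ in range(100_000)]
    accepted = [s for s in samples if s is not None]
    print(sum(accepted) / len(accepted))  # approx 1/3 = P(both heads | at least one heads)

The theory surveyed in the book is about giving programs like this a precise semantics and justifying the inference algorithms that run them far more efficiently than rejection sampling.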