Selected Topics on Continuous-time Controlled Markov Chains and Markov Games


Book Description

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective functions. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results herein: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it could also be of interest to undergraduate and beginning graduate students, since the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.
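To give a concrete flavor of the discounted-reward criterion mentioned above, the following is a minimal sketch of value iteration for a small discrete-time Markov decision process. The two-state, two-action model (the matrices P, rewards r, and discount factor gamma) is invented purely for illustration and is not taken from the book; the continuous-time theory the book treats is considerably more general.

```python
import numpy as np

# Hypothetical two-state, two-action MDP (illustrative values only).
# P[a][s, s'] = transition probability from s to s' under action a.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]   # action 1
# r[a][s] = one-step reward for taking action a in state s.
r = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality operator:
    # V(s) = max_a [ r(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ]
    Q = np.array([r[a] + gamma * P[a] @ V for a in range(2)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy (optimal) policy for the converged values
print(V, policy)
```

Because the Bellman operator is a contraction with modulus gamma, the iteration converges geometrically to the unique optimal value function.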




Optimization, Control, and Applications of Stochastic Systems


Book Description

This volume provides a general overview of discrete- and continuous-time Markov control processes and stochastic games, along with a look at the range of applications of stochastic control and some of its recent theoretical developments. These topics include various aspects of dynamic programming, approximation algorithms, and infinite-dimensional linear programming. In all, the work comprises 18 carefully selected papers written by experts in their respective fields. Optimization, Control, and Applications of Stochastic Systems will be a valuable resource for all practitioners, researchers, and professionals in applied mathematics and operations research who work in the areas of stochastic control, mathematical finance, queueing theory, and inventory systems. It may also serve as a supplemental text for graduate courses in optimal control and dynamic games.




An Introduction to Optimal Control Theory


Book Description

This book introduces optimal control problems for large families of deterministic and stochastic systems with discrete or continuous time parameter. These families include most of the systems studied in many disciplines, including Economics, Engineering, Operations Research, and Management Science, among many others. The main objective is to give a concise, systematic, and reasonably self-contained presentation of some key topics in optimal control theory. To this end, most of the analyses are based on the dynamic programming (DP) technique. This technique is applicable to almost all control problems that appear in theory and applications, including, for instance, finite and infinite horizon control problems in which the underlying dynamic system follows either a deterministic or stochastic difference or differential equation. In the infinite horizon case, the book also uses DP to study undiscounted problems, such as the ergodic or long-run average cost. After a general introduction to control problems, the book is divided into four parts, each treating a different class of dynamical systems: control of discrete-time deterministic systems, discrete-time stochastic systems, ordinary differential equations, and finally general continuous-time Markov control processes with applications to stochastic differential equations. The first and second parts should be accessible to undergraduate students with some knowledge of elementary calculus, linear algebra, and some concepts from probability theory (random variables, expectations, and so forth), whereas the third and fourth parts would be appropriate for advanced undergraduates or graduate students who have a working knowledge of mathematical analysis (derivatives, integrals, ...) and stochastic processes.




Continuous-Time Markov Decision Processes


Book Description

This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and to the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained. A separate chapter is devoted to Markov pure jump processes, and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics and engineering will find this monograph interesting, useful and valuable.




Controlled Markov Processes and Viscosity Solutions


Book Description

This book is an introduction to optimal stochastic control for continuous-time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization, in pricing derivatives in incomplete markets, and in two-controller, zero-sum differential games.




Markov Processes for Stochastic Modeling


Book Description

Markov processes are processes that have limited memory. In particular, their dependence on the past is only through the previous state. They are used to model the behavior of many systems including communications systems, transportation networks, image segmentation and analysis, biological systems and DNA sequence analysis, random atomic motion and diffusion in physics, social mobility, population studies, epidemiology, animal and insect migration, queueing systems, resource management, dams, financial engineering, actuarial science, and decision systems. Covering a wide range of areas of application of Markov processes, this second edition is revised to highlight the most important aspects as well as the most recent trends and applications of Markov processes. The author spent over 16 years in industry before returning to academia, and he has applied many of the principles covered in this book in multiple research projects. Therefore, this is an applications-oriented book that also includes enough theory to provide a solid ground in the subject for the reader.
- Presents both the theory and applications of the different aspects of Markov processes
- Includes numerous solved examples as well as detailed diagrams that make it easier to understand the principle being presented
- Discusses different applications of hidden Markov models, such as DNA sequence analysis and speech analysis




Continuous-Time Markov Decision Processes


Book Description

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.




Essentials of Stochastic Processes


Book Description

Building upon the previous editions, this textbook is a first course in stochastic processes taken by undergraduate and graduate students (MS and PhD students from math, statistics, economics, computer science, engineering, and finance departments) who have had a course in probability theory. It covers Markov chains in discrete and continuous time, Poisson processes, renewal processes, martingales, and option pricing. One can only learn a subject by seeing it in action, so there are a large number of examples and more than 300 carefully chosen exercises to deepen the reader's understanding. Drawing on teaching experience and student feedback, the author has added many new examples and problems with solutions that use the TI-83 calculator to eliminate the tedious details of solving linear equations by hand, and the collection of exercises is much improved, with many more biological examples. Material included in previous editions that is too advanced for this first course in stochastic processes has been eliminated, while the treatment of other topics useful for applications has been expanded. In addition, the ordering of topics has been improved; for example, the difficult subject of martingales is delayed until its usefulness can be seen in the treatment of mathematical finance.