Stochastic Optimal Control in Infinite Dimension


Book Description

Providing an introduction to stochastic optimal control in infinite dimension, this book gives a complete account of the theory of second-order HJB equations in infinite-dimensional Hilbert spaces, focusing on its applicability to associated stochastic optimal control problems. It features a general introduction to optimal stochastic control, including basic results (e.g. the dynamic programming principle) with proofs, and provides examples of applications. A complete and up-to-date exposition of the existing theory of viscosity solutions and regular solutions of second-order HJB equations in Hilbert spaces is given, together with an extensive survey of other methods, with a full bibliography. In particular, Chapter 6, written by M. Fuhrman and G. Tessitore, surveys the theory of regular solutions of HJB equations arising in infinite-dimensional stochastic control, via BSDEs. The book is of interest to both pure and applied researchers working in the control theory of stochastic PDEs, and in PDEs in infinite dimension. Readers from other fields who want to learn the basic theory will also find it useful. The prerequisites are: standard functional analysis, the theory of semigroups of operators and its use in the study of PDEs, some knowledge of the dynamic programming approach to stochastic optimal control problems in finite dimension, and the basics of stochastic analysis and stochastic equations in infinite-dimensional spaces.
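For orientation only (this sketch is not quoted from the book, and all symbols are generic placeholders), the second-order HJB equations in Hilbert spaces that the book treats typically take the following form, for a value function v on a Hilbert space H, an unbounded linear operator A, control set Λ, drift b, diffusion σ, and running cost l:

```latex
% Hedged sketch of a typical second-order HJB equation on a Hilbert space H.
% All symbols are generic placeholders, not the book's own notation.
v_t(t,x) + \inf_{a \in \Lambda} \Big\{ \tfrac{1}{2}\,\mathrm{Tr}\big[\sigma(x,a)\sigma(x,a)^{*} D^{2}v(t,x)\big]
  + \big\langle Ax + b(x,a),\, Dv(t,x) \big\rangle + l(x,a) \Big\} = 0,
\qquad v(T,x) = g(x).
```

The unbounded operator A and the trace term are what distinguish the infinite-dimensional theory from its finite-dimensional counterpart, and they motivate both the viscosity-solution and the regular-solution (BSDE) approaches surveyed in the book.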










Optimal Control Theory for Infinite Dimensional Systems


Book Description

Infinite dimensional systems can be used to describe many phenomena in the real world. As is well known, heat conduction, properties of elastic-plastic materials, fluid dynamics, diffusion-reaction processes, etc., all lie within this area. The object that we are studying (temperature, displacement, concentration, velocity, etc.) is usually referred to as the state. We are interested in the case where the state satisfies proper differential equations that are derived from certain physical laws, such as Newton's law, Fourier's law, etc. The space in which the state exists is called the state space, and the equation that the state satisfies is called the state equation. By an infinite dimensional system we mean one whose corresponding state space is infinite dimensional. In particular, we are interested in the case where the state equation is one of the following types: partial differential equation, functional differential equation, integro-differential equation, or abstract evolution equation. The case in which the state equation is a stochastic differential equation is also an infinite dimensional problem, but we will not discuss such a case in this book.
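A standard example of such an abstract evolution equation, written here only for orientation (the operator A, control operator B, and the spaces involved are generic, not the book's notation), is the linear state equation:

```latex
% Generic linear state equation on an infinite-dimensional state space.
% y(t) is the state, u(t) the control, A an unbounded linear operator.
\dot{y}(t) = A\,y(t) + B\,u(t), \qquad y(0) = y_{0}.
```

For instance, the heat equation fits this form with A the Laplacian under appropriate boundary conditions, so that the state space is a function space and the system is infinite dimensional.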




Stability of Infinite Dimensional Stochastic Differential Equations with Applications


Book Description

Stochastic differential equations in infinite dimensional spaces are motivated by the theory and analysis of stochastic processes and by applications such as stochastic control, population biology, and turbulence, where the analysis and control of such systems involves investigating their stability. While the theory of such equations is well established, …




Representation and Control of Infinite Dimensional Systems


Book Description

This unified, revised second edition of a two-volume set is a self-contained account of quadratic cost optimal control for a large class of infinite-dimensional systems. The original editions received outstanding reviews, yet this new edition is more concise and self-contained. New material has been added to reflect the growth in the field over the past decade. There is a unique chapter on semigroup theory of linear operators that brings together advanced concepts and techniques which are usually treated independently. The material on delay systems and structural operators has not yet appeared anywhere in book form.




General Pontryagin-Type Stochastic Maximum Principle and Backward Stochastic Evolution Equations in Infinite Dimensions


Book Description

The classical Pontryagin maximum principle (addressed to deterministic finite dimensional control systems) is one of the three milestones in modern control theory. The corresponding theory is by now well developed in the deterministic infinite dimensional setting and for stochastic differential equations. However, very little is known about the same problem for controlled stochastic (infinite dimensional) evolution equations when the diffusion term contains the control variables and the control domains are allowed to be non-convex. Indeed, it is one of the longstanding unsolved problems in stochastic control theory to establish the Pontryagin-type maximum principle for such general control systems: this book aims to give a solution to this problem. This book will be useful for both beginners and experts who are interested in optimal control theory for stochastic evolution equations.
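For orientation, the finite-dimensional stochastic maximum principle, which the book extends to evolution equations, is built around a Hamiltonian of roughly the following standard form (the symbols below are generic, not the book's own notation):

```latex
% Generic Hamiltonian of the stochastic maximum principle (minimization of cost f).
% p, q are the first-order adjoint processes, b the drift, \sigma the diffusion.
H(t,x,u,p,q) = \big\langle p,\, b(t,x,u) \big\rangle
  + \mathrm{Tr}\big[q^{\top}\sigma(t,x,u)\big] - f(t,x,u),
```

where the adjoint pair (p, q) solves a backward stochastic equation; in the infinite-dimensional setting of this book, that adjoint equation becomes a backward stochastic evolution equation, and when the control enters the diffusion term a second-order adjoint process is also required.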




Infinite Dimensional And Finite Dimensional Stochastic Equations And Applications In Physics


Book Description

This volume contains survey articles on various aspects of stochastic partial differential equations (SPDEs) and their applications in stochastic control theory and in physics. This book is intended not only for graduate students in mathematics or physics, but also for mathematicians, mathematical physicists, theoretical physicists, and science researchers interested in the physical applications of the theory of stochastic processes.




Optimization, Control, and Applications of Stochastic Systems


Book Description

This volume provides a general overview of discrete- and continuous-time Markov control processes and stochastic games, along with a look at the range of applications of stochastic control and some of its recent theoretical developments. These topics include various aspects of dynamic programming, approximation algorithms, and infinite-dimensional linear programming. In all, the work comprises 18 carefully selected papers written by experts in their respective fields. Optimization, Control, and Applications of Stochastic Systems will be a valuable resource for all practitioners, researchers, and professionals in applied mathematics and operations research who work in the areas of stochastic control, mathematical finance, queueing theory, and inventory systems. It may also serve as a supplemental text for graduate courses in optimal control and dynamic games.




Infinite Horizon Optimal Control


Book Description

This monograph deals with various classes of deterministic continuous-time optimal control problems which are defined over unbounded time intervals. For these problems, the performance criterion is described by an improper integral, and it is possible that, when evaluated at a given admissible element, this criterion is unbounded. To cope with this divergence, new optimality concepts, referred to here as "overtaking", "weakly overtaking", "agreeable plans", etc., have been proposed. The motivation for studying these problems arises primarily from the economic and biological sciences, where models of this nature arise quite naturally, since no natural bound can be placed on the time horizon when one considers the evolution of the state of a given economy or species. The responsibility for the introduction of this interesting class of problems rests with the economists who first studied them in the modeling of capital accumulation processes. Perhaps the earliest of these was F. Ramsey who, in his seminal work on a theory of saving in 1928, considered a dynamic optimization model defined on an infinite time horizon. Briefly, this problem can be described as a "Lagrange problem with unbounded time interval". The advent of modern control theory, particularly the formulation of the famous Maximum Principle of Pontryagin, has had a considerable impact on the treatment of these models as well as optimization theory in general.
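The "overtaking" criterion can be stated, in its standard generic form for a cost-minimization problem (the notation below is the usual one in the literature, not necessarily the monograph's own): a control u* is overtaking optimal if

```latex
% Standard definition of overtaking optimality on [0, \infty);
% J_T(u) denotes the cost accumulated up to time T under control u.
\limsup_{T \to \infty} \big( J_{T}(u^{*}) - J_{T}(u) \big) \le 0
  \quad \text{for every admissible control } u.
```

This asks only that the finite-horizon cost of u* eventually be no worse, up to any positive margin, than that of any competitor, which remains meaningful even when the improper integrals themselves diverge.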