Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations


Book Description

This softcover book is a self-contained account of the theory of viscosity solutions for first-order partial differential equations of Hamilton–Jacobi type and its interplay with Bellman’s dynamic programming approach to optimal control and differential games. It will be of interest to scientists involved in the theory of optimal control of deterministic linear and nonlinear systems. The work may be used by graduate students and researchers in control theory both as an introductory textbook and as an up-to-date reference book.
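For orientation (standard definitions, not quoted from the book): the equations in question have the form

\[ H(x, u(x), Du(x)) = 0 \quad \text{in } \Omega, \]

and the viscosity notion replaces classical differentiability by comparison with smooth test functions: u is a viscosity subsolution if, whenever \varphi \in C^1(\Omega) and u - \varphi attains a local maximum at x_0,

\[ H(x_0, u(x_0), D\varphi(x_0)) \le 0, \]

a viscosity supersolution if the reverse inequality holds at local minima of u - \varphi, and a viscosity solution if it is both. Under mild assumptions the value function of an optimal control problem is the unique viscosity solution of the associated Bellman equation, which is the interplay the book develops.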




Hamilton-Jacobi-Bellman Equations


Book Description

Optimal feedback control arises in areas such as aerospace engineering, chemical processing, and resource economics. In this context, the application of dynamic programming techniques requires the solution of fully nonlinear Hamilton-Jacobi-Bellman equations (sketched after the contents below). This book presents the state of the art in the numerical approximation of Hamilton-Jacobi-Bellman equations, including post-processing of Galerkin methods, high-order methods, boundary treatment in semi-Lagrangian schemes, reduced basis methods, comparison principles for viscosity solutions, max-plus methods, and the numerical approximation of Monge-Ampère equations. The book also features applications in the simulation of adaptive controllers and the control of nonlinear delay differential equations.

Contents:
- From a monotone probabilistic scheme to a probabilistic max-plus algorithm for solving Hamilton–Jacobi–Bellman equations
- Improving policies for Hamilton–Jacobi–Bellman equations by postprocessing
- Viability approach to simulation of an adaptive controller
- Galerkin approximations for the optimal control of nonlinear delay differential equations
- Efficient higher order time discretization schemes for Hamilton–Jacobi–Bellman equations based on diagonally implicit symplectic Runge–Kutta methods
- Numerical solution of the simple Monge–Ampère equation with nonconvex Dirichlet data on nonconvex domains
- On the notion of boundary conditions in comparison principles for viscosity solutions
- Boundary mesh refinement for semi-Lagrangian schemes
- A reduced basis method for the Hamilton–Jacobi–Bellman equation within the European Union Emission Trading Scheme
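As a formal sketch of the connection described above (standard dynamic programming material, with notation chosen here for illustration): for an infinite-horizon discounted problem

\[ v(x) = \inf_{a(\cdot)} \int_0^\infty e^{-\lambda t}\, \ell(y(t), a(t))\, dt, \qquad \dot y(t) = f(y(t), a(t)), \quad y(0) = x, \]

Bellman's optimality principle leads to the stationary Hamilton-Jacobi-Bellman equation

\[ \lambda v(x) + \sup_{a \in A} \big\{ -f(x, a) \cdot Dv(x) - \ell(x, a) \big\} = 0, \]

which is fully nonlinear because the gradient Dv enters through a supremum over controls; the chapters listed above address its numerical approximation.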




Data-Driven Science and Engineering


Book Description

A textbook covering data-science and machine learning methods for modelling and control in engineering and science, with Python and MATLAB®.




Stochastic Controls


Book Description

As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches for solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question arises: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal control? Some research on the relationship between the two did exist prior to the 1980s; nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions that were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation.
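To make the closing contrast concrete (a standard formulation, with symbols chosen here for illustration, not the book's own notation): for state dynamics dx = b(x, u)\,dt + \sigma(x, u)\,dW and running cost \ell, the adjoint equation of the stochastic maximum principle is a backward SDE,

\[ dp(t) = -\partial_x \mathcal{H}\big(x(t), u(t), p(t), q(t)\big)\, dt + q(t)\, dW(t), \]

which reduces to an ODE when \sigma \equiv 0, while dynamic programming characterizes the value function v(t, x) through the HJB equation

\[ \partial_t v + \inf_{u \in U} \Big\{ b(x, u) \cdot D_x v + \tfrac{1}{2}\, \mathrm{tr}\big(\sigma \sigma^\top(x, u)\, D_x^2 v\big) + \ell(x, u) \Big\} = 0, \]

whose second-order term \mathrm{tr}(\sigma \sigma^\top D_x^2 v) is exactly what disappears in the deterministic case.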




Foundations of Dynamic Economic Analysis


Book Description

Foundations of Dynamic Economic Analysis presents a modern and thorough exposition of optimal control theory, the fundamental mathematical formalism used to study continuous-time dynamic economic processes and to interpret dynamic economic behavior. The style of presentation, with its continual emphasis on the economic interpretation of the mathematics and models, distinguishes it from several other excellent texts on the subject. This approach is aided dramatically by introducing the dynamic envelope theorem and the method of comparative dynamics early in the exposition. Accordingly, motivated and economically revealing proofs of the transversality conditions follow from the dynamic envelope theorem. Furthermore, this sequencing of the material leads naturally to the primal-dual method of comparative dynamics and to dynamic duality theory, two modern approaches used to tease out the empirical content of optimal control models. The stylistic approach ultimately draws attention to the empirical richness of optimal control theory, a feature missing in virtually all other textbooks of this type.
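A canonical example of the kind of problem treated (standard notation, not the book's own): a planner solves

\[ \max_{u(\cdot)} \int_0^\infty e^{-\rho t} F(x(t), u(t))\, dt \quad \text{subject to} \quad \dot x(t) = g(x(t), u(t)), \quad x(0) = x_0, \]

with current-value Hamiltonian \tilde H = F(x, u) + \mu\, g(x, u); the necessary conditions are \partial \tilde H / \partial u = 0 and \dot\mu = \rho \mu - \partial \tilde H / \partial x, together with the transversality condition

\[ \lim_{t \to \infty} e^{-\rho t} \mu(t)\, x(t) = 0, \]

whose proof via the dynamic envelope theorem is one of the expository choices described above.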




Hamilton-Jacobi Equations: Approximations, Numerical Analysis and Applications


Book Description

These lecture notes contain the material from the courses given at the CIME summer school held in Cetraro, Italy, from August 29 to September 3, 2011, on the topic "Hamilton-Jacobi Equations: Approximations, Numerical Analysis and Applications". The courses dealt mostly with the following subjects: first-order and second-order Hamilton-Jacobi-Bellman equations, properties of viscosity solutions, asymptotic behavior, mean field games, approximation and numerical methods, and idempotent analysis. The content ranged from an introduction to viscosity solutions to quite advanced topics at the cutting edge of research in the field, opening perspectives on new and delicate issues. The volume contains four contributions: by Yves Achdou (Finite Difference Methods for Mean Field Games), Guy Barles (An Introduction to the Theory of Viscosity Solutions for First-order Hamilton-Jacobi Equations and Applications), Hitoshi Ishii (A Short Introduction to Viscosity Solutions and the Large Time Behavior of Solutions of Hamilton-Jacobi Equations), and Grigory Litvinov (Idempotent/Tropical Analysis, the Hamilton-Jacobi and Bellman Equations).




Optimal Control: Novel Directions and Applications


Book Description

Focusing on applications to science and engineering, this book presents the results of the ITN-FP7 SADCO network's innovative research in optimization and control across the following interconnected topics: optimality conditions in optimal control, dynamic programming approaches to optimal feedback synthesis and reachability analysis, and computational developments in model predictive control. The book's novelty lies in its development by early-career researchers, giving it a good balance between clarity and scientific rigor. Each chapter features an introduction addressed to PhD students and original contributions aimed at specialist researchers. Requiring only a graduate-level mathematical background, the book is self-contained. It will be of particular interest to graduate and advanced undergraduate students, industrial practitioners, and senior scientists wishing to update their knowledge.
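As a minimal illustration of the receding-horizon idea behind model predictive control (a sketch only; the double-integrator model, horizon length, and cost weights are assumptions made for this example, not material from the book):

import numpy as np
from scipy.optimize import minimize

# Discrete-time double integrator: position/velocity state, force input.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([0.005, 0.1])
N = 10                                  # prediction horizon
Q = np.diag([1.0, 0.1])                 # state weight
R = 0.01                                # input weight

def horizon_cost(u_seq, x0):
    # Quadratic cost of applying the input sequence u_seq from state x0.
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        cost += x @ Q @ x + R * u**2
        x = A @ x + B * u
    return cost + x @ Q @ x             # terminal penalty

x = np.array([1.0, 0.0])                # start one unit from the origin
for step in range(50):
    res = minimize(horizon_cost, np.zeros(N), args=(x,))
    x = A @ x + B * res.x[0]            # apply only the first input, then re-solve
print("final state:", x)

Applying only the first input of each optimized sequence and re-solving from the next measured state is what turns the open-loop plan into a feedback law.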




Variational Calculus, Optimal Control and Applications


Book Description

The 12th conference on "Variational Calculus, Optimal Control and Applications" took place September 23-27, 1996, in Trassenheide on the Baltic Sea island of Usedom. Seventy mathematicians from ten countries participated. The preceding eleven conferences, too, were held in places of natural beauty throughout West Pomerania; the first, in 1972, was held in Zinnowitz, in the immediate area of Trassenheide. The conferences were founded, and led ten times, by Professor Bittner (Greifswald) and Professor Klötzler (Leipzig), who both celebrated their 65th birthdays in 1996. The 12th conference in Trassenheide was therefore also dedicated to L. Bittner and R. Klötzler. Both scientists made a lasting impression on control theory in the former GDR. Originally, the conferences served to promote the exchange of research results. In the first years, most of the lectures were theoretical, but in the last few conferences practical applications have been given more attention. Besides their pioneering theoretical works, both honorees have also always dealt with applied problems: L. Bittner has, for example, examined optimal control of nuclear reactors and associated safety aspects, and since 1992 he has been working on applications of optimal control in flight dynamics; R. Klötzler recently applied his results on optimal autobahn planning to the south tangent in Leipzig. The contributions published in these proceedings reflect the trend toward practical problems; the starting points are often questions from flight dynamics.




Semi-Lagrangian Approximation Schemes for Linear and Hamilton-Jacobi Equations


Book Description

This largely self-contained book provides a unified framework for semi-Lagrangian strategies for the approximation of hyperbolic PDEs, with a special focus on Hamilton-Jacobi equations. The authors provide a rigorous discussion of the theory of viscosity solutions and of the concepts underlying the construction and analysis of difference schemes; they then proceed to high-order semi-Lagrangian schemes and their applications to problems in fluid dynamics, front propagation, optimal control, and image processing. The developments covered in the text and in the references draw on a wide range of literature.
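The core mechanism can be sketched in a few lines (an illustrative toy for constant-coefficient linear advection, the simplest hyperbolic model problem; the grid and parameters are assumptions made for this example): trace the characteristic back from each node over one time step and interpolate the previous solution at its foot.

import numpy as np

# Semi-Lagrangian steps for linear advection v_t + c v_x = 0 on a periodic grid:
# the new value at each node is the old solution at the foot of the characteristic.
c, L, nx = 1.0, 2 * np.pi, 200
x = np.linspace(0.0, L, nx, endpoint=False)
dt = 2.0 * (x[1] - x[0]) / c             # Courant number 2: no CFL restriction here
v = np.sin(x)                            # initial condition

nsteps = 100
for n in range(nsteps):
    feet = (x - c * dt) % L              # feet of the characteristics at the old time level
    v = np.interp(feet, x, v, period=L)  # linear interpolation of the old values

print("max error:", np.abs(v - np.sin(x - c * nsteps * dt)).max())

The same structure, with the backward step combined with a minimization over discrete controls, underlies the schemes for Hamilton-Jacobi equations developed in the book.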



