Uncertain Optimal Control


Book Description

This book introduces the theory and applications of uncertain optimal control and establishes two types of models: expected-value uncertain optimal control and optimistic-value uncertain optimal control. These models, in both continuous-time and discrete-time forms, are solved by dynamic programming. The uncertain optimal control theory covers equations of optimality, uncertain bang-bang optimal control, optimal control of switched uncertain systems, and optimal control of uncertain systems with time delay. Uncertain optimal control has applications in portfolio selection, engineering, and games. The book is a useful resource for researchers, engineers, and students in the fields of mathematics, cybernetics, operations research, industrial engineering, artificial intelligence, economics, and management science.




Uncertain Models and Robust Control


Book Description

This coherent introduction to the theory and methods of robust control system design clarifies and unifies the presentation of significant derivations and proofs. The book contains a thorough treatment of important material on uncertainties and robust control that is otherwise scattered throughout the literature.




Optimal Control of PDEs under Uncertainty


Book Description

This book provides a direct and comprehensive introduction to theoretical and numerical concepts in the emerging field of optimal control of partial differential equations (PDEs) under uncertainty. The main objective of the book is to offer graduate students and researchers a smooth transition from optimal control of deterministic PDEs to optimal control of random PDEs. Coverage includes uncertainty modelling in control problems, variational formulation of PDEs with random inputs, robust and risk-averse formulations of optimal control problems, existence theory, and numerical resolution methods. The exposition follows the entire path, starting from uncertainty modelling and ending with the practical implementation of numerical schemes for approximating the problems considered. To this end, a selection of illustrative examples is analysed in detail throughout the book. Computer codes, written in MATLAB, are provided for all these examples. This book is addressed to graduate students and researchers in Engineering, Physics and Mathematics who are interested in optimal control and optimal design for random partial differential equations.
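
As a rough illustration of the class of problems described here (the notation is generic and not drawn from the book), a risk-neutral optimal control problem constrained by a PDE with a random coefficient $a(x,\omega)$ can be written as

\[
\min_{u \in U_{\mathrm{ad}}} \; \mathbb{E}\!\left[ \tfrac{1}{2}\,\| y(\cdot,\omega) - y_d \|_{L^2(D)}^2 \right] + \tfrac{\alpha}{2}\,\| u \|_{L^2(D)}^2
\quad \text{s.t.} \quad
-\nabla \cdot \big( a(x,\omega)\,\nabla y \big) = f + u \ \text{in } D, \qquad y = 0 \ \text{on } \partial D.
\]

A robust formulation replaces the expectation with a worst-case supremum over $\omega$, while a risk-averse one replaces it with a risk measure such as the conditional value-at-risk.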




Estimators for Uncertain Dynamic Systems


Book Description

When solving control and design problems in aerospace and naval engineering, energetics, economics, biology, etc., we need to know the state of the dynamic processes under investigation. The presence of inherent uncertainties in the description of these processes and of noise in measurement devices leads to the necessity of constructing estimators for the corresponding dynamic systems. The estimators recover the required information about the system state from measurement data. An attempt to solve the estimation problems in an optimal way results in the formulation of different variational problems. The type and complexity of these variational problems depend on the process model, the model of uncertainties, and the estimation performance criterion. A solution of the variational problem determines an optimal estimator. However, there exist at least two reasons why we use nonoptimal estimators. The first reason is that the numerical algorithms for solving the corresponding variational problems can be very difficult to implement numerically. For example, the dimension of these algorithms can be very high.




Optimal Control, Expectations and Uncertainty


Book Description

An examination of how the rational expectations revolution and game theory have enhanced the understanding of how an economy functions.




Randomized Algorithms for Analysis and Control of Uncertain Systems


Book Description

The presence of uncertainty in a system description has always been a critical issue in control. The main objective of Randomized Algorithms for Analysis and Control of Uncertain Systems, with Applications (Second Edition) is to introduce the reader to the fundamentals of probabilistic methods in the analysis and design of systems subject to deterministic and stochastic uncertainty. The approach propounded by this text guarantees a reduction in the computational complexity of classical control algorithms and in the conservativeness of standard robust control techniques. The second edition has been thoroughly updated to reflect recent research and new applications, with the chapters on statistical learning theory, sequential methods for control and the scenario approach being completely rewritten.

Features:

· self-contained treatment explaining Monte Carlo and Las Vegas randomized algorithms from their genesis in the principles of probability theory to their use for system analysis;
· development of a novel paradigm for (convex and nonconvex) controller synthesis in the presence of uncertainty and in the context of randomized algorithms;
· comprehensive treatment of multivariate sample generation techniques, including consideration of the difficulties involved in obtaining identically and independently distributed samples;
· applications of randomized algorithms in various endeavours, such as PageRank computation for the Google Web search engine, unmanned aerial vehicle design (both new in the second edition), congestion control of high-speed communications networks and stability of quantized sampled-data systems.

Randomized Algorithms for Analysis and Control of Uncertain Systems (Second Edition) is certain to interest academic researchers and graduate control students working in probabilistic, robust or optimal control methods, and control engineers dealing with system uncertainties.

"The present book is a very timely contribution to the literature. I have no hesitation in asserting that it will remain a widely cited reference work for many years." (M. Vidyasagar)
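
To give a flavour of the Monte Carlo analysis the book formalizes, the Python sketch below estimates the probability that an uncertain discrete-time system x[k+1] = A(q) x[k] is stable by sampling the uncertain parameter q. The system matrix, the uncertainty interval and the sample size are illustrative assumptions, not examples from the book; among the book's contributions are bounds on how large such a sample must be for a given accuracy and confidence.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def a_matrix(q):
        # Hypothetical uncertain system matrix A(q) for x[k+1] = A(q) x[k];
        # the structure and the numbers are illustrative assumptions.
        return np.array([[0.5, q],
                         [0.1, 0.4]])

    def is_schur_stable(A):
        # Discrete-time stability: spectral radius strictly below 1.
        return np.max(np.abs(np.linalg.eigvals(A))) < 1.0

    N = 10_000                              # sample size (assumed)
    q = rng.uniform(-1.0, 1.0, size=N)      # assumed uniform uncertainty set
    p_hat = np.mean([is_schur_stable(a_matrix(qi)) for qi in q])
    print(f"Monte Carlo estimate of P(stable): {p_hat:.3f}")

The estimate converges to the true probability of stability as N grows; Chernoff-type bounds of the kind treated in the book relate N to the desired estimation accuracy and confidence level.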




Optimal Control


Book Description

Numerous examples highlight this treatment of the use of linear quadratic Gaussian methods for control system design. It explores linear optimal control theory from an engineering viewpoint, with illustrations of practical applications. Key topics include loop-recovery techniques, frequency shaping, and controller reduction. Complete solutions are provided for the examples. 1990 edition.




Optimal Control


Book Description

From the reviews: "The style of the book reflects the author's wish to assist in the effective learning of optimal control by suitable choice of topics, the mathematical level used, and by including numerous illustrated examples. ... In my view the book suits its function and purpose, in that it gives a student a comprehensive coverage of optimal control in an easy-to-read fashion." —Measurement and Control




Adaptive Dynamic Programming: Single and Multiple Controllers


Book Description

This book presents a class of novel optimal control methods and game schemes based on adaptive dynamic programming (ADP) techniques. For systems with a single control input, the ADP-based optimal control is designed for different objectives, while for systems with multiple players, the optimal control inputs are derived from game formulations. To verify the effectiveness of the proposed methods, the book analyzes the properties of the adaptive dynamic programming methods, including the convergence of the iterative value functions and the stability of the system under the iterative control laws. Further, to substantiate the mathematical analysis, it presents various application examples that provide a reference for real-world practice.




Stochastic Controls


Book Description

As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches for solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question arises: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal control? Some research on the relationship between the two did exist prior to the 1980s. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions that were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as the Hamilton-Jacobi-Bellman (HJB) equation.
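
For a concrete, if generic, illustration of the objects named above (the notation is standard but not taken from the book), consider a controlled diffusion $dX_t = b(X_t,u_t)\,dt + \sigma(X_t,u_t)\,dW_t$ with running cost $f$ and terminal cost $h$. The value function $V$ formally satisfies the second-order HJB equation

\[
V_t + \inf_{u}\left\{ \tfrac{1}{2}\,\mathrm{tr}\!\left( \sigma\sigma^{\top}(x,u)\,V_{xx} \right) + b(x,u)^{\top} V_x + f(x,u) \right\} = 0,
\qquad V(T,x) = h(x),
\]

while the maximum principle attaches to an optimal pair a first-order adjoint process $(p_t, q_t)$ solving the backward SDE

\[
dp_t = -\left( b_x^{\top} p_t + \sigma_x^{\top} q_t + f_x \right) dt + q_t\, dW_t,
\qquad p_T = h_x(X_T).
\]

The deterministic case is recovered when $\sigma \equiv 0$: the HJB equation loses its second-order term and the adjoint equation becomes an ODE.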