Optimal Control and Dynamic Games


Book Description

Optimal Control and Dynamic Games has been edited to honor the outstanding contributions of Professor Suresh Sethi to the field of applied optimal control. Professor Sethi is one of the world's foremost experts in this field; among other works, he is co-author of the popular textbook Optimal Control Theory: Applications to Management Science and Economics (Sethi and Thompson). The book is a collection of essays by some of the best-known scientists in the field, covering diverse applications of optimal control and dynamic games to problems in Finance, Management Science, Economics, and Operations Research. In doing so, it provides both a state-of-the-art overview of recent developments in the field and a reference work on the wide variety of contemporary questions that can be addressed with optimal control tools, demonstrating the fruitfulness of the methodology.




H∞-Optimal Control and Related Minimax Design Problems


Book Description

This book is devoted to one of the fastest-developing fields in modern control theory: so-called H∞-optimal control theory. It can be used for a second- or third-year graduate-level course on the subject, and researchers working in the area will find it useful as a standard reference. Based mostly on recent work of the authors, the book is written at a high mathematical level, and many of its results are original, interesting, and inspirational. The topic is central to modern control, and this definitive book is highly recommended to anyone who wishes to catch up with important theoretical developments in applied mathematics and control.




Dynamic Optimization, Second Edition


Book Description

Since its initial publication, this text has defined courses in dynamic optimization taught to economics and management science students. The two-part treatment covers the calculus of variations and optimal control. 1998 edition.




LQ Dynamic Optimization and Differential Games


Book Description

Game theory is the theory of social situations; the majority of research into the topic focuses on how groups of people interact, developing formulas and algorithms to identify optimal strategies and to predict the outcomes of interactions. Only fifty years old, it has already revolutionized economics and finance and is spreading rapidly to a wide variety of fields. LQ Dynamic Optimization and Differential Games is an assessment of the state of the art in its field and the first modern book on linear-quadratic game theory, one of the most commonly used tools for modelling and analysing strategic decision-making problems in economics and management. Linear-quadratic dynamic models have a long tradition in economics, operations research and control engineering, and the author begins by describing the one-decision-maker LQ dynamic optimization problem before introducing LQ differential games. The book:

- Covers cooperative and non-cooperative scenarios, and treats the standard information structures (open-loop and feedback).
- Includes real-life economic examples to illustrate theoretical concepts and results.
- Presents problem formulations and sound mathematical problem analysis.
- Includes exercises and solutions, enabling use for self-study or as a course text.
- Is supported by a website featuring solutions to exercises, further examples and computer code for numerical examples.

LQ Dynamic Optimization and Differential Games offers a comprehensive introduction to the theory and practice of this extensively used class of economic models, and will appeal to applied mathematicians and econometricians as well as researchers and senior undergraduate/graduate students in economics, mathematics, engineering and management science.
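The one-decision-maker LQ problem mentioned above has a well-known closed-form solution via a backward Riccati recursion. As a minimal illustration (not taken from the book), the following sketch computes the optimal time-varying feedback gains for a finite-horizon, discrete-time LQ problem; the matrices and horizon are made up for the example.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, QT, T):
    """Backward Riccati recursion for the finite-horizon, discrete-time
    LQ problem: minimize sum of x'Qx + u'Ru, plus terminal cost x'QT x,
    subject to x_{t+1} = A x_t + B u_t."""
    P = QT
    gains = []
    for _ in range(T):
        # Optimal gain: K_t = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P_t = Q + A'P_{t+1}(A - B K_t)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()  # gains[t] is the gain to apply at time t
    return gains, P

# Scalar toy example: x_{t+1} = x_t + u_t, stage cost x^2 + u^2.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]]); QT = np.array([[1.0]])
gains, P0 = lqr_finite_horizon(A, B, Q, R, QT, T=50)
```

For this scalar example the recursion converges to the stationary solution of the algebraic Riccati equation, P = (1 + √5)/2, with stationary gain K = (√5 − 1)/2 — a small preview of the infinite-horizon limit the book builds on before turning to games.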




Dynamic Optimization and Differential Games


Book Description

This book addresses the increasing number of Operations Research and Management Science problems (that is, applications) that involve the explicit consideration of time and of gaming among multiple agents. It will serve both as a textbook and as a reference and guide for those whose work involves the theoretical aspects of dynamic optimization and differential games.




Inverse Optimal Control and Inverse Noncooperative Dynamic Game Theory


Book Description

This book presents a novel unified treatment of inverse problems in optimal control and noncooperative dynamic game theory. It provides readers with fundamental tools for the development of practical algorithms to solve inverse problems in control, robotics, biology, and economics. The treatment applies Pontryagin's minimum principle to a variety of inverse problems and proposes algorithms founded on the elegance of dynamic optimization theory, balancing fundamental theoretical questions with practical matters.

The text begins by providing an introduction and background to its topics. It then discusses discrete-time and continuous-time inverse optimal control, moves on to differential and dynamic games, and concludes with a consideration of relevant applications.

The algorithms and theoretical results developed in Inverse Optimal Control and Inverse Noncooperative Dynamic Game Theory provide new insights into the information requirements for solving inverse problems, including the structure, quantity, and types of state and control data. These insights have significant practical consequences for the design of technologies that seek to exploit inverse techniques, such as collaborative robots, driver-assistance technologies, and autonomous systems. The book will therefore be of interest to researchers, engineers, and postgraduate students in several disciplines within the area of control and robotics.
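To give a flavor of what "inverse" means here, consider the simplest possible case: scalar LQ control. The forward problem computes an optimal feedback gain from known costs; the inverse problem recovers the unobserved cost weight from an observed gain. This toy sketch is an assumption-laden illustration of the idea only, not one of the book's algorithms (which handle far more general problems via Pontryagin's minimum principle).

```python
# Forward problem: stationary LQR gain for known scalar costs,
# via iterating the Riccati recursion to convergence.
def lqr_gain(A, B, Q, R, iters=200):
    P = Q
    for _ in range(iters):
        K = (B * P * A) / (R + B * P * B)
        P = Q + A * P * (A - B * K)
    return K

# Inverse problem (scalar case): given dynamics (A, B), control weight R,
# and an observed stationary gain K, recover the state weight Q from the
# stationarity conditions
#   K = B P A / (R + B P B)   and   P = Q + A P (A - B K).
def inverse_lqr_scalar(A, B, R, K):
    P = R * K / (B * (A - B * K))    # solve the gain equation for P
    Q = P * (1.0 - A * (A - B * K))  # solve the Riccati equation for Q
    return Q

A, B, Q_true, R = 1.0, 1.0, 1.0, 1.0
K = lqr_gain(A, B, Q_true, R)       # "observed" behavior
Q_recovered = inverse_lqr_scalar(A, B, R, K)
```

Here the recovered weight matches the true one up to numerical precision; in realistic settings the observed data are noisy trajectories rather than an exact gain, which is where the book's information-requirement results come in.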




Stochastic and Differential Games


Book Description

The theory of two-person, zero-sum differential games started at the beginning of the 1960s with the works of R. Isaacs in the United States and L. S. Pontryagin and his school in the former Soviet Union. Isaacs based his work on the Dynamic Programming method. He analyzed many special cases of the partial differential equation now called Hamilton-Jacobi-Isaacs (briefly, HJI), trying to solve them explicitly and synthesizing optimal feedbacks from the solution. He began a study of singular surfaces that was continued mainly by J. Breakwell and P. Bernhard and led to the explicit solution of some low-dimensional but highly nontrivial games; a recent survey of this theory can be found in the book by J. Lewin entitled Differential Games (Springer, 1994). Since the early stages of the theory, several authors worked on making the notion of value of a differential game precise and providing a rigorous derivation of the HJI equation, which does not have a classical solution in most cases; we mention here the works of W. Fleming, A. Friedman (see his book, Differential Games, Wiley, 1971), P. P. Varaiya, E. Roxin, R. J. Elliott and N. J. Kalton, N. N. Krasovskii, and A. I. Subbotin (see their book Positional Differential Games, Nauka, 1974, and Springer, 1988), and L. D. Berkovitz. A major breakthrough was the introduction in the 1980s of two new notions of generalized solution for Hamilton-Jacobi equations, namely, viscosity solutions, by M. G. Crandall and P.-L. Lions.




Control Theory and Dynamic Games in Economic Policy Analysis


Book Description

This book deals with the stabilisation and control of centralised policy-making and its economic implications.




Optimal Control Theory with Applications in Economics


Book Description

A rigorous introduction to optimal control theory, with an emphasis on applications in economics. This book bridges optimal control theory and economics, discussing ordinary differential equations, optimal control, game theory, and mechanism design in one volume. Technically rigorous and largely self-contained, it provides an introduction to the use of optimal control theory for deterministic continuous-time systems in economics.

The theory of ordinary differential equations (ODEs) is the backbone of the theory developed in the book, and Chapter 2 offers a detailed review of basic concepts in the theory of ODEs, including the solution of systems of linear ODEs, state-space analysis, potential functions, and stability analysis. Following this, the book covers the main results of optimal control theory, in particular necessary and sufficient optimality conditions; game theory, with an emphasis on differential games; and the application of control-theoretic concepts to the design of economic mechanisms. Appendixes provide a mathematical review and full solutions to all end-of-chapter problems.

The material is presented at three levels: single-person decision making; games, in which a group of decision makers interact strategically; and mechanism design, which is concerned with a designer's creation of an environment in which players interact to maximize the designer's objective.

The book focuses on applications; the problems are an integral part of the text. It is intended for use as a textbook or reference for graduate students, teachers, and researchers interested in applications of control theory beyond its classical use in economic growth. The book will also appeal to readers interested in a modeling approach to certain practical problems involving dynamic continuous-time models.
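The stability analysis reviewed in the ODE material above rests on a standard fact: a linear system x′ = Ax is asymptotically stable if and only if every eigenvalue of A has strictly negative real part. As a minimal sketch (the matrices are invented for illustration, not drawn from the book):

```python
import numpy as np

def is_asymptotically_stable(A):
    """x' = A x is asymptotically stable iff every eigenvalue
    of A has strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Damped oscillator x'' + x' + x = 0 in state-space form:
# eigenvalues have real part -1/2, so the origin is stable.
A_stable = np.array([[0.0,  1.0],
                     [-1.0, -1.0]])

# Saddle point: one positive eigenvalue, so unstable.
A_saddle = np.array([[1.0,  0.0],
                     [0.0, -2.0]])

print(is_asymptotically_stable(A_stable))  # True
print(is_asymptotically_stable(A_saddle))  # False
```

The same eigenvalue test underlies the phase-diagram and steady-state analyses that recur throughout economic applications of continuous-time control.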