Optimal Control of Partial Differential Equations


Book Description

Optimal control theory is concerned with finding control functions that minimize a cost functional for systems described by differential equations. This book focuses on optimal control problems where the state equation is an elliptic or parabolic partial differential equation, including topics such as the existence of optimal solutions.
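As a minimal illustration of this setup (a hypothetical toy, not drawn from the book), one can steer the scalar ODE y' = -y + u(t), y(0) = 0, toward a target terminal value y(1) = 1 while penalizing control effort, minimizing the discretized cost J(u) = (y(1) - 1)^2 + alpha * ∫ u^2 dt by plain gradient descent; all names and parameter values below are assumptions chosen for the sketch:

```python
import numpy as np

# Toy optimal control: steer y' = -y + u(t), y(0) = 0, toward y(1) = 1,
# minimizing J(u) = (y(1) - 1)^2 + alpha * integral of u^2,
# discretized by forward Euler with N steps of size h.
N, h, alpha = 100, 0.01, 1e-3

# For this linear system, forward Euler gives y(1) = w . u with the
# weights below (each control value propagated by the factor (1 - h)).
w = h * (1.0 - h) ** np.arange(N - 1, -1, -1)

def J(u):
    """Discretized cost: terminal misfit plus control penalty."""
    return (w @ u - 1.0) ** 2 + alpha * h * (u @ u)

# Plain gradient descent on the (quadratic) discretized cost.
u = np.zeros(N)
for _ in range(2000):
    grad = 2.0 * (w @ u - 1.0) * w + 2.0 * alpha * h * u
    u -= 5.0 * grad
```

Starting from u = 0 (cost 1.0), the descent drives the terminal misfit nearly to zero at a small control cost, illustrating the trade-off that the cost functional encodes.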







Trends in Control Theory and Partial Differential Equations


Book Description

This book presents cutting-edge contributions in the areas of control theory and partial differential equations. Over the decades, control theory has had deep and fruitful interactions with the theory of partial differential equations (PDEs). Well-known examples are the study of the generalized solutions of Hamilton-Jacobi-Bellman equations arising in deterministic and stochastic optimal control and the development of modern analytical tools to study the controllability of infinite dimensional systems governed by PDEs. In the present volume, leading experts provide an up-to-date overview of the connections between these two vast fields of mathematics. Topics addressed include regularity of the value function associated to finite dimensional control systems, controllability and observability for PDEs, and asymptotic analysis of multiagent systems. The book will be of interest for both researchers and graduate students working in these areas.




Control Theory of Partial Differential Equations


Book Description

The field of control theory in PDEs has broadened considerably as more realistic models have been introduced and investigated. This book presents a broad range of recent developments, new discoveries, and mathematical tools in the field. The authors discuss topics such as elasticity, thermo-elasticity, aero-elasticity, and interactions between fluids and elastic structures.




Optimal Control of Partial Differential Equations


Book Description

This is a book on optimal control problems (OCPs) for partial differential equations (PDEs) that evolved from a series of courses taught by the authors in the last few years at Politecnico di Milano, at both the undergraduate and graduate levels. The book covers the whole range from the setup and rigorous theoretical analysis of OCPs, through the derivation of the system of optimality conditions, to the formulation and analysis of suitable numerical methods and their application to a broad set of problems of practical relevance. The first introductory chapter addresses a handful of representative OCPs and presents an overview of the associated mathematical issues. The rest of the book is organized into three parts: Part I provides preliminary concepts of OCPs for algebraic and dynamical systems; Part II addresses OCPs involving linear PDEs (mostly of elliptic and parabolic type) and quadratic cost functions; Part III deals with more general classes of OCPs that stand behind the advanced applications mentioned above. Starting from simple problems that allow a “hands-on” treatment, the reader is progressively led to a general framework suitable for facing a broader class of problems. Moreover, the inclusion of many pseudocodes allows the reader to easily implement the algorithms illustrated throughout the text. The three parts of the book are suitable for readers with varying mathematical backgrounds, from advanced undergraduate to Ph.D. level and beyond. We believe that applied mathematicians, computational scientists, and engineers will find this book useful for a constructive approach toward the solution of OCPs in the context of complex applications.




Nonlinear Optimal Control Theory


Book Description

Nonlinear Optimal Control Theory presents a deep, wide-ranging introduction to the mathematical theory of the optimal control of processes governed by ordinary differential equations and certain types of differential equations with memory. Many examples illustrate the mathematical issues that need to be addressed when using optimal control techniques in diverse areas. Drawing on classroom-tested material from Purdue University and North Carolina State University, the book gives a unified account of bounded state problems governed by ordinary, integrodifferential, and delay systems. It also discusses Hamilton-Jacobi theory. By providing a sufficient and rigorous treatment of finite dimensional control problems, the book equips readers with the foundation to deal with other types of control problems, such as those governed by stochastic differential equations, partial differential equations, and differential games.




Partial Differential Equations and Group Theory


Book Description

Ordinary differential control theory (the classical theory) studies input/output relations defined by systems of ordinary differential equations (ODE). The various concepts that can be introduced (controllability, observability, invertibility, etc.) must be tested on formal objects (matrices, vector fields, etc.) by means of formal operations (multiplication, bracket, rank, etc.), but without appealing to the explicit integration (search for trajectories, etc.) of the given ODE. Many partial results have recently been unified by means of new formal methods coming from differential geometry and differential algebra. However, certain problems (invariance, equivalence, linearization, etc.) naturally lead to systems of partial differential equations (PDE). More generally, partial differential control theory studies input/output relations defined by systems of PDE (mechanics, thermodynamics, hydrodynamics, plasma physics, robotics, etc.). One of the aims of this book is to extend the preceding concepts to this new situation, where, of course, functional analysis and/or a dynamical-system approach cannot be used. A link will be exhibited between this domain of applied mathematics and the famous 'Bäcklund problem' arising in the study of solitary waves or solitons. In particular, we shall show how the methods of differential elimination presented here allow us to determine compatibility conditions on input and/or output, as well as to gain a better understanding of the foundations of control theory. At the same time we shall unify differential geometry and differential algebra in a new framework, called differential algebraic geometry.







Control Theory and Optimization I


Book Description

The only monograph on the topic, this book concerns geometric methods in the theory of differential equations with quadratic right-hand sides, closely related to the calculus of variations and optimal control theory. Based on the author’s lectures, the book is addressed to undergraduate and graduate students, and scientific researchers.




Optimal Control of Systems Governed by Partial Differential Equations


Book Description

1. The development of a theory of optimal control (deterministic) requires the following initial data:
(i) a control u belonging to some set U_ad (the set of 'admissible controls') which is at our disposition;
(ii) for a given control u, the state y(u) of the system to be controlled, given by the solution of an equation (*) Ay(u) = given function of u, where A is an operator (assumed known) which specifies the system to be controlled (A is the 'model' of the system);
(iii) the observation z(u), which is a function of y(u) (assumed to be known exactly; we consider only deterministic problems in this book);
(iv) the "cost function" J(u) ("economic function"), which is defined in terms of a numerical function z →
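The ingredients listed above, a model operator A, the control-to-state map u → y(u), an observation, and a cost function, can be sketched numerically. The following toy example is an assumption made for illustration, not taken from the book: A is a discretized 1-D Laplacian with Dirichlet boundary conditions, the observation is the state itself, and the quadratic cost J(u) = ||y(u) - z_d||^2 + nu ||u||^2 is minimized in closed form via the normal equations (all variable names are hypothetical):

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)

# Model operator A: 1-D discrete Laplacian (Dirichlet boundary conditions).
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Control-to-state map: y(u) = A^{-1} u, i.e. the solution of A y = u.
S = np.linalg.solve(A, np.eye(n))

# Desired observation z_d and regularization weight nu.
x = np.linspace(h, 1.0 - h, n)
z_d = np.sin(np.pi * x)
nu = 1e-4

# Minimize J(u) = ||S u - z_d||^2 + nu ||u||^2 via the normal equations:
# (S^T S + nu I) u = S^T z_d.
u_opt = np.linalg.solve(S.T @ S + nu * np.eye(n), S.T @ z_d)
y_opt = S @ u_opt
misfit = np.linalg.norm(y_opt - z_d)
```

With the regularization weight nu small, the optimal state y_opt nearly matches the desired observation z_d; increasing nu trades observation accuracy for a cheaper control, which is exactly the balance the cost function J encodes.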