Book Description
This book provides an introductory yet rigorous treatment of Pontryagin's Maximum Principle and its application to optimal control problems in which simple and complex constraints act on the state and control variables, the two classes of variables in such problems. The results obtainable with first-order variational methods are illustrated through a large number of problems, almost all of which concern a particular second-order, linear, time-invariant dynamical system referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and the calculus background typically taught in undergraduate engineering curricula.

Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has attracted great interest ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to an equally unjustified rejection when it came to be regarded as a purely abstract concept with no real utility. In recent years it has been recognized that the truth lies somewhere between these two extremes, and optimal control has found its (appropriate yet limited) place within any curriculum in which system and control theory plays a significant role.
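For reference, and assuming the standard textbook definition rather than the book's own notation, the double integrator mentioned above is the system whose output is the twice-integrated control, usually written in state-space form as

  \dot{x}_1(t) = x_2(t), \qquad \dot{x}_2(t) = u(t),

or, equivalently, \ddot{y}(t) = u(t) with y = x_1.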