Constrained Control Problems of Discrete Processes


Book Description

The book gives a novel treatment of recent advances in constrained control problems, with emphasis on the controllability and reachability of discrete-time dynamical systems. The newly proposed approach provides the right setting for studying qualitative properties of general classes of dynamical systems, in both discrete and continuous time, with possible applications to control engineering models. Most of the material appears in book form for the first time. The book is addressed to advanced students, postgraduate students, and researchers interested in control system theory and optimal control.




Constrained Control and Estimation


Book Description

Recent developments in constrained control and estimation have created a need for this comprehensive introduction to the underlying fundamental principles. These advances have significantly broadened the realm of application of constrained control.
- Using the principal tools of prediction and optimisation, examples of how to deal with constraints are given, with emphasis on model predictive control.
- New results combine a number of methods in a unique way, enabling you to build on your background in estimation theory, linear control, stability theory and state-space methods.
- A companion web site is continually updated by the authors.
Easy to read yet containing a high level of technical detail, this self-contained treatment of methods for constrained control design will give you a full understanding of the subject.
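
To make the prediction-plus-optimisation idea behind model predictive control concrete, here is a minimal sketch that is not taken from the book: the double-integrator model, horizon, weights, and constraint levels are all illustrative assumptions, and cvxpy is used only for convenience.

```python
# Minimal receding-horizon MPC sketch for a double integrator with box
# constraints on state and input (illustrative assumptions, not from the book).
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete-time double integrator
B = np.array([[0.5], [1.0]])
N = 10                                    # prediction horizon
x_max, u_max = 5.0, 1.0                   # assumed constraint levels

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = cp.Parameter(2)

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(x[:, k + 1]) <= x_max,
                    cp.abs(u[:, k]) <= u_max]
problem = cp.Problem(cp.Minimize(cost), constraints)

# Closed loop: at each sampling instant solve the QP and apply the first input.
state = np.array([4.0, 0.0])
for t in range(20):
    x0.value = state
    problem.solve()
    state = A @ state + B.flatten() * u.value[0, 0]
```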




Constrained Control of Uncertain, Time-Varying, Discrete-Time Systems


Book Description

A comprehensive development of interpolating control, this monograph demonstrates the reduced computational complexity of a ground-breaking technique compared with the established model predictive control. The text deals with the regulation problem for linear, time-invariant, discrete-time uncertain dynamical systems having polyhedral state and control constraints, with and without disturbances, and under state or output feedback. For output feedback, a non-minimal state-space representation is used, with old inputs and outputs as state variables.

Constrained Control of Uncertain, Time-Varying, Discrete-Time Systems details interpolating control in both its implicit and explicit forms. In the former, at most two linear-programming problems or one quadratic-programming problem are solved on-line at each sampling instant to yield the value of the control variable. In the latter, the control law is shown to be piecewise affine in the state, so the state space is partitioned into polyhedral cells and, at each sampling instant, the cell to which the measured state belongs must be determined. Interpolation is performed between vertex control and a user-chosen control law in its maximal admissible set surrounding the origin. Novel proofs of recursive feasibility and asymptotic stability of the vertex control law and of the interpolating control law are given. Algorithms for implicit and explicit interpolating control are presented in such a way that the reader may easily implement them.

Each chapter includes illustrative examples and comparisons with model predictive control, in which the disparity in computational complexity is shown to be particularly in favour of interpolating control for high-order systems and systems with uncertainty. Furthermore, the performance of the two methods proves similar, except in those cases when a solution cannot be found with model predictive control at all. The book concludes with two high-dimensional examples and a benchmark robust model predictive control problem: the non-isothermal continuously stirred tank reactor.

For academic control researchers and students, or for control engineers interested in implementing constrained control systems, Constrained Control of Uncertain, Time-Varying, Discrete-Time Systems provides an attractive low-complexity control alternative for cases in which model predictive control is currently attempted.
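
As a rough illustration of the on-line interpolation step, the following heavily simplified sketch solves one small linear program to split the state between an outer invariant set and the maximal admissible set of a local gain. For brevity it replaces the book's vertex control with a second linear gain, so it is an instance of the interpolation idea rather than the book's exact algorithm; the set descriptions F_out, g_out, F_in, g_in and the two gains are assumed inputs.

```python
# Heavily simplified interpolating-control sketch (not the book's exact
# algorithm): the outer "vertex control" layer is replaced by a second linear
# gain K_outer purely for brevity.  F_out/g_out and F_in/g_in are assumed
# polyhedral descriptions of an outer invariant set and of the maximal
# admissible set of the inner gain K_inner, both containing the origin.
import numpy as np
import cvxpy as cp

def interpolating_control(x, F_out, g_out, F_in, g_in, K_outer, K_inner):
    """Return the control input for state x by solving one small LP."""
    n = x.shape[0]
    r_out = cp.Variable(n)          # r_out = c * x_outer
    r_in = cp.Variable(n)           # r_in  = (1 - c) * x_inner
    c = cp.Variable()               # interpolation coefficient in [0, 1]
    constraints = [x == r_out + r_in,
                   F_out @ r_out <= c * g_out,
                   F_in @ r_in <= (1 - c) * g_in,
                   c >= 0, c <= 1]
    cp.Problem(cp.Minimize(c), constraints).solve()
    # Blend the two feedback laws using the scaled decomposition of the state.
    return K_outer @ r_out.value + K_inner @ r_in.value
```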




Set-Theoretic Methods in Control


Book Description

The second edition of this monograph describes the set-theoretic approach for the control and analysis of dynamic systems, both from a theoretical and practical standpoint. This approach is linked to fundamental control problems, such as Lyapunov stability analysis and stabilization, optimal control, control under constraints, persistent disturbance rejection, and uncertain systems analysis and synthesis. Completely self-contained, this book provides a solid foundation of mathematical techniques and applications, extensive references to the relevant literature, and numerous avenues for further theoretical study. All the material from the first edition has been updated to reflect the most recent developments in the field, and a new chapter on switching systems has been added. Each chapter contains examples, case studies, and exercises to allow for a better understanding of theoretical concepts by practical application. The mathematical language is kept to the minimum level necessary for the adequate formulation and statement of the main concepts, yet allowing for a detailed exposition of the numerical algorithms for the solution of the proposed problems.

Set-Theoretic Methods in Control will appeal to both researchers and practitioners in control engineering and applied mathematics. It is also well-suited as a textbook for graduate students in these areas.

Praise for the First Edition

"This is an excellent book, full of new ideas and collecting a lot of diverse material related to set-theoretic methods. It can be recommended to a wide control community audience." - B. T. Polyak, Mathematical Reviews

"This book is an outstanding monograph of a recent research trend in control. It reflects the vast experience of the authors as well as their noticeable contributions to the development of this field...[It] is highly recommended to PhD students and researchers working in control engineering or applied mathematics. The material can also be used for graduate courses in these areas." - Octavian Pastravanu, Zentralblatt MATH
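
One of the basic computations underlying the set-theoretic approach is the maximal constraint-admissible invariant set of a stable closed loop. The sketch below, under the assumption of linear dynamics x+ = A_cl x and polyhedral constraints F x <= g, implements the standard iteration that keeps adding the constraints F A_cl^k x <= g until the newly added ones become redundant; it is illustrative only and not taken from the book.

```python
# Maximal constraint-admissible invariant set, sketched under the assumptions
# x+ = A_cl x (stable closed loop) and polyhedral state constraints F x <= g.
import numpy as np
import cvxpy as cp

def max_admissible_set(A_cl, F, g, max_iter=50):
    """Iterate F @ A_cl^k, keeping rows until every new row is redundant."""
    rows, rhs = [F], [g]
    Ak = A_cl.copy()
    for _ in range(max_iter):
        F_all, g_all = np.vstack(rows), np.hstack(rhs)
        F_new, g_new = F @ Ak, g
        all_redundant = True
        for f_i, g_i in zip(F_new, g_new):
            # Redundancy check: maximise the candidate row over the current set.
            x = cp.Variable(A_cl.shape[0])
            prob = cp.Problem(cp.Maximize(f_i @ x), [F_all @ x <= g_all])
            prob.solve()
            if prob.value is not None and prob.value > g_i + 1e-9:
                all_redundant = False
        if all_redundant:
            return F_all, g_all          # O_inf = {x : F_all x <= g_all}
        rows.append(F_new)
        rhs.append(g_new)
        Ak = A_cl @ Ak
    return np.vstack(rows), np.hstack(rhs)
```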




Decentralized Control of Constrained Linear Systems Via Convex Optimization Methods


Book Description

Decentralized control problems naturally arise in the control of large-scale networked systems. Such systems are regulated by a collection of local controllers in a decentralized manner, in the sense that each local controller is required to specify its control input based on its locally accessible sensor measurements. In this dissertation, we consider the decentralized control of discrete-time, linear systems subject to exogenous disturbances and polyhedral constraints on the state and input trajectories. The underlying system is composed of a finite collection of dynamically coupled subsystems, each of which is assumed to have a dedicated local controller. The decentralization of information is expressed according to sparsity constraints on the sensor measurements that each local controller has access to.

In its most general form, the decentralized control problem amounts to an infinite-dimensional nonconvex program that is, in general, computationally intractable. The primary difficulty of the decentralized control problem stems from the potential informational coupling between the controllers. Specifically, in problems with nonclassical information structures, the actions taken by one controller can affect the information acquired by other controllers acting on the system. This gives rise to an incentive for controllers to communicate with each other via the actions that they undertake, the so-called signaling incentive. To complicate matters further, there may be hard constraints coupling the actions and local states being regulated by different controllers, which must be jointly enforced with limited communication between the local controllers.

In this dissertation, we abandon the search for the optimal decentralized control policy and resort to approximation methods that enable the tractable calculation of feasible decentralized control policies. We first provide methods for the tractable calculation of decentralized control policies that are affinely parameterized in their measurement history. For problems with partially nested information structures, we show that the optimization over such a policy space admits an equivalent reformulation as a semi-infinite convex program. The optimal solution to these semi-infinite programs can be calculated through the solution of a finite-dimensional conic program. For problems with nonclassical information structures, however, the optimization over such a policy space amounts to a semi-infinite nonconvex program. With the objective of alleviating the nonconvexity in such problems, we propose an approach to decentralized control design in which the information-coupling states are effectively treated as disturbances whose trajectories are constrained to take values in ellipsoidal "contract" sets, whose location, scale, and orientation are jointly optimized with the affine decentralized control policy being used to control the system. The resulting problem is a semidefinite program whose feasible solutions are guaranteed to be feasible for the original decentralized control design problem.

Decentralized control policies that are computed according to such convex optimization methods are, in general, suboptimal. We therefore provide a method of bounding the suboptimality of feasible decentralized control policies through an information-based convex relaxation. Specifically, we characterize an expansion of the given information structure that maximizes the optimal value of the decentralized control design problem associated with the expanded information structure, while guaranteeing that the expanded information structure be partially nested. The resulting decentralized control design problem admits an equivalent reformulation as an infinite-dimensional convex program. We construct a further constraint relaxation of this problem via its partial dualization and a restriction to affine dual control policies, which yields a finite-dimensional conic program whose optimal value is a provable lower bound on the minimum cost of the original decentralized control design problem.

Finally, we apply our convex programming approach to control design to the decentralized control of distributed energy resources in radial power distribution systems. We investigate the problem of designing a fully decentralized disturbance-feedback controller that minimizes the expected cost of serving demand, while guaranteeing the satisfaction of individual resource and distribution system voltage constraints. A direct application of our aforementioned control design methods enables both the calculation of affine controllers and the bounding of their suboptimality through the solution of finite-dimensional conic programs. A case study demonstrates that the decentralized affine controller we compute can perform close to optimally.
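
As a toy illustration of how a decentralized information structure can be imposed inside a convex program, the sketch below constrains an affine disturbance-feedback policy u = G w + v so that each controller responds only to its own local disturbance. The two-subsystem sizes, input bound, and objective are hypothetical placeholders, not the dissertation's formulation.

```python
# Hypothetical two-subsystem example: sparsity-constrained affine
# disturbance-feedback policy u = G w + v inside a convex program.
import numpy as np
import cvxpy as cp

n_u, n_w = 2, 2                      # one input and one disturbance per subsystem
G = cp.Variable((n_u, n_w))          # disturbance-feedback gain
v = cp.Variable(n_u)                 # affine offset

# Decentralization: controller i may only respond to its own local disturbance,
# so the off-diagonal entries of G are forced to zero (a sparsity constraint).
sparsity = np.eye(n_w)
constraints = [G[i, j] == 0
               for i in range(n_u) for j in range(n_w) if sparsity[i, j] == 0]

# Robust input bound over the box |w| <= 1: the worst case of |G_i w + v_i|
# is ||G_i||_1 + |v_i|, which must stay within the input limit u_max.
u_max = 1.0
constraints += [cp.norm1(G[i, :]) + cp.abs(v[i]) <= u_max for i in range(n_u)]

# Toy objective: keep the gains small while pushing the offset toward a target.
objective = cp.Minimize(cp.sum_squares(G) + cp.sum_squares(v - np.array([0.5, -0.5])))
cp.Problem(objective, constraints).solve()
print(G.value, v.value)
```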




Algorithms for Convex Optimization


Book Description

In the last few years, algorithms for convex optimization have revolutionized algorithm design, both for discrete and continuous optimization problems. For problems like maximum flow, maximum matching, and submodular function minimization, the fastest algorithms now involve essential methods such as gradient descent, mirror descent, interior point methods, and ellipsoid methods. The goal of this self-contained book is to enable researchers and professionals in computer science, data science, and machine learning to gain an in-depth understanding of these algorithms. The text emphasizes how to derive key algorithms for convex optimization from first principles and how to establish precise running time bounds. This modern text explains the success of these algorithms in problems of discrete optimization, as well as how these methods have significantly pushed the state of the art of convex optimization itself.
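
As a small taste of the first-principles flavour of the subject, here is a minimal projected-gradient-descent sketch for a nonnegative least-squares problem; the problem data, step size, and iteration count are illustrative choices, not taken from the text.

```python
# Projected gradient descent on a simple convex problem:
# minimize 0.5 * ||A x - b||^2 subject to x >= 0 (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad(x):
    # Gradient of f(x) = 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

x = np.zeros(5)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
for _ in range(500):
    x = np.maximum(x - step * grad(x), 0.0)   # gradient step, then projection onto x >= 0
print(x)
```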




Interior-point Polynomial Algorithms in Convex Programming


Book Description

Specialists working in the areas of optimization, mathematical programming, or control theory will find this book invaluable for studying interior-point methods for linear and quadratic programming, polynomial-time methods for nonlinear convex programming, and efficient computational methods for control problems and variational inequalities. A background in linear algebra and mathematical programming is necessary to understand the book. The detailed proofs and lack of "numerical examples" might suggest that the book is of limited value to the reader interested in the practical aspects of convex optimization, but nothing could be further from the truth. An entire chapter is devoted to potential reduction methods precisely because of their great efficiency in practice.




Formal Methods for Discrete-Time Dynamical Systems


Book Description

This book bridges fundamental gaps between control theory and formal methods. Although it focuses on discrete-time linear and piecewise affine systems, it also provides general frameworks for the abstraction, analysis, and control of more general models. The book is self-contained, and while some mathematical knowledge is necessary, readers are not expected to have a background in formal methods or control theory. It rigorously defines concepts from formal methods, such as transition systems, temporal logics, model checking, and synthesis. It then links these to infinite-state dynamical systems through abstractions that are intuitive and only require basic convex-analysis and control-theory terminology, which is provided in the appendix. Several examples and illustrations help readers understand and visualize the concepts introduced throughout the book.
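
For readers new to formal methods, the sketch below builds a tiny, hypothetical finite transition system with atomic-proposition labels and checks a simple reachability ("eventually goal") property by breadth-first search; it only illustrates the kind of finite abstraction on which model-checking and synthesis algorithms operate, and is not taken from the book.

```python
# Hypothetical three-state transition system with atomic-proposition labels,
# plus a breadth-first reachability check for a target proposition.
from collections import deque

transitions = {            # state -> set of successor states
    "q0": {"q1"},
    "q1": {"q1", "q2"},
    "q2": {"q2"},
}
labels = {"q0": {"init"}, "q1": {"safe"}, "q2": {"goal"}}

def reachable(init, target_prop):
    """Return True if some state labeled with target_prop is reachable from init."""
    frontier, visited = deque([init]), {init}
    while frontier:
        state = frontier.popleft()
        if target_prop in labels[state]:
            return True
        for nxt in transitions[state]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return False

print(reachable("q0", "goal"))   # True: q0 -> q1 -> q2 satisfies "eventually goal"
```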