Discrete-Time Inverse Optimal Control for Nonlinear Systems


Book Description

Discrete-Time Inverse Optimal Control for Nonlinear Systems proposes a novel inverse optimal control scheme for stabilization and trajectory tracking of discrete-time nonlinear systems. The scheme avoids solving the associated Hamilton-Jacobi-Bellman equation while still minimizing a cost functional, resulting in a more efficient controller.

Design More Efficient Controllers for Stabilization and Trajectory Tracking of Discrete-Time Nonlinear Systems

The book presents two approaches for controller synthesis: the first based on passivity theory and the second on a control Lyapunov function (CLF). The synthesized discrete-time optimal controller can be directly implemented in real-time systems. The book also proposes the use of recurrent neural networks to model discrete-time nonlinear systems. Combined with the inverse optimal control approach, such models constitute a powerful tool for dealing with uncertainties such as unmodeled dynamics and disturbances.

Learn from Simulations and an In-Depth Case Study

The authors include a variety of simulations to illustrate the effectiveness of the synthesized controllers for stabilization and trajectory tracking of discrete-time nonlinear systems. An in-depth case study applies the control schemes to glycemic control in patients with type 1 diabetes mellitus, calculating the insulin delivery rate required to prevent hyperglycemia and hypoglycemia. The discrete-time optimal and robust control techniques proposed can be used in a range of industrial applications, from aerospace and energy to biomedical and electromechanical systems. Highlighting optimal and efficient control algorithms, this is a valuable resource for researchers, engineers, and students working in nonlinear system control.
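For readers unfamiliar with the CLF-based route, the following sketch illustrates the general idea on a scalar system. The dynamics, weights, and CLF below are invented for illustration and are not taken from the book; the control law shown is one common inverse-optimal form derived from a quadratic CLF.

```python
import numpy as np

# Scalar discrete-time system x_{k+1} = f(x_k) + g(x_k) * u_k.
def f(x):
    return x + 0.1 * np.sin(x)   # drift term (illustrative only; unstable at the origin)

def g(x):
    return 0.1                   # input gain (illustrative only)

P = 1.0    # quadratic CLF V(x) = 0.5 * P * x^2
R = 0.01   # control weight in the implicitly minimized cost

def u_inverse_optimal(x):
    # One common CLF-based inverse-optimal control law:
    #   u = -(1/2) * (R + (1/2) g P g)^(-1) * g P f(x)
    fx, gx = f(x), g(x)
    return -0.5 * gx * P * fx / (R + 0.5 * gx * P * gx)

# Closed-loop simulation: the CLF decreases along trajectories,
# so the state is driven to the origin.
x = 2.0
for _ in range(200):
    x = f(x) + g(x) * u_inverse_optimal(x)
```

The key point, which the book develops rigorously, is that the controller is obtained directly from the CLF rather than by solving the Hamilton-Jacobi-Bellman equation; optimality with respect to a meaningful cost is established a posteriori.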




Neural Network Control of Nonlinear Discrete-Time Systems


Book Description

Intelligent systems are a hallmark of modern feedback control systems. But as these systems mature, we have come to expect higher levels of performance in speed and accuracy in the face of severe nonlinearities, disturbances, unforeseen dynamics, and unstructured uncertainties. Artificial neural networks offer a combination of adaptability, parallel processing, and learning capabilities that outperform other intelligent control methods in more complex systems.

Borrowing from Biology

Examining neurocontroller design in discrete time for the first time, Neural Network Control of Nonlinear Discrete-Time Systems presents powerful modern control techniques based on the parallelism and adaptive capabilities of biological nervous systems. At every step, the author derives rigorous stability proofs and presents simulation examples to demonstrate the concepts.

Progressive Development

After an introduction to neural networks, dynamical systems, control of nonlinear systems, and feedback linearization, the book builds systematically from actuator nonlinearities and strict-feedback nonlinear systems to nonstrict feedback, system identification, model reference adaptive control, and novel optimal control using the Hamilton-Jacobi-Bellman formulation. The author concludes by developing a framework for implementing intelligent control in actual industrial systems using embedded hardware. Neural Network Control of Nonlinear Discrete-Time Systems fosters an understanding of neural network controllers and explains how to build them using detailed derivations, stability analysis, and computer simulations.




Neural Systems for Control


Book Description

Control problems offer an industrially important application and a guide to understanding control systems for those working in neural networks. Neural Systems for Control represents the most up-to-date developments in the rapidly growing application area of neural networks and focuses on research in natural and artificial neural systems directly applicable to control or making use of modern control theory. The book covers such important new developments in control systems as intelligent sensors in semiconductor wafer manufacturing; the relation between muscles and cerebral neurons in speech recognition; online compensation of reconfigurable control for spacecraft, aircraft, and other systems; applications to rolling mills, robotics, and process control; the use of past output data to identify nonlinear systems by neural networks; neural approximate optimal control; model-free nonlinear control; and neural control based on physiological regulation, such as blood pressure control. All researchers and students dealing with control systems will find Neural Systems for Control of immense interest and assistance.

- Focuses on research in natural and artificial neural systems directly applicable to control or making use of modern control theory
- Represents the most up-to-date developments in this rapidly growing application area of neural networks
- Takes a new and novel approach to system identification and synthesis




Self-Learning Optimal Control of Nonlinear Systems


Book Description

This book presents a class of novel, self-learning, optimal control schemes based on adaptive dynamic programming techniques, which quantitatively determine optimal control laws for the systems under study. It analyzes the properties of these methods, including the convergence of the iterative value functions and the stability of the system under the iterative control laws, helping to guarantee the effectiveness of the methods developed. When the system model is known, self-learning optimal control is designed on the basis of the model; when the model is not known, adaptive dynamic programming is implemented using system data, effectively making the performance of the system converge to the optimum. With various real-world examples to complement and substantiate the mathematical analysis, the book is a valuable guide for engineers, researchers, and students in control science and engineering.




Nonlinear and Optimal Control Systems


Book Description

Designed for a one-semester introductory senior- or graduate-level course, this book provides students with an introduction to the analysis techniques used in the design of nonlinear and optimal feedback control systems. Special emphasis is placed on the fundamental topics of stability, controllability, and optimality, and on the geometry associated with these topics. Each chapter contains several examples and a variety of exercises.




Adaptive Dynamic Programming: Single and Multiple Controllers


Book Description

This book presents a class of novel optimal control methods and game schemes based on adaptive dynamic programming techniques. For systems with one control input, the ADP-based optimal control is designed for different objectives, while for multi-player systems, the optimal control inputs are derived from game-theoretic formulations. To verify the effectiveness of the proposed methods, the book analyzes the properties of the adaptive dynamic programming methods, including the convergence of the iterative value functions and the stability of the system under the iterative control laws. To substantiate the mathematical analysis, it further presents various application examples that provide a reference for real-world practice.




Adaptive Dynamic Programming with Applications in Optimal Control


Book Description

This book covers the most recent developments in adaptive dynamic programming (ADP). The text begins with a thorough background review of ADP, making sure that readers are sufficiently familiar with the fundamentals. In the core of the book, the authors address first discrete-time and then continuous-time systems. Coverage of discrete-time systems starts with a more general form of value iteration to demonstrate its convergence, optimality, and stability with complete and thorough theoretical analysis. A more realistic form of value iteration is then studied, in which value function approximations are assumed to have finite errors. Adaptive Dynamic Programming also details another avenue of the ADP approach: policy iteration. Both basic and generalized forms of policy-iteration-based ADP are studied with complete and thorough theoretical analysis in terms of convergence, optimality, stability, and error bounds. For continuous-time systems, the control of affine and nonaffine nonlinear systems is studied using the ADP approach, which is then extended to other branches of control theory, including decentralized control, robust and guaranteed cost control, and game theory. The last part of the book presents the real-world significance of ADP theory, focusing on three application examples developed from the authors’ work:

- renewable energy scheduling for smart power grids;
- coal gasification processes; and
- water–gas shift reactions.

Researchers studying intelligent control methods and practitioners looking to apply them in the chemical-process and power-supply industries will find much to interest them in this thorough treatment of an advanced approach to control.
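The basic value-iteration idea underlying much of ADP can be sketched on a tiny finite Markov decision process. The MDP below (states, transitions, rewards, and discount factor) is invented purely for illustration and is not taken from the book:

```python
import numpy as np

# Toy MDP (illustrative only): 2 states, 2 actions.
# P[a, s, s'] = transition probability, r[s, a] = one-step reward.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.1, 0.9],
               [0.7, 0.3]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    Q = r + gamma * np.einsum('ast,t->sa', P, V)  # Q[s, a]
    V_new = Q.max(axis=1)                         # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-10:         # iterative values have converged
        break
    V = V_new
policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
```

Because the Bellman operator is a gamma-contraction, the iterative value functions converge geometrically to the optimal value function; the book's analysis extends this basic picture to more general iteration schemes and to value functions with finite approximation errors.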




Reinforcement Learning and Dynamic Programming Using Function Approximators


Book Description

From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence. Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications. 
The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work. Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.
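To make the model-free flavor of RL concrete, here is a minimal tabular Q-learning sketch, the simplest model-free counterpart of value iteration. The toy environment is invented for illustration and is far simpler than the continuous-variable, function-approximation problems the book focuses on; the learner observes only sampled transitions and rewards, never the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(s, a):
    # Hidden toy dynamics (illustrative only) -- the learner sees
    # only sampled next states and rewards, never these tables.
    P = [[[0.9, 0.1], [0.2, 0.8]], [[0.1, 0.9], [0.7, 0.3]]]
    r = [[1.0, 0.0], [0.0, 2.0]]
    return rng.choice(2, p=P[a][s]), r[s][a]

Q = np.zeros((2, 2))                # Q[s, a]
gamma, alpha, eps = 0.9, 0.05, 0.2  # discount, step size, exploration rate
s = 0
for t in range(50000):
    # epsilon-greedy exploration
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s2, rew = step(s, a)
    # temporal-difference update toward the Bellman optimality target
    Q[s, a] += alpha * (rew + gamma * Q[s2].max() - Q[s, a])
    s = s2
policy = Q.argmax(axis=1)
```

Replacing the table with a parameterized function approximator, as the book does, is what allows the same update idea to scale to continuous state spaces.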




Predictive Control for Linear and Hybrid Systems


Book Description

With a simple approach that includes real-time applications and algorithms, this book covers the theory of model predictive control (MPC).
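To give a flavor of the receding-horizon idea at the heart of MPC, here is a minimal unconstrained linear example. The double-integrator model, horizon, and weights are invented for illustration; the book's formulations, which handle constraints and hybrid dynamics, are far more general.

```python
import numpy as np

# Double-integrator model (illustrative only): x+ = A x + B u
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N = 10                              # prediction horizon
Q, R = np.eye(2), np.array([[0.1]]) # state and input weights

def mpc_control(x0):
    # Batch prediction: stack x_1..x_N as X = F x0 + G U, then minimize
    # sum_k (x_k' Q x_k + u_k' R u_k) over U (unconstrained least squares).
    n, m = 2, 1
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar
    f = G.T @ Qbar @ F @ x0
    U = np.linalg.solve(H, -f)
    return U[:m]  # receding horizon: apply only the first input

# Closed loop: re-solve the finite-horizon problem at every step.
x = np.array([5.0, 0.0])
for k in range(30):
    x = A @ x + B @ mpc_control(x)
```

With input or state constraints, the least-squares step becomes a quadratic program solved at each sampling instant, which is the setting the book treats in depth.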




Language and Cognition


Book Description

Interaction between language and cognition remains an unsolved scientific problem. What are the differences in the neural mechanisms of language and cognition? Why do children acquire language by the age of six, while taking a lifetime to acquire cognition? What is the role of language and cognition in thinking? Is abstract cognition possible without language? Is language just a communication device, or is it fundamental to developing thoughts? Why are there no animals with human thinking but without human language? The number of combinations among even 100 words and 100 objects (multiple words can represent multiple objects) exceeds the number of particles in the Universe, and it seems that no amount of experience would suffice to learn these associations. How does the human brain overcome this difficulty? Since the 19th century, we have known about the involvement of Broca’s and Wernicke’s areas in language. What new knowledge of language and cognition areas has been gained with fMRI and other brain imaging methods? Every year we learn more about their anatomical and functional/effective connectivity. What can be inferred about the mechanisms of their interaction, and about their functions in language and cognition? Why does the human brain show hemispheric (i.e., left or right) dominance for some specific linguistic and cognitive processes? Is understanding of language and cognition processed in the same brain area, or are there differences between language-semantic and cognitive-semantic brain areas? Is the syntactic process related to the structure of our conceptual world? Chomsky has suggested that language is separable from cognition. In contrast, cognitive and construction linguistics have emphasized a single mechanism for both. Neither has led to a computational theory so far. Evolutionary linguistics has emphasized evolution leading to a mechanism of language acquisition, yet the proposed approaches also lead to computationally intractable complexity.
There are further related issues in linguistics and language education as well. Which brain regions govern the phonological, lexical, semantic, and syntactic systems, as well as their acquisition? What are the differences between acquisition of a first and a second language? Which mechanisms of cognition are involved in reading and writing? Do different writing systems affect relations between language and cognition? Are there differences in language-cognition interactions among different language groups (such as Indo-European, Chinese, Japanese, Semitic) and types (different degrees of analytic-isolating, synthetic-inflected, fused, or agglutinative features)? What can be learned from sign languages? Rizzolatti and Arbib have proposed that language evolved on top of an earlier mirror-neuron mechanism. Can this proposal answer the open questions about language and cognition? Can it explain the mechanisms of language-cognition interaction? How does it relate to known brain areas and their interactions identified in brain imaging? Emotional and conceptual contents of voice sounds in animals are fused. The evolution of human language demanded a splitting of emotional and conceptual contents and mechanisms, although language prosody still carries emotional content. Is this a dying-off remnant, or is it fundamental for the interaction between language and cognition? If language and cognitive mechanisms differ, unifying these two contents requires motivation, and hence emotions. What are these emotions? Can they be measured? Tonal languages use pitch contours for semantic content; are there differences in language-cognition interaction between tonal and atonal languages? Are emotional differences among cultures exclusively cultural, or do they also depend on language? The interaction of language and cognition is thus full of mysteries, and we encourage papers addressing any aspect of this topic.