Dynamics and Optimal Control of Road Vehicles


Book Description

Dynamics and Optimal Control of Road Vehicles offers a unified treatment of tyre, car and motorcycle dynamics together with the application of nonlinear optimal control to vehicle-related problems, all within a single book. This comprehensive and accessible text emphasises the theoretical aspects of vehicular modelling and control. The book focuses on two major elements. The first is classical mechanics and its use in building vehicle and tyre dynamics models. The second is nonlinear optimal control, which is used to solve a range of minimum-time, minimum-fuel and track curvature reconstruction problems. Classically, all of this material is bound together by the calculus of variations and stationary principles. The treatment is supplemented with a number of examples designed to highlight obscurities and subtleties in the theory.
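
As a pointer to the stationary principles referred to above, the generic calculus-of-variations statement (standard textbook notation, not a formula quoted from the book) is that a functional

    \[
      J[q] = \int_{t_0}^{t_1} L\bigl(q(t), \dot q(t), t\bigr)\, dt
    \]

is stationary, \(\delta J = 0\), exactly when the Euler-Lagrange equation

    \[
      \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0
    \]

holds. With \(L\) taken as a mechanical Lagrangian this condition yields the equations of motion used in vehicle and tyre models; with a running cost in place of \(L\) (and the dynamics adjoined by multipliers), the same variational machinery yields the first-order optimality conditions of optimal control, which is the sense in which the calculus of variations binds the two parts of the book together.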




Optimal Control Theory with Applications in Economics


Book Description

A rigorous introduction to optimal control theory, with an emphasis on applications in economics. This book bridges optimal control theory and economics, discussing ordinary differential equations, optimal control, game theory, and mechanism design in one volume. Technically rigorous and largely self-contained, it provides an introduction to the use of optimal control theory for deterministic continuous-time systems in economics.

The theory of ordinary differential equations (ODEs) is the backbone of the theory developed in the book, and chapter 2 offers a detailed review of basic concepts in the theory of ODEs, including the solution of systems of linear ODEs, state-space analysis, potential functions, and stability analysis. Following this, the book covers the main results of optimal control theory, in particular necessary and sufficient optimality conditions; game theory, with an emphasis on differential games; and the application of control-theoretic concepts to the design of economic mechanisms. Appendixes provide a mathematical review and full solutions to all end-of-chapter problems.

The material is presented at three levels: single-person decision making; games, in which a group of decision makers interact strategically; and mechanism design, which is concerned with a designer's creation of an environment in which players interact to maximize the designer's objective. The book focuses on applications; the problems are an integral part of the text. It is intended for use as a textbook or reference for graduate students, teachers, and researchers interested in applications of control theory beyond its classical use in economic growth. The book will also appeal to readers interested in a modeling approach to certain practical problems involving dynamic continuous-time models.
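
For readers unfamiliar with the necessary optimality conditions mentioned above, the generic deterministic continuous-time problem (standard notation, not necessarily the book's own) is

    \[
      \max_{u(\cdot)} \int_{0}^{T} h\bigl(t, x(t), u(t)\bigr)\, dt
      \quad \text{s.t.} \quad \dot x(t) = f\bigl(t, x(t), u(t)\bigr), \; x(0) = x_0 .
    \]

With the Hamiltonian \(H(t, x, u, \psi) = h(t, x, u) + \psi^{\top} f(t, x, u)\), Pontryagin-type necessary conditions state that an optimal pair \((x^*, u^*)\) admits an adjoint trajectory \(\psi\) satisfying

    \[
      \dot\psi(t) = -\,\frac{\partial H}{\partial x}\bigl(t, x^*(t), u^*(t), \psi(t)\bigr),
      \qquad
      u^*(t) \in \arg\max_{u} H\bigl(t, x^*(t), u, \psi(t)\bigr),
    \]

together with the transversality condition \(\psi(T) = 0\) in the free-endpoint case.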




Calculus of Variations and Optimal Control Theory


Book Description

This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control. Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study.

Key features:
- Offers a concise yet rigorous introduction
- Requires limited background in control theory or advanced mathematics
- Provides a complete proof of the maximum principle
- Uses consistent notation in the exposition of classical and modern topics
- Traces the historical development of the subject
- Solutions manual (available only to teachers)

Leading universities that have adopted this book include:
- University of Illinois at Urbana-Champaign, ECE 553: Optimum Control Systems
- Georgia Institute of Technology, ECE 6553: Optimal Control and Optimization
- University of Pennsylvania, ESE 680: Optimal Control Theory
- University of Notre Dame, EE 60565: Optimal Control
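
As a brief reminder of two of the topics named above (standard notation, not a quotation from the book), the Hamilton-Jacobi-Bellman equation characterises the value function \(V\) of a finite-horizon problem via

    \[
      -\frac{\partial V}{\partial t}(t, x)
      = \min_{u} \Bigl\{ \ell(t, x, u) + \frac{\partial V}{\partial x}(t, x)\, f(t, x, u) \Bigr\},
      \qquad V(T, x) = \phi(x),
    \]

and in the linear-quadratic case, \(\dot x = Ax + Bu\) with cost \(\int_0^\infty \bigl(x^{\top} Q x + u^{\top} R u\bigr)\, dt\), the optimal feedback is \(u = -R^{-1} B^{\top} P x\), where \(P\) solves the algebraic Riccati equation \(A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0\) (under the usual definiteness and stabilizability assumptions).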




Mathematical Optimization Theory and Operations Research: Recent Trends


Book Description

This book constitutes the refereed proceedings of the 21st International Conference on Mathematical Optimization Theory and Operations Research, MOTOR 2022, held in Petrozavodsk, Russia, in July 2022. The 21 full papers and 3 short papers presented in this volume were carefully reviewed and selected from a total of 88 submissions. The papers in the volume are organised according to the following topical headings: invited talks; integer programming and combinatorial optimization; mathematical programming; game theory and optimal control; operational research applications.




Advances in Dynamics of Vehicles on Roads and Tracks


Book Description

This book gathers together papers presented at the 26th IAVSD Symposium on Dynamics of Vehicles on Roads and Tracks, held on August 12 – 16, 2019, at the Lindholmen Conference Centre in Gothenburg, Sweden. It covers cutting-edge issues related to vehicle systems, including vehicle design, condition monitoring, wheel and rail contact, automated driving systems, suspension and ride analysis, and many more topics. Written by researchers and practitioners, the book offers a timely reference guide to the field of vehicle systems dynamics, and a source of inspiration for future research and collaborations.




On Motion Planning Using Numerical Optimal Control


Book Description

During the last decades, motion planning for autonomous systems has become an important area of research. This high level of interest is not least due to the development of systems such as self-driving cars, unmanned aerial vehicles and robotic manipulators. In this thesis, the objective is not only to find feasible solutions to a motion planning problem, but also solutions that optimize some performance measure. From a control perspective, the resulting problem is an instance of an optimal control problem. The focus of this thesis is to further develop optimal control algorithms so that they can be used to obtain improved solutions to motion planning problems. This is achieved by combining ideas from automatic control, numerical optimization and robotics.

First, a systematic approach for computing local solutions to motion planning problems in challenging environments is presented. The solutions are computed by combining homotopy methods and numerical optimal control techniques. The general principle is to define a homotopy that transforms, or preferably relaxes, the original problem into an easily solved problem (a minimal sketch of this relaxation mechanism is given after this description). The approach is demonstrated on motion planning problems in 2D and 3D environments, where the presented method outperforms both a state-of-the-art numerical optimal control method based on standard initialization strategies and a state-of-the-art optimizing sampling-based planner based on random sampling.

Second, a framework for automatically generating motion primitives for lattice-based motion planners is proposed. Given a family of systems, the user only needs to specify which principal types of motions are relevant for the considered system family. Based on the selected principal motions and a selected system instance, the algorithm automatically optimizes not only the motions connecting pre-defined boundary conditions, but also, simultaneously, the terminal state constraints. In addition to handling static, a priori known system parameters such as platform dimensions, the framework also allows for fast automatic re-optimization of motion primitives if the system parameters change while the system is in use. Furthermore, the proposed framework is extended to also allow optimization of the discretization parameters used by the lattice-based motion planner to define a state-space discretization. This enables an optimized selection of these parameters for a specific system instance.

Finally, a unified optimization-based path planning approach to efficiently compute locally optimal solutions to advanced path planning problems is presented. The main idea is to combine the strengths of sampling-based path planners and numerical optimal control. The lattice-based path planner is applied to the problem in a first step using a discretized search space, where the system dynamics and objective function are chosen to coincide with those used in a second numerical optimal control step. This novel, tight combination of a sampling-based path planner and numerical optimal control benefits, in a structured way, from the former method’s ability to solve the combinatorial parts of the problem and the latter method’s ability to obtain locally optimal solutions that are not constrained to a discretized search space. The proposed approach is shown, in several practically relevant path planning problems, to provide improvements in terms of computation time, numerical reliability, and objective function value.
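
The homotopy idea in the first contribution can be illustrated with a small, self-contained sketch. The toy scenario, the dynamics-free path model, the penalty weight and the solver choice below are illustrative assumptions, not code from the thesis; the only point is the mechanism: the relaxed problem (homotopy parameter 0) is solved first, and its solution warm-starts a sequence of progressively tighter problems until the original problem (parameter 1) is recovered.

    # Homotopy/warm-start sketch: relax an obstacle constraint into a penalty and
    # tighten it gradually, reusing each solution as the next initial guess.
    import numpy as np
    from scipy.optimize import minimize

    N = 40                                            # number of free points on the path
    start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    obstacle, radius = np.array([0.5, 0.02]), 0.2     # circular obstacle blocking the straight-line guess

    def cost(z, lam):
        """Smoothness (sum of squared segment lengths) plus an obstacle penalty
        whose weight is scaled by the homotopy parameter lam in [0, 1]."""
        pts = np.vstack([start, z.reshape(-1, 2), goal])
        smooth = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1) ** 2)
        dist = np.linalg.norm(pts - obstacle, axis=1)
        penalty = np.sum(np.maximum(radius - dist, 0.0) ** 2)
        return smooth + lam * 1e3 * penalty

    # Initial guess: straight line from start to goal (poor for the original problem).
    z = np.linspace(start, goal, N + 2)[1:-1].ravel()

    # Homotopy loop: lam = 0 is the easily solved relaxed problem (no obstacle),
    # lam = 1 is the original problem; each stage is warm-started with the previous solution.
    for lam in np.linspace(0.0, 1.0, 6):
        z = minimize(cost, z, args=(lam,), method="L-BFGS-B").x

    print("final cost on the original problem:", cost(z, 1.0))

In the thesis the same principle is applied with full vehicle dynamics and a numerical optimal control solver in 2D and 3D environments; the generic optimizer and penalty relaxation above merely stand in for those components.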







Optimal Control Theory for Applications


Book Description

This book is the outgrowth of teaching analytical optimization to aerospace engineering graduate students. To make the material available to the widest possible audience, the prerequisites are limited to calculus and differential equations. It is a book about the mathematical aspects of optimal control theory, developed in an engineering environment from material the author learned while applying it to the solution of engineering problems. One goal of the book is to help engineering graduate students learn the fundamentals needed to apply the methods to engineering problems. The examples are drawn from geometry and elementary dynamical systems so that they can be understood by all engineering students. Another goal is to unify optimization by using the differential of calculus to create the Taylor series expansions needed to derive the optimality conditions of optimal control theory.
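
The unifying role of the differential mentioned above can be made concrete with the standard first-order argument (generic notation, not the book's own). Expanding a cost \(J\) about a candidate optimum \(x^*\),

    \[
      J(x^* + \delta x) = J(x^*) + \delta J(x^*; \delta x) + o(\lVert \delta x \rVert),
    \]

optimality requires \(\delta J(x^*; \delta x) = 0\) for every admissible variation \(\delta x\). In the finite-dimensional case this reduces to \(\nabla J(x^*) = 0\); for an integral cost constrained by \(\dot x = f(x, u)\), the same expansion, with the dynamics adjoined by multipliers, produces the adjoint and stationarity conditions of optimal control theory.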




Informatics in Control, Automation and Robotics


Book Description

The book focuses on the latest research and developments in the fields of Control, Robotics and Automation. Through more than twenty revised and extended articles, it aims to provide an up-to-date view of the state of the art in these fields, allowing researchers, PhD students and engineers not only to update their knowledge but also to benefit from the selected articles as a source of inspiration. The editors’ deliberate intention to cover both the theoretical facets of these fields and their practical accomplishments and implementations has the advantage of gathering, in a single volume, a factual and well-balanced picture of current research in these topics. A particular emphasis on “Intelligent Robots and Control” is a further benefit of this book.