Nonlinear Conjugate Gradient Methods for Unconstrained Optimization


Book Description

Two classes of approaches are known for solving large-scale unconstrained optimization problems: the limited-memory quasi-Newton and truncated Newton methods on one hand, and the conjugate gradient methods on the other. This is the first book to detail conjugate gradient methods, showing their properties and convergence characteristics as well as their performance in solving large-scale unconstrained optimization problems and applications. Comparisons to the limited-memory and truncated Newton methods are also discussed. Topics studied in detail include: linear conjugate gradient methods, standard conjugate gradient methods, acceleration of conjugate gradient methods, hybrid variants, modifications of the standard scheme, memoryless BFGS-preconditioned methods, and three-term methods. Conjugate gradient methods that cluster the eigenvalues or minimize the condition number of the iteration matrix are also treated. For each method, the convergence analysis, the computational performance, and comparisons with other conjugate gradient methods are given. The theory behind the conjugate gradient algorithms, presented as a methodology, is developed in a clear, rigorous, and friendly exposition; readers will gain an understanding of their properties and convergence and will learn to develop and prove the convergence of their own methods. Numerous numerical studies are supplied, with comparisons and comments on the behavior of conjugate gradient algorithms in solving a collection of 800 unconstrained optimization problems of different structures and complexities, with the number of variables in the range [1000, 10000]. The book is addressed to all those interested in developing and using new advanced techniques for solving complex unconstrained optimization problems. Mathematical programming researchers, theoreticians and practitioners in operations research, practitioners in engineering and industry, as well as graduate students in mathematics and Ph.D. and master's students in mathematical programming, will find plenty of information and practical applications for solving large-scale unconstrained optimization problems and applications by conjugate gradient methods.
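To make the class of methods concrete, here is a minimal Python sketch of one standard variant, the Fletcher-Reeves conjugate gradient method with a backtracking (Armijo) line search. It illustrates the general scheme only and is not code from the book; the Rosenbrock test function, the starting point, and all parameter values are assumptions chosen for the demonstration.

```python
# Minimal sketch of the Fletcher-Reeves nonlinear conjugate gradient method
# with a backtracking (Armijo) line search. Illustrative only; the test
# function and every parameter value below are assumptions for the demo.
import numpy as np

def backtracking(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
        alpha *= rho
    return alpha

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=1000):
    x = x0.copy()
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = backtracking(f, x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)    # Fletcher-Reeves beta
        d = -g_new + beta * d                 # new conjugate direction
        if d.dot(g_new) >= 0:                 # safeguard: restart if not descent
            d = -g_new
        g = g_new
    return x

# Example: minimize the 2-D Rosenbrock function from a standard start point.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(fletcher_reeves(f, grad, np.array([-1.2, 1.0])))
```

The choice of beta is what distinguishes the standard variants the book surveys; for instance, swapping in the Polak-Ribiere formula beta = g_new.dot(g_new - g) / g.dot(g) yields another classical method of the same family.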




Nonlinear Optimization with Engineering Applications


Book Description

This textbook examines a broad range of problems in science and engineering, describing key numerical methods applied to real-life problems. The case studies presented are in such areas as data fitting, vehicle route planning and optimal control, scheduling and resource allocation, and sensitivity calculations and worst-case analysis. Chapters are self-contained, with exercises provided at the end of most sections. Nonlinear Optimization with Engineering Applications is ideal for self-study and classroom use in engineering courses at the senior undergraduate or graduate level. The book will also appeal to postdocs and advanced researchers interested in the development and use of optimization algorithms.




The Sequential Unconstrained Minimization Technique for Nonlinear Programming. Algorithm II. Optimum Gradients by Fibonacci Search


Book Description

The algorithm has been revised to incorporate a more efficient technique for computing the minimum of a function along a specified vector, a computation required in each iteration of the optimum-gradient method. The new technique is an adaptation of a Fibonacci search previously used and results in a reduction in total problem solution time of almost one half. A new normalized final-convergence criterion that does not depend on the magnitude of the optimum solution value is given. The detailed computer solution of a chance-constrained linear programming problem illustrates the typical convergence characteristics of the method. The remainder of the paper is a concise and simplified review of all the method's important computational aspects.
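As an illustration of the line-minimization step described above, the following Python sketch implements a basic one-dimensional Fibonacci search on a unimodal function. It shows the general technique only, not the report's specific Fibonacci-gradient adaptation; the bracketing interval, the evaluation budget n, and the quadratic test function are assumptions chosen for the demonstration.

```python
# Minimal sketch of one-dimensional Fibonacci search. Assumes f is unimodal
# on [a, b]; the interval, budget n, and test function are illustrative
# assumptions, not taken from the report.
def fibonacci_search(f, a, b, n=20):
    """Locate the minimizer of a unimodal f on [a, b] using n evaluations."""
    fib = [1, 1]
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])
    # Initial interior points split [a, b] in Fibonacci ratios.
    x1 = a + (fib[n - 2] / fib[n]) * (b - a)
    x2 = a + (fib[n - 1] / fib[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):
        if f1 > f2:              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (fib[n - k - 1] / fib[n - k]) * (b - a)
            f2 = f(x2)
        else:                    # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (fib[n - k - 2] / fib[n - k]) * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Example: the minimizer of (x - 2)^2 on [0, 5] is x = 2.
print(fibonacci_search(lambda x: (x - 2.0)**2, 0.0, 5.0))
```

In the optimum-gradient method, f would be the objective restricted to the current search direction, so one such search is performed per iteration.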




Nonlinear Optimization


Book Description

Optimization is one of the most important areas of modern applied mathematics, with applications in fields from engineering and economics to finance, statistics, management science, and medicine. While many books have addressed its various aspects, Nonlinear Optimization is the first comprehensive treatment that will allow graduate students and researchers to understand its modern ideas, principles, and methods within a reasonable time, but without sacrificing mathematical precision. Andrzej Ruszczynski, a leading expert in the optimization of nonlinear stochastic systems, integrates the theory and the methods of nonlinear optimization in a unified, clear, and mathematically rigorous fashion, with detailed and easy-to-follow proofs illustrated by numerous examples and figures. The book covers convex analysis, the theory of optimality conditions, duality theory, and numerical methods for solving unconstrained and constrained optimization problems. It addresses not only classical material but also modern topics such as optimality conditions and numerical methods for problems involving nondifferentiable functions, semidefinite programming, metric regularity and stability theory of set-constrained systems, and sensitivity analysis of optimization problems. Based on a decade's worth of notes the author compiled in successfully teaching the subject, this book will help readers to understand the mathematical foundations of the modern theory and methods of nonlinear optimization and to analyze new problems, develop optimality theory for them, and choose or construct numerical solution methods. It is a must for anyone seriously interested in optimization.




Conjugate Gradient Algorithms in Nonconvex Optimization


Book Description

This book details algorithms for large-scale unconstrained and bound-constrained optimization. It presents optimization techniques from the perspective of conjugate gradient algorithms, as well as the methods of shortest residuals developed by the author.




Modern Numerical Nonlinear Optimization


Book Description

This book includes a thorough theoretical and computational analysis of unconstrained and constrained optimization algorithms and integrates the most recent techniques with advanced computational linear algebra methods. Nonlinear optimization methods and techniques have reached maturity, and an abundance of optimization algorithms is available for which both the convergence properties and the numerical performance are known. This clear, friendly, and rigorous exposition discusses the theory behind the nonlinear optimization algorithms so that readers understand their properties and their convergence, enabling them to prove the convergence of their own algorithms. It covers the computational performance of the best-known modern nonlinear optimization algorithms on collections of unconstrained and constrained test problems of different structures and complexities, as well as on large-scale real applications. The book is addressed to all those interested in developing and using new advanced techniques for solving large-scale unconstrained or constrained complex optimization problems. Mathematical programming researchers, theoreticians and practitioners in operations research, practitioners in engineering and industry, as well as graduate students in mathematics and Ph.D. and master's students in mathematical programming, will find plenty of recent information and practical approaches for solving real large-scale optimization problems and applications.




Introduction to Methods for Nonlinear Optimization


Book Description

This book has two main objectives:
• to provide a concise introduction to nonlinear optimization methods, which can be used as a textbook at a graduate or upper undergraduate level;
• to collect and organize selected important topics on optimization algorithms, not easily found in textbooks, which can provide material for advanced courses or can serve as a reference text for self-study and research.
The basic material on unconstrained and constrained optimization is organized into two blocks of chapters:
• basic theory and optimality conditions;
• unconstrained and constrained algorithms.
These topics are treated in short chapters that contain the most important results in theory and algorithms, in a way that, in the authors’ experience, is suitable for introductory courses. A third block of chapters addresses methods that are of increasing interest for solving difficult optimization problems. Difficulty can typically be due to the high nonlinearity of the objective function, ill-conditioning of the Hessian matrix, lack of information on first-order derivatives, or the need to solve large-scale problems. Various key subjects are addressed in the book, including: exact penalty functions and exact augmented Lagrangian functions, nonmonotone methods, decomposition algorithms, and derivative-free methods for nonlinear equations and optimization problems. The appendices at the end of the book offer a review of the essential mathematical background, including an introduction to convex analysis that can form part of an introductory course.




Advances in Nonlinear Programming


Book Description

About 60 scientists and students attended the '96 International Conference on Nonlinear Programming, which was held September 2-5 at the Institute of Computational Mathematics and Scientific/Engineering Computing (ICMSEC), Chinese Academy of Sciences, Beijing, China. 25 participants were from outside China and 35 from China. The conference was held to celebrate the 60th birthday of Professor M.J.D. Powell (Fellow of the Royal Society, University of Cambridge) for his many contributions to nonlinear optimization. On behalf of the Chinese Academy of Sciences, vice president Professor Zhihong Xu attended the opening ceremony of the conference to express his warm welcome to all the participants. After the opening ceremony, Professor M.J.D. Powell gave the keynote lecture "The use of band matrices for second derivative approximations in trust region methods". 13 other invited lectures on recent advances in nonlinear programming were given during the four-day meeting: "Primal-dual methods for nonconvex optimization" by M. H. Wright (SIAM President, Bell Labs), "Interior point trajectories in semidefinite programming" by D. Goldfarb (Columbia University, Editor-in-Chief for Series A of Mathematical Programming), "An approach to derivative free optimization" by A.