Primal-dual Interior-Point Methods


Book Description

In the past decade, primal-dual algorithms have emerged as the most important and useful algorithms from the interior-point class. This book presents the major primal-dual algorithms for linear programming in straightforward terms, giving a thorough description of their theoretical properties along with a discussion of practical and computational aspects and a summary of current software. The major primal-dual algorithms covered are path-following algorithms (short- and long-step, predictor-corrector), potential-reduction algorithms, and infeasible-interior-point algorithms. A unified treatment of superlinear convergence, finite termination, and detection of infeasible problems is presented. Issues relevant to practical implementation are also discussed, including sparse linear algebra and a complete specification of Mehrotra's predictor-corrector algorithm. Also treated are extensions of primal-dual algorithms to more general problems such as monotone complementarity, semidefinite programming, and general convex programming. The result is an excellent, timely, and well-written work.
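As a minimal sketch (not taken from the book) of the Newton step at the heart of the path-following methods described above, consider the standard-form LP min c.x subject to Ax = b, x >= 0. One interior-point iteration solves a Newton system for the perturbed KKT conditions and takes a damped step:

```python
import numpy as np

def pd_step(A, b, c, x, lam, s, sigma=0.1):
    """One primal-dual path-following Newton step for
    min c.x  s.t.  A x = b, x >= 0.  A teaching sketch, not production code."""
    m, n = A.shape
    mu = x @ s / n                      # duality measure
    # Residuals of the perturbed KKT conditions
    rp = b - A @ x                      # primal feasibility
    rd = c - A.T @ lam - s              # dual feasibility
    rc = sigma * mu - x * s             # centering: target X S e = sigma*mu*e
    # Assemble and solve the full Newton (KKT) system in (dx, dlam, ds)
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A                       # A dx            = rp
    K[m:m + n, n:n + m] = A.T           # A' dlam + ds    = rd
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)          # S dx + X ds     = rc
    K[m + n:, n + m:] = np.diag(x)
    d = np.linalg.solve(K, np.concatenate([rp, rd, rc]))
    dx, dlam, ds = d[:n], d[n:n + m], d[n + m:]
    # Fraction-to-the-boundary rule keeps x and s strictly positive
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, lam + alpha * dlam, s + alpha * ds
```

Here sigma is the centering parameter: sigma near 1 gives short, safe steps toward the central path, sigma near 0 gives aggressive affine-scaling steps; Mehrotra's predictor-corrector, covered in the book, chooses sigma adaptively and adds a second-order corrector term.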




Interior-point Polynomial Algorithms in Convex Programming


Book Description

Specialists working in the areas of optimization, mathematical programming, or control theory will find this book invaluable for studying interior-point methods for linear and quadratic programming, polynomial-time methods for nonlinear convex programming, and efficient computational methods for control problems and variational inequalities. A background in linear algebra and mathematical programming is necessary to understand the book. The detailed proofs and lack of numerical examples might suggest that the book is of limited value to the reader interested in the practical aspects of convex optimization, but nothing could be further from the truth. An entire chapter is devoted to potential reduction methods precisely because of their great efficiency in practice.




A Mathematical View of Interior-point Methods in Convex Optimization


Book Description

Here is a book devoted to well-structured and thus efficiently solvable convex optimization problems, with emphasis on conic quadratic and semidefinite programming. The authors present the basic theory underlying these problems as well as their numerous applications in engineering, including synthesis of filters, Lyapunov stability analysis, and structural design. They also discuss complexity issues and provide an overview of the basic theory of state-of-the-art polynomial-time interior-point methods for linear, conic quadratic, and semidefinite programming. The book's focus on well-structured convex problems in conic form allows for a unified theoretical and algorithmic treatment of a wide spectrum of important optimization problems arising in applications.
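The conic form underlying this unified treatment can be stated compactly; as a sketch (generic notation, not taken from the book):

```latex
% Generic conic program: K is a closed convex cone, typically a product of
% nonnegative orthants, second-order (Lorentz) cones, and PSD cones.
\min_{x \in \mathbb{R}^n} \; c^{\top} x
\quad \text{s.t.} \quad A x - b \in K .
% Taking K = \{ (t, u) : t \ge \|u\|_2 \} gives conic quadratic programming;
% taking K = \{ X : X \succeq 0 \} gives semidefinite programming.
```

Changing only the cone K thus moves between linear, conic quadratic, and semidefinite programming while the interior-point machinery stays the same.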




Primal-Dual Interior-Point Methods


Book Description

Presents the major primal-dual algorithms for linear programming, with a thorough, straightforward description of the theoretical properties of these methods.




Progress in Mathematical Programming


Book Description

The starting point of this volume was a conference entitled "Progress in Mathematical Programming," held at the Asilomar Conference Center in Pacific Grove, California, March 1-4, 1987. The main topic of the conference was developments in the theory and practice of linear programming since Karmarkar's algorithm. There were thirty presentations and approximately fifty people attended. Presentations included new algorithms, new analyses of algorithms, reports on computational experience, and some other topics related to the practice of mathematical programming. Interestingly, most of the progress reported at the conference was on the theoretical side. Several new polynomial algorithms for linear programming were presented (Barnes-Chopra-Jensen, Goldfarb-Mehrotra, Gonzaga, Kojima-Mizuno-Yoshise, Renegar, Todd, Vaidya, and Ye). Other algorithms presented were by Betke-Gritzmann, Blum, Gill-Murray-Saunders-Wright, Nazareth, Vial, and Zikan-Cottle. Efforts in the theoretical analysis of algorithms were also reported (Anstreicher, Bayer-Lagarias, Imai, Lagarias, Megiddo-Shub, Smale, and Vanderbei). Computational experiences were reported by Lustig, Tomlin, Todd, Tone, Ye, and Zikan-Cottle. Of special interest, although not in the main direction discussed at the conference, was the report by Rinaldi on the practical solution of some large traveling salesman problems. At the time of the conference, it was still not clear whether the new algorithms developed since Karmarkar's algorithm would replace the simplex method in practice. Alan Hoffman presented results on conditions under which linear programming problems can be solved by greedy algorithms.




On Primal-dual Interior-point Algorithms for Convex Optimisation


Book Description

This thesis studies the theory and implementation of interior-point methods for convex optimisation. A number of important problems from mathematics and engineering can be cast naturally as convex optimisation problems, and a great many others have useful convex relaxations. Interior-point methods are among the most successful algorithms for solving convex optimisation problems. One class of interior-point methods, called primal-dual interior-point methods, has been particularly successful at solving optimisation problems defined over symmetric cones, which are self-dual cones whose linear automorphisms act transitively on their interiors. The main theoretical contribution is the design and analysis of a primal-dual interior-point method for general convex optimisation that is "primal-dual symmetric": if arithmetic is done exactly, the sequence of iterates generated is invariant under interchange of the primal and dual problems. The proof of this algorithm's correctness and asymptotic worst-case iteration complexity hinges on a new analysis of a certain rank-four update formula akin to the Hessian estimate updates performed by quasi-Newton methods. This thesis also gives simple, explicit constructions of primal-dual scalings (linear maps from the dual space to the primal space that map the dual iterate to the primal iterate and the barrier gradient at the primal iterate to the barrier gradient at the dual iterate) by averaging the primal or dual Hessian over a line segment. These scalings are called the primal and dual integral scalings in this thesis. The primal and dual integral scalings can inherit certain kinds of good behaviour from the barrier whose Hessian is averaged. For instance, if the primal barrier Hessian at every point maps the primal cone into the dual cone, then the primal integral scaling also maps the primal cone into the dual cone.
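The line-segment averaging behind such integral scalings rests on the fundamental theorem of calculus. As a sketch, with F a barrier function and x, y interior points (the particular endpoints used in the thesis may differ):

```latex
\left( \int_0^1 F''\big(x + t(y - x)\big)\, dt \right) (y - x)
  \;=\; F'(y) - F'(x),
```

so a Hessian averaged over a segment maps a prescribed difference of points to the corresponding difference of barrier gradients, which is exactly the kind of mapping property a primal-dual scaling is required to satisfy.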
This suggests that primal-dual interior-point methods based on the primal integral scaling might be effective on problems in which the primal barrier is somehow well-behaved but the dual barrier is not. One such class of problems is hyperbolicity cone optimisation: minimising a linear function over the intersection of an affine space with a so-called hyperbolicity cone. Hyperbolicity cones arise from hyperbolic polynomials, which can be seen as a generalisation of the determinant polynomial on symmetric matrices. Hyperbolic polynomials themselves have been of considerable recent interest in mathematics, their theory playing a role in the resolution of the Kadison-Singer problem. In the setting of hyperbolicity cone optimisation, the primal barrier's Hessian satisfies "the long-step Hessian estimation property", which allows the primal barrier to be estimated anywhere in the interior of the cone in terms of its value at any other interior point, and the primal barrier Hessian at every point in the interior of the cone maps the primal cone into the dual cone. In general, however, the dual barrier satisfies neither of these properties. This thesis also describes an adaptation of the Mizuno-Todd-Ye method for linear optimisation to hyperbolicity cone optimisation, together with an implementation. This implementation is meant as a window into the algorithm's convergence behaviour on hyperbolicity cone optimisation problems rather than as a practical software package. The final chapter of this thesis describes an implementation of an interior-point method for linear optimisation. This implementation can efficiently use primal-dual scalings based on rank-four updates to an old scaling matrix and was meant as a platform to evaluate that technique.
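To make the determinant example concrete (these are the standard definitions, not specific to the thesis):

```latex
% p is hyperbolic in direction e if p(e) \neq 0 and, for every x, the
% univariate polynomial t \mapsto p(x - t e) has only real roots.
% Its hyperbolicity cone is
\Lambda_{+}(p, e) \;=\; \{\, x : \text{all roots of } t \mapsto p(x - t e)
  \text{ are nonnegative} \,\}.
% For p(X) = \det X on symmetric matrices and e = I, the roots of
% t \mapsto \det(X - t I) are the eigenvalues of X, so \Lambda_{+}(p, e)
% is the positive semidefinite cone: semidefinite programming is a
% special case of hyperbolicity cone optimisation.
```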
This implementation is modestly slower than CPLEX's barrier optimiser on problems with no free or double-bounded variables. A computational comparison between the standard interior-point algorithm for solving LPs and one instance of the rank-four update technique is given. The rank-four update formula mentioned above has an interesting specialisation to linear optimisation that is also described in this thesis. A serious effort was made to improve the running time of an interior-point method for linear optimisation using this technique, but it ultimately failed. This thesis also revisits work from the early 1990s by Rothberg and Gupta on cache-efficient data structures for Cholesky factorisation, proposing a variant of their data structure in which the time needed to perform triangular solves can be reduced substantially below the time needed by either the usual supernodal or simplicial data structures. The linear optimisation problem solver described in this thesis is also used to study the impact of these different data structures on the overall time required to solve a linear optimisation problem.




Self-Regularity


Book Description

Research on interior-point methods (IPMs) has dominated the field of mathematical programming for the last two decades. Two contrasting approaches in the analysis and implementation of IPMs are the so-called small-update and large-update methods, although, until now, there has been a notorious gap between the theory and practical performance of these two strategies. This book comes close to bridging that gap, presenting a new framework for the theory of primal-dual IPMs based on the notion of the self-regularity of a function. The authors deal with linear optimization, nonlinear complementarity problems, semidefinite optimization, and second-order conic optimization problems. The framework also covers large classes of linear complementarity problems and convex optimization. The algorithm considered can be interpreted as a path-following method or a potential reduction method. Starting from a primal-dual strictly feasible point, the algorithm chooses a search direction defined by a Newton-type system derived from the self-regular proximity function. The iterate is then updated, with the iterates staying in a certain neighborhood of the central path until an approximate solution to the problem is found. By extensively exploring some intriguing properties of self-regular functions, the authors establish that the complexity of large-update IPMs can come arbitrarily close to the best known iteration bounds of IPMs. Researchers and postgraduate students in all areas of linear and nonlinear optimization will find this book an important and invaluable aid to their work.
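As a sketch of the kind of proximity measure such a method uses, the code below evaluates the prototype self-regular kernel psi(t) = (t^2 - 1)/2 + (t^(1-q) - 1)/(q - 1) on the scaled variable v; the parameter choice here is illustrative, and details in the book differ:

```python
import numpy as np

def prox(x, s, mu, q=2.0):
    """Self-regular proximity of a strictly feasible pair (x, s) to the
    mu-center, built from the prototype kernel
        psi(t) = (t^2 - 1)/2 + (t^(1-q) - 1)/(q - 1),   q > 1.
    On the central path x_i * s_i = mu for all i, so the proximity is 0;
    away from it the t^(1-q) barrier term penalizes small products x_i*s_i."""
    v = np.sqrt(x * s / mu)            # scaled variable; v = e on the central path
    psi = (v**2 - 1) / 2 + (v**(1 - q) - 1) / (q - 1)
    return psi.sum()
```

The search direction is then obtained from a Newton-type system whose right-hand side involves the gradient of this proximity; larger q strengthens the barrier behaviour near the boundary, which is what drives the improved large-update complexity bounds.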




Interior Point Algorithms


Book Description

The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject. Beginning with an overview of fundamental mathematical procedures, Professor Yinyu Ye moves swiftly on to in-depth explorations of numerous computational problems and the algorithms that have been developed to solve them. An indispensable text/reference for students and researchers in applied mathematics, computer science, operations research, management science, and engineering, Interior Point Algorithms:

* Derives various complexity results for linear and convex programming
* Emphasizes interior point geometry and potential theory
* Covers state-of-the-art results for extension, implementation, and other cutting-edge computational techniques
* Explores the hottest new research topics, including nonlinear programming and nonconvex optimization.
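The potential theory emphasized above centers on functions such as the Tanabe-Todd-Ye primal-dual potential; as a sketch (using one common normalization, which may differ from the book's):

```python
import numpy as np

def ttq_potential(x, s, rho):
    """Tanabe-Todd-Ye primal-dual potential function
        phi(x, s) = (n + rho) * ln(x.s) - sum_i ln(x_i * s_i),   rho > 0.
    Potential-reduction methods decrease phi by a constant at each step;
    since phi -> -infinity forces the duality gap x.s -> 0, a guaranteed
    per-iteration decrease yields a polynomial iteration bound."""
    n = len(x)
    return (n + rho) * np.log(x @ s) - np.sum(np.log(x * s))
```

The first term rewards closing the duality gap, while the log-sum term penalizes iterates that drift toward the boundary of the nonnegative orthant, keeping them centered.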