E-book, English, Volume 23, 284 pages
Dostál: Optimal Quadratic Programming Algorithms
With Applications to Variational Inequalities
1st edition, 2009
Series: Springer Optimization and Its Applications
ISBN: 978-0-387-84806-8
Publisher: Springer US
Format: PDF
Copy protection: PDF watermark
Quadratic programming (QP) is a mathematical technique for optimizing a quadratic function of several variables subject to linear constraints. This book presents recently developed algorithms for solving large QP problems and focuses on algorithms that are, in a sense, optimal: they can solve important classes of problems at a cost proportional to the number of unknowns. For each algorithm presented, the book details its classical predecessor, describes its drawbacks, introduces modifications that improve its performance, and demonstrates these improvements through numerical experiments. This self-contained monograph can serve as an introductory text on quadratic programming for graduate students and researchers. Moreover, since the solution of many nonlinear problems can be reduced to the solution of a sequence of QP problems, it can also serve as a convenient introduction to nonlinear programming.
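To make the problem class concrete: the simplest equality-constrained QP minimizes f(x) = ½ xᵀAx − bᵀx subject to Bx = c, and its minimizer can be read off from the KKT system treated in the book's "KKT Systems" section. The sketch below is a minimal illustration with made-up data (A, b, B, c are not taken from the book), solved with a dense linear solver rather than the optimal iterative algorithms the book develops:

```python
import numpy as np

# Minimize f(x) = 1/2 x^T A x - b^T x  subject to  B x = c.
# Stationarity of the Lagrangian L(x, y) = f(x) + y^T (B x - c)
# gives the KKT system
#   [ A  B^T ] [x]   [b]
#   [ B   0  ] [y] = [c]
# Illustrative data (assumed, not from the book):
A = np.array([[2.0, 0.0],
              [0.0, 2.0]])   # symmetric positive definite Hessian
b = np.array([2.0, 4.0])
B = np.array([[1.0, 1.0]])   # single equality constraint: x1 + x2 = 1
c = np.array([1.0])

n, m = A.shape[0], B.shape[0]
K = np.block([[A, B.T],
              [B, np.zeros((m, m))]])   # KKT (saddle point) matrix
sol = np.linalg.solve(K, np.concatenate([b, c]))
x, y = sol[:n], sol[n:]    # primal solution and Lagrange multiplier
```

Here the unconstrained minimizer A⁻¹b = (1, 2) is infeasible, and the KKT solution x = (0, 1) is its projection onto the constraint set in the A-norm; the multiplier y measures the sensitivity of the optimal value to the constraint. A dense solve like this is only viable for small problems, which is precisely why the book develops iterative methods whose cost scales with the number of unknowns.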
Authors/Editors
Further Information & Material
1 Preface
2 Part I Background
  2.1 Linear Algebra
    2.1.1 Vectors
    2.1.2 Matrices and Matrix Operations
    2.1.3 Matrices and Mappings
    2.1.4 Inverse and Generalized Inverse Matrices
    2.1.5 Direct Methods for Solving Linear Equations
    2.1.6 Norms
    2.1.7 Scalar Products
    2.1.8 Eigenvalues and Eigenvectors
    2.1.9 Matrix Decompositions
    2.1.10 Penalized Matrices
  2.2 Optimization
    2.2.1 Optimization Problems and Solutions
    2.2.2 Unconstrained Quadratic Programming
      2.2.2.1 Quadratic Cost Functions
      2.2.2.2 Unconstrained Minimization of Quadratic Functions
    2.2.3 Convexity
      2.2.3.1 Convex Quadratic Functions
      2.2.3.2 Local and Global Minimizers of Convex Function
      2.2.3.3 Existence of Minimizers
      2.2.3.4 Projections to Convex Sets
    2.2.4 Equality Constrained Problems
      2.2.4.1 Optimality Conditions
      2.2.4.2 Existence and Uniqueness
      2.2.4.3 KKT Systems
      2.2.4.4 Min-max, Dual, and Saddle Point Problems
      2.2.4.5 Sensitivity
      2.2.4.6 Error Analysis
    2.2.5 Inequality Constrained Problems
      2.2.5.1 Polyhedral Sets
      2.2.5.2 Farkas's Lemma
      2.2.5.3 Necessary Optimality Conditions for Local Solutions
      2.2.5.4 Existence and Uniqueness
      2.2.5.5 Optimality Conditions for Convex Problems
      2.2.5.6 Optimality Conditions for Bound Constrained Problems
      2.2.5.7 Min-max, Dual, and Saddle Point Problems
    2.2.6 Equality and Inequality Constrained Problems
      2.2.6.1 Optimality Conditions
      2.2.6.2 Existence and Uniqueness
      2.2.6.3 Partially Bound and Equality Constrained Problems
      2.2.6.4 Duality for Dependent Constraints
      2.2.6.5 Duality for Semicoercive Problems
    2.2.7 Linear Programming
      2.2.7.1 Solvability and Localization of Solutions
      2.2.7.2 Duality in Linear Programming
3 Part II Algorithms
  3.1 Conjugate Gradients for Unconstrained Minimization
    3.1.1 Conjugate Directions and Minimization
    3.1.2 Generating Conjugate Directions and Krylov Spaces
    3.1.3 Conjugate Gradient Method
    3.1.4 Restarted CG and the Gradient Method
    3.1.5 Rate of Convergence and Optimality
      3.1.5.1 Min-max Estimate
      3.1.5.2 Estimate in the Condition Number
      3.1.5.3 Convergence Rate of the Gradient Method
      3.1.5.4 Optimality
    3.1.6 Preconditioned Conjugate Gradients
    3.1.7 Preconditioning by Conjugate Projector
      3.1.7.1 Conjugate Projectors
      3.1.7.2 Minimization in Subspace
      3.1.7.3 Conjugate Gradients in Conjugate Complement
      3.1.7.4 Preconditioning Effect
    3.1.8 Conjugate Gradients for More General Problems
    3.1.9 Convergence in Presence of Rounding Errors
    3.1.10 Numerical Experiments
      3.1.10.1 Basic CG and Preconditioning
      3.1.10.2 Numerical Demonstration of Optimality
    3.1.11 Comments and Conclusions
  3.2 Equality Constrained Minimization
    3.2.1 Review of Alternative Methods
    3.2.2 Penalty Method
      3.2.2.1 Minimization of Augmented Lagrangian
      3.2.2.2 An Optimal Feasibility Error Estimate for Homogeneous Constraints
      3.2.2.3 Approximation Error and Convergence
      3.2.2.4 Improved Feasibility Error Estimate
      3.2.2.5 Improved Approximation Error Estimate
      3.2.2.6 Preconditioning Preserving Gap in the Spectrum
    3.2.3 Exact Augmented Lagrangian Method
      3.2.3.1 Algorithm
      3.2.3.2 Convergence of Lagrange Multipliers
      3.2.3.3 Effect of the Steplength
      3.2.3.4 Convergence of the Feasibility Error
      3.2.3.5 Convergence of Primal Variables
      3.2.3.6 Implementation
    3.2.4 Asymptotically Exact Augmented Lagrangian Method
      3.2.4.1 Algorithm
      3.2.4.2 Auxiliary Estimates
      3.2.4.3 Convergence Analysis
    3.2.5 Adaptive Augmented Lagrangian Method
      3.2.5.1 Algorithm
      3.2.5.2 Convergence of Lagrange Multipliers for Large ϱ
      3.2.5.3 R-Linear Convergence for Any Initialization of ϱ
    3.2.6 Semimonotonic Augmented Lagrangians (SMALE)
      3.2.6.1 SMALE Algorithm
      3.2.6.2 Relations for Augmented Lagrangians
      3.2.6.3 Convergence and Monotonicity
      3.2.6.4 Linear Convergence for Large ϱ0
      3.2.6.5 Optimality of the Outer Loop
      3.2.6.6 Optimality of SMALE with Conjugate Gradients
      3.2.6.7 Solution of More General Problems
    3.2.7 Implementation of Inexact Augmented Lagrangians
      3.2.7.1 Stopping, Modification of Constraints, and Preconditioning
      3.2.7.2 Initialization of Constants
    3.2.8 Numerical Experiments
      3.2.8.1 Uzawa, Exact Augmented Lagrangians, and SMALE
      3.2.8.2 Numerical Demonstration of Optimality
    3.2.9 Comments and References
  3.3 Bound Constrained Minimization
    3.3.1 Review of Alternative Methods
    3.3.2 KKT Conditions and Related Inequalities
    3.3.3 The Working Set Method with Exact Solutions
      3.3.3.1 Auxiliary Problems
      3.3.3.2 Algorithm
      3.3.3.3 Finite Termination
    3.3.4 Polyak's Algorithm
      3.3.4.1 Basic Algorithm
      3.3.4.2 Finite Termination
      3.3.4.3 Characteristics of Polyak's Algorithm
    3.3.5 Inexact Polyak's Algorithm
      3.3.5.1 Looking Ahead and Estimate
      3.3.5.2 Looking Ahead Polyak's Algorithm
      3.3.5.3 Easy Re-release Polyak's Algorithm
      3.3.5.4 Properties of Modified Polyak's Algorithms
    3.3.6 Gradient Projection Method
      3.3.6.1 Conjugate Gradient Versus Gradient Projections
      3.3.6.2 Contraction in the Euclidean Norm
      3.3.6.3 The Fixed Steplength Gradient Projection Method
      3.3.6.4 Quadratic Functions with Identity Hessian
      3.3.6.5 Dominating Function and Decrease of the Cost Function
    3.3.7 Modified Proportioning with Gradient Projections
      3.3.7.1 MPGP Schema
      3.3.7.2 Rate of Convergence
    3.3.8 Modified Proportioning with Reduced Gradient Projections
      3.3.8.1 MPRGP Schema
      3.3.8.2 Rate of Convergence
      3.3.8.3 Rate of Convergence of Projected Gradient
      3.3.8.4 Optimality
      3.3.8.5 Identification Lemma and Finite Termination
      3.3.8.6 Finite Termination for Dual Degenerate Solution
    3.3.9 Implementation of MPRGP with Optional Modifications
      3.3.9.1 Expansion Step with Feasible Half-Step
      3.3.9.2 MPRGP Algorithm
      3.3.9.3 Unfeasible MPRGP
      3.3.9.4 Choice of Parameters
      3.3.9.5 Dynamic Release Coefficient
    3.3.10 Preconditioning
      3.3.10.1 Preconditioning in Face
      3.3.10.2 Preconditioning by Conjugate Projector
    3.3.11 Numerical Experiments
      3.3.11.1 Polyak, MPRGP, and Preconditioned MPRGP
      3.3.11.2 Numerical Demonstration of Optimality
    3.3.12 Comments and References
  3.4 Bound and Equality Constrained Minimization
    3.4.1 Review of the Methods for Bound and Equality Constrained Problems
    3.4.2 SMALBE Algorithm for Bound and Equality Constraints
      3.4.2.1 KKT Conditions and Projected Gradient
      3.4.2.2 SMALBE Algorithm
    3.4.3 Inequalities Involving the Augmented Lagrangian
    3.4.4 Monotonicity and Feasibility
    3.4.5 Boundedness
    3.4.6 Convergence
    3.4.7 Optimality of the Outer Loop
    3.4.8 Optimality of the Inner Loop
    3.4.9 Solution of More General Problems
    3.4.10 Implementation
    3.4.11 SMALBE-M
    3.4.12 Numerical Experiments
      3.4.12.1 Balanced Reduction of Feasibility and Gradient Errors
      3.4.12.2 Numerical Demonstration of Optimality
    3.4.13 Comments and References
4 Part III Applications to Variational Inequalities
  4.1 Solution of a Coercive Variational Inequality by FETI-DP Method
    4.1.1 Model Coercive Variational Inequality
    4.1.2 FETI-DP Domain Decomposition and Discretization
    4.1.3 Optimality
    4.1.4 Numerical Experiments
    4.1.5 Comments and References
  4.2 Solution of a Semicoercive Variational Inequality by TFETI Method
    4.2.1 Model Semicoercive Variational Inequality
    4.2.2 TFETI Domain Decomposition and Discretization
    4.2.3 Natural Coarse Grid
    4.2.4 Optimality
    4.2.5 Numerical Experiments
    4.2.6 Comments and References
  4.3 References
  4.4 Index




