Owens: Iterative Learning Control. An Optimization Paradigm
1st edition 2016
Series: Advances in Industrial Control
E-book, English, 473 pages
ISBN: 978-1-4471-6772-3
Publisher: Springer
Format: PDF
Copy protection: PDF watermark
This book develops a coherent theoretical approach to algorithm design for iterative learning control based on the use of optimization concepts. Concentrating initially on linear, discrete-time systems, the author gives the reader access to theories based on either signal or parameter optimization. Although the two approaches are shown to be related in a formal mathematical sense, the text presents them separately because their relevant algorithm design issues are distinct and give rise to different performance capabilities.

Together with algorithm design, the text presents new algorithms that can incorporate input and output constraints, reconfigure systematically to meet the requirements of different reference signals, and support local convergence results for nonlinear iterative control. Simulation and application studies illustrate algorithm properties and performance in systems such as gantry robots and other electromechanical and/or mechanical systems.

Iterative Learning Control will interest academics and graduate students working in control, who will find it a useful reference on the current status of a powerful and increasingly popular method of control. The depth of background theory and the links to practical systems will be of use to engineers responsible for precision repetitive processes.
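The signal-optimization idea at the heart of the book, norm optimal iterative learning control (NOILC), can be sketched for a linear discrete-time plant written in "supervector" form. The sketch below is illustrative only: the first-order plant, the trial length and the weight `eps` are arbitrary choices made here, not examples taken from the book.

```python
import numpy as np

# Illustrative NOILC sketch (not the book's code). One trial of a discrete-time
# state space system x(t+1) = A x(t) + B u(t), y(t) = C x(t) with zero initial
# state is stacked into the supervector model y = G u, where G is the
# lower-triangular Toeplitz matrix of Markov parameters C A^j B.
a, b, c = 0.9, 0.5, 1.0          # arbitrary stable first-order SISO example
N = 50                           # samples per trial
markov = [c * a**j * b for j in range(N)]
G = np.array([[markov[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])

r = np.ones(N)                   # reference to be tracked, trial after trial
eps = 0.1                        # weight on input change in the objective

# Each trial minimises J(u) = ||r - G u||^2 + eps^2 ||u - u_k||^2; the unique
# minimiser gives the feedforward update
#   u_{k+1} = u_k + (G^T G + eps^2 I)^(-1) G^T e_k.
u = np.zeros(N)
errs = []
for _ in range(15):
    e = r - G @ u                                   # tracking error this trial
    errs.append(np.linalg.norm(e))
    u = u + np.linalg.solve(G.T @ G + eps**2 * np.eye(N), G.T @ e)
# The error norm is monotonically non-increasing from trial to trial.
```

With an exact model this update drives the tracking error down monotonically; the weight `eps` trades convergence speed against input-change activity per trial, mirroring the spectral-bandwidth discussion the contents list under Chapter 9.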
Professor Owens has 40 years of experience of control engineering theory and applications in areas including nuclear power, robotics and mechanical test. He has extensive teaching experience at both undergraduate and postgraduate levels in three UK universities. His research has included multivariable frequency domain theory and design, the theory of multivariable root loci, contributions to robust control theory, theoretical methods for controller design based on plant step data, and involvement in aspects of adaptive control, model reduction and optimization-based design. The area of research that specifically underpins this text is his experience of modelling and analysis of systems with repetitive dynamics. Originally arising in the control of underground coal cutters, his theory of 'multipass processes' (developed in 1976, with follow-on applications introduced by J.B. Edwards) laid the foundation for analysis and design in this area and in others, including metal rolling and automated agriculture. This work led to substantial contributions (with collaborator E. Rogers and others) to repetitive control systems (as part of 2D systems theory) and, more specifically, since 1996, to iterative learning control, when he introduced the use of optimization to the ILC community in the form of 'norm optimal iterative learning control'. Since that time he has continued to teach and research in areas related to this topic, adding considerable detail and depth to the approach and introducing the ideas of parameter optimal iterative learning to simplify implementations. This led to his development of a wide range of new algorithms, supporting analysis and applications to mechanical test. The work is also being applied to the development of data analysis tools for control in gantry robots and stroke rehabilitation equipment by collaborators at Southampton University. Work with S. Daley has also seen applications in automotive test at Jaguar and related industrial sites.
David Owens was elected a Fellow of the Royal Academy of Engineering for his contributions to knowledge in these and other areas.
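The parameter optimal idea mentioned above (POILC) replaces the full signal optimization by a single scalar gain chosen afresh on each trial. A minimal sketch, under assumptions of my own choosing (the same hypothetical first-order plant in supervector form, a simple update `u_{k+1} = u_k + beta e_k`, and a weight `w` that are not taken from the book):

```python
import numpy as np

# Illustrative single-parameter POILC sketch. Update law u_{k+1} = u_k + beta*e_k,
# with beta chosen each trial to minimise J(beta) = ||e_{k+1}||^2 + w*beta^2,
# where e_{k+1} = (I - beta*G) e_k. Completing the square gives the closed form
#   beta* = (e_k^T G e_k) / (||G e_k||^2 + w).
a, b, c = 0.9, 0.5, 1.0          # arbitrary stable first-order SISO example
N = 50                           # samples per trial
markov = [c * a**j * b for j in range(N)]
G = np.array([[markov[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])

r = np.ones(N)                   # reference to be tracked
w = 0.01                         # gain penalty; w > 0 ensures J(beta*) <= J(0)
u = np.zeros(N)
errs = []
for _ in range(30):
    e = r - G @ u
    errs.append(np.linalg.norm(e))
    Ge = G @ e
    beta = float(e @ Ge) / (float(Ge @ Ge) + w)
    u = u + beta * e

# Since J(beta*) <= J(0) = ||e_k||^2, the error norm never increases; whether it
# converges to zero or plateaus ("flatlining") depends on positivity properties
# of the plant operator G.
```

Only one scalar is computed per trial, which is what makes the parameter optimal approach attractively simple to implement compared with the full norm optimal update.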
Authors/Editors
Further Information & Material
Series Editors' Foreword (p. 7)
Preface (p. 10)
Acknowledgments (p. 17)
Contents (p. 19)
1 Introduction (p. 27)
  1.1 Control Systems, Models and Algorithms (p. 28)
  1.2 Repetition and Iteration (p. 29)
    1.2.1 Periodic Demand Signals (p. 29)
    1.2.2 Repetitive Control and Multipass Systems (p. 30)
    1.2.3 Iterative Control Examples (p. 32)
  1.3 Dynamical Properties of Iteration: A Review of Ideas (p. 35)
  1.4 So What Do We Need? (p. 38)
    1.4.1 An Overview of Mathematical Techniques (p. 39)
    1.4.2 The Conceptual Basis for Algorithms (p. 41)
  1.5 Discussion and Further Background Reading (p. 42)
2 Mathematical Methods (p. 44)
  2.1 Elements of Matrix Theory (p. 44)
  2.2 Quadratic Optimization and Quadratic Forms (p. 52)
    2.2.1 Completing the Square (p. 52)
    2.2.2 Singular Values, Lagrangians and Matrix Norms (p. 53)
  2.3 Banach Spaces, Operators, Norms and Convergent Sequences (p. 54)
    2.3.1 Vector Spaces (p. 54)
    2.3.2 Normed Spaces (p. 56)
    2.3.3 Convergence, Closure, Completeness and Banach Spaces (p. 58)
    2.3.4 Linear Operators and Dense Subsets (p. 59)
  2.4 Hilbert Spaces (p. 62)
    2.4.1 Inner Products and Norms (p. 62)
    2.4.2 Norm and Weak Convergence (p. 64)
    2.4.3 Adjoint and Self-adjoint Operators in Hilbert Space (p. 66)
  2.5 Real Hilbert Spaces, Convex Sets and Projections (p. 71)
  2.6 Optimal Control Problems in Hilbert Space (p. 73)
    2.6.1 Proof by Completing the Square (p. 75)
    2.6.2 Proof Using the Projection Theorem (p. 76)
    2.6.3 Discussion (p. 77)
  2.7 Further Discussion and Bibliography (p. 78)
3 State Space Models (p. 80)
  3.1 Models of Continuous State Space Systems (p. 82)
    3.1.1 Solution of the State Equations (p. 83)
    3.1.2 The Convolution Operator and the Impulse Response (p. 84)
    3.1.3 The System as an Operator Between Function Spaces (p. 84)
  3.2 Laplace Transforms (p. 85)
  3.3 Transfer Function Matrices, Poles, Zeros and Relative Degree (p. 86)
  3.4 The System Frequency Response (p. 88)
  3.5 Discrete Time, Sampled Data State Space Models (p. 89)
    3.5.1 State Space Models as Difference Equations (p. 89)
    3.5.2 Solution of Linear, Discrete Time State Equations (p. 90)
    3.5.3 The Discrete Convolution Operator and the Discrete Impulse Response Sequence (p. 91)
  3.6 Z-Transforms and the Discrete Transfer Function Matrix (p. 92)
    3.6.1 Discrete Transfer Function Matrices, Poles, Zeros and the Relative Degree (p. 93)
    3.6.2 The Discrete System Frequency Response (p. 94)
  3.7 Multi-rate Discrete Time Systems (p. 95)
  3.8 Controllability, Observability, Minimal Realizations and Pole Allocation (p. 95)
  3.9 Inverse Systems (p. 97)
    3.9.1 The Case of m = ℓ, Zeros and ?* (p. 97)
    3.9.2 Left and Right Inverses When m ≠ ℓ (p. 99)
  3.10 Quadratic Optimal Control of Linear Continuous Systems (p. 101)
    3.10.1 The Relevant Operators and Spaces (p. 101)
    3.10.2 Computation of the Adjoint Operator (p. 103)
    3.10.3 The Two Point Boundary Value Problem (p. 106)
    3.10.4 The Riccati Equation and a State Feedback Plus Feedforward Representation (p. 107)
    3.10.5 An Alternative Riccati Representation (p. 109)
  3.11 Further Reading and Bibliography (p. 110)
4 Matrix Models, Supervectors and Discrete Systems (p. 112)
  4.1 Supervectors and the Matrix Model (p. 112)
  4.2 The Algebra of Series and Parallel Connections (p. 113)
  4.3 The Transpose System and Time Reversal (p. 114)
  4.4 Invertibility, Range and Relative Degrees (p. 115)
    4.4.1 The Relative Degree and the Kernel and Range of G (p. 117)
    4.4.2 The Range of G and Decoupling Theory (p. 118)
  4.5 The Range and Kernel and the Use of the Inverse System (p. 121)
    4.5.1 A Partition of the Inverse (p. 121)
    4.5.2 Ensuring Stability of P⁻¹(z) (p. 123)
  4.6 The Range, Kernel and the C* Canonical Form (p. 124)
    4.6.1 Factorization Using State Feedback and Output Injection (p. 124)
    4.6.2 The C* Canonical Form (p. 125)
    4.6.3 The Special Case of Uniform Rank Systems (p. 127)
  4.7 Quadratic Optimal Control of Linear Discrete Systems (p. 129)
    4.7.1 The Adjoint and the Discrete Two Point Boundary Value Problem (p. 130)
    4.7.2 A State Feedback/Feedforward Solution (p. 131)
  4.8 Frequency Domain Relationships (p. 132)
    4.8.1 Bounding Norms on Finite Intervals (p. 133)
    4.8.2 Computing the Norm Using the Frequency Response (p. 134)
    4.8.3 Quadratic Forms and Positive Real Transfer Function Matrices (p. 135)
    4.8.4 Frequency Dependent Lower Bounds (p. 137)
  4.9 Discussion and Further Reading (p. 141)
5 Iterative Learning Control: A Formulation (p. 143)
  5.1 Abstract Formulation of a Design Problem (p. 143)
    5.1.1 The Design Problem (p. 144)
    5.1.2 Input and Error Update Equations: The Linear Case (p. 147)
    5.1.3 Robustness and Uncertainty Models (p. 148)
  5.2 General Conditions for Convergence of Linear Iterations (p. 152)
    5.2.1 Spectral Radius and Norm Conditions (p. 153)
    5.2.2 Infinite Dimensions with r(L) = ||L|| = 1 and L = L* (p. 156)
    5.2.3 Relaxation, Convergence and Robustness (p. 158)
    5.2.4 Eigenstructure Interpretation (p. 162)
    5.2.5 Formal Computation of the Eigenvalues and Eigenfunctions (p. 163)
  5.3 Robustness, Positivity and Inverse Systems (p. 165)
  5.4 Discussion and Further Reading (p. 167)
6 Control Using Inverse Model Algorithms (p. 169)
  6.1 Inverse Model Control: A Benchmark Algorithm (p. 169)
    6.1.1 Use of a Right Inverse of the Plant (p. 169)
    6.1.2 Use of a Left Inverse of the Plant (p. 171)
    6.1.3 Why the Inverse Model Is Important (p. 173)
    6.1.4 Inverse Model Algorithms for State Space Models (p. 175)
    6.1.5 Robustness Tests and Multiplicative Error Models (p. 176)
  6.2 Frequency Domain Robustness Criteria (p. 180)
    6.2.1 Discrete System Robust Monotonicity Tests (p. 180)
    6.2.2 Improving Robustness Using Relaxation (p. 182)
    6.2.3 Discrete Systems: Robustness and Non-monotonic Convergence (p. 183)
  6.3 Discussion and Further Reading (p. 185)
7 Monotonicity and Gradient Algorithms (p. 188)
  7.1 Steepest Descent: Achieving Minimum Energy Solutions (p. 189)
  7.2 Application to Discrete Time State Space Systems (p. 191)
    7.2.1 Algorithm Construction (p. 192)
    7.2.2 Eigenstructure Interpretation: Convergence in Finite Iterations (p. 194)
    7.2.3 Frequency Domain Attenuation (p. 197)
  7.3 Steepest Descent for Continuous Time State Space Systems (p. 201)
  7.4 Monotonic Evolution Using General Gradients (p. 203)
  7.5 Discrete State Space Models Revisited (p. 206)
    7.5.1 Gradients Using the Adjoint of a State Space System (p. 206)
    7.5.2 Why the Case of m = ℓ May Be Important in Design (p. 215)
    7.5.3 Robustness Tests in the Frequency Domain (p. 217)
    7.5.4 Robustness and Relaxation (p. 220)
    7.5.5 Non-monotonic Gradient-Based Control and λ-Weighted Norms (p. 221)
    7.5.6 A Steepest Descent Algorithm Using λ-Norms (p. 226)
  7.6 Discussion, Comments and Further Generalizations (p. 226)
    7.6.1 Bringing the Ideas Together? (p. 227)
    7.6.2 Factors Influencing Achievable Performance (p. 229)
    7.6.3 Notes on Continuous State Space Systems (p. 230)
8 Combined Inverse and Gradient Based Design (p. 231)
  8.1 Inverse Algorithms: Robustness and Bi-directional Filtering (p. 231)
  8.2 General Issues in Design (p. 235)
    8.2.1 Pre-conditioning Control Loops (p. 236)
    8.2.2 Compensator Structures (p. 238)
    8.2.3 Stable Inversion Algorithms (p. 240)
    8.2.4 All-Pass Networks and Non-minimum-phase Systems (p. 241)
  8.3 Gradients, Compensation and Feedback Design Methods (p. 248)
    8.3.1 Feedback Design: The Discrete Time Case (p. 249)
    8.3.2 Feedback Design: The Continuous Time Case (p. 251)
  8.4 Discussion and Further Reading (p. 251)
9 Norm Optimal Iterative Learning Control (p. 254)
  9.1 Problem Formulation and Formal Algorithm (p. 255)
    9.1.1 The Choice of Objective Function (p. 255)
    9.1.2 Relaxed Versions of NOILC (p. 257)
    9.1.3 NOILC for Discrete-Time State Space Systems (p. 259)
    9.1.4 Relaxed NOILC for Discrete-Time State Space Systems (p. 261)
    9.1.5 A Note on Frequency Attenuation: The Discrete Time Case (p. 262)
    9.1.6 NOILC: The Case of Continuous-Time State Space Systems (p. 263)
    9.1.7 Convergence, Eigenstructure, ε² and Spectral Bandwidth (p. 265)
    9.1.8 Convergence: General Properties of NOILC Algorithms (p. 269)
  9.2 Robustness of NOILC: Feedforward Implementation (p. 273)
    9.2.1 Computational Aspects of Feedforward NOILC (p. 274)
    9.2.2 The Case of Right Multiplicative Modelling Errors (p. 275)
    9.2.3 Discrete State Space Systems with Right Multiplicative Errors (p. 280)
    9.2.4 The Case of Left Multiplicative Modelling Errors (p. 283)
    9.2.5 Discrete Systems with Left Multiplicative Modelling Errors (p. 288)
    9.2.6 Monotonicity in Y with Respect to the Norm ||·||_Y (p. 289)
  9.3 Non-minimum-phase Properties and Flat-Lining (p. 290)
  9.4 Discussion and Further Reading (p. 293)
    9.4.1 Background Comments (p. 293)
    9.4.2 Practical Observations (p. 294)
    9.4.3 Performance (p. 295)
    9.4.4 Robustness and the Inverse Algorithm (p. 295)
    9.4.5 Alternatives? (p. 296)
    9.4.6 Q, R and Dyadic Expansions (p. 297)
10 NOILC: Natural Extensions (p. 298)
  10.1 Filtering Using Input and Error Weighting (p. 298)
  10.2 Multi-rate Sampled Discrete Time Systems (p. 300)
  10.3 Initial Conditions as Control Signals (p. 301)
  10.4 Problems with Several Objectives (p. 305)
  10.5 Intermediate Point Problems (p. 307)
    10.5.1 Continuous Time Systems: An Intermediate Point Problem (p. 307)
    10.5.2 Discrete Time Systems: An Intermediate Point Problem (p. 311)
    10.5.3 IPNOILC: Additional Issues and Robustness (p. 311)
  10.6 Multi-task NOILC (p. 314)
    10.6.1 Continuous State Space Systems (p. 315)
    10.6.2 Adding Initial Conditions as Controls (p. 320)
    10.6.3 Discrete State Space Systems (p. 321)
  10.7 Multi-models and Predictive NOILC (p. 322)
    10.7.1 Predictive NOILC: General Theory and a Link to Inversion (p. 322)
    10.7.2 A Multi-model Representation (p. 325)
    10.7.3 The Case of Linear, State Space Models (p. 326)
    10.7.4 Convergence and Other Algorithm Properties (p. 329)
    10.7.5 The Special Cases of M = 2 and M = ∞ (p. 334)
    10.7.6 A Note on Robustness of Feedforward Predictive NOILC (p. 336)
  10.8 Discussion and Further Reading (p. 340)
11 Iteration and Auxiliary Optimization (p. 343)
  11.1 Models with Auxiliary Variables and Problem Formulation (p. 343)
  11.2 A Right Inverse Model Solution (p. 345)
  11.3 Solutions Using Switching Algorithms (p. 347)
    11.3.1 Switching Algorithm Construction (p. 347)
    11.3.2 Properties of the Switching Algorithm (p. 348)
    11.3.3 Characterization of Convergence Rates (p. 351)
    11.3.4 Decoupling Minimum Energy Representations from NOILC (p. 353)
    11.3.5 Intermediate Point Tracking and the Choice G1 = G (p. 354)
    11.3.6 Restructuring the NOILC Spectrum by Choosing G1 = Ge (p. 355)
  11.4 A Note on Robustness of Switching Algorithms (p. 358)
  11.5 The Switching Algorithm When GeGe* Is Invertible (p. 361)
  11.6 Discussion and Further Reading (p. 364)
12 Iteration as Successive Projection (p. 367)
  12.1 Convergence Versus Proximity (p. 367)
  12.2 Successive Projection and Proximity Algorithms (p. 369)
  12.3 Iterative Control with Constraints (p. 374)
    12.3.1 NOILC with Input Constraints (p. 375)
    12.3.2 General Analysis (p. 378)
    12.3.3 Intermediate Point Control with Input and Output Constraints (p. 382)
    12.3.4 Iterative Control to Satisfy Auxiliary Variable Bounds (p. 384)
    12.3.5 An Overview and Summary (p. 386)
  12.4 "Iteration Management" by Operator Intervention (p. 387)
  12.5 What Happens If S1 and S2 Do Not Intersect? (p. 390)
  12.6 Discussion and Further Reading (p. 393)
13 Acceleration and Successive Projection (p. 396)
  13.1 Replacing Plant Iterations by Off-Line Iterations (p. 397)
  13.2 Accelerating Algorithms Using Extrapolation (p. 397)
    13.2.1 Successive Projection and Extrapolation Algorithms (p. 398)
    13.2.2 NOILC: Acceleration Using Extrapolation (p. 400)
  13.3 A Notch Algorithm Using Parameterized Sets (p. 402)
    13.3.1 Creating a Spectral Notch: Computation and Properties (p. 402)
    13.3.2 The Notch Algorithm and Iterative Control Using Successive Projection (p. 408)
    13.3.3 A Notch Algorithm for Discrete State Space Systems (p. 412)
    13.3.4 Robustness of the Notch Algorithm in Feedforward Form (p. 415)
  13.4 Discussion and Further Reading (p. 420)
14 Parameter Optimal Iterative Control (p. 422)
  14.1 Parameterizations and Norm Optimal Iteration (p. 422)
  14.2 Parameter Optimal Control: The Single Parameter Case (p. 427)
    14.2.1 Alternative Objective Functions (p. 427)
    14.2.2 Problem Definition and Convergence Characterization (p. 429)
    14.2.3 Convergence Properties: Dependence on Parameters (p. 432)
    14.2.4 Choosing the Compensator (p. 434)
    14.2.5 Computing tr[?0* ?0]: Discrete State Space Systems (p. 435)
    14.2.6 Choosing Parameters in J(β) (p. 437)
    14.2.7 Iteration Dynamics (p. 439)
    14.2.8 Plateauing/Flatlining Phenomena (p. 439)
    14.2.9 Switching Algorithms (p. 444)
  14.3 Robustness of POILC: The Single Parameter Case (p. 448)
    14.3.1 Robustness Using the Right Inverse (p. 448)
    14.3.2 Robustness: A More General Case (p. 450)
  14.4 Multi-Parameter Learning Control (p. 452)
    14.4.1 The Form of the Parameterization (p. 452)
    14.4.2 Alternative Forms for ?? and the Objective Function (p. 453)
    14.4.3 The Multi-parameter POILC Algorithm (p. 456)
    14.4.4 Choice of Multi-parameter Parameterization (p. 458)
  14.5 Discussion and Further Reading (p. 460)
    14.5.1 Chapter Overview (p. 460)
    14.5.2 High Order POILC: A Brief Summary (p. 462)
References (p. 463)
Index (p. 468)




