E-book, English, 446 pages
Santner / Williams / Notz: The Design and Analysis of Computer Experiments
2nd edition, 2018
Series: Springer Series in Statistics
ISBN: 978-1-4939-8847-1
Publisher: Springer
Format: PDF
Copy protection: PDF watermark
This book describes methods for designing and analyzing experiments that are conducted using a computer code (a computer experiment) and, when possible, a physical experiment. Computer experiments continue to increase in popularity as surrogates for and adjuncts to physical experiments. Since the publication of the first edition, there have been many methodological advances and software developments to implement these new methodologies. The computer experiments literature has emphasized the construction of algorithms for various data analysis tasks (design construction, prediction, sensitivity analysis, and calibration, among others) and the development of web-based repositories of designs for immediate application. The book is written at a level accessible to readers with Master's-level training in statistics, yet in sufficient detail to be useful for practitioners and researchers.

New to this revised and expanded edition:
• An expanded presentation of basic material on computer experiments and Gaussian processes, with additional simulations and examples
• A new comparison of plug-in prediction methodologies for real-valued simulator output
• An enlarged discussion of space-filling designs, including Latin hypercube designs (LHDs), near-orthogonal designs, and nonrectangular regions
• A chapter-length description of process-based designs for optimization, good overall fit, quantile estimation, and Pareto optimization
• A new chapter describing graphical and numerical sensitivity analysis tools
• Substantial new material on calibration-based prediction and inference for calibration parameters
• Lists of software that can be used to fit the models discussed in the book, to aid practitioners
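To make the workflow described above concrete, the following is a minimal, hypothetical sketch (not taken from the book): it generates a Latin hypercube design (the topic of Chap. 5), evaluates a toy stand-in for an expensive simulator at the design points, and fits a Gaussian process emulator (Chaps. 2-3) that predicts the output, with pointwise uncertainty, at new inputs. The scipy and scikit-learn calls are one possible choice in the spirit of the software lists the book provides; the toy simulator and all settings are assumptions made here for illustration.

    # Hypothetical end-to-end sketch: space-filling design -> toy simulator -> GP emulator.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ConstantKernel, Matern

    def toy_simulator(X):
        # Stand-in for an expensive deterministic computer code (an assumption here).
        return np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

    # 20-run Latin hypercube design on [0, 1]^2 (cf. Sect. 5.2.2).
    sampler = qmc.LatinHypercube(d=2, seed=0)
    X_train = sampler.random(n=20)
    y_train = toy_simulator(X_train)

    # GP emulator with a Matern correlation function (cf. Sect. 2.2.2).
    kernel = ConstantKernel(1.0) * Matern(length_scale=[0.2, 0.2], nu=2.5)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_train, y_train)

    # Plug-in prediction with pointwise uncertainty at held-out inputs (cf. Chap. 3).
    X_test = sampler.random(n=5)
    mean, sd = gp.predict(X_test, return_std=True)
    print(np.column_stack([mean, sd]))

In practice the simulator is expensive to run, so the emulator's predictive mean and standard deviation are what drive the design, sensitivity analysis, and calibration methods of Chaps. 5-8.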
Thomas J. Santner is Professor Emeritus in the Department of Statistics at The Ohio State University. At Ohio State, he has served as department chair and as director of the department's Statistical Consulting Service. Previously, he was a professor in the School of Operations Research and Industrial Engineering at Cornell University. His research interests include the design and analysis of experiments, particularly those involving computer simulators, Bayesian inference, and the analysis of discrete response data. He is a Fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the American Association for the Advancement of Science, and is an elected ordinary member of the International Statistical Institute. He has held visiting appointments at the National Cancer Institute, the University of Washington, Ludwig Maximilians Universität (Munich, Germany), the National Institute of Statistical Sciences (NISS), and the Isaac Newton Institute (Cambridge, England).

Brian J. Williams has been a statistician at the Los Alamos National Laboratory since 2003; previously, he was a statistician at the RAND Corporation. His research interests include experimental design, computer experiments, Bayesian inference, spatial statistics, and statistical computing. Williams was named a Fellow of the American Statistical Association in 2015 and is also the recipient of the Los Alamos Achievement Award for his leadership role in the Consortium for Advanced Simulation of Light Water Reactors (CASL) program. He holds a doctorate in statistics from The Ohio State University.

William I. Notz is Professor Emeritus in the Department of Statistics at The Ohio State University. At Ohio State, he has served as acting department chair, associate dean of the College of Mathematical and Physical Sciences, and director of the department's Statistical Consulting Service. His research focuses on experimental designs for computer experiments; he is particularly interested in sequential strategies for selecting the points at which to run a computer simulator so as to optimize a performance measure related to the objectives of the experiment. A Fellow of the American Statistical Association, Notz has also served as editor of the journals Technometrics and the Journal of Statistics Education.
Further information & material
Preface to the Second Edition  7
Preface to the First Edition  9
Contents  11
1 Physical Experiments and Computer Experiments  16
  1.1 Introduction  16
  1.2 Examples of Computer Simulator Models  18
  1.3 Some Common Types of Computer Experiments  35
    1.3.1 Homogeneous-Input Simulators  36
    1.3.2 Mixed-Input Simulators  37
    1.3.3 Multiple Outputs  39
  1.4 Organization of the Remainder of the Book  40
2 Stochastic Process Models for Describing Computer Simulator Output  42
  2.1 Introduction  42
  2.2 Gaussian Process Models for Real-Valued Output  45
    2.2.1 Introduction  45
    2.2.2 Some Correlation Functions for GP Models  49
    2.2.3 Using the Correlation Function to Specify a GP with Given Smoothness Properties  56
  2.3 Increasing the Flexibility of the GP Model  58
    2.3.1 Hierarchical GP Models  61
    2.3.2 Other Nonstationary Models  63
  2.4 Models for Output Having Mixed Qualitative and Quantitative Inputs  64
  2.5 Models for Multivariate and Functional Simulator Output  72
    2.5.1 Introduction  72
    2.5.2 Modeling Multiple Outputs  74
    2.5.3 Other Constructive Models  77
    2.5.4 Models for Simulators Having Functional Output  78
  2.6 Chapter Notes  80
3 Empirical Best Linear Unbiased Prediction of Computer Simulator Output  82
  3.1 Introduction  82
  3.2 BLUP and Minimum MSPE Predictors  83
    3.2.1 Best Linear Unbiased Predictors  83
    3.2.2 Best MSPE Predictors  85
    3.2.3 Some Properties of ŷ(xte)  90
  3.3 Empirical Best Linear Unbiased Prediction of Univariate Simulator Output  91
    3.3.1 Introduction  91
    3.3.2 Maximum Likelihood EBLUPs  92
    3.3.3 Restricted Maximum Likelihood EBLUPs  93
    3.3.4 Cross-Validation EBLUPs  94
    3.3.5 Posterior Mode EBLUPs  95
    3.3.6 Examples  95
  3.4 A Simulation Comparison of EBLUPs  99
    3.4.1 Introduction  99
    3.4.2 A Selective Review of Previous Studies  100
    3.4.3 A Complementary Simulation Study of Prediction Accuracy and Prediction Interval Accuracy  103
      3.4.3.1 Performance Measures  104
      3.4.3.2 Function Test Beds  104
      3.4.3.3 Prediction Simulations  106
    3.4.4 Recommendations  110
  3.5 EBLUP Prediction of Multivariate Simulator Output  110
    3.5.1 Optimal Predictors for Multiple Outputs  111
    3.5.2 Examples  113
  3.6 Chapter Notes  122
    3.6.1 Proof That (3.2.7) Is a BLUP  122
    3.6.2 Derivation of Formula (3.2.8)  124
    3.6.3 Implementation Issues  124
    3.6.4 Software for Computing EBLUPs  127
    3.6.5 Alternatives to Kriging Metamodels and Other Topics  128
      3.6.5.1 Alternatives to Kriging Metamodels  128
      3.6.5.2 Testing the Covariance Structure  129
4 Bayesian Inference for Simulator Output  130
  4.1 Introduction  130
  4.2 Inference for Conjugate Bayesian Models  132
    4.2.1 Posterior Inference for Model (4.1.1) When θ = β  132
      4.2.1.1 Posterior Inference About β  134
      4.2.1.2 Predictive Inference at a Single Test Input xte  134
    4.2.2 Posterior Inference for Model (4.1.1) When θ = (β, λZ)  138
  4.3 Inference for Non-conjugate Bayesian Models  143
    4.3.1 The Hierarchical Bayesian Model and Posterior  144
    4.3.2 Predicting Failure Depths of Sheet Metal Pockets  147
  4.4 Chapter Notes  151
    4.4.1 Outline of the Proofs of Theorems 4.1 and 4.2  151
    4.4.2 Eliciting Priors for Bayesian Regression  157
    4.4.3 Alternative Sampling Algorithms  157
    4.4.4 Software for Computing Bayesian Predictions  157
5 Space-Filling Designs for Computer Experiments  159
  5.1 Introduction  159
    5.1.1 Some Basic Principles of Experimental Design  159
    5.1.2 Design Strategies for Computer Experiments  162
  5.2 Designs Based on Methods for Selecting Random Samples  164
    5.2.1 Designs Generated by Elementary Methods for Selecting Samples  165
    5.2.2 Designs Generated by Latin Hypercube Sampling  166
    5.2.3 Some Properties of Sampling-Based Designs  171
  5.3 Latin Hypercube Designs with Additional Properties  174
    5.3.1 Latin Hypercube Designs Whose Projections Are Space-Filling  174
    5.3.2 Cascading, Nested, and Sliced Latin Hypercube Designs  178
    5.3.3 Orthogonal Latin Hypercube Designs  181
    5.3.4 Symmetric Latin Hypercube Designs  184
  5.4 Designs Based on Measures of Distance  186
  5.5 Distance-Based Designs for Non-rectangular Regions  195
  5.6 Other Space-Filling Designs  198
    5.6.1 Designs Obtained from Quasi-Random Sequences  198
    5.6.2 Uniform Designs  200
  5.7 Chapter Notes  205
    5.7.1 Proof That TL Is Unbiased and of the Second Part of Theorem 5.1  205
    5.7.2 The Use of LHDs in a Regression Setting  210
    5.7.3 Other Space-Filling Designs  211
    5.7.4 Software for Constructing Space-Filling Designs  212
    5.7.5 Online Catalogs of Designs  214
6 Some Criterion-Based Experimental Designs  215
  6.1 Introduction  215
  6.2 Designs Based on Entropy and Mean Squared Prediction Error Criteria  216
    6.2.1 Maximum Entropy Designs  216
    6.2.2 Mean Squared Prediction Error Designs  220
  6.3 Designs Based on Optimization Criteria  226
    6.3.1 Introduction  226
    6.3.2 Heuristic Global Approximation  227
    6.3.3 Mockus Criteria Optimization  228
    6.3.4 Expected Improvement Algorithms for Optimization  230
      6.3.4.1 Schonlau and Jones Expected Improvement Algorithm  230
      6.3.4.2 Picheny Expected Quantile Improvement Algorithm  236
      6.3.4.3 Williams Environmental Variable Mean Optimization  237
    6.3.5 Constrained Global Optimization  239
    6.3.6 Pareto Optimization  243
      6.3.6.1 Basic Pareto Optimization Algorithm  245
  6.4 Other Improvement Criterion-Based Designs  250
    6.4.1 Introduction  250
    6.4.2 Contour Estimation  251
    6.4.3 Percentile Estimation  252
      6.4.3.1 Approach 1: A Confidence Interval-Based Criterion  253
      6.4.3.2 Approach 2: A Hypothesis Testing-Based Criterion  254
    6.4.4 Global Fit  255
  6.5 Chapter Notes  256
    6.5.1 The Hypervolume Indicator for Approximations to Pareto Fronts  257
    6.5.2 Other MSPE-Based Optimal Designs  258
    6.5.3 Software for Constructing Criterion-Based Designs  259
7 Sensitivity Analysis and Variable Screening  261
  7.1 Introduction  261
  7.2 Classical Approaches to Sensitivity Analysis  263
    7.2.1 Sensitivity Analysis Based on Scatterplots and Correlations  263
    7.2.2 Sensitivity Analysis Based on Regression Modeling  263
  7.3 Sensitivity Analysis Based on Elementary Effects  266
  7.4 Global Sensitivity Analysis  273
    7.4.1 Main Effect and Joint Effect Functions  273
    7.4.2 A Functional ANOVA Decomposition  278
    7.4.3 Global Sensitivity Indices  281
  7.5 Estimating Effect Plots and Global Sensitivity Indices  288
    7.5.1 Estimating Effect Plots  289
    7.5.2 Estimating Global Sensitivity Indices  296
  7.6 Variable Selection  300
  7.7 Chapter Notes  305
    7.7.1 Designing Computer Experiments for Sensitivity Analysis  305
    7.7.2 Orthogonality of Sobol' Terms  306
    7.7.3 Weight Functions g(x) with Nonindependent Components  307
    7.7.4 Designs for Estimating Elementary Effects  308
    7.7.5 Variable Selection  308
    7.7.6 Global Sensitivity Indices for Functional Output  308
    7.7.7 Software  311
8 Calibration  312
  8.1 Introduction  312
  8.2 The Kennedy and O'Hagan Calibration Model  314
    8.2.1 Introduction  314
    8.2.2 The KOH Model  314
      8.2.2.1 Alternative Views of Calibration Parameters  317
  8.3 Calibration with Univariate Data  320
    8.3.1 Bayesian Inference for the Calibration Parameter θ  321
    8.3.2 Bayesian Inference for the Mean Response ζ(x) of the Physical System  321
    8.3.3 Bayesian Inference for the Bias δ(x) and Calibrated Simulator E[Ys(x, θ) | Y]  322
  8.4 Calibration with Functional Data  333
    8.4.1 The Simulation Data  335
    8.4.2 The Experimental Data  340
    8.4.3 Joint Statistical Models and Log Likelihood Functions  347
      8.4.3.1 Joint Statistical Model That Allows Simulator Discrepancy  347
      8.4.3.2 Joint Statistical Model Assuming No Simulator Discrepancy  355
  8.5 Bayesian Analysis  359
    8.5.1 Prior and Posterior Distributions  359
    8.5.2 Prediction  371
      8.5.2.1 Emulation of the Simulation Output Using Only Simulator Data  374
      8.5.2.2 Emulation of the Calibrated Simulator Output Modeling the Simulator Bias  377
      8.5.2.3 Emulation of the Calibrated Simulation Output Assuming No Simulator Bias  383
  8.6 Chapter Notes  385
    8.6.1 Special Cases of Functional Emulation and Prediction  385
    8.6.2 Some Other Perspectives on Emulation and Calibration  387
    8.6.3 Software for Calibration and Validation  391
A List of Notation  393
  A.1 Abbreviations  393
  A.2 Symbols  394
B Mathematical Facts  397
  B.1 The Multivariate Normal Distribution  397
  B.2 The Gamma Distribution  399
  B.3 The Beta Distribution  400
  B.4 The Non-central Student t Distribution  400
  B.5 Some Results from Matrix Algebra  401
C An Overview of Selected Optimization Algorithms  404
  C.1 Newton/Quasi-Newton Algorithms  405
  C.2 Direct Search Algorithms  406
    C.2.1 Nelder–Mead Simplex Algorithm  406
    C.2.2 Generalized Pattern Search and Surrogate Management Framework Algorithms  407
    C.2.3 DIRECT Algorithm  409
  C.3 Genetic/Evolutionary Algorithms  409
    C.3.1 Simulated Annealing  409
    C.3.2 Particle Swarm Optimization  410
D An Introduction to Markov Chain Monte Carlo Algorithms  411
E A Primer on Constructing Quasi-Monte Carlo Sequences  415
References  417
Author Index  435
Subject Index  441