E-book, English, 270 pages
Series: Springer Texts in Statistics
Hoff: A First Course in Bayesian Statistical Methods
1st edition, 2009
ISBN: 978-0-387-92407-6
Publisher: Springer
Format: PDF
Copy protection: PDF watermark
This book provides a compact, self-contained introduction to the theory and application of Bayesian statistical methods. It is accessible to readers with a basic familiarity with probability, yet allows more advanced readers to quickly grasp the principles underlying Bayesian theory and methods. The examples and computer code allow the reader to understand and implement basic Bayesian data analyses using standard statistical models, and to extend the standard models to specialized data analysis situations. The book begins with fundamental notions such as probability, exchangeability and Bayes' rule, and ends with modern topics such as variable selection in regression, generalized linear mixed effects models, and semiparametric copula estimation. Numerous examples from the social, biological and physical sciences show how to implement these methodologies in practice.

Monte Carlo summaries of posterior distributions play an important role in Bayesian data analysis. The open-source R statistical computing environment provides sufficient functionality to make Monte Carlo estimation very easy for a large number of statistical models, and example R code is provided throughout the text. Much of the example code can be run "as is" in R, and essentially all of it can be run after downloading the relevant datasets from the companion website for this book.

Peter Hoff is an Associate Professor of Statistics and Biostatistics at the University of Washington. He has developed a variety of Bayesian methods for multivariate data, including covariance and copula estimation, cluster analysis, mixture modeling and social network analysis. He is on the editorial board of the Annals of Applied Statistics.
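To give a flavor of the kind of Monte Carlo posterior summary the book describes, here is a minimal R sketch (not taken from the book or its companion website) for a conjugate beta-binomial model of the sort covered in the one-parameter models chapter; the Beta(1, 1) prior and the data (2 successes in 43 trials) are hypothetical.

  # Minimal sketch, not from the book: Monte Carlo summary of a
  # beta-binomial posterior for a binomial proportion theta.
  a <- 1; b <- 1                           # Beta(1, 1) (uniform) prior on theta
  y <- 2; n <- 43                          # hypothetical data: y successes in n trials
  theta <- rbeta(10000, a + y, b + n - y)  # draws from the Beta(a + y, b + n - y) posterior
  mean(theta)                              # Monte Carlo estimate of the posterior mean
  quantile(theta, c(0.025, 0.975))         # 95% posterior interval
  mean(theta < 0.10)                       # MC estimate of Pr(theta < 0.10 | data)

Here the posterior is available in closed form, so the Monte Carlo draws serve only to summarize it; the same few lines of post-processing apply unchanged to the Gibbs and Metropolis-Hastings output developed in the later chapters.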
Further information & material
Preface (p. 5)
Contents (p. 7)
1 Introduction and examples (p. 10)
  1.1 Introduction (p. 10)
  1.2 Why Bayes? (p. 11)
    1.2.1 Estimating the probability of a rare event (p. 12)
    1.2.2 Building a predictive model (p. 17)
  1.3 Where we are going (p. 20)
  1.4 Discussion and further references (p. 21)
2 Belief, probability and exchangeability (p. 22)
  2.1 Belief functions and probabilities (p. 22)
  2.2 Events, partitions and Bayes' rule (p. 23)
  2.3 Independence (p. 26)
  2.4 Random variables (p. 26)
    2.4.1 Discrete random variables (p. 27)
    2.4.2 Continuous random variables (p. 28)
    2.4.3 Descriptions of distributions (p. 30)
  2.5 Joint distributions (p. 32)
  2.6 Independent random variables (p. 35)
  2.7 Exchangeability (p. 36)
  2.8 de Finetti's theorem (p. 38)
  2.9 Discussion and further references (p. 39)
3 One-parameter models (p. 40)
  3.1 The binomial model (p. 40)
    3.1.1 Inference for exchangeable binary data (p. 44)
    3.1.2 Confidence regions (p. 50)
  3.2 The Poisson model (p. 52)
    3.2.1 Posterior inference (p. 54)
    3.2.2 Example: Birth rates (p. 57)
  3.3 Exponential families and conjugate priors (p. 60)
  3.4 Discussion and further references (p. 61)
4 Monte Carlo approximation (p. 62)
  4.1 The Monte Carlo method (p. 62)
  4.2 Posterior inference for arbitrary functions (p. 66)
  4.3 Sampling from predictive distributions (p. 69)
  4.4 Posterior predictive model checking (p. 71)
  4.5 Discussion and further references (p. 74)
5 The normal model (p. 75)
  5.1 The normal model (p. 75)
  5.2 Inference for the mean, conditional on the variance (p. 77)
  5.3 Joint inference for the mean and variance (p. 81)
  5.4 Bias, variance and mean squared error (p. 87)
  5.5 Prior specification based on expectations (p. 91)
  5.6 The normal model for non-normal data (p. 92)
  5.7 Discussion and further references (p. 94)
6 Posterior approximation with the Gibbs sampler (p. 96)
  6.1 A semiconjugate prior distribution (p. 96)
  6.2 Discrete approximations (p. 97)
  6.3 Sampling from the conditional distributions (p. 99)
  6.4 Gibbs sampling (p. 100)
  6.5 General properties of the Gibbs sampler (p. 103)
  6.6 Introduction to MCMC diagnostics (p. 105)
  6.7 Discussion and further references (p. 111)
7 The multivariate normal model (p. 112)
  7.1 The multivariate normal density (p. 112)
  7.2 A semiconjugate prior distribution for the mean (p. 114)
  7.3 The inverse-Wishart distribution (p. 116)
  7.4 Gibbs sampling of the mean and covariance (p. 119)
  7.5 Missing data and imputation (p. 122)
  7.6 Discussion and further references (p. 130)
8 Group comparisons and hierarchical modeling (p. 131)
  8.1 Comparing two groups (p. 131)
  8.2 Comparing multiple groups (p. 136)
    8.2.1 Exchangeability and hierarchical models (p. 137)
  8.3 The hierarchical normal model (p. 138)
    8.3.1 Posterior inference (p. 139)
  8.4 Example: Math scores in U.S. public schools (p. 141)
    8.4.1 Prior distributions and posterior approximation (p. 143)
    8.4.2 Posterior summaries and shrinkage (p. 146)
  8.5 Hierarchical modeling of means and variances (p. 149)
    8.5.1 Analysis of math score data (p. 151)
  8.6 Discussion and further references (p. 152)
9 Linear regression (p. 154)
  9.1 The linear regression model (p. 154)
    9.1.1 Least squares estimation for the oxygen uptake data (p. 158)
  9.2 Bayesian estimation for a regression model (p. 159)
    9.2.1 A semiconjugate prior distribution (p. 159)
    9.2.2 Default and weakly informative prior distributions (p. 160)
  9.3 Model selection (p. 165)
    9.3.1 Bayesian model comparison (p. 168)
    9.3.2 Gibbs sampling and model averaging (p. 172)
  9.4 Discussion and further references (p. 175)
10 Nonconjugate priors and Metropolis-Hastings algorithms (p. 176)
  10.1 Generalized linear models (p. 176)
  10.2 The Metropolis algorithm (p. 178)
  10.3 The Metropolis algorithm for Poisson regression (p. 184)
  10.4 Metropolis, Metropolis-Hastings and Gibbs (p. 186)
    10.4.1 The Metropolis-Hastings algorithm (p. 187)
    10.4.2 Why does the Metropolis-Hastings algorithm work? (p. 189)
  10.5 Combining the Metropolis and Gibbs algorithms (p. 192)
    10.5.1 A regression model with correlated errors (p. 193)
    10.5.2 Analysis of the ice core data (p. 196)
  10.6 Discussion and further references (p. 197)
11 Linear and generalized linear mixed effects models (p. 199)
  11.1 A hierarchical regression model (p. 199)
  11.2 Full conditional distributions (p. 202)
  11.3 Posterior analysis of the math score data (p. 204)
  11.4 Generalized linear mixed effects models (p. 205)
    11.4.1 A Metropolis-Gibbs algorithm for posterior approximation (p. 206)
    11.4.2 Analysis of tumor location data (p. 207)
  11.5 Discussion and further references (p. 211)
12 Latent variable methods for ordinal data (p. 212)
  12.1 Ordered probit regression and the rank likelihood (p. 212)
    12.1.1 Probit regression (p. 214)
    12.1.2 Transformation models and the rank likelihood (p. 217)
  12.2 The Gaussian copula model (p. 220)
    12.2.1 Rank likelihood for copula estimation (p. 221)
  12.3 Discussion and further references (p. 226)
Exercises (p. 227)
Common distributions (p. 254)
References (p. 260)
Index (p. 267)




