E-book, English, 256 pages
Series: Chapman & Hall/CRC Monographs on Statistics & Applied Probability
ISBN: 978-1-4200-3593-3
Publisher: Taylor & Francis
Format: PDF
Copy protection: Adobe DRM
New in the Second Edition:
- A separate chapter on Bayesian methods
- Complete revision of the chapter on estimation
- A major example from the field of near-infrared spectroscopy
- More emphasis on cross-validation
- Greater focus on bootstrapping
- Stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible
- Software available on the Internet for implementing many of the algorithms presented
- More examples
Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting and choosing models that are linear in their parameters, and to understanding and correcting the bias introduced by selecting a model that fits only slightly better than others. The presentation is clear and concise, and the book belongs on the shelf of anyone researching, using, or teaching subset selection techniques.
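As a taste of the algorithms the book surveys, the sketch below illustrates forward selection, one of the subset-search procedures listed in the contents. It is a minimal illustration in NumPy, not the author's software; the function names, the plain RSS criterion, and the assumption that the design matrix is already centered (or contains an intercept column) are all this sketch's own choices.

```python
import numpy as np

def rss(X, y, cols):
    """Residual sum of squares of the least-squares fit on the chosen columns."""
    if not cols:
        # With no predictors, fall back to deviations about the mean.
        return float(np.sum((y - y.mean()) ** 2))
    beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    resid = y - X[:, cols] @ beta
    return float(resid @ resid)

def forward_selection(X, y, k):
    """Greedily add, one at a time, the variable that most reduces the RSS."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda j: rss(X, y, chosen + [j]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Note that greedy searches like this one can miss the best subset of a given size, and any goodness-of-fit computed on the selected subset is optimistically biased, which is exactly the selection-bias problem the book's estimation chapters address.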
Target audience
Data analysts and applied statisticians working in biology, medicine, economics, environmental science, social science, and the physical sciences
Further Information & Material
OBJECTIVES
Prediction, Explanation, Elimination or What?
How Many Variables in the Prediction Formula?
Alternatives to Using Subsets
'Black Box' Use of Best-Subsets Techniques
LEAST-SQUARES COMPUTATIONS
Using Sums of Squares and Products Matrices
Orthogonal Reduction Methods
Gauss-Jordan v. Orthogonal Reduction Methods
Interpretation of Projections
Appendix A: Operation Counts for All-Subsets Regression
FINDING SUBSETS WHICH FIT WELL
Objectives and Limitations of this Chapter
Forward Selection
Efroymson's Algorithm
Backward Elimination
Sequential Replacement Algorithm
Replacing Two Variables at a Time
Generating All Subsets
Using Branch-and-Bound Techniques
Grouping Variables
Ridge Regression and Other Alternatives
The Non-Negative Garrote and the Lasso
Some Examples
Conclusions and Recommendations
HYPOTHESIS TESTING
Is There any Information in the Remaining Variables?
Is One Subset Better than Another?
Appendix A: Spjøtvoll's Method - Detailed Description
WHEN TO STOP?
What Criterion Should We Use?
Prediction Criteria
Cross-Validation and the PRESS Statistic
Bootstrapping
Likelihood and Information-Based Stopping Rules
Appendix A: Approximate Equivalence of Stopping Rules
ESTIMATION OF REGRESSION COEFFICIENTS
Selection Bias
Choice Between Two Variables
Selection Bias in the General Case, and its Reduction
Conditional Likelihood Estimation
Estimation of Population Means
Estimating Least-Squares Projections
Appendix A: Changing Projections to Equate Sums of Squares
BAYESIAN METHODS
Bayesian Introduction
'Spike and Slab' Prior
Normal Prior for Regression Coefficients
Model Averaging
Picking the Best Model
CONCLUSIONS AND SOME RECOMMENDATIONS
REFERENCES
INDEX