E-book, English, 216 pages, Web PDF

Nuttin / Greenwald: Reward and Punishment in Human Learning

Elements of a Behavior Theory
1st edition, 2014
ISBN: 978-1-4832-2226-4
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: PDF watermark

Reward and Punishment in Human Learning: Elements of a Behavior Theory provides a different approach to the study of reward and punishment, emphasizing what is learned when a response is rewarded and how this differs from what is learned when a response is punished. The book discusses distortions in impressions of success, accuracy in recall of reward and punishment, and determinants of outcome recall. The role of open-task attitudes in motor learning, the effects of isolated punishments, and structural isolation in the closed-task situation are also examined. The publication is intended for psychologists, but is also helpful to teachers, executives, prison officials, psychotherapists, and parents.

Further information & material


1;Front Cover;1
2;Computational Statistics with R;4
3;Copyright;5
4;Contents;6
5;Contributors;14
6;Preface;16
6.1;Chapter 1: Introduction to R;17
6.2;Chapter 2: R Graphics;17
6.3;Chapter 3: Graphics Miscellanea;17
6.4;Chapter 4: Matrix Algebra Topics in Statistics and Economics Using R;17
6.5;Chapter 5: Sample Size Calculations with R: Level 1;17
6.6;Chapter 6: Sample Size Calculations with R: Level 2;17
6.7;Chapter 7: Binomial Regression in R;18
6.8;Chapter 8: Computing Tolerance Intervals and Regions Using R;18
6.9;Chapter 9: Modeling the Probability of Second Cancer in Controlled Clinical Trials;18
6.10;Chapter 10: Bayesian Networks;18
6.11;References;18
7;Chapter 1: Introduction to R;20
7.1;1. Introduction;20
7.2;2. Setting Up R;23
7.2.1;2.1. Installing and Starting R;23
7.2.2;2.2. Memory;29
7.2.3;2.3. Saving Your Code and Workspace;30
7.2.4;2.4. R Packages;32
7.3;3. Basic R Objects and Commands;33
7.3.1;3.1. Numbers, Character Strings, and Logicals;33
7.3.2;3.2. Scalars, Vectors, Matrices, and Arrays;34
7.3.3;3.3. Data Frames and Lists;36
7.3.4;3.4. Strings and Factors;37
7.4;4. Writing Programs;38
7.4.1;4.1. Conditional Statements;38
7.4.2;4.2. if/else Statements;39
7.4.3;4.3. for Loops;39
7.4.4;4.4. while Loops;40
7.4.5;4.5. Functions;42
7.4.6;4.6. Debugging and Efficiency;46
7.5;5. Input and Output;50
7.6;6. Data Processing;51
7.7;7. Exploratory Data Analysis;54
7.8;8. Statistical Inference and Modeling;55
7.8.1;8.1. Hypothesis Testing;55
7.8.2;8.2. Regression;56
7.9;9. Simulation;61
7.10;10. Numerical Techniques;64
7.11;11. Annotated References;66
7.11.1;Set Up;66
7.11.2;Text Editors;66
7.11.3;Introductory Resources and Books;67
8;Chapter 2: R Graphics;68
8.1;1. Introduction;68
8.1.1;1.1. Origins;68
8.1.2;1.2. Principles of Data Graphics;70
8.2;2. Traditional Graphics;70
8.2.1;2.1. The plot( ) Function;73
8.2.2;2.2. Other Common High-Level Functions;75
8.2.3;2.3. Visualizations for Time Series Data;81
8.2.4;2.4. Customizing Plots Using Low-Level Functions;82
8.2.5;2.5. Limitations of Traditional Graphics;85
8.3;3. Grid Graphics;88
8.3.1;3.1. Viewports;89
8.3.2;3.2. Units and Primitives;89
8.3.3;3.3. First Attempt;90
8.4;4. Lattice;93
8.4.1;4.1. Overview;93
8.4.2;4.2. Common High-Level Functions;97
8.4.3;4.3. Bar Charts and Dot Plots for Tabular Data;98
8.4.4;4.4. Scatterplots and Custom Displays;102
8.4.5;4.5. The "trellis" Object;103
8.5;5. ggplot;104
8.6;6. Further Reading;107
8.7;References;110
9;Chapter 3: Graphics Miscellanea;112
9.1;1. Introduction;112
9.2;2. The Plot( ) Command;112
9.2.1;2.1. Features that Can Be Included in a Scatter Plot;113
9.2.1.1;2.1.1. par( ) Command;113
9.3;3. Scatter Plots;115
9.3.1;3.1. Regression Analysis with Scatter Plots;115
9.3.2;3.2. Multiple Regression Analysis with Scatterplot Matrices;121
9.3.3;3.3. Scatterplot Matrices of Data Segregated by a Categorical Variable;124
9.4;4. Time Series Plots;125
9.4.1;4.1. Three Graphs in a Single Frame;126
9.4.2;4.2. Two Different Time Series Data Sets in a Single Plot;128
9.5;5. Pie Charts;130
9.6;6. Special Box Plots;132
9.7;7. xy Plots;135
9.8;8. Curves;137
9.9;9. LOWESS;141
9.10;10. Sunflower Plots;144
9.11;11. Violin Plots;146
9.12;12. Bean Plots;148
9.13;13. Bubble Charts;149
9.14;14. 3D Surface Plot;149
9.15;15. Chernoff Faces-Graphical Presentation of Multivariate Data;152
9.16;16. Maps;156
9.16.1;16.1. Drawing Common Maps;156
9.16.2;16.2. Creating a Choropleth Map;158
9.16.2.1;16.2.1. Creating Maps with Custom Colors Depending on Values;159
9.17;References;161
10;Chapter 4: Matrix Algebra Topics in Statistics and Economics Using R;162
10.1;1. Introduction;162
10.2;2. Basic Matrix Manipulations in R;163
10.3;3. Descriptive Statistics;165
10.3.1;3.1. Outlier Detection and Normality Tests;167
10.3.2;3.2. Multivariate Normality Tests;167
10.4;4. Matrix Transformations, Invariance, and Equivariance;167
10.4.1;Affine Transformations Defined;168
10.4.2;Desirable Invariance and Equivariance;168
10.4.3;4.1. Data Standardization;168
10.4.4;4.2. Limitations of the Usual Standardization;170
10.4.5;4.3. Mahalanobis Distance and Outlier Detection;172
10.5;5. Payoff Matrices in Decision Analysis;173
10.6;6. Matrix Algebra in Regression Models;175
10.6.1;6.1. Matrix QR Decomposition;176
10.6.2;6.2. Collinearity and Singular Value Decomposition;177
10.6.2.1;6.2.1. PCR and SVD;177
10.6.2.2;6.2.2. Ridge Regression;178
10.6.3;6.3. Heteroscedastic and Autocorrelated Errors;178
10.7;7. Correlation Matrices and Generalizations;179
10.7.1;Bounds on the Cross-Correlation;179
10.7.2;7.1. New Asymmetric Generalized Correlation Matrix;180
10.8;8. Matrices for Population Dynamics;184
10.9;9. Multivariate Components Analysis;187
10.9.1;9.1. Projection Matrix: Generalized Canonical Correlations;187
10.9.2;9.2. Invariant Coordinate Selection;188
10.10;10. Sparse Matrices;191
10.11;References;194
11;Chapter 5: Sample Size Calculations with R: Level 1;196
11.1;1. Introduction;196
11.1.1;1.1. Goals;197
11.1.2;1.2. Why Did We Choose R?;197
11.2;2. General Ideas on Sample Size Calculations;197
11.2.1;2.1. Example;198
11.2.2;2.2. FAQ and Pointers;199
11.2.3;2.3. Signal-to-Noise Ratio;200
11.2.4;2.4. Some Features of the Normal Distribution;200
11.3;3. Single-Sample Problems;203
11.3.1;3.1. Quantitative;203
11.3.2;3.2. Testing of Hypotheses Environment;203
11.3.3;3.3. Specifications;204
11.3.4;3.4. Formula for Sample Size;205
11.3.5;3.5. Comments;209
11.3.6;3.6. The Other Type of One-Sided Alternative;209
11.3.7;3.7. The Case of Two-Sided Alternative;209
11.3.8;3.8. Comments;213
11.3.9;3.9. One-Sided Alternative;213
11.3.10;3.10. Two-Sided Alternative;213
11.3.11;3.11. The Case When the Population Standard Deviation s Is Unknown;213
11.3.12;3.12. The Case of One-Sided Alternative;213
11.3.13;3.13. Specifications;214
11.3.14;3.14. Comments;217
11.3.15;3.15. One-Sample Problem: One-Sided Alternative: s Is Known;217
11.3.16;3.16. One-Sample Problem: One-Sided Alternative: s Is Unknown;217
11.3.17;3.17. R Code;218
11.3.18;3.18. One-Sample Problem;220
11.3.18.1;3.18.1. Estimation Environment;220
11.3.18.2;3.18.2. Error of Interval Estimation;220
11.3.19;3.19. Specifications;221
11.3.20;3.20. Example;221
11.3.21;3.21. An Alternative Approach;221
11.3.21.1;3.21.1. Error of Estimation;221
11.3.22;3.22. Example;221
11.3.22.1;3.22.1. Error of Estimation;222
11.3.22.2;3.22.2. Specifications;222
11.3.22.3;3.22.3. Illustration;222
11.3.23;3.23. Specifications;223
11.4;4. Two-Sample Problems: Quantitative Responses;223
11.4.1;4.1. Scenario 1;224
11.4.2;4.2. Specifications;224
11.4.3;4.3. Scenario 2;225
11.4.4;4.4. One-Sided Alternative;225
11.4.5;4.5. Specifications;225
11.4.6;4.6. Scenario 3;226
11.4.7;4.7. One-Sided Alternative;226
11.4.8;4.8. Specifications;226
11.4.9;4.9. Illustration;226
11.4.10;4.10. Two-Sided Alternative;227
11.4.11;4.11. Specifications;227
11.4.12;4.12. An Illustration;227
11.4.13;4.13. Scenario 4;230
11.4.14;4.14. Estimation Perspective;230
11.4.15;4.15. Scenario 1;230
11.4.16;4.16. Specifications;230
11.4.17;4.17. Example;231
11.4.18;4.18. Scenario 2;231
11.4.19;4.19. Specifications;231
11.4.20;4.20. Example;232
11.4.21;4.21. Scenario 3;232
11.4.22;4.22. Paired t-Test;232
11.4.22.1;4.22.1. Setup;232
11.4.22.2;4.22.2. Structure of the Data;233
11.4.23;4.23. Specifications;233
11.5;5. Multisample Problem-Quantitative Responses-Analysis of Variance;234
11.5.1;5.1. Specifications;234
11.5.2;5.2. Examples;235
11.5.3;5.3. Structure of the Data;235
11.5.4;5.4. Specifications;236
11.5.5;5.5. Specifications;236
11.5.6;5.6. Some Guidelines from the Social Sciences and Psychology;237
11.5.7;5.7. Comments;239
11.6;References;239
12;Chapter 6: Sample Size Calculations with R: Level 2;240
12.1;1. Single Proportions;240
12.1.1;1.1. Problem;240
12.2;2. Two-Sample Proportions;251
12.2.1;2.1. Traditional Test;252
12.2.2;2.2. Arcsine Square Root Transformation;253
12.3;3. Effect Sizes;256
12.3.1;3.1. The Case of Proportions;256
12.3.2;3.2. The Case of t-Test;256
12.3.3;3.3. The Case of Correlation;257
12.3.4;3.4. Analysis of Variance;257
12.4;4. Multisample Proportions;258
12.4.1;4.1. Testing Equality of Several Population Proportions;258
12.5;5. McNemar Test;261
12.6;6. Correlations;263
12.7;7. Hazard Ratio in Survival Analysis;266
12.7.1;7.1. A Pilot Study;268
12.8;8. Multiple Regression;270
12.9;References;274
13;Chapter 7: Binomial Regression in R;276
13.1;1. Binomial Regression in the Generalized Linear Model;277
13.2;2. Standard Logistic Regression;278
13.3;3. Assumptions Involved in the Standard Logistic Regression Model;280
13.4;4. Residuals;280
13.4.1;4.1. Interpreting Residuals;282
13.4.2;4.2. Influential Points;285
13.5;5. Overdispersion;287
13.5.1;5.1. Estimation Using Quasilikelihood;288
13.5.2;5.2. Adding Explanatory Terms to the Model;290
13.6;6. Hypothesis Testing and Inference;292
13.7;7. Model Performance;294
13.7.1;7.1. ROC Curves/Sensitivity/Specificity/Accuracy;294
13.7.2;7.2. Area Under the Curve;296
13.7.3;7.3. Selecting a Cut Point;299
13.8;8. Modeling Repeated (Longitudinal) Binary Measures;300
13.8.1;8.1. Generalized Estimating Equations;301
13.8.2;8.2. Generalized Linear Mixed Models;304
13.9;9. Model Selection;308
13.9.1;9.1. Penalized Logistic Regression: The glmnet Package;311
13.9.2;9.2. Phoneme Data;312
13.9.3;9.3. Fitting glmnet Models;312
13.9.4;9.4. Visualizing the glmnet Model;313
13.9.5;9.5. Choosing λ in glmnet Using Cross-Validation;315
13.10;10. Machine Learning Methods;318
13.10.1;10.1. Splitting the Data in Train and Test Samples;318
13.10.2;10.2. Recursive Partitioning (rpart);319
13.10.3;10.3. Random Forests;319
13.10.4;10.4. Generalized Boosted Regression Modeling;321
13.10.5;10.5. Comparison of Results;322
13.11;11. Concluding Remarks;324
13.12;References;325
14;Chapter 8: Computing Tolerance Intervals and Regions Using R;328
14.1;1. Introduction;328
14.1.1;1.1. Formal Definition;329
14.2;2. Tolerance Intervals for Continuous Distributions;330
14.2.1;2.1. Tolerance Intervals for the Normal Distribution;330
14.2.2;2.2. Tolerance Intervals for the Exponential Distribution;334
14.2.3;2.3. Tolerance Intervals for the Weibull Distribution;335
14.3;3. Tolerance Intervals for Discrete Distributions;336
14.3.1;3.1. Tolerance Intervals for the Binomial Distribution;337
14.3.2;3.2. Tolerance Intervals for the Poisson Distribution;338
14.3.3;3.3. Tolerance Intervals for the Negative Binomial Distribution;339
14.4;4. Nonparametric Tolerance Intervals;340
14.5;5. Regression Tolerance Intervals;343
14.5.1;5.1. Linear Regression Tolerance Intervals;343
14.5.2;5.2. Nonlinear Regression Tolerance Intervals;346
14.5.3;5.3. Nonparametric Regression Tolerance Intervals;347
14.6;6. Multivariate Tolerance Regions;350
14.7;7. Final Remarks;353
14.8;References;355
15;Chapter 9: Modeling the Probability of Second Cancer in Controlled Clinical Trials;358
15.1;1. Introduction;358
15.2;2. Difficulties in Second Cancer Research;359
15.3;3. Current Knowledge of Second Malignancy;359
15.4;4. Clinical Trial Database;361
15.4.1;4.1. Laboratory Test Data Analysis;362
15.4.2;4.2. Medical History and Concomitant Medicines;364
15.4.3;4.3. Efficacy Data Consideration;366
15.5;5. Integrated Analysis;366
15.6;6. Assessing Model Adequacy;372
15.7;7. Summary;374
15.8;References;375
16;Chapter 10: Bayesian Networks;376
16.1;1. Introduction;376
16.2;2. Joint and Conditional Distributions;377
16.3;3. Generalities and Issues;381
16.4;4. Graph Theory;384
16.5;5. A Case Study;386
16.5.1;Model Selection;391
16.5.1.1;A Naïve Approach;391
16.6;6. Network Model Fitting;397
16.7;7. Learning Algorithm;402
16.8;References;404
17;Subject Index;406


Chapter 2 R Graphics
Deepayan Sarkar
Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, New Delhi, India
Corresponding author: deepayan.sarkar@gmail.com

Abstract
Graphics is an important component of data analysis workflows, especially in interactive systems such as R. This chapter gives an overview of R's static graphics facilities. In addition to commonly used functions, we describe the underlying plotting model and how it can be exploited to customize default output. We also give a brief overview of the relatively recent Grid graphics, and the lattice and ggplot2 packages which use it to implement general-purpose high-level systems.

Keywords: Graphics; Grid; Trellis; Lattice; ggplot2; Grammar of graphics

1 Introduction
R has a reputation for good graphics. Much of this reputation is based on its ability to produce publication-quality statistical displays that are static in nature. At the same time, new users who perhaps have experience with other graphics systems find the behavior of R graphics surprising and are often disappointed by the apparent lack of any sort of "dynamism." Both perceptions are true to some extent. R has an excellent (standard) graphics system that can do many things well, but is limited in other ways. As with other aspects of R, it is useful to have a sense of how the system came to be in its current state, and the standard workflow the designers expected to be followed, in order to use it effectively. It is also important to be aware of alternative graphics systems, providing various degrees of integration with R, that try to address many of the limitations of the standard system. This chapter deals entirely with the standard static graphics system, and provides pointers to some alternatives.

1.1 Origins
As with the language itself, the history of the R graphics model goes back to S. The S language was from the very beginning designed to be interactive, and graphics was naturally an essential component of the system. The model that the designers of S chose to adopt for graphics was the GRZ model already in use at Bell Laboratories. This may be described as a "painter's model," where a graphic was built out of a small set of primitives such as line segments, polygons, text, etc., and later elements were drawn on top of earlier ones. There was no provision for deleting an element once it was drawn, except to start a completely new graphic. This model allowed both input and output to be abstracted. Graphics functions that were meant for users would internally call these primitives. For output, the primitives could be implemented differently depending on the target "device," which could be PostScript or PDF files for printing, hardware devices such as pen plotters, or on-screen devices for interactive viewing. The device-specific implementations of the primitives are known as device drivers, and new drivers are still being written to support new kinds of output formats. See ?Devices for more details.

The painter's model naturally led to a mental approach that viewed a plot as a work in progress, always with the possibility of adding something more to it. This attitude pervades much of the graphics functionality written for S in its early days and is still popular due to its simplicity, familiarity, and the availability of a large variety of graphic designs implemented using this model. We will loosely refer to this model of graphics as the traditional graphics model.

In the 1990s, S introduced a new approach to graphics, called Trellis graphics, meant to substitute for the traditional graphics tools rather than to complement them. The most prominent feature of Trellis graphics was the notion of conditioning (also known as "small multiples" or "faceting"), which allowed subsets of data to be visualized intelligently so as to enable effective comparison between those subsets. The way this was done necessitated a departure from the "work-in-progress" model and required the user to specify the details of the plot completely in one go. This was not really a departure from the painter's model in terms of implementation, but rather a change in the mental approach to plotting.

R followed S in implementing the traditional graphics model first. When it came to Trellis graphics, it took a slightly different approach that had important repercussions. Instead of implementing Trellis graphics using the tools provided by the traditional model, as S had done, the R developers first introduced a layer of abstraction called Grid graphics. Grid was designed to be a low-level tool, providing graphical elements as objects that could be manipulated to a considerable extent, together with sophisticated viewport and layout capabilities for using these elements to construct complicated plots. Grid was used to implement two important general high-level graphics packages, as well as many other specialized packages. One was lattice (Sarkar, 2008), which implemented the functionality of Trellis graphics in R. The other was ggplot2 (Wickham, 2009), which adapted ideas from Wilkinson (1999) to provide another alternative graphics system. Both systems will be discussed later in this chapter.
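Returning to the painter's model: as a minimal sketch (not part of the original chapter) of what it means in practice, the base-graphics calls below each paint a primitive on top of whatever is already on the page; the coordinates and text are arbitrary illustrations.

> plot.new()                                              # start a fresh, empty page
> plot.window(xlim = c(0, 10), ylim = c(0, 10))           # set up a coordinate system
> polygon(c(2, 8, 8, 2), c(2, 2, 8, 8), col = "grey90")   # polygon primitive
> segments(0, 0, 10, 10)                                  # line-segment primitive, drawn on top
> text(5, 9, "text primitive, drawn last")                # text primitive, on top of everything
> box()                                                   # frame around the plot region

Once an element has been drawn there is no way to remove it individually; changing the grey polygon would mean redrawing the whole figure, which is exactly the work-in-progress attitude described above.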
1.2 Principles of Data Graphics

A discussion of why graphics are important in statistical analysis and what makes good graphics good is beyond the scope of this chapter, and we refer the reader to Wickham (2013) for an excellent introduction and further references. If there is one single underlying principle, it is that good graphics should enable comparison. Starting from this principle, Cleveland and McGill (1984) performed a series of perceptual experiments that indicated, for instance, that the human eye can judge the difference in positions along a common axis better than it can judge differences in lengths of line segments, and that it is worse than either in judging quantitative differences based on a color scale. Based on such experiments, Cleveland (1985) discusses principles that, when used systematically, should yield effective visualizations, and these principles played an important role in the evolution of S graphics. The same principle of enabling comparison is also the basis of Trellis graphics, which grew out of ideas in Cleveland (1993). Overall, the work of Cleveland, and before him Tukey (1977), has been extremely influential in shaping the direction and visual feel of traditional S and R graphics.

2 Traditional Graphics
The core of the traditional R graphics system is the suite of functions available in the graphics package, with various add-on packages providing further functionality. The full list of functions can be seen, as with any other package, using

> library(help = graphics)

These functions can be roughly categorized into two groups, high-level and low-level functions. High-level functions are those that are intended to produce a complete plot by themselves. Low-level functions are those that are intended to add elements to existing plots. Of course, high-level functions are themselves built up from low-level functions.

Let us look at an example. The simplest and most common type of statistical plot is the scatterplot, which depicts bivariate numeric data as points in a Cartesian coordinate system. The high-level function that produces scatterplots is plot() (although that is not all plot() does). We use R's built-in dataset, anscombe, for illustration. The dataset contains Anscombe's well-known quartet of bivariate datasets (Anscombe, 1973; Tufte, 2001) that are quite different from each other, yet have the same traditional statistical summaries (mean, variance, correlation, least squares regression line, etc.). The first dataset can be plotted as follows, producing Fig. 1.

> plot(x = anscombe$x1, y = anscombe$y1)

[Figure 1: Scatterplot of Anscombe's first dataset.]

It will be helpful if we pause here for a moment to understand how this plot could have been created using low-level functions. The plot consists of the points, the box surrounding the plot, the axes, and the axis labels. All these elements can be suppressed by plot() as follows:

> plot(x = anscombe$x1, y = anscombe$y1,
+      type = "n", axes = FALSE, xlab = "", ylab = "")

This produces a completely blank page, but performs one important task: it sets up the coordinate system for subsequent low-level calls. The extent of this coordinate system can be obtained using

> par("usr")
[1]  3.6000 14.4000  3.9968 11.1032

and is precisely the range of the data that was supplied to plot(), with a padding of 4% on both sides (this can be overridden by specifying the xlim and ylim arguments).

> range(anscombe$x1)
[1]  4 14

> ...
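The excerpt is cut off at this point. As a hedged illustration only, not the chapter's own continuation, the blank plot set up above could plausibly be completed with low-level calls such as the ones below; the axis-label strings are assumptions. The last two lines check the 4% padding rule against the par("usr") values quoted above.

> points(anscombe$x1, anscombe$y1)                    # add the data points
> axis(1)                                             # x-axis with tick marks
> axis(2)                                             # y-axis with tick marks
> box()                                               # box around the plot region
> title(xlab = "anscombe$x1", ylab = "anscombe$y1")   # axis labels (assumed wording)
> r <- range(anscombe$x1)
> r + c(-1, 1) * 0.04 * diff(r)                       # 4% padding on each side
[1]  3.6 14.4

Each of these is a low-level function: it adds to the existing plot rather than starting a new one, in keeping with the painter's model described in Section 1.1.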


