Elliott / Kettler / Beddow
Handbook of Accessible Achievement Tests for All Students
Bridging the Gaps Between Research, Practice, and Policy

E-book, English, 342 pages
1st edition, 2011
ISBN: 978-1-4419-9356-4
Publisher: Springer
Format: PDF
Copy protection: PDF watermark



The Handbook of Accessible Achievement Tests for All Students: Bridging the Gaps Between Research, Practice, and Policy presents a wealth of evidence-based solutions designed to move the assessment field beyond 'universal' standards and policies toward practices that enhance learning and testing outcomes. Drawing on an extensive research and theoretical base as well as emerging areas of interest, the volume focuses on major policy concerns, instructional considerations, and test design issues, including:

- The IEP team's role in sound assessment.
- The relationships among opportunity to learn, assessment, and learning outcomes.
- Innovations in computerized testing and the '6D' framework for standard setting.
- Legal issues in the assessment of special populations.
- Guidelines for linguistically accessible assessments.
- Evidence-based methods for making item modifications that increase the validity of inferences from test scores.
- Strategies for writing clearer test items.
- Methods for including student input in assessment design.
- Suggestions for better measurement and tests that are more inclusive.

This Handbook is an essential reference for researchers, practitioners, and graduate students in education and allied disciplines, including child and school psychology, social work, special education, learning and measurement, and education policy.

Stephen N. Elliott, PhD is the founding Director of the Learning Sciences Institute, a trans-university research enterprise at Arizona State University, and is the Mickelson Foundation Professor of Education. He received his doctorate at Arizona State University in 1980 and has been on the faculty at several major research universities, including the University of Wisconsin-Madison and Vanderbilt University. At Wisconsin (1987-2004), Steve was a professor of educational psychology and served as the Associate Director of the Wisconsin Center for Education Research. At Vanderbilt (2004-2010), he was the Dunn Family Professor of Educational and Psychological Assessment in the Special Education Department and directed the Learning Sciences Institute and Dunn Family Scholars Program. His research focuses on scale development and educational assessment practices. In particular, he has published articles on (a) the assessment of children's social skills and academic competence, (b) the use of testing accommodations and alternate assessment methods for evaluating the academic performance of students with disabilities for educational accountability, and (c) students' opportunities to learn the intended curriculum. Steve's scholarly and professional contributions have been recognized by his colleagues in education and psychology research, as evidenced by his selection as an American Psychological Association Senior Scientist in 2009. Steve consults with state assessment leaders on the assessment and instruction of PreK-12 students, serves on ETS's Visiting Research Panel, and is the Director of Research and Scientific Practice for the Society for the Study of School Psychology.

Ryan J. Kettler, PhD is a Research Assistant Professor in Special Education at Peabody College of Vanderbilt University. He received his doctorate in Educational Psychology, with a specialization in School Psychology, from the University of Wisconsin-Madison in 2005. Ryan's dissertation, Identifying students who need help early: Validation of the Brief Academic Competence Evaluation Screening System, won the 2006 Outstanding Dissertation award from the Wisconsin School Psychologists Association. In 2007, he was named an Early Career Scholar by the Society for the Study of School Psychology. Prior to joining Vanderbilt University, Ryan was an assistant professor at California State University, Los Angeles, and completed an APA-accredited internship at Ethan Allen School in Wales, Wisconsin. He has worked on multiple federally funded grants examining the effectiveness of alternate assessments, academic and behavioral screening systems, and testing accommodations. Ryan is the author of peer-reviewed publications and presentations within the broader area of data-based assessment for intervention, representing specific interests in academic and behavioral screening, inclusive assessment, reliability and validity issues, and rating scale technology. He currently serves as a consultant to the College Board and to the Wisconsin Center for Education Research, providing expertise in the area of inclusive assessment.

Peter A. Beddow, PhD received his doctorate in Special Education and Educational Psychology at Vanderbilt University in 2011. His research focuses on test accessibility and item-writing for assessments of student achievement. He is the senior author of the Test Accessibility and Modification Inventory (TAMI) and the Accessibility Rating Matrix, a set of tools for evaluating the accessibility of test items for learners with a broad range of abilities and needs. Based on his work on accessibility theory, Peter was awarded the Bonsal Education Research Entrepreneurship Award in 2009 and the Melvyn R. Semmel Dissertation Research Award in 2010. Prior to beginning his academic career, Peter taught for seven years in Los Angeles County, including five years teaching special education for students with emotional and behavior problems at Five Acres School, part of a residential treatment facility for children who are wards of the court for reasons of abuse and neglect. Peter's primary goal is to help children realize their infinite value and achieve their ultimate potential. He lives in Nashville, Tennessee.

Alexander Kurz, MEd is a doctoral student in Special Education and the Interdisciplinary Program in Educational Psychology at Vanderbilt University. He studied in Germany and the U.S., earning degrees in Special Education and Philosophy. Upon moving to the U.S., he worked as a special education teacher in Tennessee and California, designed and implemented curricula for reading intervention classes, and participated in school reform activities through the Bill and Melinda Gates Foundation. Prior to beginning his doctoral studies, Alex worked as a behavior analyst for children with autism and as an educational consultant to Discovery Education Assessment. During his graduate work at Vanderbilt, he collaborated with the Wisconsin Center for Education Research and Discovery Education Assessment, leading research efforts to examine curricular alignment and its relation to student achievement for students with and without disabilities. Alex has coauthored several peer-reviewed publications on alignment and alternate assessment. His latest scholarly contributions have reexamined the concepts of opportunity to learn (OTL), alignment, and access to the general curriculum in the context of curricular frameworks for general and special education. Alex is the senior author of My Instructional Learning Opportunities Guidance System, a teacher-oriented OTL measurement tool. His current interest in educational technology and innovation is aimed at identifying and creating pragmatic solutions to problems of practice.


Further Information & Material


Preface  6
Contents  7
Contributors  9
About the Editors  11

1  Creating Access to Instruction and Tests of Achievement: Challenges and Solutions  13
   Legislative Context and Key Concepts  14
   Providing Access to Overcome Barriers  17
      Access via Opportunity to Learn  17
      Access via Testing Accommodations  20
      Access via Well-Designed Test Items  22
   Actions and Innovations Needed to Keep Moving Forward  23
   Conclusions  24
   References  25

Part I  Government Policies and Legal Considerations  29

2  U.S. Policies Supporting Inclusive Assessments for Students with Disabilities  30
   U.S. Policies Supporting Inclusive Assessment of Students with Disabilities  30
   Assessment Policies in the 1960s and 1970s: Inclusion and 'Equal Terms'  30
   The 1980s and 1990s: IEP as Curriculum  31
   IDEA 97 and Options for Alternate Assessment  35
   2001 No Child Left Behind Act  37
   2002-2003 Title I Regulations Permitting Alternate Achievement Standards in Accountability  38
   IDEA 2004 and Assessments Measuring Responsiveness to Intervention  39
   2007 Joint Title I IDEA Regulations Permitting Modified Academic Achievement Standards in Accountability  40
   'Race To The Top' Assessment Initiatives  41
   References  41

3  U.S. Legal Issues in Educational Testing of Special Populations  44
   Introduction  44
   Nonstandard Test Administrations  44
      Federal Legislation  46
      Section 504 of the Rehabilitation Act  46
      Americans with Disabilities Act (ADA)  47
      Individuals with Disabilities Education Act (IDEA)  47
      Professional Standards  48
      Terminology  49
      Tension Between Accessibility and Construct Preservation/Score Comparability  49
      Labeling Nonstandard Test Administrations  50
      Skill Substitution  51
      Construct Fragmentation  54
      Construct Shift  54
      Public Policy Exceptions  55
      Leveling the Playing Field: Access Versus Success  55
      Oregon Case Settlement  56
      No Child Left Behind (NCLB) Modified Tests  57
      Eligibility for Nonstandard Test Administrations  58
      Undue Burdens  59
   Graduation Testing Challenges  60
      The Debra P. and GI Forum Cases  60
      Notice  61
      Curricular Validity  61
      Retests and Remediation  61
      Notice and Curricular Validity for Special Education Students  62
      The Brookhart Case  62
      The Ambach Case  63
      Recent Challenges to Graduation Tests by Students with Disabilities  64
      The Indiana Case  64
      The California Case  65
      ELL Challenges to Graduation Tests  67
      The Texas ELL Case  68
      The California ELL Case  69
   Accountability Testing Challenges  70
      No Child Left Behind (NCLB) Act  70
      NCLB ELL Provisions  71
      ELL "Accommodations" and Modifications  71
      Majority and Minority ELLs  71
      Construct Shift  72
      California State Law Case (2000)  72
      NCLB Cases  73
      The Reading School District Case  73
      The Coachella Valley Case  73
   Recommendations  74
   Conclusion  77
   References  77

4  IEP Team Decision-Making for More Inclusive Assessments: Policies, Percentages, and Personal Decisions  79
   IEP Team Decision Making for More Inclusive Assessments  79
   Historic Role of the IEP Team  82
   IEP Team Decisions Regarding Participation in Statewide Assessment  82
   State Guidelines for IEP Team Decision Making, 2007-2009  84
   Recommendations to IEP Teams  84
   Concluding Comments  90
   References  90

5  Australian Policies to Support Inclusive Assessments  92
   Introduction  92
   The Australian Education Landscape  92
   Australian Legislation Relevant to Students with a Disability  93
      Disability Discrimination Act  93
      Education Standards  94
   Box 5.1. Standards for participation  95
   Australian Policy Relevant to Students with a Disability  96
      National Goals for Schooling in the Twenty-First Century  96
   The National Assessment Program  97
      Student Participation  99
      Exempt Students  99
      Special Provisions/Considerations  100
      Issues Arising from Exemptions/Absences/Withdrawals  101
      Issues Arising from Special Provisions/Considerations  102
   Conclusion  103
   References  104

Part II  Classroom Connections  106

6  Access to What Should Be Taught and Will Be Tested: Students' Opportunity to Learn the Intended Curriculum  107
   Student Access to the Intended Curriculum  108
   The Intended Curriculum Model  108
      The ICM for General Education  109
      The ICM for Special Education  113
   The Relevance of OTL  114
      Conceptual Relevance  115
      Substantive Relevance  115
   Instructional Dimensions of OTL  116
   Time on Instruction  116
   Content of Instruction  118
   Quality of Instruction  120
   The Unfolding of Instruction  121
   Measurement of OTL  122
   Options for Measurement  126
   The Future of OTL  132
   References  133

7  Instructional Adaptations: Accommodations and Modifications That Support Accessible Instruction  138
   Introduction  138
   Instructional Adaptations: Important Distinctions  139
   Distinguishing Between Instructional Adaptations and Differentiated Instruction  139
   Distinguishing Between Instructional Accommodations and Modifications  140
      Alignment of Instructional Adaptations with Grade-Level Content Standards  141
      Consistency of Performance Expectations with General Education  142
   Consequences of Implementing Instructional Adaptations  143
   Consequences of Overuse of Instructional Adaptations  143
   Consequences of Instructional Modifications  144
   Interdependence Between Accessible Instruction and Accessible Assessments  144
   Instructional Adaptations That Support Accessible Learning Environments  145
   Instructional Accommodations  146
      Changes in Presentation  146
      Changes in Setting  148
      Changes in Timing or Scheduling  148
      Changes in Response Mode  149
   Instructional Modifications  149
      Changes in Presentation  149
      Changes in Setting  150
      Changes in Timing or Scheduling  150
      Changes in Response Mode  151
   Integrating Instructional Adaptations Based on Students' Needs  151
   Conclusions  151
   References  152

8  Test-Taking Skills and Their Impact on Accessibility for All Students  154
   Access and Test-Wiseness  154
   Threshold Hypothesis  155
   Frameworks and Findings  157
   Test-Taking Skills and Other Methods of Increasing Access  160
      Test-Taking Skills and Accommodations  160
      Test-Taking Skills and Modifications  161
   Computer-Based Test-Wiseness  161
      Suggestions for Developers of Computer-Based Tests  162
      A Lesson from Video Games  163
   Practical Implications  164
      Design Assessments to Minimize Test-Taking Skills  164
      Evaluate the Influence of Test-Taking Strategies on a Case-By-Case Basis  164
      Spend a Lot of Time Teaching Content and a Little Time Teaching Test-Taking Skills  164
      Consider Interaction with Other Methods of Increasing Access to Tests  165
      Teach for the Assessment  165
   Conclusions  165
   References  165

Part III  Test Design and Innovative Practices  167

9  Accessibility Theory: Guiding the Science and Practice of Test Item Design with the Test-Taker in Mind  168
   Universal Design: The End or the Beginning?  168
   Accessibility Theory: The Test-Taker, Not the Universe  169
   Cognitive Load Theory: From Teaching to Testing  171
   Developing Accessible Test Items: Identify, Quantify, and Modify  171
      Item Stimulus  176
      Item Stem  177
      Visuals  177
      Answer Choices  178
      Page/Item Layout  178
      Overall Accessibility Rating  179
   Technical Evidence to Verify Intended Effects: The Accessibility Proof Paradox  182
   Conclusion  183
      Universal Design: The End or the Beginning?  183
      Accessibility Theory: The Test-Taker, Not the Universe  183
      Cognitive Load Theory: From Teaching to Testing  183
      Developing Accessible Test Items: Identify, Quantify, and Modify  184
      Technical Evidence to Verify Intended Effects: The Accessibility Proof Paradox  184
      The Access Pathway: Accessibility Across the Educational Environment  184
   References  185

10  Validity Evidence for Making Decisions About Accommodated and Modified Large-Scale Tests  188
   Current Practice in Testing Approaches and Implications for Response Processes  190
   Extended Time Research  193
      Task Demands  193
      Student Characteristics  193
   Read Aloud Research  194
      Task Demands  194
      Student Characteristics  198
   Research Designs and Quality of Research  198
      Extended Time  199
      Read Aloud Accommodations  200
   Conclusions on Validity Evidence  201
   Definitions of Constructs  202
   Operationalization of Test Design  202
   Training of Teachers  203
   Organization of Systems (Research and Practice)  203
   Validation of Outcomes  203
   References  203

11  Item-Writing Practice and Evidence  206
   Four Thousand Years of Limited Accessibility  206
      Access and Assessment as Policy Tools  207
   MC Formats  207
   CR Formats  208
   Item-Writing Guidelines and Taxonomies  209
      MC Item-Writing Guidelines  209
      Constructed-Response Item-Writing Guidelines  211
      Choosing the Item Format  213
   Evidence  213
      Item-Writing Guidelines with Empirical Evidence  213
      Three Options Are Optimal  214
   Research on Item Modifications for Accessibility  215
      AA-AAS  215
      AA-MAS  216
   Innovations and Technological Enhancements  216
      Alternative Scoring Models  218
   Pulling It All Together  219
   References  220

12  Language Issues in the Design of Accessible Items  222
   The Concept of Linguistic Accessibility in Content-Based Assessments in English  222
   The Nature and Impact of Language Factors on Content-Based Assessment Outcomes  223
   Creating More Linguistically Accessible Assessments  224
   Unnecessary Linguistic Complexity in Assessments  224
   Linguistic Features That May Affect Accessibility of Assessments  224
   Procedures for Linguistic Modification of Test Items  225
   Linguistic Modification: Practical Implications  228
   Research and Methodological Issues Related to the Linguistic Accessibility of Assessments  228
      Research on the Effectiveness of Linguistic Modification Approach for Improving Accessibility of Assessments for ELLs  229
      Research Findings on Language as a Source of Measurement Error in Assessments  230
      Research Findings on the Impact of Linguistic Complexity on the Validity of Assessments for ELL Students  231
   Practical Steps for Improving Assessments for ELL Students  231
      Formative Assessments to Help Identify Language Issues in the Assessments of ELL Students  231
      Accessible Assessments at the Classroom Level Versus Accessible Assessments at the State or National Level  232
      Guidelines and Recommendations for Creating More Linguistically Accessible Assessments for Students  232
   Summary and Recommendations  233
   References  234

13  Effects of Modification Packages to Improve Test and Item Accessibility: Less Is More  236
   The AA-MAS Policy  236
   State-Modified Achievement Tests  237
      Kansas Assessments of Multiple Measures  238
      Louisiana Educational Assessment Program Alternate Assessment, Level 2  238
      Texas Assessment of Knowledge and Skills - Modified  240
   Experimental Studies of Modifications  240
      Consortium for Alternate Assessment Validity and Experimental Studies  241
      Consortium for Modified Alternate Assessment Development and Implementation  243
      Operationalizing Alternate Assessment for Science Inquiry Skills  243
   What Do We Know About Modified Achievement Tests  244
   Future Needs  245
   Conclusions  245
   References  245

14  Including Student Voices in the Design of More Inclusive Assessments  247
   Epistemological and Methodological Frameworks for Including Student Voices  248
   Uses of Student Response Data in the Standards for Testing  249
   Ethics and Standards for Professional Practice  250
   Existing Assessment Research That Integrates Student Voice  251
      Research Using Student Drawings: Two Examples  252
      Research Using Student Interviews: Four Examples  253
   Incorporating Student Voices in Future Assessment Research  256
   Conclusions  257
   References  257

15  Computerized Tests Sensitive to Individual Needs  259
   Background  259
   Rethinking Test Accommodations  261
   Categories of Accommodations  262
   Audio Access to Knowledge Representations  264
   Needs Met by an Audio Presentation  265
   Signed Access to Knowledge Representation  267
   Alternate Language Access to Knowledge Representation  270
   Tactile Access to Knowledge Representation  270
   Adapted Presentation of Item Content  271
   Implications for Test Validity  273
   The Future Is Today  274
   References  276

16  The 6D Framework: A Validity Framework for Defining Proficient Performance and Setting Cut Scores for Accessible Tests  278
   ALDs and Standard Setting Terminology  279
   A Validity Framework for Standard Setting for All Standards-Based Assessments  279
   Define  280
      Identify the Examinee Population  280
      Identify the Relationship to Unmodified Grade-Level Assessments  280
      Identify Desired Rigor for ALDs  281
      Identify Means for Increasing Accessibility  282
      Identify Intended Uses of ALDs  282
      Stakeholder Involvement  282
      Validity Evidence  282
      Process Planning  282
      Panelists and Committee Members  283
      Process Evaluations  283
   Describe  283
      Types of ALDs  283
      Defining Proficiency  284
      ALD-Writing Workshop  284
      Method for Writing Range and Target ALDs  285
      Validity Evidence  287
   Design  287
      AA-MAS Test Designs  287
      AA-AAS Test Designs  287
      Methodologies for Recommending Cut Scores  288
      Methodology for Policy Review of Cut Scores  288
      Validity Evidence  289
   Deploy  289
      Refining Target ALDs into Reporting ALDs  289
      Validity Evidence  290
   Deliver  290
      Using Reporting ALDs  290
      Validity Evidence  291
   Deconstruct  291
      Technical Report  291
      Outside Perspective  292
      Validity Evidence  292
   Discussion  292
      Implementing the 6D Framework  292
      What to Do When Test Development Has Already Occurred  293
   Conclusions  293
   References  294

Part IV  Conclusions  296

17  Implementing Modified Achievement Tests: Questions, Challenges, Pretending, and Potential Negative Consequences  297
   A Brief History of NCLB, Accountability, and the AA-MAS  298
      The AA-AAS  298
      The AA-MAS  298
   Pennsylvania's GSEG Project  299
   GSEG Activities  300
      Survey  300
      Focus Group  304
      Analysis of PSSA Performance Trends for Students in Special Education  305
   Recommendations from the GSEG  307
   Additional Considerations and Unintended Consequences  310
   Commonly Provided Rationale for the AA-MAS  310
   Unintended Consequences  315
   Concluding Thoughts  317
   References  317

18  Accessible Tests of Student Achievement: Access and Innovations for Excellence  320
   What We Know  320
   Policy and Regulations About Access  321
   Classroom Instruction and Access to the General Curriculum  322
   Test Design That Supports Access  324
   Where Are We Going  326
      Challenges to Access  327
      Needed Innovations to Improve Access  327
   Conclusion  329

Subject Index  330


