April 15, 2021
As policymakers across the country grapple with how to administer 2021 assessments, The Hunt Institute and Data Quality Campaign have hosted a series of webinars highlighting different aspects of measuring student growth this year. During these webinars, we’ve received many questions—and both organizations have teamed up with SAS to answer them in a series of blog posts. Read more on #EdData in the time of COVID-19 in our first blog post here and in our second conversation below.
The Hunt Institute: As the U.S. Department of Education continues to sort out its approach to 2021 summative assessment guidance and waivers, it is clear that some states are moving forward with the flexibilities they have been given to extend testing windows or shorten assessments. Other states, which have yet to decide, must work through several considerations before choosing an approach.
SAS: The benefit of delaying assessments to the fall is that it allows more time for students to “catch up” and provides an opportunity to conduct testing fully in person during a more normal schooling experience. The drawback to this approach is that it may conflate summer learning loss with the learning loss experienced during the pandemic.
Many states and districts have continued to assess students through benchmark and formative assessments to inform daily practice in the classroom. But there will be significant additional value in statewide summative assessments to truly understand the pandemic’s impact on teaching and learning and on students’ progress toward meeting their academic goals.
Data Quality Campaign: Extending assessment windows will also impact the timing of results—which may interfere with plans to use assessment data to target supports over the summer or plans for instruction in 2021-22. While most states will still administer tests this spring, Maryland and Pennsylvania will let districts push assessments into the fall.
SAS: The benefit of shortening assessments is that it frees up more time for instruction and could be less stressful for teachers and students. The drawback is that, to measure student growth and learning loss, the assessment must meet certain scaling requirements to ensure it is appropriate for this purpose. More specifically, there must be sufficient stretch in the scales to differentiate the performance of both high-achieving and low-achieving students. If a test is too short, a floor or ceiling in the scales could bias results for districts and schools serving either low-achieving or high-achieving students. This criterion is typically met when there is a sufficient number of items on a specific assessment or when the assessment is computer adaptive. There is no exact number of test items that meets this threshold, because it depends in part on item difficulty across all items. However, 40 to 50 items on a non-computer-adaptive assessment typically works well.
Data Quality Campaign: Measuring student growth this year requires assessment data that is comparable to 2019 assessments; as such, state leaders should consider how any changes to the scope or length of tests will affect their ability to compare results to prior years. Massachusetts recently announced plans to cut assessment time fully in half. Other states, like New York, have cancelled assessments not required by federal law.
The Hunt Institute: First and foremost, states must consider why they are testing and how they intend to use the data they receive. Undoubtedly, the most compelling reason to test students this year is to get an accurate assessment of learning loss and develop a thoughtful and targeted plan that can begin to repair the damage done. In that case, testing sooner is best so long as it can be done well. Similarly, policymakers should prioritize an approach that accounts for the diversity of students and their experiences this year.