March 18, 2014
Guidera was a panelist and resource expert on testing and assessments at The Hunt Institute’s 2014 Holshouser Legislators Retreat. (To learn more about this issue, see The Institute’s special re:VISION series on educator effectiveness here.) She is also the founder of the Data Quality Campaign, where she leads efforts to encourage policymakers to increase the availability and use of high-quality education data to improve student achievement. Below, she shares why effective student assessments are crucial to improving student outcomes and educator effectiveness.
At the Data Quality Campaign (DQC), we’ve been talking for years about the great power data have to paint a full picture of a student’s learning. But when most people encounter the term education data, they still hear just one thing: test scores. We at DQC know that data are much more than test scores. Education data include information from many other sources that is used to support student learning and manage schools (for example, student and teacher attendance, services students receive, student academic development and growth, teacher preparation information, postsecondary success, remediation rates, and more). When the system is working effectively, it gets information back to teachers and families in a timely manner and in formats that are actually meaningful and useful to them. When it’s not, we’re merely collecting data for accountability.
Accountability is one important goal of education data use, but accountability alone does not tap the full power of data. Only when we create a culture that supports the use of test results (from both “high-stakes” summative exams and formative tests) to explore important questions will we get the results our kids deserve and our country needs. No one checks the dipstick, finds the engine low on oil, blames the mechanic (or themselves) for not filling it, does nothing about it, and then is surprised when the oil is even lower the next time they check. Yet that is how the vast majority of schools are using test results.
Yes, tests are an important piece of the data puzzle, but we need to have a conversation about how those tests are used. Many teachers and parents feel besieged by tests: they feel there are too many, that the results are used to blame and shame them, and that testing limits the amount of learning that can happen in the classroom. These concerns need to be heard and addressed. But the major backlash against student testing has arisen because teachers and families are getting little value out of it. If a test is to be worthwhile, it needs to produce information that’s useful in classrooms and at kitchen tables.
When testing—both summative and formative—is working, it produces timely, useful information that educators can use to adjust their instruction and that administrators can use to adjust curricula and the use of time, training, and talent to improve student achievement. Good tests can demonstrate what’s working and what’s not for teachers and kids. At a DQC event earlier this month, D.C. Public Schools teacher Jennifer George explained that she has always relied on test data to improve her instruction, using them to pinpoint exactly how her students are doing and to shine a light on which lessons were successful and which were not. She was able to use interim assessments and checks that incorporated observation, attendance, exit tests, and more to improve student outcomes. States and districts need to help teachers by providing time, opportunities for collaboration, and professional development around multiple types of assessments and their uses so that teachers are not left to do all of this work by themselves. A great example of this leadership is in Georgia, where the state worked with teachers to determine what information they need at their fingertips and the best format in which to present it. Teachers so appreciated clear, easy access to assessment information in the state dashboard that districts asked the state to upload their formative and interim assessment information to be viewed alongside what the state was already providing. This illustrates how assessments presented in context generate demand from educators.
We must move away from the outdated, ineffective culture of testing in which teachers instruct their students on a concept for a few weeks, test them, and then move on no matter what the scores are. Did your child get a C- in fractions? Oh well, maybe she’ll do better in decimals. This is not the best model for deep learning. Jennifer George is never surprised by how her students do on a summative assessment, and that’s because she uses data effectively—from tests and other sources—to make adjustments in real time. Without this important function, we’re just testing for testing’s sake, to generate scores for accountability.
What a waste. Instead of effective data use, we’ve created incentive systems around tests that leave people panicked and stressed, and that cause them to attack the very tools that could be most useful in their efforts to help their kids. This is not the conversation we should be having. We should be using testing data to answer important questions: “Which of your students aren’t getting the concept, and why? What needs to change to get better results? What help does my child need to address her identified academic gaps?” That’s the conversation we should be having. Data are no replacement for teacher and parent judgment; rather, when deployed effectively, they are important tools that inform that judgment by illuminating the current situation faster and more accurately than the naked eye can.
To learn more about the 2014 HLR, see:
• RECAP: The 2014 Holshouser Legislators Retreat
• Holshouser Continues Legacy of Bi-Partisan Collaboration
• Resetting The Leadership Compass to Achieve Student Success
• Sensible Compensation Policies That Add Up
• The Hunt Institute’s Web site/events page for publications and videos
• Twitter hashtag #HLR2014
Join us at The Intersection by subscribing here.