The assessments teachers use should guide them by providing a full picture of what students know and can do—i.e., where they are in their learning. What do students know relative to grade-level standards? What do they still need to learn in order to tackle on-grade level content?
Too often, assessment scores tell educators only a fraction of what they need to know in order to help all their students. This is because some assessments only provide normative (i.e., “norm”) data, or data that shows how students performed compared to their peers. That kind of information isn’t enough. It doesn’t tell educators what each individual student needs or where they are relative to grade-level learning. To get this information, educators also need criterion-based assessment data—data that measures student performance against predetermined criteria or grade-level standards.
Consider two students. Carlos is at the 50th percentile in math, but has he mastered grade-appropriate content? Jennifer has made a year’s worth of growth, but if she’s performing below grade level, how do we know she’s on track to reach proficiency? Normative information like this can be helpful, but it isn’t enough on its own.
Ideally, students will have instructional goals that are attainable, yet challenging enough to close the achievement gap. But if assessment scores don’t tell us where students are relative to grade level or specifically in what areas they need extra support, how can we help them achieve their goals?
In short, students need targeted instruction based on rich assessment data.
I’ll use an example to illustrate this point.
Mr. Richards is looking at the reading assessment data for his fifth-grade class. He sees that Emilio has a national percentile rank of 72 and that he is performing above most of his peers.
Great news, right? Unfortunately, it’s not that simple. When we look at where Emilio and his classmates should be relative to grade-level standards, Emilio’s results are less clear-cut.
While Emilio is performing better than many of his peers, he still has not met the standard for grade-level work. It’s not enough to see where Emilio stands in reference to his peers. Mr. Richards must also understand what Emilio knows, where he struggles, and how to create growth goals that will allow him to be successful in on-grade level work.
Is the assessment telling Mr. Richards enough about how far Emilio is from grade-level proficiency? What can he learn from the test questions Emilio missed or the domains in which he excelled? Would more information about the amount of growth Emilio needs motivate him more than simply knowing how he compares to his peers? We need the answers to these questions in order to create meaningful learning goals.
Normative versus Criterion Data
Norm-referenced scores don’t provide all the tools teachers need to set growth goals. Worse, overreliance on norms can produce inequitable outcomes for students, especially students from historically marginalized or underserved communities.
“Norm-referenced tests encourage teachers to view students in terms of a bell curve, which can lead them to lower academic expectations for certain groups of students, particularly special-needs students, English-language learners, or minority groups,” according to the Glossary of Education Reform.
And when academic expectations are consistently low year after year, students may never catch up to their peers or master grade-level standards. But this is not inevitable. We know how to do better by our students.
Criterion-referenced scores measure students’ performance against predetermined criteria, such as grade-level standards, not by how they compare to other students’ performance on the same assessment. Although assessments that are grounded in norms can be valid and reliable, when used in isolation, they rarely produce instructionally relevant information. Specifically, a teacher may still be left wondering, “How can I best help my student?”
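The distinction can be made concrete with a small sketch. The scale scores, norming sample, and the cut score of 470 below are all hypothetical, invented only to show how the same score supports two different interpretations:

```python
from bisect import bisect_left

# Hypothetical fifth-grade reading scale scores from a norming sample.
norming_sample = sorted([402, 415, 425, 431, 440, 447, 455, 471, 476, 495])

# Hypothetical criterion: the cut score for fifth-grade proficiency.
GRADE_5_CUT_SCORE = 470

def percentile_rank(score, sample):
    """Norm-referenced: percent of the sample scoring below this student."""
    return 100 * bisect_left(sample, score) / len(sample)

def meets_criterion(score, cut=GRADE_5_CUT_SCORE):
    """Criterion-referenced: does the score meet the grade-level standard?"""
    return score >= cut

student_score = 465
print(percentile_rank(student_score, norming_sample))  # 70.0 — above many peers...
print(meets_criterion(student_score))                  # False — ...but not yet proficient
```

The same score of 465 looks strong through a normative lens (70th percentile) and incomplete through a criterion lens (below the cut), which is exactly why teachers need both views.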
The Impact of COVID
Norms in fall 2020 have been particularly difficult to interpret due to the COVID-19 pandemic. Can we compare the 40th percentile this coming fall (2021) with previous years if student learning in 2020 was severely disrupted? Teachers will likely find that understanding how a student is performing relative to grade-level expectations is more instructionally useful than knowing the student is performing at the 40th percentile.
Mr. Richards needs to know where Emilio is struggling. What if Emilio’s assessment report contained domain information in addition to his overall performance? Consider the domain placement information from i-Ready Diagnostic, Curriculum Associates’ standards-based adaptive assessment. Emilio is in the fifth grade, but his overall Reading score is at the Grade 4 criterion. He places at a third-grade level in some of the domain areas.
This information is invaluable for Mr. Richards because it tells him specifically where Emilio struggles in reading. Mr. Richards can personalize Emilio’s instruction so he gets more practice in Phonics and other areas of weakness.
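The triage a teacher might perform over such domain data can be sketched in a few lines. The domain names and placements here are illustrative stand-ins, loosely echoing the Emilio example, not actual i-Ready report fields:

```python
# Illustrative domain placements for a fifth grader (values are hypothetical,
# loosely echoing the Emilio example: overall Grade 4, some domains at Grade 3).
GRADE = 5
domain_placements = {
    "Phonological Awareness": 4,
    "Phonics": 3,
    "High-Frequency Words": 4,
    "Vocabulary": 3,
    "Comprehension: Literature": 5,
    "Comprehension: Informational Text": 4,
}

# Flag domains below grade level, weakest first, as instructional priorities.
priorities = sorted(
    (domain for domain, grade in domain_placements.items() if grade < GRADE),
    key=lambda domain: domain_placements[domain],
)
print(priorities)  # Phonics and Vocabulary surface first
```

With the overall score alone, none of this prioritization is possible; the domain breakdown is what turns a score into an instructional plan.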
Fall 2021: Criterion Data Is More Essential Than Ever
Next fall, it will be imperative that Mr. Richards is able to use his assessment data to answer the following questions:
- What do my students know, and how are they performing in relation to grade-level expectations?
- How much growth does each student need to either reach or maintain grade-level proficiency?
- What can I do to support the delivery of both differentiated and grade-level instruction?
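The second question is, at bottom, arithmetic, provided the assessment supplies a grade-level criterion. Under the hypothetical assumption that proficiency corresponds to a known cut score and that a typical year of growth is a known number of scale points (both numbers below are invented for illustration), the growth each student needs is straightforward to compute:

```python
# Hypothetical numbers for illustration only: a grade-level proficiency
# cut score and a typical year's growth on the same scale.
PROFICIENCY_CUT = 470
TYPICAL_ANNUAL_GROWTH = 18

def growth_needed(current_score, cut=PROFICIENCY_CUT):
    """Scale-score points between a student and grade-level proficiency."""
    return max(0, cut - current_score)

def years_to_proficiency(current_score, annual_growth=TYPICAL_ANNUAL_GROWTH):
    """Rough estimate: years of typical growth needed to reach the cut."""
    return growth_needed(current_score) / annual_growth

print(growth_needed(465))         # 5 points to go
print(years_to_proficiency(429))  # a 41-point gap, over two years of typical growth
```

A norm-referenced percentile alone cannot support this calculation; without a criterion, there is no cut score to measure the gap against.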
Criterion-referenced interpretations help us answer these questions. Although norm-referenced scores can be helpful, they usually only tell us half the story. The ideal assessment provides both norms and grade-level placements.
If our tests aren’t providing robust information about students and where they are with respect to grade-level criteria, then our tests aren’t doing all they can to help teachers and their students. We need the whole story—our students depend on it.
Learn more about the i-Ready Assessment suite and how it enables educators to connect data to instruction, create personalized learning paths for every student, and more.